Journal of Computer Applications ›› 2020, Vol. 40 ›› Issue (10): 2804-2810. DOI: 10.11772/j.issn.1001-9081.2020020237

• Artificial intelligence •

Improved AdaNet based on adaptive learning rate optimization

LIU Ran1,2,3,4, LIU Yu1,2,3,4, GU Jinguang1,2,3,4   

  1. College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan Hubei 430065, China;
    2. Key Laboratory of Intelligent Information Processing and Real-time Industrial System of Hubei Province (Wuhan University of Science and Technology), Wuhan Hubei 430065, China;
    3. Institute of Big Data Science and Engineering, Wuhan University of Science and Technology, Wuhan Hubei 430065, China;
    4. Key Laboratory of Rich-media Knowledge Organization and Service of Digital Publishing Content, National Press and Publication Administration(Wuhan University of Science and Technology), Beijing 100038, China
  • Received: 2020-03-05  Revised: 2020-05-25  Online: 2020-10-10  Published: 2020-06-24
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (U1836118, 61673004), the New Generation Information Technology Innovation Project of the Ministry of Education (2018A03025), and the Major Program of the National Social Science Foundation of China (11&ZD189).

  • Corresponding author: LIU Yu
  • About the authors: LIU Ran (1996-), male, born in Tianmen, Hubei, M. S. candidate, research interests: machine learning, deep learning, automated machine learning; LIU Yu (1980-), male, born in Wuhan, Hubei, Ph. D., associate professor, research interests: knowledge engineering, intelligent systems, distributed computing; GU Jinguang (1974-), male, born in Wuhan, Hubei, Ph. D., professor, CCF member, research interests: Semantic Web, novel grid computing, intelligent information processing.

Abstract: AdaNet (Adaptive structural learning of artificial neural Networks) is a neural architecture search framework based on Boosting ensemble learning that builds high-quality models by ensembling subnetworks. However, the subnetworks generated by the existing AdaNet differ little from one another, which limits how far ensemble learning can reduce the generalization error. In the two AdaNet steps of setting the subnetwork weights and ensembling the subnetworks, adaptive learning rate methods such as Adagrad, RMSProp (Root Mean Square Prop), Adam and RAdam (Rectified Adam) were used to improve the optimization algorithms in the existing AdaNet. The improved optimization algorithms scale the learning rate to different degrees for different parameter dimensions, yielding a more dispersed weight distribution; this increases the diversity of the subnetworks generated by AdaNet and thereby reduces the generalization error of the ensemble. Experimental results show that on three datasets, MNIST (Mixed National Institute of Standards and Technology database), Fashion-MNIST and Fashion-MNIST with Gaussian noise, the improved optimization algorithms speed up AdaNet's architecture search, and the more diverse subnetworks they produce improve the performance of the ensemble model. In terms of F1 score, an index used to evaluate model performance, the improved methods outperform the original method by up to 0.28%, 1.05% and 1.10% on the three datasets respectively.
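The mechanism the abstract relies on is that adaptive learning rate methods give each parameter dimension its own effective step size, while plain SGD applies one global step size. The following is a minimal, illustrative NumPy sketch of a standard Adam update, not the paper's code; the toy gradient scales and the function names sgd_step and adam_step are assumptions made purely for illustration of this per-dimension scaling, which is what produces the more dispersed weight distribution and, in turn, the more diverse subnetworks.

```python
# Illustrative NumPy sketch (not the paper's code): how an adaptive learning
# rate method such as Adam scales the step size separately for each parameter
# dimension, whereas plain SGD applies one global learning rate to all of them.
import numpy as np


def sgd_step(w, grad, lr=0.01):
    """Plain SGD: every dimension is scaled by the same learning rate."""
    return w - lr * grad


def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: each dimension gets its own effective learning rate
    lr / (sqrt(v_hat) + eps), so dimensions with small or rare gradients take
    relatively larger steps than frequently updated, large-gradient ones."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-dimension scaling
    return w, m, v


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_sgd = np.zeros(4)
    w_adam = np.zeros(4)
    m, v = np.zeros(4), np.zeros(4)
    for t in range(1, 101):
        # Toy gradients whose scales differ sharply across dimensions.
        grad = rng.normal(0.0, [10.0, 1.0, 0.1, 0.01])
        w_sgd = sgd_step(w_sgd, grad)
        w_adam, m, v = adam_step(w_adam, grad, m, v, t)
    # Adam's per-dimension scaling lets the small-gradient dimensions move as
    # far as the large-gradient ones, giving a more dispersed weight vector;
    # this is the property the paper exploits to diversify AdaNet's subnetworks.
    print("SGD  weights:", w_sgd)
    print("Adam weights:", w_adam)
```

Assuming the open-source TensorFlow adanet library is the implementation in question, such an optimizer would typically be supplied through a subnetwork Builder's build_subnetwork_train_op and build_mixture_weights_train_op hooks, which correspond to the two steps named in the abstract; the paper's exact configuration is not reproduced here.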

Key words: AdaNet, Neural Architecture Search (NAS), ensemble learning, adaptive learning rate method, Automated Machine Learning (AutoML)


CLC Number: