Journal of Computer Applications ›› 2019, Vol. 39 ›› Issue (6): 1619-1625.DOI: 10.11772/j.issn.1001-9081.2018112246

• Artificial intelligence •

Denoising autoencoder based extreme learning machine

LAI Jie, WANG Xiaodan, LI Rui, ZHAO Zhenchong   

  1. College of Air and Missile Defense, Air Force Engineering University, Xi'an Shaanxi 710051, China
  • Received:2018-11-09 Revised:2018-12-18 Online:2019-06-17 Published:2019-06-10
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (61876189, 61806219).


  • Corresponding author: WANG Xiaodan
  • About the authors: LAI Jie (1994-), male, born in Jianyang, Sichuan, M. S. candidate, research interests: machine learning, intelligent information processing; WANG Xiaodan (1966-), female, born in Hanzhong, Shaanxi, professor, Ph. D., research interests: machine learning, intelligent information processing; LI Rui (1992-), male, born in Kelan, Shanxi, Ph. D. candidate, research interests: machine learning, intelligent information processing; ZHAO Zhenchong (1990-), male, born in Zhoukou, Henan, Ph. D. candidate, research interests: machine learning, intelligent information processing.

Abstract: To address the problems that the random assignment of parameters reduces the robustness of the Extreme Learning Machine (ELM) and that its performance is significantly degraded by noise, a Denoising AutoEncoder based ELM (DAE-ELM) algorithm was proposed by combining the Denoising AutoEncoder (DAE) with the ELM algorithm. Firstly, a denoising autoencoder was used to generate the input data, input weights and hidden layer parameters of the ELM. Then, the hidden layer output weights were obtained by ELM to complete the training of the classifier. On the one hand, the algorithm inherits the advantages of DAE: the automatically extracted features are more representative and robust, and noise is strongly suppressed. On the other hand, the randomness of ELM parameter assignment is overcome, improving the robustness of the algorithm. The experimental results show that, in the noise-free case, compared with ELM, Principal Component Analysis ELM (PCA-ELM) and SAA-2, the classification error rate of DAE-ELM decreases by at least 5.6% on MNIST, 3.0% on Fashion MNIST, 2.0% on Rectangles and 12.7% on Convex.
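The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustrative NumPy implementation, not the authors' code: the tied-weight single-hidden-layer DAE, the sigmoid activation, the noise level, and the ridge-regularized least-squares solve for the ELM output weights are all assumptions made here for a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae(X, n_hidden, noise_std=0.5, lr=0.1, epochs=200):
    """Train a tied-weight denoising autoencoder by gradient descent;
    return the encoder weights and bias (illustrative hyperparameters)."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(d)
    for _ in range(epochs):
        X_noisy = X + rng.normal(0.0, noise_std, X.shape)  # corrupt the input
        H = sigmoid(X_noisy @ W + b)                       # encode
        X_rec = H @ W.T + c                                # decode (tied weights)
        E = X_rec - X                                      # reconstruction error
        dZ = (E @ W) * H * (1.0 - H)                       # backprop to encoder pre-activation
        W -= lr * (X_noisy.T @ dZ + E.T @ H) / n           # tied-weight gradient
        b -= lr * dZ.mean(axis=0)
        c -= lr * E.mean(axis=0)
    return W, b

def dae_elm_fit(X, Y, n_hidden, ridge=1e-3):
    """DAE-ELM sketch: the DAE supplies the hidden-layer parameters, then the
    ELM output weights are solved in closed form (ridge-regularized least squares)."""
    W, b = train_dae(X, n_hidden)
    H = sigmoid(X @ W + b)           # ELM hidden-layer output with DAE weights
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def dae_elm_predict(X, W, b, beta):
    """Class prediction from one-hot-trained output weights."""
    return (sigmoid(X @ W + b) @ beta).argmax(axis=1)
```

In this sketch the only iterative training is the unsupervised DAE stage; the classifier itself keeps the ELM property that the output weights are obtained in a single linear solve rather than by backpropagation.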

Key words: Extreme Learning Machine (ELM), deep learning, Denoising AutoEncoder (DAE), feature extraction, feature reduction, robustness


