Journal of Computer Applications ›› 2020, Vol. 40 ›› Issue (5): 1260-1265. DOI: 10.11772/j.issn.1001-9081.2019111977

• Artificial intelligence •

CNN model compression based on activation-entropy based layer-wise iterative pruning strategy

CHEN Chengjun, MAO Yingchi, WANG Yichao   

  1. College of Computer and Information, Hohai University, Nanjing, Jiangsu 211100, China
  • Received: 2019-11-21 Revised: 2020-02-12 Online: 2020-05-10 Published: 2020-05-15
  • Contact: MAO Yingchi, born in 1976, Ph. D., professor. Her research interests include internet of things and distributed data processing.
  • About author: CHEN Chengjun, born in 1996 in Nantong, Jiangsu, M. S. candidate, CCF member. His research interests include artificial intelligence and distributed data processing. MAO Yingchi, born in 1976 in Shanghai, Ph. D., professor, CCF member. Her research interests include internet of things and distributed data processing. WANG Yichao, born in 1994 in Jiexiu, Shanxi, M. S. His research interests include distributed data processing.
  • Supported by:

    This work is partially supported by the "13th Five-Year Plan" National Key Research and Development Program of China (2018YFC0407105) and the Key Research and Development Program of the Huaneng Group (HNKJ17-21).


Abstract:

The existing pruning strategies for Convolutional Neural Network (CNN) models differ widely in approach and achieve only mediocre results. To address this, an Activation-Entropy based Layer-wise Iterative Pruning (AE-LIP) strategy was proposed to reduce the number of model parameters while keeping the loss of model accuracy within a controllable range. Firstly, a weight evaluation criterion based on activation-entropy was constructed by combining neuronal activation values with information entropy, and an importance score was computed for each weight. Secondly, pruning was performed layer by layer: the weights in each layer were sorted by importance score, and the weights to be pruned were selected according to the pruning quota of that layer and set to zero. Finally, the model was fine-tuned, and the above process was repeated until the iteration ended. Experimental results show that the AE-LIP strategy compresses the AlexNet model by 87.5% with an accuracy drop of 2.12 percentage points, and the resulting accuracy is 1.54 percentage points higher than that of the magnitude-based weight pruning strategy and 0.91 percentage points higher than that of the correlation-based weight pruning strategy. The strategy compresses the VGG-16 model by 84.1% with an accuracy drop of 2.62 percentage points, and the resulting accuracy is 0.62 and 0.27 percentage points higher than those of the two strategies above, respectively. These results indicate that the proposed strategy effectively reduces the size of a CNN model while maintaining its accuracy, which facilitates the deployment of CNN models on mobile devices with limited storage.
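The abstract describes the three AE-LIP steps (score, prune layer by layer, fine-tune and iterate) without giving the exact scoring formula. The Python sketch below illustrates the general scheme on a single fully connected layer; the particular way activation magnitude and activation entropy are combined into a per-weight score here (activation_entropy, importance_scores, prune_layer and their arguments) is an illustrative assumption, not the paper's exact criterion.

import numpy as np

def activation_entropy(acts, n_bins=10):
    # Shannon entropy of one neuron's activation distribution,
    # estimated with a histogram over a calibration batch.
    hist, _ = np.histogram(acts, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def importance_scores(W, in_acts, out_acts):
    # Hypothetical activation-entropy score for each weight W[j, i]:
    # |w| scaled by the mean activation of its input neuron and the
    # entropy of its output neuron's activations. The paper's exact
    # combination may differ.
    mean_in = np.abs(in_acts).mean(axis=0)                       # shape (n_in,)
    ent_out = np.array([activation_entropy(out_acts[:, j])
                        for j in range(out_acts.shape[1])])      # shape (n_out,)
    return np.abs(W) * mean_in[None, :] * ent_out[:, None]

def prune_layer(W, scores, prune_num):
    # Zero the prune_num weights with the lowest importance scores;
    # return the pruned weights and the keep-mask used during
    # masked fine-tuning.
    idx = np.argpartition(scores.ravel(), prune_num)[:prune_num]
    mask = np.ones(W.size, dtype=bool)
    mask[idx] = False
    mask = mask.reshape(W.shape)
    return W * mask, mask

# Toy demonstration on one layer with ReLU activations.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))                        # (n_out, n_in)
in_acts = np.maximum(rng.normal(size=(256, 128)), 0)  # calibration inputs
out_acts = np.maximum(in_acts @ W.T, 0)
scores = importance_scores(W, in_acts, out_acts)
W, keep = prune_layer(W, scores, prune_num=int(0.8 * W.size))
print("layer sparsity: %.1f%%" % (100 * (1 - keep.mean())))

In the full AE-LIP procedure, these two steps would be applied to every layer in each iteration, followed by fine-tuning with the pruned weights held at zero, and the loop repeated until the target compression rate is reached.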

Key words: mobile cloud computing, neuronal activation value, information entropy, iterative pruning, model compression

