Official website of Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (1): 30-36. DOI: 10.11772/j.issn.1001-9081.2021112020

Special topic: Artificial Intelligence

• Artificial Intelligence •

Unlabeled network pruning algorithm based on Bayesian optimization

GAO Yuanyuan1, YU Zhenhua1, DU Fang1,2, SONG Lijuan1,2   

  1. School of Information Engineering, Ningxia University, Yinchuan, Ningxia 750021, China
    2. Collaborative Innovation Center for Ningxia Big Data and Artificial Intelligence Co-founded by Ningxia Municipality and Ministry of Education (Ningxia University), Yinchuan, Ningxia 750021, China
  • Received: 2021-11-29  Revised: 2022-05-03  Online: 2022-06-06
  • Contact: GAO Yuanyuan, born in 1996, M. S. Her research interests include deep learning and model compression. E-mail: yy_Gao@nxu.edu.cn
  • About the authors: GAO Yuanyuan, born in 1996, M. S. Her research interests include deep learning and model compression; YU Zhenhua, born in 1989, Ph. D., associate professor. His research interests include machine learning and computer vision; DU Fang, born in 1974, Ph. D., professor. Her research interests include big data management and artificial intelligence; SONG Lijuan, born in 1978, Ph. D., associate professor. Her research interests include image processing and computer vision.
  • Supported by:
    This work is partially supported by the Ningxia Natural Science Foundation (2018A0899).

Abstract: To address the excessive number of parameters and the high computational cost of Deep Neural Networks (DNNs), an unlabeled network pruning algorithm based on Bayesian optimization was proposed. Firstly, a global pruning strategy was adopted to effectively avoid the sub-optimal model compression ratio caused by layer-by-layer pruning. Secondly, the pruning process did not depend on the labels of data samples, and the compression ratio of each layer was optimized by minimizing the distance between the output features of the pruned network and those of the baseline network. Finally, the Bayesian optimization algorithm was used to find the optimal pruning ratio of each layer, thereby improving the efficiency and accuracy of the sub-network search. Experimental results show that when the proposed algorithm was used to compress the VGG-16 network on the CIFAR-10 dataset, the parameter compression ratio reached 85.32% and the Floating-point Operations Per Second (FLOPS) compression ratio reached 69.20%, with an accuracy loss of only 0.43%. Therefore, the proposed algorithm can compress DNN models effectively, and the compressed models still maintain good accuracy.
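
The abstract outlines three ingredients: a global pruning strategy expressed as one pruning ratio per layer, a label-free objective that compares the output features of the pruned and baseline networks, and Bayesian optimization over the per-layer ratios. The sketch below, written in PyTorch with scikit-optimize's gp_minimize standing in for a generic Bayesian optimizer, shows how these pieces could fit together; the L1-norm filter masking, the compression-target penalty, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: mask-based filter pruning scored by the distance
# between baseline and pruned output features, with per-layer ratios chosen
# by a generic Bayesian optimizer. Assumes PyTorch and scikit-optimize.
import copy

import torch
import torch.nn as nn
from skopt import gp_minimize
from skopt.space import Real


def mask_filters(model: nn.Module, ratios):
    """Return a copy of `model` with the lowest-L1-norm filters of each conv
    layer zeroed out, one pruning ratio per layer (assumed importance metric)."""
    pruned = copy.deepcopy(model)
    convs = [m for m in pruned.modules() if isinstance(m, nn.Conv2d)]
    for conv, ratio in zip(convs, ratios):
        n_prune = int(conv.out_channels * ratio)
        if n_prune == 0:
            continue
        importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))
        drop = torch.argsort(importance)[:n_prune]
        with torch.no_grad():
            conv.weight[drop] = 0.0
            if conv.bias is not None:
                conv.bias[drop] = 0.0
    return pruned


@torch.no_grad()
def feature_distance(baseline: nn.Module, pruned: nn.Module, unlabeled_batch):
    """Label-free score: L2 distance between the two networks' output features."""
    baseline.eval()
    pruned.eval()
    return torch.norm(baseline(unlabeled_batch) - pruned(unlabeled_batch)).item()


def search_pruning_ratios(baseline, unlabeled_batch, n_layers,
                          target_ratio=0.5, penalty=10.0, n_calls=30):
    """Pick one pruning ratio per layer via Bayesian optimization so that the
    pruned network's outputs stay close to the baseline's. The penalty pushing
    the search toward `target_ratio` is an assumption of this sketch; the paper
    does not spell out how the global compression target is enforced."""

    def objective(ratios):
        pruned = mask_filters(baseline, ratios)
        dist = feature_distance(baseline, pruned, unlabeled_batch)
        shortfall = max(0.0, target_ratio - sum(ratios) / len(ratios))
        return dist + penalty * shortfall

    space = [Real(0.0, 0.9, name=f"ratio_{i}") for i in range(n_layers)]
    result = gp_minimize(objective, space, n_calls=n_calls, random_state=0)
    return result.x  # per-layer ratios found by the search
```

For a VGG-16 baseline on CIFAR-10, `n_layers` would be the number of convolutional layers and `unlabeled_batch` a batch of unlabeled images; the returned ratios would then seed the actual structured pruning and any subsequent fine-tuning.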

Key words: Deep Neural Network (DNN), model compression, network pruning, network structure search, Bayesian optimization

CLC number: