[1] SIMONYAN K,ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL].[2020-10-24]. https://arxiv.org/pdf/1409.1556.pdf. [2] DENIL M,SHAKIBI B,DINH L,et al. Predicting parameters in deep learning[C]//Proceedings of the 2013 26th International Conference on Neural Information Processing Systems. Red Hook:Curran Associates Inc.,2013:2148-2156. [3] 雷杰, 高鑫, 宋杰, 等. 深度网络模型压缩综述[J]. 软件学报, 2018, 29(2):251-266.(LEI J,GAO X,SONG J,et al. Survey of deep neural network model compression[J]. Journal of Software,2018,29(2):251-266.) [4] 李江昀, 赵义凯, 薛卓尔, 等. 深度神经网络模型压缩综述[J]. 工程科学学报, 2019, 41(10):1229-1239.(LI J Y,ZHAO Y K, XUE Z E,et al. A survey of model compression for deep neural networks[J]. Chinese Journal of Engineering,2019,41(10):1229-1239.) [5] 林景栋, 吴欣怡, 柴毅, 等. 卷积神经网络结构优化综述[J]. 自动化学报, 2020, 46(1):24-37.(LIN J D,WU X Y,CHAI Y,et al. Structure optimization of convolutional neural networks:a survey[J]. Acta Automatica Sinica,2020,46(1):24-37.) [6] HAN S,MAO H,DALLY W J. Deep compression:compressing deep neural networks with pruning, trained quantization and Huffman coding[EB/OL].[2020-10-19]. https://arxiv.org/pdf/1510.00149.pdf. [7] 巩凯强, 张春梅, 曾光华. 卷积神经网络模型剪枝结合张量分解压缩方法[J]. 计算机应用, 2020, 40(11):3146-3151.(GONG K Q,ZHANG C M,ZENG G H. Convolution neural network model compression method based on pruning and tensor decomposition[J]. Journal of Computer Applications,2020,40(11):3146-3151.) [8] 王忠锋, 徐志远, 宋纯贺, 等. 基于梯度的深度网络剪枝算法[J]. 计算机应用, 2020, 40(5):1253-1259.(WANG Z F,XU Z Y, SONG C H,et al. Gradient-based deep network pruning algorithm[J]. Journal of Computer Applications,2020,40(5):1253-1259.) [9] LI H,KADAV A,DURDANOVIC I,et al. Pruning filters for efficient ConvNets[EB/OL].[2020-10-21]. https://arxiv.org/pdf/1608.08710.pdf. [10] LIU Z,LI J,SHEN Z,et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway:IEEE,2017:2755-2763. [11] HE Y, KANG G, DONG X, et al. Soft filter pruning for accelerating deep convolutional neural networks[C]//Proceedings of the 2018 27th International Joint Conference on Artificial Intelligence. Menlo Park:AAAI Press,2018:2234-2240. [12] GUO Y W,YAO A B,CHEN Y R. Dynamic network surgery for efficient DNNs[EB/OL].[2020-11-01]. https://arxiv.org/pdf/1608.04493.pdf. [13] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout:a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research,2014,15:1929-1958. [14] GAO X T,ZHAO Y R,DUDZIAK Ł,et al. Dynamic channel pruning:feature boosting and suppression[EB/OL].[2020-10-03]. https://arxiv.org/pdf/1810.05331.pdf. [15] HUA W Z,ZHOU Y,DE SA C,et al. Channel gating neural networks[EB/OL].[2020-10-07]. https://arxiv.org/pdf/1805.12549.pdf. [16] IOFFE S,SZEGEDY C. Batch normalization:accelerating deep network training by reducing internal covariate shift[EB/OL].[2020-10-23]. https://arxiv.org/pdf/1502.03167.pdf. [17] GLOROT X,BORDES A,BENGIO Y. Deep sparse rectifier neural networks[C]//Proceedings of the 2011 14th International Conference on Artificial Intelligence and Statistics. New York:JMLR. org,2011:315-323. [18] ZHOU B L,KHOSLA A,LAPEDRIZA A,et al. Learning deep features for discriminative localization[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE,2016:2921-2929. [19] KRIZHEVSKY A,NAIR V,HINTON G. CIFAR-10 and CIFAR-100 datasets[DS/OL].[2020-10-28]. http://www.cs.toronto.edu/~kriz/cifar.html. [20] MOLCHANOV P, TYREE S, KARRAS T, et al. Pruning convolutional neural networks for resource efficient inference[EB/OL].[2020-11-02]. https://arxiv.org/pdf/1611.06440.pdf. [21] ZHAO Y R,GAO X T,MULLINS R,et al. Mayo:a framework for auto-generating hardware friendly deep neural networks[C]//Proceedings of the 20182nd International Workshop on Embedded and Mobile Deep Learning. New York:ACM,2018:25-30. |