[1] RUMELHART D E, HINTON G E, WILLIAMS R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536.
[2] REED R. Pruning algorithms-a survey[J]. IEEE Transactions on Neural Networks, 1993, 4(5): 740-747.
[3] MOZER M C, SMOLENSKY P. Skeletonization: a technique for trimming the fat from a network via relevance assessment[C]//Proceedings of the 1st International Conference on Neural Information Processing Systems. San Francisco, CA: Morgan Kaufmann Publishers Inc., 1988: 107-115.
[4] LECUN Y, DENKER J S, SOLLA S A. Optimal brain damage[C]//Proceedings of the 2nd International Conference on Neural Information Processing Systems. San Francisco, CA: Morgan Kaufmann Publishers Inc., 1989: 598-605.
[5] JADERBERG M, VEDALDI A, ZISSERMAN A. Speeding-up convolutional neural networks with low rank expansions[EB/OL]. [2019-12-04]. https://arxiv.org/pdf/1405.3866.pdf.
[6] ZHANG X, ZOU J, MING X, et al. Efficient and accurate approximations of nonlinear convolutional networks[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1984-1992.
[7] LIU B, WANG M, FOROOSH H, et al. Sparse convolutional neural networks[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 806-814.
[8] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural network[C]//Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2015: 1135-1143.
[9] LEBEDEV V, LEMPITSKY V. Fast ConvNets using group-wise brain damage[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 2554-2564.
[10] YUAN M, LIN Y. Model selection and estimation in regression with grouped variables[J]. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2006, 68(1): 49-67.
[11] MOLCHANOV P, TYREE S, KARRAS T, et al. Pruning convolutional neural networks for resource efficient inference[EB/OL]. [2019-06-30]. https://arxiv.org/pdf/1611.06440.pdf.
[12] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets[EB/OL]. [2019-06-30]. https://arxiv.org/pdf/1608.08710.pdf.
[13] HE Y, LIN J, LIU Z, et al. AMC: AutoML for model compression and acceleration on mobile devices[C]//Proceedings of the 2018 European Conference on Computer Vision, LNCS 11211. Cham: Springer, 2018: 815-832.
[14] DENTON E, ZAREMBA W, BRUNA J, et al. Exploiting linear structure within convolutional networks for efficient evaluation[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 1269-1277.
[15] LEBEDEV V, GANIN Y, RAKHUBA M, et al. Speeding-up convolutional neural networks using fine-tuned CP-decomposition[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1412.6553.pdf.
[16] TAI C, XIAO T, ZHANG Y, et al. Convolutional neural networks with low-rank regularization[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1511.06067.pdf.
[17] LIU Z, LI J, SHEN Z, et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2755-2763.
[18] LI X W. Algorithm research and system design of lightweight deep learning object detection[D]. Hefei: Anhui University, 2019: 20-58. (in Chinese)
[19] MITTAL D, BHARDWAJ S, KHAPRA M M, et al. Recovering from random pruning: on the plasticity of deep convolutional neural networks[EB/OL]. [2019-09-12]. https://arxiv.org/pdf/1801.10447.pdf.
[20] WU J, WU H N, LIU A, et al. A deep learning model compression method based on Lasso regression and SVD fusion[J]. Telecommunication Engineering, 2019, 59(5): 495-500. (in Chinese)
[21] DENIL M, SHAKIBI B, DINH L, et al. Predicting parameters in deep learning[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2013: 2148-2156.