1 KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// Proceedings of the 2012 International Conference on Neural Information Processing Systems. New York: Curran Associates Inc., 2012: 1097-1105.
2 WANG M, DENG W. Deep face recognition: a survey[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1804.06655.pdf.
3 HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
4 SIMONYAN K, ZISSERMAN A. Two-stream convolutional networks for action recognition in videos[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 568-576.
5 DENG L, YU D. Deep learning: methods and applications[J]. Foundations and Trends in Signal Processing, 2014, 7(3/4): 197-387.
6 SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1409.4842.pdf.
7 SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1409.1556.pdf.
8 XUE J, LI J, YU D, et al. Singular value decomposition based low-footprint speaker adaptation and personalization for deep neural network[C]// Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2014: 6359-6363.
9 KIM Y D, PARK E, YOO S, et al. Compression of deep convolutional neural networks for fast and low power mobile applications[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1511.06530.pdf.
10 TAI C, XIAO T, ZHANG Y, et al. Convolutional neural networks with low-rank regularization[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1511.06067.pdf.
11 ZHU C, HAN S, MAO H, et al. Trained ternary quantization[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1612.01064.pdf.
12 ZHOU A, YAO A, GUO Y, et al. Incremental network quantization: towards lossless CNNs with low-precision weights[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1702.03044.pdf.
13 ZHANG X, ZOU J, HE K, et al. Accelerating very deep convolutional networks for classification and detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 1943-1955.
14 KIM J, PARK S, KWAK N. Paraphrasing complex network: network compression via factor transfer[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montreal, Canada: Neural Information Processing Systems Foundation, Inc., 2018: 2760-2769.
15 ANIL R, PEREYRA G, PASSOS A, et al. Large scale distributed neural network training through online distillation[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1804.03235.pdf.
16 HE Y, KANG G, DONG X, et al. Soft filter pruning for accelerating deep convolutional neural networks[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1808.06866.pdf.
17 靳丽蕾, 杨文柱, 王思乐, 等. 一种用于卷积神经网络压缩的混合剪枝方法[J]. 小型微型计算机系统, 2018, 39(12): 38-43. JIN L L, YANG W Z, WANG S L, et al. Mixed pruning method for convolutional neural network compression[J]. Journal of Chinese Computer Systems, 2018, 39(12): 38-43.
18 CHANDAKKAR P S, LI Y, DING P L K, et al. Strategies for re-training a pruned neural network in an edge computing paradigm[C]// Proceedings of the 2017 IEEE International Conference on Edge Computing. Piscataway: IEEE, 2017: 244-247.
19 HAN S, LIU X, MAO H, et al. EIE: efficient inference engine on compressed deep neural network[J]. ACM SIGARCH Computer Architecture News, 2016, 44(3): 243-254.
20 HAN S, KANG J, MAO H, et al. ESE: efficient speech recognition engine with sparse LSTM on FPGA[C]// Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. New York: ACM, 2017: 75-84.
21 WEN W, XU C, WU C, et al. Coordinating filters for faster deep neural networks[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 658-666.
22 YU X, LIU T, WANG X, et al. On compressing deep models by low rank and sparse decomposition[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 67-76.
23 COURBARIAUX M, HUBARA I, SOUDRY D, et al. Binarized neural networks: training deep neural networks with weights and activations constrained to +1 or -1[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1602.02830.pdf.
24 RASTEGARI M, ORDONEZ V, REDMON J, et al. XNOR-Net: ImageNet classification using binary convolutional neural networks[C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9908. Cham: Springer, 2016: 525-542.
25 LI F, ZHANG B, LIU B. Ternary weight networks[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1605.04711.pdf.
26 HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1503.02531.pdf.
27 ROMERO A, BALLAS N, KAHOU S E, et al. FitNets: hints for thin deep nets[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1412.6550.pdf.
28 KORATTIKARA A, RATHOD V, MURPHY K P, et al. Bayesian dark knowledge[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2015: 3438-3446.
29 HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural network[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2015: 1135-1143.
30 SUN Y, WANG X, TANG X. Sparsifying neural network connections for face recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 4856-4864.
31 HAN S, MAO H, DALLY W J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1510.00149.pdf.
32 IANDOLA F N, HAN S, MOSKEWICZ M W, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1602.07360.pdf.
33 SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15: 1929-1958.
34 IOFFE S, SZEGEDY C. Batch normalization: accelerating deep network training by reducing internal covariate shift[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1502.03167.pdf.
35 LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets[EB/OL]. [2019-07-11]. https://arxiv.org/pdf/1608.08710.pdf.
36 HE Y, LIU P, WANG Z, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4335-4344.