[1] Datafloq. Self-driving cars will create 2 petabytes of data, what are the big data opportunities for the car industry?[EB/OL].[2016-12-03]. https://datafloq.com/read/self-driving-cars-create-2-petabytes-data-annually/172. [2] LeCUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11):2278-2324. [3] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//NIPS'12:Proceedings of the 25th International Conference on Neural Information Processing Systems. North Miami Beach, FL, USA:Curran Associates, 2012:1097-1105. [4] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL].[2018-01-06]. http://www.robots.ox.ac.uk:5000/~vgg/publications/2015/Simonyan15/simonyan15.pdf. [5] COATES A, HUVAL B, WANG T, et al. Deep learning with COTS HPC systems[C]//ICML'13:Proceedings of the 30th International Conference on International Conference on Machine Learning. Atlanta, GA:JMLR, 2013, 28:Ⅲ-1337-Ⅲ-1345. [6] DENIL M, SHAKIBI B, DINH L, et al. Predicting parameters in deep learning[C]//NIPS'13:Proceedings of the 2013 International Conference on Neural Information Processing Systems. North Miami Beach, FL, USA:Curran Associates, 2013:2148-2156. [7] LeCUN Y, DENKER J S, SOLLA S A. Optimal brain damage[C]//NIPS'89:Proceedings of the 2nd International Conference on Neural Information Processing Systems. Cambridge, MA:MIT Press, 1989:598-605. [8] HASSIBI B, STORK D G. Second order derivatives for network pruning:optimal brain surgeon[C]//NIPS'93:Proceedings of the 1993 Advances in Neural Information Processing Systems. San Francisco, CA:Morgan Kaufmann Publishers, 1993:164-171. [9] MOLCHANOV P, TYREE S, KARRAS T, et al. Pruning convolutional neural networks for resource efficient inference[EB/OL].[2018-01-08]. https://users.aalto.fi/~ailat1/publications/molchanov2017iclr_paper.pdf. [10] HAN S, MAO H, DALLY W J. Deep compression:compressing deep neural networks with pruning, trained quantization and Huffman coding[J]. Fiber, 2015, 56(4):3-7. [11] VANHOUCKE V, SENIOR A, MAO M Z. Improving the speed of neural networks on CPUs[EB/OL].[2018-01-08]. http://www.audentia-gestion.fr/Recherche-Research-Google/37631.pdf. [12] HWANG K, SUNG W. Fixed-point feedforward deep neural network design using weights +1, 0, and -1[C]//Proceedings of the 2014 IEEE Workshop on Signal Processing Systems. Piscataway, NJ:IEEE, 2014:1-6. [13] GONG Y, LIU L, YANG M, et al. Compressing deep convolutional networks using vector quantization[EB/OL].[2018-01-08]. http://pdfs.semanticscholar.org/e7bf/9803705f2eb608db1e59e5c7636a3f171916.pdf. [14] CHEN W, WILSON J T, TYREE S, et al. Compressing neural networks with the hashing trick[EB/OL].[2018-01-08]. https://www.cse.wustl.edu/~ychen/public/ICML15.pdf. [15] COURBARIAUX M, BENGIO Y, DAVID J P. BinaryConnect:training deep neural networks with binary weights during propagations[C]//NIPS'15:Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge, MA:MIT Press, 2015:3123-3131. [16] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural networks[C]//NIPS'15:Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge, MA:MIT Press, 2015, 1:1135-1143. [17] JIA Y, SHELHAMER E, DONAHUE J, et al. Caffe:convolutional architecture for fast feature embedding[C]//CIKM'14:Proceedings of the 201422nd ACM International Conference on Multimedia. New York:ACM, 2014:675-678. [18] GYSEL P, MOTAMEDI M, GHIASI S. Hardware-oriented approximation of convolutional neural networks[EB/OL].[2018-01-11]. https://arxiv.org/pdf/1604.03168v2.pdf. [19] HAMMERSTROM D. A VLSI architecture for high-performance, low-cost, on-chip learning[C]//IJCNN'90:Proceedings of the 1990 International Joint Conference on Neural Networks. Piscataway, NJ:IEEE, 1990:537-544. [20] WILLIAMSON D. Dynamically scaled fixed point arithmetic[C]//PACRIM'91:Proceedings of the 1991 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing. Piscataway, NJ:IEEE, 1991:315-318. [21] GLOROT X, BORDES A, BENGIO Y. Deep sparse rectifier neural networks[EB/OL].[2018-01-11]. http://www.utc.fr/~bordesan/dokuwiki/_media/en/glorot10nipsworkshop.pdf. [22] COURBARIAUX M, BENGIO Y, DAVID J P. Training deep neural networks with low precision multiplications[EB/OL].[2018-01-11]. http://xueshu.baidu.com/s?wd=paperuri%3A%2863058c088857b36e18a39426c453de17%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Farxiv.org%2Fpdf%2F1412.7024.pdf&ie=utf-8&sc_us=6828366373690791903. [23] GUPTA S, AGRAWAL A, GOPALAKRISHNAN K, et al. Deep learning with limited numerical precision[EB/OL].[2018-01-11]. http://proceedings.mlr.press/v37/gupta15.pdf. [24] IANDOLA F N, HAN S, MOSKEWICZ M W, et al. SqueezeNet:AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size[EB/OL].[2018-01-11]. http://xueshu.baidu.com/s?wd=paperuri%3A%288e9f88ec46614851387705d9ecf44163%29&filter=sc_long_sign&tn=SE_xueshusource_2kduw22v&sc_vurl=http%3A%2F%2Farxiv.org%2Fabs%2F1602.07360v3&ie=utf-8&sc_us=2842338737931550299. [25] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[EB/OL].[2018-01-14]. http://www.robots.ox.ac.uk/~vgg/rg/papers/deepres.pdf. [26] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout:a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1):1929-1958. |