[1] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[2] ERHAN D, BENGIO Y, COURVILLE A, et al. Why does unsupervised pre-training help deep learning?[J]. Journal of Machine Learning Research, 2010, 11(3): 625-660.
[3] MOHAMED A R, DAHL G E, HINTON G. Acoustic modeling using deep belief networks[J]. IEEE Transactions on Audio, Speech, and Language Processing, 2012, 20(1): 14-22.
[4] WALID R, LASFAR A. Handwritten digit recognition using sparse deep architectures[C]//Proceedings of the 2014 9th International Conference on Intelligent Systems: Theories and Applications. Piscataway, NJ: IEEE, 2014: 1-6.
[5] BU S, LIU Z, HAN J, et al. Learning high-level feature by deep belief networks for 3-D model retrieval and recognition[J]. IEEE Transactions on Multimedia, 2014, 16(8): 2154-2167.
[6] SARIKAYA R, HINTON G E, DEORAS A. Application of deep belief networks for natural language understanding[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2014, 22(4): 778-784.
[7] HINTON G. A practical guide to training restricted Boltzmann machines[EB/OL]. [2016-12-12]. http://www.csri.utoronto.ca/~hinton/absps/guideTR.pdf.
[8] HINTON G E. Training products of experts by minimizing contrastive divergence[J]. Neural Computation, 2002, 14(8): 1771-1800.
[9] PAN G Y, CHAI W, QIAO J F. Calculation for depth of deep belief network[J]. Control and Decision, 2015, 30(2): 256-260. (in Chinese)
[10] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
[11] RANZATO M, BOUREAU Y L, LECUN Y. Sparse feature learning for deep belief networks[C]//Proceedings of the 20th International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates, 2007: 1185-1192.
[12] HU Z, FU K, ZHANG C S. Audio classical composer identification by deep neural network[J]. Journal of Computer Research and Development, 2014, 51(9): 1945-1954. (in Chinese)
[13] BENGIO Y, LAMBLIN P, POPOVICI D, et al. Greedy layer-wise training of deep networks[C]//Proceedings of the 19th International Conference on Neural Information Processing Systems. Cambridge, MA: MIT Press, 2006: 153-160.
[14] VINCENT P, LAROCHELLE H, BENGIO Y, et al. Extracting and composing robust features with denoising autoencoders[C]//Proceedings of the 25th International Conference on Machine Learning. New York: ACM, 2008: 1096-1103.
[15] SALAKHUTDINOV R, HINTON G. Deep Boltzmann machines[C]//Proceedings of the 12th International Conference on Artificial Intelligence and Statistics. 2009: 448-455.
[16] BENGIO Y, COURVILLE A C, VINCENT P. Unsupervised feature learning and deep learning: a review and new perspectives[EB/OL]. [2016-12-22]. http://docs.huihoo.com/deep-learning/Representation-Learning-A-Review-and-New-Perspectives-v1.pdf.
[17] VINCENT P, LAROCHELLE H, LAJOIE I, et al. Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11: 3371-3408.
[18] LIU Y, ZHOU S, CHEN Q. Discriminative deep belief networks for visual data classification[J]. Pattern Recognition, 2011, 44(10/11): 2287-2296.
[19] XIE J, XU L, CHEN E. Image denoising and inpainting with deep neural networks[EB/OL]. [2016-11-27]. http://staff.ustc.edu.cn/~linlixu/papers/nips12.pdf.
[20] DENG L. A tutorial survey of architectures, algorithms, and applications for deep learning[EB/OL]. [2017-01-10]. https://www.cambridge.org/core/services/aop-cambridge-core/content/view/S2048770314000043.
[21] HINTON G E, SALAKHUTDINOV R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507.