[1] LECUN Y, BENGIO Y, HINTON G E. Deep learning [J]. Nature, 2015, 521(7553): 436-444.
[2] HINTON G E, OSINDERO S, TEH Y W. A fast learning algorithm for deep belief nets [J]. Neural Computation, 2006, 18(7): 1527-1554.
[3] LAROCHELLE H, ERHAN D, COURVILLE A, et al. An empirical evaluation of deep architectures on problems with many factors of variation [C]// ICML '07: Proceedings of the 24th International Conference on Machine Learning. New York: ACM, 2007: 473-480.
[4] KEYVANRAD M A, HOMAYOUNPOUR M M. Deep belief network training improvement using elite samples minimizing free energy [EB/OL]. [2015-11-22]. http://arxiv.org/pdf/1411.4046v1.
[5] LIU Y, ZHOU S, CHEN Q. Discriminative deep belief networks for visual data classification [J]. Pattern Recognition, 2011, 44(10/11): 2287-2296.
[6] HINTON G E, SALAKHUTDINOV R. Reducing the dimensionality of data with neural networks [J]. Science, 2006, 313(5786): 504-507.
[7] 丁锋, 萧德云, 丁韬. 多新息随机梯度辨识方法 [J]. 控制理论与应用, 2003, 20(6): 870-874. (DING F, XIAO D Y, DING T. Multi-innovation stochastic gradient identification methods [J]. Control Theory and Applications, 2003, 20(6): 870-874.)
[8] DING F. Several multi-innovation identification methods [J]. Digital Signal Processing, 2010, 20(4): 1027-1039.
[9] HINTON G E. A practical guide to training restricted Boltzmann machines [C]// Neural Networks: Tricks of the Trade, LNCS 7700. Berlin: Springer, 2012: 599-619.
[10] SWERSKY K, CHEN B, MARLIN B, et al. A tutorial on stochastic approximation algorithms for training restricted Boltzmann machines and deep belief nets [C]// Proceedings of the 2010 Information Theory and Applications Workshop. Piscataway, NJ: IEEE, 2010: 1-10.
[11] 丁洁, 谢莉, 丁锋. 非均匀采样系统多新息随机梯度辨识性能分析 [J]. 控制与决策, 2011, 26(9): 1338-1342. (DING J, XIE L, DING F. Performance analysis of multi-innovation stochastic gradient identification for non-uniformly sampled systems [J]. Control and Decision, 2011, 26(9): 1338-1342.)
[12] 丁锋. 系统辨识(6): 多新息辨识理论与方法 [J]. 南京信息工程大学学报, 2012, 4(1): 1-28. (DING F. System identification. Part F: multi-innovation identification theory and methods [J]. Journal of Nanjing University of Information Science and Technology, 2012, 4(1): 1-28.)
[13] 丁锋, 杨家本. 衰减激励条件下确定性系统多新息辨识的收敛性分析 [J]. 清华大学学报(自然科学版), 1998, 38(9): 111-115. (DING F, YANG J B. Convergence of multi-innovation identification under attenuating excitation conditions for deterministic systems [J]. Journal of Tsinghua University (Science and Technology), 1998, 38(9): 111-115.)
[14] LEE H, EKANADHAM C, NG A Y. Sparse deep belief net model for visual area V2 [EB/OL]. [2015-11-26]. http://web.eecs.umich.edu/~honglak/nips07-sparseDBN.pdf.
[15] KRIZHEVSKY A. Learning multiple layers of features from tiny images [D]. Toronto: University of Toronto, 2009: 17.
[16] LECUN Y, CORTES C. The MNIST database of handwritten digits [DB/OL]. [2011-12-20]. http://yann.lecun.com/exdb/mnist/index.html.
[17] LI F F, FERGUS R, PERONA P. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories [C]// CVPRW '04: Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2004: 178.
[18] KEYVANRAD M A, HOMAYOUNPOUR M M. A brief survey on deep belief networks and introducing a new object oriented MATLAB toolbox (DeeBNet V2.2) [EB/OL]. [2015-11-06]. https://www.researchgate.net/publication/264790642_A_brief_survey_on_deep_belief_networks_and_introducing_a_new_object_oriented_MATLAB_toolbox_DeeBNet_V20.
[19] COLLOBERT R, SINZ F, WESTON J, et al. Large scale transductive SVMs [J]. Journal of Machine Learning Research, 2006, 7: 1687-1712.
[20] HAGAN M T, DEMUTH H B, BEALE M. Neural Network Design [M]. Beijing: China Machine Press, 2002: 357.
[21] SALAKHUTDINOV R, HINTON G E. Learning a nonlinear embedding by preserving class neighbourhood structure [J]. Journal of Machine Learning Research, 2007, 2: 412-419.
[22] TIELEMAN T, HINTON G. Using fast weights to improve persistent contrastive divergence [C]// ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning. New York: ACM, 2009: 1033-1040.