[1] XING E P, HO Q, DAI W, et al. Petuum: a new platform for distributed machine learning on big data[J]. IEEE Transactions on Big Data, 2015, 1(2): 49-67.
[2] YUAN J, GAO F, HO Q, et al. LightLDA: big topic models on modest computer clusters[C]// Proceedings of the 24th International Conference on World Wide Web. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee, 2015: 1351-1361.
[3] DEAN J, CORRADO G S, MONGA R, et al. Large scale distributed deep networks[EB/OL]. [2016-03-08]. http://robotics.stanford.edu/~ang/papers-tofile/large_deep_networks_nips2012.pdf.
[4] TIWARY C. Learning Apache Mahout[M]. Birmingham, UK: Packt Publishing Limited, 2015: 5-23.
[5] MENG X, BRADLEY J, YAVUZ B, et al. MLlib: machine learning in Apache Spark[J]. Journal of Machine Learning Research, 2016, 17(34): 1-7.
[6] LIU Z, ZHANG Y, CHANG E Y, et al. PLDA+: parallel latent Dirichlet allocation with data placement and pipeline processing[J]. ACM Transactions on Intelligent Systems and Technology, 2011, 2(3): 389-396.
[7] LI M, ZHOU L, YANG Z, et al. Parameter server for distributed machine learning[EB/OL]. [2016-02-03]. http://www-cgi.cs.cmu.edu/~muli/file/ps.pdf.
[8] LI M, ANDERSEN D G, PARK J W, et al. Scaling distributed machine learning with the parameter server[C]// Proceedings of the 11th USENIX Conference on Operating Systems Design and Implementation. Berkeley, CA: USENIX Association, 2014: 583-598.
[9] GERBESSIOTIS A V, VALIANT L G. Direct bulk-synchronous parallel algorithms[J]. Journal of Parallel and Distributed Computing, 1994, 22(2): 251-267.
[10] HO Q, CIPAR J, CUI H, et al. More effective distributed ML via a stale synchronous parallel parameter server[J]. Advances in Neural Information Processing Systems, 2013: 1223-1231.
[11] BOTTOU L. Large-scale machine learning with stochastic gradient descent[M]// Proceedings of COMPSTAT'2010. Berlin: Springer, 2010: 177-186.
[12] HOFFMAN M D, BLEI D M, BACH F. Online learning for latent Dirichlet allocation[EB/OL]. [2016-02-11]. http://people.ee.duke.edu/~lcarin/HoffmanBleiBach2010b.pdf.
[13] BLEI D M, NG A Y, JORDAN M I. Latent Dirichlet allocation[J]. Journal of Machine Learning Research, 2003, 3: 993-1022.
[14] DUMAIS S T. Latent semantic analysis[J]. Annual Review of Information Science and Technology, 2004, 38(1): 188-230.
[15] HOFMANN T. Probabilistic latent semantic analysis[C]// Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence. San Francisco, CA: Morgan Kaufmann Publishers, 1999: 289-296.
[16] HEINRICH G. Parameter estimation for text analysis[EB/OL]. [2016-03-11]. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.149.1327&rep=rep1&type=pdf.
[17] ASUNCION A, WELLING M, SMYTH P, et al. On smoothing and inference for topic models[C]// Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence. Arlington, Virginia: AUAI Press, 2009: 27-34.
[18] 石晶, 范猛, 李万龙. 基于LDA模型的主题分析[J]. 自动化学报, 2009, 35(12): 1586-1592. (SHI J, FAN M, LI W L. Topic analysis based on LDA model[J]. Acta Automatica Sinica, 2009, 35(12): 1586-1592.)