[1] WANG J, LUO S, LI Y. A multi-view regularization method for semi-supervised learning[C]//Proceedings of the 2010 International Symposium on Neural Networks. Berlin, Heidelberg:Springer, 2010:444-449.
[2] EATON E, DESJARDINS M, JACOB S. Multi-view clustering with constraint propagation for learning with an incomplete mapping between views[C]//Proceedings of the 19th ACM International Conference on Information and Knowledge Management. New York:ACM, 2010:389-398.
[3] SUN T, CHEN S, YANG J, et al. A novel method of combined feature extraction for recognition[C]//Proceedings of the 8th IEEE International Conference on Data Mining. Piscataway, NJ:IEEE, 2008:1043-1048.
[4] HARDOON D, SZEDMAK S, SHAWE-TAYLOR J. Canonical correlation analysis:an overview with application to learning methods[J]. Neural Computation, 2004, 16(12):2639-2664.
[5] 周旭东, 陈晓红, 陈松灿. 增强组合特征判别性的典型相关分析[J]. 模式识别与人工智能, 2012, 25(2):285-291. (ZHOU X D, CHEN X H, CHEN S C. Combined-feature-discriminability enhanced canonical correlation analysis[J]. Pattern Recognition and Artificial Intelligence, 2012, 25(2):285-291.)
[6] XING X, WANG K, YAN T, et al. Complete canonical correlation analysis with application to multi-view gait recognition[J]. Pattern Recognition, 2016, 50:107-117.
[7] KROGH A, VEDELSBY J. Neural network ensembles, cross validation, and active learning[C]//Proceedings of the 7th International Conference on Neural Information Processing Systems. Cambridge, MA:MIT Press, 1995:231-238.
[8] XIE F, FAN H, LI Y, et al. Melanoma classification on dermoscopy images using a neural network ensemble model[J]. IEEE Transactions on Medical Imaging, 2017, 36(3):849-858.
[9] PAN S, WU J, ZHU X, et al. Graph ensemble boosting for imbalanced noisy graph stream classification[J]. IEEE Transactions on Cybernetics, 2015, 45(5):954-968.
[10] GUO H X, LI Y J, LI Y N, et al. BPSO-Adaboost-KNN ensemble learning algorithm for multi-class imbalanced data classification[J]. Engineering Applications of Artificial Intelligence, 2016, 49:176-193.
[11] REMYA K R, RAMY J S. Using weighted majority voting classifier combination for relation classification in biomedical texts[C]//Proceedings of the 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT). Piscataway, NJ:IEEE, 2014:1205-1209.
[12] ZHANG J, WU Y, BAI J, et al. Automatic sleep stage classification based on sparse deep belief net and combination of multiple classifiers[J]. Transactions of the Institute of Measurement and Control, 2016, 38(4):435-451.
[13] SAKR S, ELSHAWI R, AHMED A M, et al. Comparison of machine learning techniques to predict all-cause mortality using fitness data:the Henry Ford ExercIse Testing (FIT) project[J]. BMC Medical Informatics and Decision Making, 2017, 17(1):174.
[14] SUBASI A, ALICKOVIC E, KEVRIC J. Diagnosis of chronic kidney disease by using random forest[C]//Proceedings of the 2017 International Conference on Medical and Biological Engineering. Singapore:Springer, 2017:589-594.
[15] SUN Q S, ZENG S G, LIU Y, et al. A new method of feature fusion and its application in image recognition[J]. Pattern Recognition, 2005, 38(12):2437-2448.
[16] CAI D, HE X, HAN J. Semi-supervised discriminant analysis[C]//Proceedings of the 2007 IEEE 11th International Conference on Computer Vision. Piscataway, NJ:IEEE, 2007:1-7.
[17] WONG T T. Parametric methods for comparing the performance of two classification algorithms evaluated by k-fold cross validation on multiple data sets[J]. Pattern Recognition, 2017, 65:97-107.
[18] DOMINGOS P, PAZZANI M. On the optimality of the simple Bayesian classifier under zero-one loss[J]. Machine Learning, 1997, 29(2):103-130.
[19] XUE J H, TITTERINGTON D M. Comment on "on discriminative vs. generative classifiers:a comparison of logistic regression and naive Bayes"[J]. Neural Processing Letters, 2008, 28(3):169-187.
[20] EVERITT B S, DUNN G. Principal Components Analysis[M]. 2nd ed. Berlin:Springer, 2001:48-73.