[1] MASTELINI S M,SANTANA E J,COSTA V G T D,et al. Benchmarking multi-target regression methods[C]//Proceedings of the 7th Brazilian Conference on Intelligent Systems. Piscataway:IEEE,2018:396-401.
[2] YAN Y,RICCI E,SUBRAMANIAN R,et al. A multi-task learning framework for head pose estimation under target motion[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2016,38(6):1070-1083.
[3] ZHEN X T,WANG Z J,ISLAM A,et al. Multi-scale deep networks and regression forests for direct bi-ventricular volume estimation[J]. Medical Image Analysis,2015,30:120-129.
[4] KOCEV D,DŽEROSKI S,WHITE M D,et al. Using single- and multi-target regression trees and ensembles to model a compound index of vegetation condition[J]. Ecological Modelling,2009,220(8):1159-1168.
[5] ZHEN X T,YU M Y,ZHENG F,et al. Multitarget sparse latent regression[J]. IEEE Transactions on Neural Networks and Learning Systems,2018,29(5):1575-1586.
[6] CHEN J H,LIU J,YE J P. Learning incoherent sparse and low-rank patterns from multiple tasks[J]. ACM Transactions on Knowledge Discovery from Data,2012,5(4):No. 22.
[7] ZHANG M L,ZHOU Z H. A review on multi-label learning algorithms[J]. IEEE Transactions on Knowledge and Data Engineering,2014,26(8):1819-1837.
[8] FÜRNKRANZ J,HÜLLERMEIER E,LOZA MENCÍA E,et al. Multilabel classification via calibrated label ranking[J]. Machine Learning,2008,73(2):133-153.
[9] READ J,PFAHRINGER B,HOLMES G,et al. Classifier chains for multi-label classification[J]. Machine Learning,2011,85(3):No. 333.
[10] ZHANG M L,ZHOU Z H. ML-KNN:a lazy learning approach to multi-label learning[J]. Pattern Recognition,2007,40(7):2038-2048.
[11] XU J H. An efficient multi-label support vector machine with a zero label[J]. Expert Systems with Applications,2012,39(5):4796-4804.
[12] SUN K W,LEE C H,XIE X F. MLHN:a hypernetwork model for multi-label classification[J]. International Journal of Pattern Recognition and Artificial Intelligence,2015,29(6):No. 1550020.
[13] SUN K W,LEE C H,WANG J. Multilabel classification via coevolutionary multilabel hypernetwork[J]. IEEE Transactions on Knowledge and Data Engineering,2016,28(9):2438-2451.
[14] SUN K W,LEE C H. Addressing class-imbalance in multi-label learning via two-stage multi-label hypernetwork[J]. Neurocomputing,2017,266:375-389.
[15] 王进,刘彬,孙开伟,等. 基于标签关联的多标签演化超网络[J]. 电子学报,2018,46(4):1012-1018.(WANG J,LIU B,SUN K W,et al. Multi-label evolutionary hypernetwork based on label correlations[J]. Acta Electronica Sinica,2018,46(4):1012-1018.)
[16] SPYROMITROS-XIOUFIS E,TSOUMAKAS G,GROVES W,et al. Multi-target regression via input space expansion:treating targets as inputs[J]. Machine Learning,2016,104(1):55-98.
[17] ZHEN X T,YU M Y,HE X F,et al. Multi-target regression via robust low-rank learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2018,40(2):497-504.
[18] LI H Q,ZHANG W,CHEN Y,et al. A novel multi-target regression framework for time-series prediction of drug efficacy[J]. Scientific Reports,2017,7:No. 40652.
[19] MELKI G,CANO A,KECMAN V,et al. Multi-target support vector regression via correlation regressor chains[J]. Information Sciences,2017,415/416:53-69.
[20] 陈知良. 基于目标特定特征的多目标回归方法及应用[D]. 重庆:重庆邮电大学,2019:19-27.(CHEN Z L. Method and application for multi-target regression via target specific features[D]. Chongqing:Chongqing University of Posts and Telecommunications,2019:19-27.)
[21] AHO T,ŽENKO B,DŽEROSKI S,et al. Multi-target regression with rule ensembles[J]. Journal of Machine Learning Research,2012,13:2367-2407.
[22] OSOJNIK A,DŽEROSKI S,KOCEV D. Option predictive clustering trees for multi-target regression[C]//Proceedings of the 2016 International Conference on Discovery Science,LNCS 9956. Cham:Springer,2016:118-133.
[23] 毕曦文,纪明宇,吴鹏,等. 个性化高校新闻分类推荐的应用研究[J]. 计算机应用与软件,2019,36(7):218-223.(BI X W,JI M Y,WU P,et al. Research and application of personalized college news classification and recommendation[J]. Computer Applications and Software,2019,36(7):218-223.)
[24] NIE F P,HUANG H,DING C. Low-rank matrix recovery via efficient Schatten p-norm minimization[C]//Proceedings of the 26th AAAI Conference on Artificial Intelligence. Palo Alto,CA:AAAI Press,2012:655-661.
[25] DEMŠAR J. Statistical comparisons of classifiers over multiple data sets[J]. Journal of Machine Learning Research,2006,7:1-30.