Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (5): 1375-1382. DOI: 10.11772/j.issn.1001-9081.2021050706
Special topic: Artificial Intelligence
Received:
2021-05-06
Revised:
2021-09-07
Accepted:
2021-09-16
Online:
2022-03-08
Published:
2022-05-10
Contact:
BAO Yongchun (baoyongchun2014@163.com)
About author:
BAO Yongchun, born in 1996 in Heze, Shandong, is an M.S. candidate. His research interests include machine learning, data mining, and artificial intelligence.
Yongchun BAO1(), Jianchen ZHANG2, Shouxin DU1, Junjun ZHANG1
Abstract:
Traditional multi-label classification algorithms are built on binary label prediction. Binary labels, however, can only indicate whether a sample belongs to a class; they carry little semantic information and cannot fully represent label semantics. To fully mine the semantic information of the label space, a Multi-Label classification algorithm based on Non-negative matrix factorization and Sparse representation (MLNS) was proposed. The algorithm combines non-negative matrix factorization with sparse representation to transform the binary labels of the data into real-valued labels, thereby enriching label semantics and improving classification performance. First, non-negative matrix factorization was applied to the label space to obtain a latent label semantic space, which was then combined with the original feature space to form a new feature space. Second, sparse coding was performed on this new feature space to obtain the global similarity relations among samples. Finally, these similarity relations were used to reconstruct the binary label vectors, realizing the transformation from binary to real-valued labels. The proposed algorithm was compared with MLBGM, ML2, LIFT and MLRWKNN on five standard multi-label datasets under five evaluation metrics. Experimental results show that MLNS outperforms the compared multi-label classification algorithms: it ranks first in 50% of the cases, in the top two in 76% of the cases, and in the top three in all cases.
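The three-step pipeline described in the abstract can be sketched on toy data roughly as follows. This is a minimal illustration, not the authors' implementation: the latent dimension, the Lasso penalty strength, and the leave-one-out sparse-coding loop are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.random((60, 20))                       # toy feature matrix (60 samples)
Y = (rng.random((60, 6)) < 0.3).astype(float)  # toy binary label matrix (6 labels)

# Step 1: NMF of the label space, Y ~ U V, gives a latent label semantic space U
U = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0).fit_transform(Y)

# Step 2: combine the latent label space with the original features
Z = np.hstack([X, U])

# Step 3: sparse coding -- express each sample as a sparse combination of the others
A = np.zeros((len(Z), len(Z)))
for i in range(len(Z)):
    mask = np.arange(len(Z)) != i
    lasso = Lasso(alpha=0.01, max_iter=5000).fit(Z[mask].T, Z[i])
    A[i, mask] = lasso.coef_

# Step 4: reconstruct real-valued labels from the global similarity relations
Y_real = A @ Y
```

Each row of `A` holds the sparse reconstruction coefficients of one sample over all the others, so `A @ Y` propagates the binary labels through these global similarity relations to yield real-valued labels.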
Yongchun BAO, Jianchen ZHANG, Shouxin DU, Junjun ZHANG. Multi-label classification algorithm based on non-negative matrix factorization and sparse representation[J]. Journal of Computer Applications, 2022, 42(5): 1375-1382.
Dataset | |S| | dim(S) | L(S) | LCard(S) | LDen(S) | Domain
---|---|---|---|---|---|---
emotions | 593 | 72 | 6 | 1.868 | 0.311 | audio
image | 2 000 | 294 | 5 | 1.236 | 0.247 | image
yeast | 2 417 | 103 | 14 | 4.237 | 0.303 | biology
Corel5k | 5 000 | 499 | 374 | 3.522 | 0.009 | image
Slashdot | 24 072 | 1 079 | 291 | 4.151 | 0.014 | text
Tab. 1 Dataset properties (|S|: number of samples; dim(S): feature dimension; L(S): number of labels; LCard(S): label cardinality; LDen(S): label density)
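The last two numeric columns are the standard label cardinality (average number of relevant labels per sample) and label density (cardinality divided by the number of labels); the column names above follow common multi-label notation. Both statistics can be computed directly from a binary label matrix:

```python
import numpy as np

# Toy binary label matrix: 4 samples, 3 labels
Y = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])

lcard = Y.sum(axis=1).mean()  # label cardinality: average labels per sample
lden = lcard / Y.shape[1]     # label density: cardinality / number of labels
print(lcard, lden)            # 1.5 0.5
```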
one-error↓

Algorithm | emotions | image | yeast | Corel5k | Slashdot
---|---|---|---|---|---
MLNS | 0.253±0.018 | 0.250±0.021 | 0.225±0.013 | 0.658±0.008 | 0.516±0.016
MLBGM | 0.354±0.030 | 0.314±0.049 | 0.285±0.029 | 0.708±0.022 | 0.443±0.029
ML2 | 0.261±0.045 | 0.260±0.027 | 0.246±0.034 | 0.647±0.007 | 0.510±0.022
MLRWKNN | 0.287±0.006 | 0.256±0.010 | 0.227±0.012 | 0.679±0.007 | 0.549±0.036
LIFT | 0.251±0.027 | 0.276±0.026 | 0.226±0.021 | 0.706±0.012 | 0.533±0.016

coverage↓

Algorithm | emotions | image | yeast | Corel5k | Slashdot
---|---|---|---|---|---
MLNS | 0.282±0.034 | 0.159±0.010 | 0.448±0.005 | 0.304±0.004 | 0.434±0.012
MLBGM | 0.365±0.012 | 0.192±0.007 | 0.511±0.026 | 0.547±0.014 | 0.441±0.036
ML2 | 0.292±0.044 | 0.164±0.009 | 0.461±0.016 | 0.372±0.017 | 0.615±0.012
MLRWKNN | 0.398±0.034 | 0.231±0.024 | 0.576±0.039 | 0.435±0.028 | 0.452±0.022
LIFT | 0.271±0.023 | 0.172±0.013 | 0.454±0.017 | 0.313±0.008 | 0.503±0.024

ranking loss↓

Algorithm | emotions | image | yeast | Corel5k | Slashdot
---|---|---|---|---|---
MLNS | 0.145±0.028 | 0.131±0.010 | 0.165±0.008 | 0.101±0.001 | 0.357±0.007
MLBGM | 0.202±0.016 | 0.159±0.026 | 0.181±0.020 | 0.237±0.009 | 0.436±0.016
ML2 | 0.153±0.013 | 0.136±0.012 | 0.175±0.015 | 0.163±0.011 | 0.363±0.007
MLRWKNN | 0.137±0.031 | 0.204±0.024 | 0.161±0.041 | 0.203±0.017 | 0.359±0.024
LIFT | 0.144±0.016 | 0.148±0.012 | 0.164±0.013 | 0.131±0.006 | 0.367±0.010

average precision↑

Algorithm | emotions | image | yeast | Corel5k | Slashdot
---|---|---|---|---|---
MLNS | 0.818±0.021 | 0.838±0.009 | 0.770±0.008 | 0.292±0.007 | 0.514±0.010
MLBGM | 0.762±0.029 | 0.725±0.037 | 0.684±0.027 | 0.212±0.021 | 0.477±0.018
ML2 | 0.816±0.021 | 0.832±0.014 | 0.759±0.020 | 0.297±0.010 | 0.521±0.012
MLRWKNN | 0.792±0.027 | 0.739±0.032 | 0.762±0.043 | 0.291±0.021 | 0.492±0.030
LIFT | 0.824±0.024 | 0.820±0.018 | 0.768±0.018 | 0.280±0.004 | 0.486±0.009

Mac-F1↑

Algorithm | emotions | image | yeast | Corel5k | Slashdot
---|---|---|---|---|---
MLNS | 0.672±0.021 | 0.660±0.024 | 0.425±0.030 | 0.126±0.028 | 0.316±0.010
MLBGM | 0.652±0.049 | 0.618±0.041 | 0.482±0.024 | 0.117±0.022 | 0.139±0.032
ML2 | 0.656±0.015 | 0.652±0.013 | 0.438±0.017 | 0.108±0.010 | 0.216±0.017
MLRWKNN | 0.621±0.025 | 0.540±0.031 | 0.403±0.022 | 0.121±0.038 | 0.285±0.026
LIFT | 0.651±0.025 | 0.624±0.013 | 0.377±0.019 | 0.104±0.020 | 0.132±0.025

Tab. 2 Performance of different algorithms on multi-label datasets (mean±standard deviation)
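Two of the reported metrics can be sketched as follows. The definitions below are one common convention (a sample's top-scored label being irrelevant counts toward one-error; a relevant label scored no higher than an irrelevant one counts toward ranking loss); tie handling may differ slightly from the paper's evaluation code.

```python
import numpy as np

def one_error(scores, Y):
    """Fraction of samples whose top-scored label is not a relevant label."""
    top = scores.argmax(axis=1)
    return 1 - Y[np.arange(len(Y)), top].mean()

def ranking_loss(scores, Y):
    """Average fraction of (relevant, irrelevant) label pairs that are mis-ordered."""
    losses = []
    for s, y in zip(scores, Y):
        rel, irr = s[y == 1], s[y == 0]
        if len(rel) and len(irr):
            losses.append(np.mean(rel[:, None] <= irr[None, :]))
    return float(np.mean(losses))

scores = np.array([[0.9, 0.2, 0.4],
                   [0.1, 0.8, 0.3]])
Y = np.array([[1, 0, 1],
              [0, 0, 1]])
print(one_error(scores, Y))  # 0.5: the second sample's top label (index 1) is irrelevant
```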
Evaluation metric | F_F | Critical value
---|---|---
one-error | 1.882 | 3.007
coverage | 6.638 | 3.007
ranking loss | 4.065 | 3.007
average precision | 6.753 | 3.007
Mac-F1 | 4.064 | 3.007
Tab. 3 Friedman test results and critical value for each evaluation metric
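The F_F statistics appear to follow Demšar's Friedman test with the Iman-Davenport correction, F_F = (N−1)·χ²_F / (N(k−1) − χ²_F), compared against the critical value F(k−1, (k−1)(N−1)) = F(4, 16) ≈ 3.007 for N = 5 datasets and k = 5 algorithms at α = 0.05. The one-error statistic can be recomputed directly from the Tab. 2 means:

```python
import numpy as np

# one-error means from Tab. 2 (rows: datasets; cols: MLNS, MLBGM, ML2, MLRWKNN, LIFT)
one_error = np.array([
    [0.253, 0.354, 0.261, 0.287, 0.251],  # emotions
    [0.250, 0.314, 0.260, 0.256, 0.276],  # image
    [0.225, 0.285, 0.246, 0.227, 0.226],  # yeast
    [0.658, 0.708, 0.647, 0.679, 0.706],  # Corel5k
    [0.516, 0.443, 0.510, 0.549, 0.533],  # Slashdot
])

def iman_davenport(scores):
    """Friedman chi-square with the Iman-Davenport F correction (lower score = better).
    Assumes no ties within a dataset, which holds for this table."""
    N, k = scores.shape
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1  # rank 1 = best on each dataset
    R = ranks.mean(axis=0)                              # average rank per algorithm
    chi2 = 12 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4)
    return (N - 1) * chi2 / (N * (k - 1) - chi2)

print(round(iman_davenport(one_error), 3))  # 1.882
```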
1 | GIBAJA E, VENTURA S. A tutorial on multilabel learning [J]. ACM Computing Surveys, 2015, 47(3): Article No.52. 10.1145/2716262 |
2 | LI C S, WEI F, YAN J C, et al. A self-paced regularization framework for multilabel learning [J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(6): 2660-2666. 10.1109/tnnls.2017.2697767 |
3 | BURKHARDT S, KRAMER S. Online multi-label dependency topic models for text classification [J]. Machine Learning, 2018, 107(5): 859-886. 10.1007/s10994-017-5689-6 |
4 | BARUTCUOGLU Z, SCHAPIRE R E, TROYANSKAYA O G. Hierarchical multi-label prediction of gene function [J]. Bioinformatics, 2006, 22(7): 830-836. 10.1093/bioinformatics/btk048 |
5 | WEINER M F, LIPTON A M. The Dementias: Diagnosis, Treatment, and Research [M]. 3rd ed. Washington, DC: American Psychiatric Association Publishing, 2003: 1-25. |
6 | FU H Z, CHENG J, XU Y W, et al. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation [J]. IEEE Transactions on Medical Imaging, 2018, 37(7): 1597-1605. 10.1109/tmi.2018.2791488 |
7 | ZHUANG N, YAN Y, CHEN S, et al. Multi-label learning based deep transfer neural network for facial attribute classification [J]. Pattern Recognition, 2018, 80: 225-240. 10.1016/j.patcog.2018.03.018 |
8 | WU B Y, JIA F, LIU W, et al. Multi-label learning with missing labels using mixed dependency graphs [J]. International Journal of Computer Vision, 2018, 126(8): 875-896. 10.1007/s11263-018-1085-3 |
9 | LIU A A, SHAO Z, WONG Y, et al. LSTM-based multi-label video event detection [J]. Multimedia Tools and Applications, 2019, 78(1): 677-695. 10.1007/s11042-017-5532-x |
10 | ZHANG M L, ZHOU Z H. A review on multi-label learning algorithms [J]. IEEE Transactions on Knowledge and Data Engineering, 2014, 26(8): 1819-1837. 10.1109/tkde.2013.39 |
11 | BOUTELL M R, LUO J B, SHEN X P, et al. Learning multi-label scene classification [J]. Pattern Recognition, 2004, 37(9): 1757-1771. 10.1016/j.patcog.2004.03.009 |
12 | READ J, PFAHRINGER B, HOLMES G, et al. Classifier chains for multi-label classification [J]. Machine Learning, 2011, 85(3): 333-360. 10.1007/s10994-011-5256-5 |
13 | FÜRNKRANZ J, HÜLLERMEIER E. Pairwise preference learning and ranking [C]// Proceedings of the 2003 European Conference on Machine Learning, LNCS 2837. Berlin: Springer, 2003: 145-156. |
14 | FÜRNKRANZ J, HÜLLERMEIER E, LOZA MENCÍA E, et al. Multilabel classification via calibrated label ranking [J]. Machine Learning, 2008, 73(2): 133-153. 10.1007/s10994-008-5064-8 |
15 | ELISSEEFF A, WESTON J. A kernel method for multi-labelled classification [C]// Proceedings of the 2001 14th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2001: 681-687. 10.7551/mitpress/1120.003.0092 |
16 | ZHANG M L, ZHOU Z H. ML-KNN: a lazy learning approach to multi-label learning [J]. Pattern Recognition, 2007, 40(7): 2038-2048. 10.1016/j.patcog.2006.12.019 |
17 | ZHANG M L, ZHOU Z H. Multilabel neural networks with applications to functional genomics and text categorization [J]. IEEE Transactions on Knowledge and Data Engineering, 2006, 18(10): 1338-1351. 10.1109/tkde.2006.162 |
18 | UEDA N, SAITO K. Parametric mixture models for multi-labeled text [C]// Proceedings of the 2002 15th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2002: 737-744. 10.1145/775047.775140 |
19 | ZHU S H, JI X, XU W, et al. Multi-labelled classification using maximum entropy method [C]// Proceedings of the 2005 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2005: 274-281. 10.1145/1076034.1076082 |
20 | LOZA MENCÍA E, FÜRNKRANZ J. Efficient pairwise multilabel classification for large-scale problems in the legal domain [C]// Proceedings of the 2008 Joint European Conference on Machine Learning and Knowledge Discovery in Databases, LNCS 5212. Berlin: Springer, 2008: 50-65. |
21 | HÜLLERMEIER E, FÜRNKRANZ J, CHENG W W, et al. Label ranking by learning pairwise preferences [J]. Artificial Intelligence, 2008, 172(16/17): 1897-1916. 10.1016/j.artint.2008.08.002 |
22 | GHAMRAWI N, McCALLUM A. Collective multi-label classification [C]// Proceedings of the 2005 14th ACM International Conference on Information and Knowledge Management. New York: ACM, 2005: 195-200. 10.1145/1099554.1099591 |
23 | ZHANG M L, ZHANG K. Multi-label learning by exploiting label dependency [C]// Proceedings of the 2010 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2010: 999-1008. 10.1145/1835804.1835930 |
24 | LEE D D, SEUNG H S. Algorithms for non-negative matrix factorization [C]// Proceedings of the 2000 13th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2000: 535-541. |
25 | NYQUIST H. Certain topics in telegraph transmission theory [J].Proceedings of the IEEE, 2002, 90(2): 280-305. 10.1109/5.989875 |
26 | OLSHAUSEN B A, FIELD D J. Sparse coding with an overcomplete basis set: a strategy employed by V1? [J]. Vision Research, 1997, 37(23): 3311-3325. 10.1016/s0042-6989(97)00169-7 |
27 | WRIGHT J, YANG A Y, GANESH A, et al. Robust face recognition via sparse representation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(2): 210-227. 10.1109/tpami.2008.79 |
28 | SONG X F, JIAO L C. A multi-label learning algorithm based on sparse representation [J]. Pattern Recognition and Artificial Intelligence, 2012, 25(1): 124-129. 10.3969/j.issn.1003-6059.2012.01.017 |
29 | PAPADIMITRIOU C H, RAGHAVAN P, TAMAKI H, et al. Latent semantic indexing: a probabilistic analysis [J]. Journal of Computer and System Sciences, 2000, 61(2): 217-235. 10.1006/jcss.2000.1711 |
30 | DING C, LI T, PENG W, et al. Orthogonal nonnegative matrix tri-factorizations for clustering [C]// Proceedings of the 2006 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2006: 126-135. 10.1145/1150402.1150420 |
31 | TIBSHIRANI R. Regression shrinkage and selection via the lasso:a retrospective [J]. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 2011, 73(3): 273-282. 10.1111/j.1467-9868.2011.00771.x |
32 | BOYD S, PARIKH N, CHU E, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers [J]. Foundations and Trends in Machine Learning, 2010, 3(1): 1-122. 10.1561/2200000016 |
33 | CHUNG W, KIM J, LEE H, et al. General dimensional multiple-output support vector regressions and their multiple kernel learning [J]. IEEE Transactions on Cybernetics, 2015, 45(11): 2572-2584. 10.1109/tcyb.2014.2377016 |
34 | TUIA D, VERRELST J, ALONSO L, et al. Multioutput support vector regression for remote sensing biophysical parameter estimation [J]. IEEE Geoscience and Remote Sensing Letters, 2011, 8(4): 804-808. 10.1109/lgrs.2011.2109934 |
35 | LI Z Y, WANG J C, LEI M, et al. Multi-label classification algorithm based on gravitational model [J]. Journal of Computer Applications, 2018, 38(10): 2807-2811, 2821. 10.11772/j.issn.1001-9081.2018040813 |
36 | HOU P, GENG X, ZHANG M L. Multi-label manifold learning [C]// Proceedings of the 2016 30th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2016: 1680-1686. |
37 | ZHANG M L, WU L. LIFT: multi-label learning with label-specific features [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(1): 107-120. 10.1109/tpami.2014.2339815 |
38 | WANG Z W, WANG S K, WAN B T, et al. A novel multi-label classification algorithm based on K-nearest neighbor and random walk [J]. International Journal of Distributed Sensor Networks, 2020, 16(3): 1-17. 10.1177/1550147720911892 |
39 | DEMŠAR J. Statistical comparisons of classifiers over multiple data sets [J]. Journal of Machine Learning Research, 2006, 7: 1-30. |
40 | DUNN O J. Multiple comparisons among means [J]. Journal of the American Statistical Association, 1961, 56(293): 52-64. 10.1080/01621459.1961.10482090 |