Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (4): 1023-1033. DOI: 10.11772/j.issn.1001-9081.2025050536
• Artificial Intelligence •
Received: 2025-05-16
Revised: 2025-06-27
Accepted: 2025-07-15
Online: 2025-08-01
Published: 2026-04-10
Contact: Xuebin CHEN
About author: JIANG Zhi, born in 2000, male, native of Qingdao, Shandong, M.S. candidate, CCF member. His research interests include federated learning and machine learning.
Zhi JIANG1, Xuebin CHEN1, Changyin LUO1,2, Ziye ZHEN1
Abstract:
To address data heterogeneity, gradients that easily fall into local optima, and high computation and communication overhead in federated learning, a hybrid training framework for the Kolmogorov-Arnold Network (KAN) — KB-GA-KAN — was proposed, combining key-edge selection, early-stopped genetic evolution, and local fine-tuning. First, key edges were selected dynamically on each client according to kernel-function magnitude and activation sensitivity, and only the kernel coefficients of these edges were evolved genetically, searching globally for good initial solutions. Then, an early-stopping criterion was introduced to combine the evolution with local Stochastic Gradient Descent (SGD) for collaborative optimization. Experimental results on five non-independent-and-identically-distributed (Non-IID) datasets show that, compared with a purely gradient-trained KAN, KB-GA-KAN improves test accuracy by 1.34% on average under the same number of communication rounds, reduces convergence rounds by 42%, and improves robustness in heterogeneous scenarios at a slight extra computational cost. Visualization of the kernel functions further confirms that KB-GA-KAN promotes model interpretability. Thus, KB-GA-KAN provides a new approach to efficient SGD-based KAN training under privacy constraints that balances accuracy, convergence speed, and computational cost.
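The pipeline the abstract describes — score edges by kernel magnitude and activation sensitivity, evolve only the selected edges' kernel coefficients with a GA, and stop early when fitness stalls — can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the edge-scoring rule, GA operators, population size, and toy fitness function are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_key_edges(coeffs, sensitivities, k):
    # Score each edge by total kernel-coefficient magnitude times its
    # activation sensitivity; keep the k highest-scoring edges.
    scores = np.abs(coeffs).sum(axis=1) * sensitivities
    return np.argsort(scores)[-k:]

def evolve_key_edges(coeffs, key_idx, fitness, pop_size=20, gens=30, eps=1e-4):
    # Toy GA that mutates ONLY the key-edge kernel coefficients and stops
    # early once the best fitness stops improving by at least eps.
    base = coeffs.copy()
    pop = [base[key_idx].copy()] + [
        base[key_idx] + 0.1 * rng.standard_normal(base[key_idx].shape)
        for _ in range(pop_size - 1)
    ]
    best, best_fit = pop[0], -np.inf
    for g in range(gens):
        fits = []
        for ind in pop:
            cand = base.copy()
            cand[key_idx] = ind
            fits.append(fitness(cand))
        order = np.argsort(fits)[::-1]
        if g > 0 and fits[order[0]] - best_fit < eps:
            break  # early stop: improvement fell below the threshold
        if fits[order[0]] > best_fit:
            best_fit, best = fits[order[0]], pop[order[0]]
        parents = [pop[i] for i in order[: pop_size // 2]]
        pop = parents + [p + 0.05 * rng.standard_normal(p.shape) for p in parents]
    out = base.copy()
    out[key_idx] = best
    return out, best_fit

# Hypothetical toy setup: 8 edges with 4 spline coefficients each; the
# fitness (negative squared norm) stands in for validation accuracy.
n_edges, n_coeff = 8, 4
coeffs = rng.standard_normal((n_edges, n_coeff))
sens = rng.random(n_edges)
key = select_key_edges(coeffs, sens, k=3)
fit = lambda c: -np.square(c).sum()
new_coeffs, best_fit = evolve_key_edges(coeffs, key, fit)
```

In the full framework the evolved coefficients would then seed local SGD fine-tuning; here the point is only that non-key edges are never touched by the GA, which is what shrinks the search space and the communicated payload.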
Zhi JIANG, Xuebin CHEN, Changyin LUO, Ziye ZHEN. Hybrid optimization framework for improving Kolmogorov-Arnold network in federated learning[J]. Journal of Computer Applications, 2026, 46(4): 1023-1033.
| Method | Fashion-MNIST | SVHN | CIFAR-10 |
|---|---|---|---|
| FedAvg | 87.84±0.46 | 86.21±0.68 | 66.93±0.21 |
| FedProx | 87.97±0.28 | 86.35±0.43 | 67.01±0.03 |
| SCAFFOLD | 88.06±0.06 | 86.42±0.59 | 67.53±0.70 |
| Fedl1 | 88.31±0.33 | 86.43±0.61 | 67.20±0.73 |
| FedDANE | 72.81±1.26 | 67.69±2.31 | 62.27±3.23 |
| MOON | 87.84±0.46 | 84.03±0.72 | 69.35±0.11 |
| KB-GA-KAN | 88.34±0.13 | 87.02±0.54 | 69.71±0.15 |
Tab. 1 Accuracies of different methods on different datasets (%)
| Method | MNIST | | | | CIFAR-100 | | | |
|---|---|---|---|---|---|---|---|---|
| FedAvg | 97.67±0.23 | 97.80±0.08 | 98.45±0.06 | 98.58±0.10 | 25.17±0.49 | 25.82±0.61 | 25.97±0.66 | 26.07±0.58 |
| FedProx | 97.77±0.22 | 98.04±0.22 | 98.62±0.11 | 98.70±0.09 | 25.20±0.48 | 25.78±0.60 | 26.06±0.59 | 26.12±0.76 |
| SCAFFOLD | 97.53±0.67 | 97.69±0.60 | 98.56±0.13 | 98.70±0.09 | 24.68±0.25 | 24.98±0.52 | 25.67±0.84 | 26.12±0.76 |
| Fedl1 | 97.73±0.10 | 97.86±0.01 | 98.52±0.11 | 98.64±0.08 | 26.06±0.11 | 26.31±0.55 | 26.40±0.14 | 26.63±0.16 |
| FedDANE | 96.49±1.96 | 96.92±0.83 | 97.13±0.54 | 97.91±0.13 | 22.32±0.69 | 22.43±0.25 | 22.79±0.38 | 22.93±0.23 |
| MOON | 97.41±0.23 | 97.75±0.46 | 98.58±0.09 | 98.69±0.05 | 24.65±0.53 | 25.72±0.67 | 25.81±0.81 | 26.24±0.64 |
| KB-GA-KAN | 97.79±0.18 | 97.91±0.39 | 98.79±0.08 | 98.85±0.07 | 26.57±0.52 | 26.83±0.25 | 26.09±0.14 | 27.02±0.09 |
Tab. 2 Accuracies of different methods with different β on two datasets (%)
| Method | Fashion-MNIST | | SVHN | | CIFAR-10 | |
|---|---|---|---|---|---|---|
| | Rounds | Speedup | Rounds | Speedup | Rounds | Speedup |
| FedAvg | 50 | 1.00× | 50 | 1.00× | 50 | 1.00× |
| FedProx | 49 | 1.02× | 48 | 1.04× | 49 | 1.02× |
| SCAFFOLD | 47 | 1.06× | 47 | 1.06× | 45 | 1.11× |
| Fedl1 | 48 | 1.04× | 44 | 1.13× | 47 | 1.06× |
| FedDANE | 52 | 0.96× | 64 | 0.78× | 55 | 0.91× |
| MOON | 36 | 1.38× | 38 | 1.31× | 16 | 3.13× |
| KB-GA-KAN | 17 | 2.94× | 21 | 2.38× | 13 | 3.84× |
Tab. 3 Communication efficiencies of different methods
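The speedup column in Tab. 3 appears to be the FedAvg baseline round count (50) divided by each method's rounds to the same target accuracy. A quick cross-check of the KB-GA-KAN entries (tolerating two-decimal rounding):

```python
# Tab. 3 cross-check: speedup ≈ baseline rounds / method rounds.
baseline = 50
kb_ga_kan_rounds = {"Fashion-MNIST": 17, "SVHN": 21, "CIFAR-10": 13}
speedups = {d: baseline / r for d, r in kb_ga_kan_rounds.items()}
```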
| Dataset | Method | E=10, B=32 | | E=10, B=128 | | E=5, B=32 | |
|---|---|---|---|---|---|---|---|
| | | 20% clients | 50% clients | 20% clients | 50% clients | 20% clients | 50% clients |
| MNIST | FedAvg | 50 (1×) | 50 (1×) | 50 (1×) | 50 (1×) | 50 (1×) | 50 (1×) |
| | FedProx | 49 | 37 | 48 | 39 | 55 | 46 |
| | SCAFFOLD | 47 | 29 | 47 | 30 | 49 | 34 |
| | Fedl1 | 43 | 27 | 45 | 31 | 47 | 33 |
| | FedDANE | 41 | 34 | 45 | 36 | 45 | 36 |
| | MOON | 36 | 29 | 40 | 30 | 41 | 37 |
| | KB-GA-KAN | 20 | 13 | 25 | 17 | 22 | 15 |
| CIFAR-100 | FedAvg | 50 (1×) | 50 (1×) | 50 (1×) | 50 (1×) | 50 (1×) | 50 (1×) |
| | FedProx | 49 | 39 | 49 | 44 | 49 | 48 |
| | SCAFFOLD | 46 | 40 | 45 | 33 | 44 | 29 |
| | Fedl1 | 48 | 30 | 44 | 31 | 42 | 39 |
| | FedDANE | 42 | 36 | 47 | 40 | 39 | 35 |
| | MOON | 39 | 22 | 43 | 39 | 40 | 27 |
| | KB-GA-KAN | 23 | 14 | 28 | 24 | 25 | 16 |
Tab. 4 Number of communication rounds required to achieve target accuracy on two datasets
| Early-stopping threshold | Accuracy/% | Time cost/s |
|---|---|---|
| 10⁻² | 86.1 | 12.1 |
| 10⁻³ | 86.7 | 13.6 |
| 10⁻⁴ | 87.2 | 15.3 |
| 10⁻⁵ | 87.7 | 17.1 |
| 10⁻⁶ | 88.1 | 19.7 |
| 10⁻⁷ | 88.3 | 23.4 |
| 10⁻⁸ | 88.4 | 29.6 |
Tab. 5 Model accuracies and time costs under different early stopping thresholds
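The accuracy/time trade-off in Tab. 5 follows directly from the stopping rule: a looser threshold halts the evolutionary phase sooner, saving time at a small accuracy cost. A sketch with a synthetic saturating fitness curve (the curve and the exact threshold semantics are assumptions, not taken from the paper):

```python
def stop_generation(trace, eps):
    # Return the first generation whose fitness improvement over the
    # previous generation falls below the early-stopping threshold eps.
    for t in range(1, len(trace)):
        if trace[t] - trace[t - 1] < eps:
            return t
    return len(trace) - 1

# Toy fitness curve saturating toward 1: the improvement at step t is 0.5**t,
# so tighter thresholds keep the evolution running for more generations.
trace = [1 - 0.5 ** t for t in range(40)]
loose = stop_generation(trace, 1e-2)
tight = stop_generation(trace, 1e-6)
```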
| Method | KAN-Lite (light) | KAN-Base (medium) | KAN-Deep (heavy) |
|---|---|---|---|
| FedAvg/Prox/MOON | 51.2 | 78.4 | 102.4 |
| Fedl1 | 50.1 | 71.6 | 91.5 |
| SCAFFOLD | 76.8 | 96.4 | 142.0 |
| FedDANE | 153.6 | 233.6 | 293.8 |
| KB-GA-KAN | 29.6 | 45.1 | 60.2 |
Tab. 6 Communication data sizes of different methods (GB)
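One plausible reading of Tab. 6 is that KB-GA-KAN's smaller payload comes from transmitting only the selected key edges' kernel coefficients during the evolutionary phase, rather than the full parameter set every round. A back-of-envelope model (edge counts, coefficients per edge, round count, and the float32 encoding are all hypothetical):

```python
def upload_gb(n_edges, coeffs_per_edge, rounds, edge_fraction=1.0,
              bytes_per_coeff=4):
    # Total upload volume if each round transmits the kernel coefficients
    # of a fraction of the edges as float32 values (hypothetical numbers).
    return n_edges * edge_fraction * coeffs_per_edge * bytes_per_coeff * rounds / 1e9

full = upload_gb(1_000_000, 8, 100)                       # all edges
partial = upload_gb(1_000_000, 8, 100, edge_fraction=0.3) # key edges only
```

Under this model the payload scales linearly with the key-edge fraction, which is consistent with (though not proof of) the roughly 0.6× ratio between KB-GA-KAN and the FedAvg row.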
| Method | Key-edge selection | GA optimization scope | Early stopping | Convergence rounds | Accuracy/% | Time cost/s |
|---|---|---|---|---|---|---|
| BP-KAN | No | None (GA not used) | No | 48 | 86.42±0.29 | 13.70 |
| Full-parameter GA-KAN | No | All parameters | No | 56 | 87.71±0.08 | 35.90 |
| KB-GA-KAN w/o early stopping | Yes | Key edges | No | 23 | 87.78±0.27 | 22.95 |
| KB-GA-KAN | Yes | Key edges | Yes | 23 | 87.75±0.13 | 15.30 |
Tab. 7 Ablation experimental results of KB-GA-KAN core modules