Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (8): 2387-2398. DOI: 10.11772/j.issn.1001-9081.2024081119
• 2024 National Annual Conference on Open Distributed and Parallel Computing (DPCS 2024) •
Review of research on efficiency of federated learning
Lina GE 1,2,3, Mingyu WANG 1,3, Lei TIAN 1,3
Received: 2024-08-09
Revised: 2024-08-23
Accepted: 2024-09-02
Online: 2024-09-12
Published: 2025-08-10
Contact: Lina GE
About author: WANG Mingyu, born in 1999, M. S. candidate, CCF member. His research interests include federated learning and information security.
Supported by:
Abstract: As a distributed machine learning framework, federated learning solves the data-silo problem and plays an important role in protecting the privacy of individuals and enterprises. However, owing to the characteristics of federated learning, efficiency problems, especially the high costs involved, remain urgent and are still not addressed satisfactorily. Therefore, the current mainstream research on the efficiency of federated learning was surveyed and summarized comprehensively. First, the background of efficient federated learning was reviewed, including its origin and core ideas, and the concept and classification of federated learning were explained. Second, the efficiency problems arising in federated learning were discussed and divided into heterogeneity problems, personalization problems, and communication cost problems. Third, on this basis, the solutions to these efficiency problems were analyzed and discussed in detail, and the research on efficient federated learning was surveyed in two categories: model compression optimization methods and communication optimization methods. Then, the advantages and disadvantages of these federated learning methods were summarized through comparative analysis, and the challenges that remain in efficient federated learning were described. Finally, future research directions in the field of efficient federated learning were given.
CLC number:
Lina GE, Mingyu WANG, Lei TIAN. Review of research on efficiency of federated learning[J]. Journal of Computer Applications, 2025, 45(8): 2387-2398.
Tab. 1 Summary of relevant review literature and differences between them and starting points of this paper

| Literature | Main content | Differences in this paper |
| --- | --- | --- |
| Ref. [ | Proposes and classifies the challenges and solutions to communication problems in federated learning, but discusses only sparsification and quantization among the compression schemes | Compared with that survey, this paper discusses in detail how knowledge distillation, pruning, and quantization are applied in compression optimization for federated learning, and presents more related work |
| Ref. [ | Focuses on optimizing the efficiency of client selection techniques in federated learning | Compared with that survey, this paper introduces efficiency optimization techniques for federated learning more comprehensively, discusses the current challenges to efficiency, divides them into heterogeneity, personalization, and communication cost, and analyzes the solutions to these efficiency problems in detail |
| Ref. [ | Similar to this paper, comprehensively discusses efficiency problems in federated learning and their solutions, divided into three areas: communication efficiency, communication environment, and communication resource allocation | Compared with that survey, this paper classifies the efficiency optimization methods of federated learning from different perspectives, and presents four challenges as well as a more detailed outlook on future trends |
Tab. 2 Comparison of compression optimization methods in federated learning

| Compression optimization method | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Knowledge distillation | Trains a student model with knowledge distilled from an existing teacher model | Speeds up training, improves model performance, and enables transfer learning | Teacher-student model mismatch or differences in model depth make the method hard to apply |
| Pruning | Cuts the weights and branches of a neural network that have little or no influence on the output, avoiding redundant exchange of model parameters so that the pruned model occupies less memory and runs faster | Reduces model size, simplifies model structure, and saves computing resources; removing redundant parameters also improves generalization and reduces overfitting | Improper pruning may degrade model accuracy, and the pruning strategy and its parameters must be chosen and tuned manually |
| Quantization | Reduces storage, computation, and communication overhead by lowering the precision of data representation | Shrinks the model through compressed storage and performs well on hardware that supports low-precision arithmetic | Loses precision, complicates parameter tuning after quantization, and some models are incompatible with quantization techniques |
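To make the knowledge distillation row above concrete, the following is a minimal NumPy sketch of the classic soft-target objective of Hinton et al. [44]: the student is trained against temperature-softened teacher outputs plus the ordinary hard-label loss. The temperature `t` and mixing weight `alpha` are illustrative values, not settings taken from any surveyed scheme.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax; a larger t gives softer distributions."""
    z = z / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, t=3.0, alpha=0.7):
    """Weighted sum of a soft KL term (teacher to student) and hard cross-entropy."""
    p_t = softmax(teacher_logits, t)                   # teacher's soft targets
    p_s = softmax(student_logits, t)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    p_hard = softmax(student_logits)                   # normal-temperature output
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    # The t^2 factor keeps the gradient scale of the soft term comparable.
    return float(np.mean(alpha * t * t * kl + (1.0 - alpha) * ce))
```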
Tab. 3 Performance comparison of efficient federated learning optimization schemes based on knowledge distillation technology

| Distillation technique | Advantages | Disadvantages |
| --- | --- | --- |
| Co-distillation [ | Good performance, generalization, and robustness | Long training time, difficult parameter tuning, over-reliance on model quality |
| Ensemble distillation [ | Performs well on larger-scale tasks | Long training time, requires deploying many devices, relatively difficult parameter tuning |
| Adaptive distillation [ | Distillation parameters can be adjusted dynamically; strong adaptability | High complexity, difficult parameter tuning |
| Data-free distillation [ | Suitable for resource-constrained environments or environments without task data | The target task should have some relevance to the knowledge the teacher model holds about other tasks, otherwise the effect may be limited |
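Among the variants in Tab. 3, ensemble distillation treats the clients as an ensemble of teachers whose predictions on a shared proxy batch supervise the server model, as in the model-fusion line of work [53]. The snippet below is a minimal sketch of the fusion step only; weighting teachers by local dataset size is an assumption made here for illustration.

```python
import numpy as np

def ensemble_soft_targets(client_logits, client_sizes, t=3.0):
    """Fuse per-client logits on a shared proxy batch into one soft target.

    client_logits: list of (batch, classes) arrays, one per client teacher.
    client_sizes:  local dataset sizes used as fusion weights (an assumption).
    """
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    fused = np.tensordot(w, np.stack(client_logits), axes=1) / t
    e = np.exp(fused - fused.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)   # soft targets for the server model
```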
Tab. 4 Comparison of structured and unstructured pruning technologies in federated learning

| Technique | Combined technique | Description | Performance analysis |
| --- | --- | --- | --- |
| Unstructured pruning | Random model pruning [ | In each iteration, the server uses Dropout to independently generate several subnetworks from the global model with heterogeneous dropout rates; each subnetwork is adapted to the state of its designated channel and downloaded to the associated device for updating | Reduces communication overhead and the computation load of devices, and outperforms federated learning schemes that use identical subnetworks, especially under overfitting |
| | Wireless federated pruning [ | Introduces model pruning into wireless federated learning, removing stragglers with low computing power or poor channel conditions to reduce the size of the neural network | Maximizes the convergence rate under a given learning-latency budget, enhances local computing capability, and substantially reduces learning latency |
| | Joint optimization of pruning ratio and spectrum allocation [ | Mathematically analyzes the convergence rate and learning latency of a federated learning system and formulates an optimization problem that jointly optimizes the pruning ratio and spectrum allocation | Maximizes the convergence rate while guaranteeing the learning latency, improving the performance of wireless federated learning |
| Structured pruning | Dropout [ | Lets users train locally and efficiently on a smaller subset of the global model while reducing client-to-server communication and local computation | Reduces communicated data and local computation without degrading final model quality, allowing higher-capacity models to be transmitted |
| | Static batch normalization [ | Addresses the performance problem of heterogeneous clients with different computing and communication capabilities | Introduces no extra computation overhead, adapts easily to existing applications, and is robust to dynamically changing model complexity |
| | Elastic model shrinking [ | Designs model shrinking to support local training with elastic computation cost, gradient compression to allow parameter transfer with dynamic communication overhead, and element-wise enhanced parameter aggregation | Substantially reduces training latency and energy consumption, and significantly improves the converged global accuracy |
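The practical difference between the two families in Tab. 4 shows up directly in code: unstructured pruning zeroes individual low-magnitude weights and keeps the tensor shape, while structured pruning removes whole rows (neurons or channels) and actually shrinks the tensor, which is what makes it hardware-friendly. The magnitude criterion below is a common illustrative choice, not the specific rule of any scheme in the table.

```python
import numpy as np

def unstructured_prune(w, sparsity=0.8):
    """Zero the smallest-magnitude individual weights; the shape is preserved."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > threshold, w, 0.0)

def structured_prune(w, keep_ratio=0.5):
    """Drop whole output rows (neurons) with the smallest L2 norms; the tensor
    shrinks, so downstream layers must be re-indexed accordingly."""
    norms = np.linalg.norm(w, axis=1)
    n_keep = max(1, int(keep_ratio * w.shape[0]))
    kept_rows = np.sort(np.argsort(norms)[-n_keep:])
    return w[kept_rows], kept_rows
```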
Tab. 5 Description and performance analysis of quantization technologies in federated learning

| Technique | Description | Performance analysis |
| --- | --- | --- |
| Gradient quantization [ | Reduces the communication overhead of gradient transfer by lowering the representation precision of gradients | Addresses the communication and scalability challenges in federated learning and achieves higher compression ratios |
| Low-variance quantization [ | Reduces the variance of stochastic gradients, thereby lowering the error lower bound | Converges within fewer communicated bits and can exploit the heterogeneity of local data distributions to avoid unnecessary transmissions |
| Adaptive quantization [ | Makes compression more adaptable in federated learning through several quantization variants | Adaptively adjusts the quantized gradients and remains robust in wireless network environments |
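A building block shared by several entries in Tab. 5 is stochastic uniform quantization: each coordinate is rounded up or down at random so that the quantized gradient is an unbiased estimate of the original, trading variance for fewer bits on the wire. The max-norm scaling below is one simple choice of scale, assumed here for illustration; real schemes differ in how levels and scales are chosen.

```python
import numpy as np

def stochastic_quantize(grad, bits=4, rng=None):
    """Unbiased low-bit quantization: E[dequantized] equals the input gradient."""
    rng = rng or np.random.default_rng()
    levels = 2 ** bits - 1
    scale = np.max(np.abs(grad))
    if scale == 0:
        return grad.copy()
    x = np.abs(grad) / scale * levels          # map magnitudes to [0, levels]
    low = np.floor(x)
    # Round up with probability equal to the fractional part.
    q = low + (rng.random(grad.shape) < (x - low))
    return np.sign(grad) * q / levels * scale  # dequantized, unbiased estimate
```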
Tab. 6 Comparison of communication optimization methods in federated learning

| Communication optimization method | Description | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Client selection | Determines which devices participate in training and how to use them effectively during training | Dynamically adjusting the participating devices adapts to changes in device state and performance, improving training results | Requires communicating with every device to obtain performance metrics or other information, which adds communication overhead; randomness may cause some devices to be selected frequently while others are ignored |
| Selective updating | Adaptively screens parameter updates and retrains the parameters that were filtered out | Reduces the number of parameters to transmit and uses computing resources more effectively, cutting communication overhead, especially when bandwidth is limited | May ignore the contributions of some devices and thus bias the model, especially under imbalanced data distributions |
| One-shot federated learning | Each local model is updated only once, and the updates are then merged on the central server to form the global model | Lowers communication overhead when bandwidth is limited or communication is expensive, and updates the global model faster | A single round of training may not be enough for the model to reach the best global performance |
| Communication link optimization | Reduces the communication overhead among devices and between devices and the central server, improving the efficiency of the whole federated learning system | Helps federated learning adapt to different network conditions, improves communication efficiency, and reduces latency | Some link optimizations may require high-performance hardware, which not all devices can provide |
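Client selection and aggregation sit inside every communication round, so a minimal sketch of one FedAvg-style round [9] helps fix ideas. Uniform random sampling stands in for the smarter selection policies discussed above, and the client callable is a hypothetical interface, not an API from any surveyed system.

```python
import numpy as np

def fedavg_round(global_w, clients, frac=0.1, rng=None):
    """One communication round: sample clients, train locally, aggregate.

    clients: list of callables; each maps the global weights to
             (updated local weights, local sample count) -- a hypothetical API.
    """
    rng = rng or np.random.default_rng()
    m = max(1, int(frac * len(clients)))
    chosen = rng.choice(len(clients), size=m, replace=False)
    updates, sizes = zip(*(clients[i](global_w) for i in chosen))
    w = np.asarray(sizes, dtype=float)
    w /= w.sum()                                   # weight by local data size
    return sum(wi * ui for wi, ui in zip(w, updates))
```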
References

[1] LIU M, QI M J, ZHAN Z Y, et al. A survey on deep learning based image-text matching[J]. Chinese Journal of Computers, 2023, 46(11): 2370-2399.
[2] LI X, LI M, YAN P, et al. Deep learning attention mechanism in medical image analysis: basics and beyonds[J]. International Journal of Network Dynamics and Intelligence, 2023, 2(1): 93-116.
[3] CHEN W, WU A N, BILJECKI F. Classification of urban morphology with deep learning: application on urban vitality[J]. Computers, Environment and Urban Systems, 2021, 90: No.101706.
[4] GE L, LI H, WANG X, et al. A review of secure federated learning: privacy leakage threats, protection technologies, challenges and future directions[J]. Neurocomputing, 2023, 561: No.126897.
[5] MU X T, CHENG K, SONG A X, et al. Privacy-preserving federated learning resistant to Byzantine attack[J]. Chinese Journal of Computers, 2024, 47(4): 842-861.
[6] ZHANG Z, LIU Q, HUANG Z, et al. Model inversion attacks against graph neural networks[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(9): 8729-8741.
[7] LI Q, DIAO Y, CHEN Q, et al. Federated learning on non-IID data silos: an experimental study[C]// Proceedings of the 38th IEEE International Conference on Data Engineering. Piscataway: IEEE, 2022: 965-978.
[8] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance)[EB/OL]. [2024-06-14].
[9] McMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]// Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2017: 1273-1282.
[10] LI T, SAHU A K, TALWALKAR A, et al. Federated learning: challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50-60.
[11] GAFNI T, SHLEZINGER N, COHEN K, et al. Federated learning: a signal processing perspective[J]. IEEE Signal Processing Magazine, 2022, 39(3): 14-41.
[12] SONG L Y, LIU Z Z, ZHANG Y, et al. Cascade graph convolution network based on multi-level graph structures in heterogeneous graph[J]. Journal of Software, 2024, 35(11): 5179-5195.
[13] MARTÍNEZ BELTRÁN E T, PÉREZ M Q, SÁNCHEZ P M S, et al. Decentralized federated learning: fundamentals, state of the art, frameworks, trends, and challenges[J]. IEEE Communications Surveys and Tutorials, 2023, 25(4): 2983-3013.
[14] ZHANG C, XIE Y, BAI H, et al. A survey on federated learning[J]. Knowledge-Based Systems, 2021, 216: No.106775.
[15] SHAHID O, POURIYEH S, PARIZI R M, et al. Communication efficiency in federated learning: achievements and challenges[EB/OL]. [2024-06-14].
[16] JIANG Z, WANG W, LI B, et al. Towards efficient synchronous federated training: a survey on system optimization strategies[J]. IEEE Transactions on Big Data, 2023, 9(2): 437-454.
[17] ZHAO Z, MAO Y, LIU Y, et al. Towards efficient communications in federated learning: a contemporary survey[J]. Journal of the Franklin Institute, 2023, 360(12): 8669-8703.
[18] YANG Q, LIU Y, CHEN T, et al. Federated machine learning: concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): No.12.
[19] McMAHAN B, RAMAGE D. Federated learning: collaborative machine learning without centralized training data[EB/OL]. [2024-06-14].
[20] ZHU H, ZHANG H, JIN Y. From federated learning to federated neural architecture search: a survey[J]. Complex and Intelligent Systems, 2021, 7(2): 639-657.
[21] ZHANG J, GUO S, QU Z, et al. Adaptive vertical federated learning on unbalanced features[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(12): 4006-4018.
[22] FENG S, LI B, YU H, et al. Semi-supervised federated heterogeneous transfer learning[J]. Knowledge-Based Systems, 2022, 252: No.109384.
[23] BERGHOUT T, BENTRCIA T, FERRAG M A, et al. A heterogeneous federated transfer learning approach with extreme aggregation and speed[J]. Mathematics, 2022, 10(19): No.3528.
[24] LIU Y, KANG Y, XING C, et al. A secure federated transfer learning framework[J]. IEEE Intelligent Systems, 2020, 35(4): 70-82.
[25] YE M, FANG X, DU B, et al. Heterogeneous federated learning: state-of-the-art and research challenges[J]. ACM Computing Surveys, 2024, 56(3): No.79.
[26] LUO B, XIAO W, WANG S, et al. Tackling system and statistical heterogeneity for federated learning with adaptive client sampling[C]// Proceedings of the 2022 IEEE Conference on Computer Communications. Piscataway: IEEE, 2022: 1739-1748.
[27] LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks[EB/OL]. [2024-06-14].
[28] HUANG W, YE M, DU B. Learn from others and be yourself in heterogeneous federated learning[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 10133-10143.
[29] CHEN C, LIU Y, MA X, et al. CaLFAT: calibrated federated adversarial training with label skewness[C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 3569-3581.
[30] KAIROUZ P, McMAHAN H B, AVENT B, et al. Advances and open problems in federated learning[J]. Foundations and Trends in Machine Learning, 2021, 14(1/2): 1-210.
[31] ZHOU T, ZHANG J, TSANG D H K. FedFA: federated learning with feature anchors to align features and classifiers for heterogeneous data[J]. IEEE Transactions on Mobile Computing, 2024, 23(6): 6731-6742.
[32] YANG S, PARK H, BYUN J, et al. Robust federated learning with noisy labels[J]. IEEE Intelligent Systems, 2022, 37(2): 35-43.
[33] ZHANG J, LI Z, LI B, et al. Federated learning with label distribution skew via logits calibration[C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 26311-26329.
[34] TAN A Z, YU H, CUI L, et al. Towards personalized federated learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(12): 9587-9603.
[35] CHEN D, YAO L, GAO D, et al. Efficient personalized federated learning via sparse model-adaptation[C]// Proceedings of the 40th International Conference on Machine Learning. New York: JMLR.org, 2023: 5234-5256.
[36] WANG Y, LIN L, CHEN J. Communication-efficient adaptive federated learning[C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 22802-22838.
[37] CHEN M, SHLEZINGER N, POOR H V, et al. Communication-efficient federated learning[J]. Proceedings of the National Academy of Sciences of the United States of America, 2021, 118(17): No.e2024789118.
[38] SAFARYAN M, SHULGIN E, RICHTÁRIK P. Uncertainty principle for communication compression in distributed and federated learning and the search for an optimal compressor[J]. Information and Inference: A Journal of the IMA, 2022, 11(2): 557-580.
[39] YE H, LIANG L, LI G Y. Decentralized federated learning with unreliable communications[J]. IEEE Journal of Selected Topics in Signal Processing, 2022, 16(3): 487-500.
[40] HE C, ANNAVARAM M, AVESTIMEHR S. Group knowledge transfer: federated learning of large CNNs at the edge[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 14068-14080.
[41] LI C, LI G, VARSHNEY P K. Communication-efficient federated learning based on compressed sensing[J]. IEEE Internet of Things Journal, 2021, 8(20): 15531-15541.
[42] SHAH S M, LAU V K N. Model compression for communication efficient federated learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(9): 5937-5951.
[43] HADDADPOUR F, KAMANI M M, MOKHTARI A, et al. Federated learning with compression: unified analysis and sharp guarantees[C]// Proceedings of the 24th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2021: 2350-2358.
[44] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2024-06-14].
[45] JEONG E, OH S, KIM H, et al. Communication-efficient on-device machine learning: federated distillation and augmentation under non-IID private data[EB/OL]. [2024-06-14].
[46] ANIL R, PEREYRA G, PASSOS A, et al. Large scale distributed neural network training through online distillation[EB/OL]. [2024-06-14].
[47] WU C, WU F, LYU L, et al. Communication-efficient federated learning via knowledge distillation[J]. Nature Communications, 2022, 13: No.2032.
[48] HAO W, EL-KHAMY M, LEE J, et al. Towards fair federated learning with zero-shot data augmentation[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2021: 3305-3314.
[49] NI X, SHEN X, ZHAO H. Federated optimization via knowledge codistillation[J]. Expert Systems with Applications, 2022, 191: No.116310.
[50] CHEN Z, YANG H H, QUEK T Q S, et al. Spectral co-distillation for personalized federated learning[C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2023: 8757-8773.
[51] LICHTARGE J, AMID E, KUMAR S, et al. Heterogeneous federated learning using knowledge codistillation[EB/OL]. [2024-06-14].
[52] CHEN H Y, CHAO W L. FedBE: making Bayesian model ensemble applicable to federated learning[EB/OL]. [2024-06-14].
[53] LIN T, KONG L, STICH S U, et al. Ensemble distillation for robust model fusion in federated learning[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 2351-2363.
[54] GONG X, SHARMA A, KARANAM S, et al. Preserving privacy in federated learning with ensemble cross-domain knowledge distillation[C]// Proceedings of the 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 11891-11899.
[55] GONG X, SONG L, VEDULA R, et al. Federated learning with privacy-preserving ensemble attention distillation[J]. IEEE Transactions on Medical Imaging, 2022, 42(7): 2057-2067.
[56] SUI D, CHEN Y, ZHAO J, et al. FedED: federated learning via ensemble distillation for medical relation extraction[C]// Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2020: 2118-2128.
[57] ILHAN F, SU G, LIU L. ScaleFL: resource-adaptive federated learning with heterogeneous clients[C]// Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 24532-24541.
[58] YU T, BAGDASARYAN E, SHMATIKOV V. Salvaging federated learning by local adaptation[EB/OL]. [2024-06-14].
[59] TANG J, DING X, HU D, et al. FedRAD: heterogeneous federated learning via relational adaptive distillation[J]. Sensors, 2023, 23(14): No.6518.
[60] WU Z, SUN S, WANG Y, et al. Exploring the distributed knowledge congruence in proxy-data-free federated distillation[J]. ACM Transactions on Intelligent Systems and Technology, 2024, 15(2): No.28.
[61] ZHU Z, HONG J, ZHOU J. Data-free knowledge distillation for heterogeneous federated learning[C]// Proceedings of the 38th International Conference on Machine Learning. New York: JMLR.org, 2021: 12878-12889.
[62] ZHANG L, SHEN L, DING L, et al. Fine-tuning global model via data-free knowledge distillation for non-IID federated learning[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 10164-10173.
[63] LUO K, WANG S, FU Y, et al. DFRD: data-free robustness distillation for heterogeneous federated learning[C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2023: 17854-17866.
[64] CHEN Y, LU W, QIN X, et al. MetaFed: federated learning among federations with cyclic knowledge distillation for personalized healthcare[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(11): 16671-16682.
[65] GONG X, SHARMA A, KARANAM S, et al. Ensemble attention distillation for privacy-preserving federated learning[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 15056-15066.
[66] LEE G, JEONG M, SHIN Y, et al. Preservation of the global knowledge by not-true distillation in federated learning[C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 38461-38474.
[67] REED R. Pruning algorithms-a survey[J]. IEEE Transactions on Neural Networks, 1993, 4(5): 740-747.
[68] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// Proceedings of the 25th International Conference on Neural Information Processing Systems, Volume 1. Red Hook: Curran Associates Inc., 2012: 1097-1105.
[69] CHEN Z, YI W, SHIN H, et al. Adaptive model pruning for communication and computation efficient wireless federated learning[J]. IEEE Transactions on Wireless Communications, 2024, 23(7): 7582-7598.
[70] WEN D, JEON K J, HUANG K. Federated dropout-a simple approach for enabling federated learning on resource constrained devices[J]. IEEE Wireless Communications Letters, 2022, 11(5): 923-927.
[71] LIN R, XIAO Y, YANG T J, et al. Federated pruning: improving neural network efficiency with federated learning[C]// Proceedings of the INTERSPEECH 2022. [S.l.]: International Speech Communication Association, 2022: 1701-1705.
[72] JIANG Y, WANG S, VALLS V, et al. Model pruning enables efficient federated learning on edge devices[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(12): 10374-10386.
[73] XU W, FANG W, DING Y, et al. Accelerating federated learning for IoT in big data analytics with pruning, quantization and selective updating[J]. IEEE Access, 2021, 9: 38457-38466.
[74] VAHIDIAN S, MORAFAH M, LIN B. Personalized federated learning by structured and unstructured pruning under data heterogeneity[C]// Proceedings of the IEEE 41st International Conference on Distributed Computing Systems Workshops. Piscataway: IEEE, 2021: 27-34.
[75] LIU S, YU G, YIN R, et al. Joint model pruning and device selection for communication-efficient federated edge learning[J]. IEEE Transactions on Communications, 2022, 70(1): 231-244.
[76] LIU S, YU G, YIN R, et al. Adaptive network pruning for wireless federated learning[J]. IEEE Wireless Communications Letters, 2021, 10(7): 1572-1576.
[77] CALDAS S, KONEČNÝ J, McMAHAN H B, et al. Expanding the reach of federated learning by reducing client resource requirements[EB/OL]. [2024-06-14].
[78] DIAO E, DING J, TAROKH V. HeteroFL: computation and communication efficient federated learning for heterogeneous clients[EB/OL]. [2024-06-14].
[79] LI P, CHENG G, HUANG X, et al. AnycostFL: efficient on-demand federated learning over heterogeneous edge devices[C]// Proceedings of the 2023 IEEE Conference on Computer Communications. Piscataway: IEEE, 2023: 1-10.
[80] LIU L, ZHANG J, SONG S, et al. Hierarchical federated learning with quantization: convergence analysis and system design[J]. IEEE Transactions on Wireless Communications, 2023, 22(1): 2-18.
[81] REISIZADEH A, MOKHTARI A, HASSANI H, et al. FedPAQ: a communication-efficient federated learning method with periodic averaging and quantization[C]// Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2020: 2021-2031.
[82] OH Y, LEE N, JEON Y S, et al. Communication-efficient federated learning via quantized compressed sensing[J]. IEEE Transactions on Wireless Communications, 2023, 22(2): 1087-1100.
[83] JHUNJHUNWALA D, GADHIKAR A, JOSHI G, et al. Adaptive quantization of model updates for communication-efficient federated learning[C]// Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2021: 3110-3114.
[84] MAO Y, ZHAO Z, YAN G, et al. Communication-efficient federated learning with adaptive quantization[J]. ACM Transactions on Intelligent Systems and Technology, 2022, 13(4): No.67.
[85] HÖNIG R, ZHAO Y, MULLINS R. DAdaQuant: doubly-adaptive quantization for communication-efficient federated learning[C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 8852-8866.
[86] CHEN R, LI L, XUE K, et al. Energy efficient federated learning over heterogeneous mobile devices via joint design of weight quantization and wireless transmission[J]. IEEE Transactions on Mobile Computing, 2023, 22(12): 7451-7465.
[87] ZHANG W, YANG D, WU W, et al. Optimizing federated learning in distributed industrial IoT: a multi-agent approach[J]. IEEE Journal on Selected Areas in Communications, 2021, 39(12): 3688-3703.
[88] FU L, ZHANG H, GAO G, et al. Client selection in federated learning: principles, challenges, and opportunities[J]. IEEE Internet of Things Journal, 2023, 10(24): 21811-21819.
[89] NISHIO T, YONETANI R. Client selection for federated learning with heterogeneous resources in mobile edge[C]// Proceedings of the 2019 IEEE International Conference on Communications. Piscataway: IEEE, 2019: 1-7.
[90] ZHANG H, XIE Z, ZAREI R, et al. Adaptive client selection in resource constrained federated learning systems: a deep reinforcement learning approach[J]. IEEE Access, 2021, 9: 98423-98432.
[91] LI Z, HE Y, YU H, et al. Data heterogeneity-robust federated learning via group client selection in industrial IoT[J]. IEEE Internet of Things Journal, 2022, 9(18): 17844-17857.
[92] WANG L, WANG W, LI B. CMFL: mitigating communication overhead for federated learning[C]// Proceedings of the IEEE 39th International Conference on Distributed Computing Systems. Piscataway: IEEE, 2019: 954-964.
[93] TAO Z, LI Q. eSGD: communication efficient distributed deep learning on the edge[C]// Proceedings of the 2018 USENIX Workshop on Hot Topics in Edge Computing. Berkeley: USENIX Association, 2018: 1-6.
[94] HERZOG A, SOUTHAM R, BELARBI O, et al. Selective updates and adaptive masking for communication-efficient federated learning[J]. IEEE Transactions on Green Communications and Networking, 2024, 8(2): 852-864.
[95] GUHA N, TALWALKAR A, SMITH V. One-shot federated learning[EB/OL]. [2024-06-14].
[96] ZHOU Y, PU G, MA X, et al. Distilled one-shot federated learning[EB/OL]. [2024-06-14].
[97] ZHANG J, CHEN C, LI B, et al. DENSE: data-free one-shot federated learning[C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 21414-21428.
[98] ZHANG X, ZHU X, WANG J, et al. Federated learning with adaptive communication compression under dynamic bandwidth and unreliable networks[J]. Information Sciences, 2020, 540: 242-262.
[99] SALEHI M, HOSSAIN E. Federated learning in unreliable and resource-constrained cellular wireless networks[J]. IEEE Transactions on Communications, 2021, 69(8): 5136-5151.
[100] MAO Y, ZHAO Z, YANG M, et al. SAFARI: sparsity-enabled federated learning with limited and unreliable communications[J]. IEEE Transactions on Mobile Computing, 2024, 23(5): 4819-4831.
[101] LIU W, CHEN L, ZHANG W. Decentralized federated learning: balancing communication and computing costs[J]. IEEE Transactions on Signal and Information Processing over Networks, 2022, 8: 131-143.
[102] YU R, LI P. Toward resource-efficient federated learning in mobile edge computing[J]. IEEE Network, 2021, 35(1): 148-155.
[103] LI J, RAKIN A S, CHEN X, et al. ResSFL: a resistance transfer framework for defending model inversion attack in split federated learning[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 10184-10192.
[104] GONG X, CHEN Y, WANG Q, et al. Backdoor attacks and defenses in federated learning: state-of-the-art, taxonomy, and future directions[J]. IEEE Wireless Communications, 2023, 30(2): 114-121.
[105] ZHU J, WU J, BASHIR A K, et al. Privacy-preserving federated learning of remote sensing image classification with dishonest-majority[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2023, 16: 4685-4698.
[106] QU L, ZHOU Y, LIANG P P, et al. Rethinking architecture design for tackling data heterogeneity in federated learning[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 10051-10061.
[107] MU X, SHEN Y, CHENG K, et al. FedProc: prototypical contrastive federated learning on non-IID data[J]. Future Generation Computer Systems, 2023, 143: 93-104.
[108] LAI F, DAI Y, SINGAPURAM S S, et al. FedScale: benchmarking model and system performance of federated learning at scale[C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 11814-11827.
[109] EL OUADRHIRI A, ABDELHADI A. Differential privacy for deep and federated learning: a survey[J]. IEEE Access, 2022, 10: 22359-22380.