Official website of Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (3): 887-898. DOI: 10.11772/j.issn.1001-9081.2025040436
Received: 2025-04-22
Revised: 2025-07-09
Accepted: 2025-07-11
Online: 2025-07-22
Published: 2026-03-10
Corresponding author: Zhihao QU
About the author: WANG Lei, born in 1978 in Yangzhou, Jiangsu, M.S., senior engineer. His research interests include electric power internet of things and edge intelligence.
Lei WANG1, Wenxuan ZHOU2, Ninghui JIA2, Zhihao QU2
Abstract:
Federated learning in privacy-sensitive Internet of Things (IoT) scenarios suffers from huge communication overhead and the privacy-leakage risk of gradient inversion. To address these problems, a two-stage communication compression framework for federated learning named QPR (Quantization and Pull Reduction) was proposed. Firstly, the training nodes compressed their local gradients with gradient quantization before uploading them to the server, thereby reducing the cost of gradient transmission. Secondly, a probability-threshold-based lazy pulling mechanism was introduced to lower the model synchronization frequency: each training node pulled the global model with a preset probability and reused its local historical model in the remaining iterations. Finally, rigorous theoretical analysis guaranteed that QPR converges at the same asymptotic rate as the standard compression-free federated learning algorithm FedAvg (Federated Averaging) and enjoys linear speedup with the number of training nodes, thereby ensuring system scalability. Experimental results show that QPR improves communication efficiency significantly on multiple benchmark datasets and machine learning models. For example, when training ResNet18 on the CIFAR-10 dataset, QPR achieves a communication speedup of up to 8.27× over the uncompressed FedAvg without losing model accuracy.
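The two stages described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the `levels` parameter of the quantizer, and the pull probability `p` are illustrative assumptions. The quantizer follows the standard QSGD-style construction (stochastic rounding to uniform levels, which keeps the estimate unbiased), and the lazy-pull rule models the probability-threshold download mechanism.

```python
import random

def quantize(vector, levels=4):
    """QSGD-style stochastic quantization sketch: each coordinate is
    mapped to one of `levels` uniform levels scaled by the vector's
    max magnitude; stochastic rounding keeps the estimate unbiased."""
    scale = max(abs(v) for v in vector) or 1.0   # avoid division by zero
    out = []
    for v in vector:
        t = abs(v) / scale * levels              # position in [0, levels]
        low = int(t)                             # lower quantization level
        q = low + (1 if random.random() < t - low else 0)
        out.append((1 if v >= 0 else -1) * scale * q / levels)
    return out

def maybe_pull(global_model, local_model, p=0.5):
    """Lazy pulling sketch: with probability p the node synchronizes the
    global model; otherwise it reuses its local historical copy."""
    return list(global_model) if random.random() < p else local_model
```

In a real system only the level indices and the scale would be transmitted (a few bits per coordinate instead of 32), which is where the uplink savings come from; the downlink savings come from skipping the pull in a `1 - p` fraction of iterations.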
Lei WANG, Wenxuan ZHOU, Ninghui JIA, Zhihao QU. Federated learning with two-pass communication compression for privacy-sensitive IoT data[J]. Journal of Computer Applications, 2026, 46(3): 887-898.
| Accuracy/% | QPR | DQSGD | QSGD | PR |
|---|---|---|---|---|
| 90 | 7.30× | 4.97× | 1.68× | 1.55× |
| 80 | 6.99× | 6.04× | 1.89× | 1.49× |
| 70 | 7.37× | 6.72× | 1.93× | 1.64× |
| 60 | 8.27× | 7.66× | 2.10× | 2.05× |
Tab. 1 Comparison of bandwidth savings when different compression algorithms achieve NSGD benchmark accuracy
| Accuracy/% | NSGD | QPR | DQSGD | QSGD | PR |
|---|---|---|---|---|---|
| 90 | 24 481.5 | 3 355.4 | 4 929.8 | 14 612.2 | 15 807.7 |
| 80 | 12 280.1 | 1 756.4 | 2 031.9 | 6 509.1 | 8 236.0 |
| 70 | 6 415.6 | 870.8 | 954.5 | 3 320.9 | 3 918.7 |
| 60 | 3 581.7 | 433.0 | 467.4 | 1 704.8 | 1 749.0 |
Tab. 2 Total communication delay/s when different compression algorithms achieve NSGD benchmark accuracy