Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (3): 887-898. DOI: 10.11772/j.issn.1001-9081.2025040436
• Network and communications •
Lei WANG1, Wenxuan ZHOU2, Ninghui JIA2, Zhihao QU2
Received: 2025-04-22
Revised: 2025-07-09
Accepted: 2025-07-11
Online: 2025-07-22
Published: 2026-03-10
Contact: Zhihao QU
About author: WANG Lei, born in 1978, M.S., senior engineer. His research interests include electric power Internet of Things and edge intelligence.
Corresponding author: QU Zhihao
Lei WANG, Wenxuan ZHOU, Ninghui JIA, Zhihao QU. Federated learning with two-pass communication compression for privacy-sensitive IoT data[J]. Journal of Computer Applications, 2026, 46(3): 887-898.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2025040436
| Accuracy/% | QPR | DQSGD | QSGD | PR |
|---|---|---|---|---|
| 90 | 7.30× | 4.97× | 1.68× | 1.55× |
| 80 | 6.99× | 6.04× | 1.89× | 1.49× |
| 70 | 7.37× | 6.72× | 1.93× | 1.64× |
| 60 | 8.27× | 7.66× | 2.10× | 2.05× |
Tab. 1 Comparison of bandwidth savings of different compression algorithms when achieving NSGD benchmark accuracy
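The QSGD column in Tab. 1 refers to stochastic gradient quantization [3]. As an illustration only (a minimal sketch, not the authors' implementation), the core of such a scheme is an unbiased s-level quantizer: each coordinate's normalized magnitude is rounded up or down at random so the quantized gradient equals the original in expectation:

```python
import numpy as np

def qsgd_quantize(v, s, rng=None):
    """Unbiased s-level stochastic quantizer in the style of QSGD.

    Each coordinate magnitude |v_i|/||v||_2 is snapped to one of the
    s+1 levels {0, 1/s, ..., 1}, rounding up with probability equal to
    the fractional part, so that E[Q(v)] = v.
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    level = np.abs(v) / norm * s           # position in [0, s]
    lower = np.floor(level)
    prob_up = level - lower                # round up with this probability
    quantized = lower + (rng.random(v.shape) < prob_up)
    return norm * np.sign(v) * quantized / s
```

Only the norm (one float), the signs, and the small integer levels need to be transmitted, which is the source of the bandwidth savings the table reports.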
| Accuracy/% | Total communication delay/s | | | | |
|---|---|---|---|---|---|
| | NSGD | QPR | DQSGD | QSGD | PR |
| 90 | 24 481.5 | 3 355.4 | 4 929.8 | 14 612.2 | 15 807.7 |
| 80 | 12 280.1 | 1 756.4 | 2 031.9 | 6 509.1 | 8 236.0 |
| 70 | 6 415.6 | 870.8 | 954.5 | 3 320.9 | 3 918.7 |
| 60 | 3 581.7 | 433.0 | 467.4 | 1 704.8 | 1 749.0 |
Tab. 2 Total communication delays of different compression algorithms when achieving NSGD benchmark accuracy
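The delays in Tab. 2 are consistent with the bandwidth savings in Tab. 1 under the assumption (mine, for this cross-check) that communication delay scales linearly with bits sent over a fixed-rate link: the ratio of NSGD's delay to QPR's delay at each accuracy level reproduces Tab. 1's QPR column.

```python
# Cross-check Tab. 1 against Tab. 2: delay(NSGD) / delay(QPR) should match
# the reported bandwidth saving, assuming delay is proportional to traffic.
nsgd_delay = {90: 24481.5, 80: 12280.1, 70: 6415.6, 60: 3581.7}
qpr_delay = {90: 3355.4, 80: 1756.4, 70: 870.8, 60: 433.0}

for acc in (90, 80, 70, 60):
    ratio = nsgd_delay[acc] / qpr_delay[acc]
    print(f"{acc}%: {ratio:.2f}x")   # matches Tab. 1's QPR column
```

The same check holds for the other three algorithms' columns.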
References

[1] KOLOSKOVA A, STICH S U, JAGGI M. Sharper convergence guarantees for asynchronous SGD for distributed and federated learning [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 17202-17215.
[2] GE L N, WANG M Y, TIAN L. Review of research on efficiency of federated learning [J]. Journal of Computer Applications, 2025, 45(8): 2387-2398.
[3] ALISTARH D, GRUBIC D, LI J Z, et al. QSGD: communication-efficient SGD via gradient quantization and encoding [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 1707-1718.
[4] HÖNIG R, ZHAO Y, MULLINS R. DAdaQuant: doubly-adaptive quantization for communication-efficient federated learning [C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 8852-8866.
[5] ZHANG Z, WANG C. MIPD: an adaptive gradient sparsification framework for distributed DNNs training [J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(11): 3053-3066.
[6] ZHANG J, SIMEONE O. LAGC: lazily aggregated gradient coding for straggler-tolerant and communication-efficient distributed learning [J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(3): 962-974.
[7] QIU X Y, YE Z C, CUI X L, et al. Survey on communication overhead in federated learning [J]. Journal of Computer Applications, 2022, 42(2): 333-342.
[8] YU Y, WU J, HUANG L. Double quantization for communication-efficient distributed optimization [C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 4438-4449.
[9] TANG H, LIAN X, YU C, et al. DoubleSqueeze: parallel stochastic gradient descent with double-pass error-compensated compression [C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 6155-6165.
[10] ZHANG R L, DU J H, YIN H. Client selection algorithm in cross-device federated learning [J]. Journal of Software, 2024, 35(12): 5725-5740.
[11] QIAO A, ARAGAM B, ZHANG B, et al. Fault tolerance in iterative-convergent machine learning [C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 5220-5230.
[12] SAFARYAN M, SHULGIN E, RICHTÁRIK P. Uncertainty principle for communication compression in distributed and federated learning and the search for an optimal compressor [J]. Information and Inference: A Journal of the IMA, 2022, 11(2): 557-580.
[13] WANG M, BODONHELYI A, BOZKIR E, et al. TurboSVM-FL: boosting federated learning through SVM aggregation for lazy clients [C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 15546-15554.
[14] CRAWSHAW M, LIU M. Federated learning under periodic client participation and heterogeneous data: a new communication-efficient algorithm and analysis [C]// Proceedings of the 38th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2024: 8240-8299.
[15] ZHANG W, ZHOU T, LU Q, et al. FedSL: a communication-efficient federated learning with split layer aggregation [J]. IEEE Internet of Things Journal, 2024, 11(9): 15587-15601.
[16] QU Z, JIA N, YE B, et al. FedQClip: accelerating federated learning via quantized clipped SGD [J]. IEEE Transactions on Computers, 2025, 74(2): 717-730.
[17] SEIDE F, FU H, DROPPO J, et al. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs [C]// Proceedings of the INTERSPEECH 2014. [S.l.]: International Speech Communication Association, 2014: 1058-1062.
[18] WEN W, XU C, YAN F, et al. TernGrad: ternary gradients to reduce communication in distributed deep learning [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 1508-1518.
[19] ZHOU Q, GUO S, LIU Y, et al. Hierarchical channel-spatial encoding for communication-efficient collaborative learning [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 5788-5801.
[20] ZAKERINIA H, TALAEI S, NADIRADZE G, et al. Communication-efficient federated learning with data and client heterogeneity [C]// Proceedings of the 27th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2024: 3448-3456.
[21] CHEN Y, VIKALO H, WANG C. Fed-QSSL: a framework for personalized federated learning under bitwidth and data heterogeneity [C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 11443-11452.
[22] ZHANG P, XU L, MEI L, et al. Sketch-based adaptive communication optimization in federated learning [J]. IEEE Transactions on Computers, 2025, 74(1): 170-184.
[23] ZHOU W, QU Z, LYU S H, et al. Mask-encoded sparsification: mitigating biased gradients in communication-efficient split learning [C]// Proceedings of the 27th European Conference on Artificial Intelligence. Amsterdam: IOS Press, 2024: 2806-2813.
[24] ALBELAIHI R, ALASANDAGUTTI A, YU L, et al. Deep-reinforcement-learning-assisted client selection in nonorthogonal-multiple-access-based federated learning [J]. IEEE Internet of Things Journal, 2023, 10(17): 15515-15525.
[25] KIM G, KIM J, HAN B. Communication-efficient federated learning with accelerated client gradient [C]// Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 12385-12394.
[26] LI J, LIU Y, WANG W. FedNS: a fast sketching Newton-type algorithm for federated learning [C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 13509-13517.
[27] GONG X, LI S, BAO Y, et al. Federated learning via input-output collaborative distillation [C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 22058-22066.
[28] ELBAKARY A, ISSAID C B, SHEHAB M, et al. Fed-Sophia: a communication-efficient second-order federated learning algorithm [C]// Proceedings of the 2024 IEEE International Conference on Communications. Piscataway: IEEE, 2024: 950-955.
[29] CHEN T, GIANNAKIS G, SUN T, et al. LAG: lazily aggregated gradient for communication-efficient distributed learning [C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2018: 5055-5065.
[30] KIM D Y, HAN D J, SEO J, et al. Achieving lossless gradient sparsification via mapping to alternative space in federated learning [C]// Proceedings of the 41st International Conference on Machine Learning. New York: JMLR.org, 2024: 23867-23900.
[31] SHOKRI R, SHMATIKOV V. Privacy-preserving deep learning [C]// Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2015: 1310-1321.
[32] TANG Z, HUANG J, YAN R, et al. Bandwidth-aware and overlap-weighted compression for communication-efficient federated learning [C]// Proceedings of the 53rd International Conference on Parallel Processing. New York: ACM, 2024: 866-875.
[33] XIE W, LI H, MA J, et al. JointSQ: joint sparsification-quantization for distributed learning [C]// Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 5778-5787.
[34] KRIZHEVSKY A. Learning multiple layers of features from tiny images [R/OL]. [2025-03-06].
[35] SU R, ZHU Y L, ZHU X R. An efficient training optimization method for federated learning in 6G networks [J/OL]. Chinese Journal of Internet of Things [2025-03-08].
[36] ZHENG S, SHEN C, CHEN X. Design and analysis of uplink and downlink communications for federated learning [J]. IEEE Journal on Selected Areas in Communications, 2021, 39(7): 2150-2167.
[37] HOSMER D W, HOSMER T, LE CESSIE S, et al. A comparison of goodness-of-fit tests for the logistic regression model [J]. Statistics in Medicine, 1997, 16(9): 965-980.
[38] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks [C]// Proceedings of the 25th International Conference on Neural Information Processing Systems, Volume 1. Red Hook: Curran Associates Inc., 2012: 1097-1105.
[39] KARIMIREDDY S P, REBJOCK Q, STICH S U, et al. Error feedback fixes SignSGD and other gradient compression schemes [C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 3252-3261.