Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (3): 887-898. DOI: 10.11772/j.issn.1001-9081.2025040436

• Networks and Communications •

Federated learning with two-pass communication compression for privacy-sensitive IoT data

Lei WANG1, Wenxuan ZHOU2, Ninghui JIA2, Zhihao QU2()   

  1. Information and Telecommunication Branch, State Grid Jiangsu Electric Power Company Limited, Nanjing, Jiangsu 210024, China
    2. Key Laboratory of Water Big Data Technology of Ministry of Water Resources (Hohai University), Nanjing, Jiangsu 211100, China
  • Received: 2025-04-22 Revised: 2025-07-09 Accepted: 2025-07-11 Online: 2025-07-22 Published: 2026-03-10
  • Contact: Zhihao QU
  • About author: WANG Lei, born in 1978 in Yangzhou, Jiangsu, M. S., senior engineer. His research interests include electric power Internet of Things and edge intelligence.
    ZHOU Wenxuan, born in 1999 in Yangzhou, Jiangsu, Ph. D. candidate, CCF member. His research interests include federated learning and split learning.
    JIA Ninghui, born in 1998 in Wuxi, Jiangsu, Ph. D. candidate, CCF member. His research interests include communication optimization and gradient-free optimization in federated learning.
  • Supported by:
    Science and Technology Project of State Grid Jiangsu Electric Power Company Limited (J2023077)


Abstract:

To address the enormous communication overhead and the gradient inversion privacy leakage risks faced by federated learning in privacy-sensitive Internet of Things (IoT) scenarios, a two-pass communication compression framework named QPR (Quantization and Pull Reduction) was proposed. Firstly, local gradients were compressed by the training nodes through gradient quantization before being uploaded to the server, thereby reducing the overhead of gradient transmission. Secondly, a probability-threshold-based delayed model download mechanism (lazy pulling) was introduced to reduce the model synchronization frequency: each training node synchronized the global model with a preset probability and reused its locally cached historical model in the remaining iterations. Finally, rigorous theoretical analysis confirmed that QPR achieves the same asymptotic convergence rate as the standard communication-uncompressed federated learning algorithm, Federated Averaging (FedAvg), and enjoys linear speedup as the number of training nodes increases, thereby guaranteeing system scalability. Experimental results demonstrate that QPR improves communication efficiency significantly on multiple benchmark datasets and machine learning models. Taking the training of the ResNet18 model on the CIFAR-10 dataset as an example, QPR achieves a communication speedup ratio of up to 8.27 compared with uncompressed FedAvg, without any loss in model accuracy.
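The two mechanisms summarized above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the QSGD-style stochastic quantizer and all function names are assumptions, since the abstract does not specify the exact quantization scheme.

```python
import numpy as np

def quantize(grad, levels=8, rng=None):
    """Stochastically quantize a gradient vector to a few integer levels
    per coordinate (QSGD-style, unbiased; an illustrative assumption)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad), 0.0
    scaled = np.abs(grad) / norm * levels        # each coordinate in [0, levels]
    lower = np.floor(scaled)
    prob = scaled - lower                        # round up with this probability
    q = lower + (rng.random(grad.shape) < prob)  # unbiased stochastic rounding
    return np.sign(grad) * q, norm

def dequantize(q, norm, levels=8):
    """Reconstruct an unbiased gradient estimate from the quantized levels."""
    return q * norm / levels

def lazy_pull(local_model, global_model, p, rng=None):
    """Lazy pulling: synchronize the global model with preset probability p,
    otherwise reuse the locally cached historical model."""
    rng = rng or np.random.default_rng()
    return global_model.copy() if rng.random() < p else local_model
```

Under this scheme only the integer levels, the sign bits, and one scalar norm are pushed to the server, and pulls are skipped in a `1 - p` fraction of iterations, which is the source of the two-pass communication savings described above.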

Key words: Federated Learning (FL), gradient quantization, two-pass compression, Internet of Things (IoT), privacy protection

CLC number: