Federated learning with two-pass communication compression for privacy-sensitive IoT data
Lei WANG, Wenxuan ZHOU, Ninghui JIA, Zhihao QU
Journal of Computer Applications 2026, 46(3): 887-898. DOI: 10.11772/j.issn.1001-9081.2025040436

To address the significant communication overhead and the gradient-inversion privacy leakage risks of federated learning in Internet of Things (IoT) scenarios, a two-pass communication compression framework named QPR (Quantization and Pull Reduction) was proposed. Firstly, gradient quantization was applied on the training nodes to compress local gradients before uploading them to the server, thereby reducing gradient transmission overhead. Secondly, a probability threshold-based delayed model download mechanism (lazy pulling) was introduced to reduce model synchronization frequency: in each iteration, a training node downloaded the latest global model from the server only with a given probability, and otherwise reused its locally cached historical model. Finally, rigorous theoretical analysis confirmed that QPR achieves the same asymptotic convergence order as Federated Averaging (FedAvg), the standard federated learning algorithm without communication compression, and enjoys linear speedup as the number of nodes increases, thereby ensuring system scalability. Experimental results demonstrate that QPR improves communication efficiency significantly on multiple benchmark datasets and machine learning models. For example, when training a ResNet18 model on the CIFAR-10 dataset, QPR achieves a communication speedup ratio of up to 8.27 over uncompressed FedAvg without any loss in model accuracy.
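The sketch below illustrates the two mechanisms described in the abstract. It is a minimal illustration, not the paper's implementation: the abstract does not specify the quantizer, so a QSGD-style unbiased stochastic quantizer is assumed, and the names `Server`, `TrainingNode`, `pull_prob`, `quantize`, and `dequantize` are hypothetical.

```python
import numpy as np

def quantize(grad, num_levels=16):
    """Unbiased QSGD-style stochastic quantization (an assumption; the
    paper's exact quantizer is not given in the abstract)."""
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad, dtype=np.int8), norm
    # Map each |g_i|/norm onto num_levels uniform levels, rounding up
    # stochastically so the quantizer is unbiased in expectation.
    scaled = np.abs(grad) / norm * num_levels
    lower = np.floor(scaled)
    levels = lower + (np.random.rand(*grad.shape) < (scaled - lower))
    return (np.sign(grad) * levels).astype(np.int8), norm

def dequantize(levels, norm, num_levels=16):
    """Reconstruct an approximate gradient from quantized levels."""
    return levels.astype(np.float64) * norm / num_levels

class Server:
    """Minimal server that applies dequantized updates to a global model."""
    def __init__(self, model, lr=0.1):
        self.global_model = model
        self.lr = lr

    def receive(self, levels, norm):
        self.global_model -= self.lr * dequantize(levels, norm)

class TrainingNode:
    """One client holding a locally cached (possibly stale) model copy."""
    def __init__(self, model, pull_prob=0.5):
        self.model = model          # locally cached historical model
        self.pull_prob = pull_prob  # probability threshold for lazy pulling

    def step(self, server, local_gradient):
        # Upload pass: quantize the local gradient before sending it.
        levels, norm = quantize(local_gradient)
        server.receive(levels, norm)
        # Download pass (lazy pulling): fetch the fresh global model
        # only with probability pull_prob; otherwise keep the cached copy.
        if np.random.rand() < self.pull_prob:
            self.model = server.global_model.copy()

# Usage: one round on a toy 10-dimensional model.
dim = 10
server = Server(np.zeros(dim))
node = TrainingNode(server.global_model.copy(), pull_prob=0.5)
node.step(server, np.random.randn(dim))
```

Skipping a fraction of downloads trades model freshness for bandwidth, which is why the abstract's convergence analysis (matching FedAvg's asymptotic order) is the key guarantee that the reuse of stale local models does not degrade training.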
