Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (6): 1675-1682.DOI: 10.11772/j.issn.1001-9081.2021061374

• National Open Distributed and Parallel Computing Conference 2021 (DPCS 2021) •

Efficient wireless federated learning algorithm based on 1‑bit compressive sensing

Zhenyu ZHANG 1, Guoping TAN 1,2, Siyuan ZHOU 1,2

  1. College of Computer and Information, Hohai University, Nanjing, Jiangsu 211100, China
    2. Jiangsu Intelligent Transportation and Intelligent Driving Research Institute, Nanjing, Jiangsu 210019, China
  • Received: 2021-08-02  Revised: 2021-09-02  Accepted: 2021-09-08  Online: 2022-01-10  Published: 2022-06-10
  • Contact: Siyuan ZHOU
  • About author: ZHANG Zhenyu, born in 1998 in Nanjing, Jiangsu, M.S. candidate, CCF member. His research interests include wireless networks and federated learning.
    TAN Guoping, born in 1975 in Lixian, Hunan, Ph.D., professor, doctoral supervisor, CCF member. His research interests include wireless distributed machine learning, mobile edge computing, and internet of vehicles.
  • Supported by:
    National Natural Science Foundation of China (61701168); China Postdoctoral Science Foundation (2019M651546); Major Science and Technology Project of Transportation Department of Jiangsu Province (2019Z07)

Abstract:

In the wireless Federated Learning (FL) architecture, model parameter data must be exchanged continuously between the clients and the server to update the model, which imposes a large communication overhead and power consumption on the clients. Many existing methods reduce this overhead through data quantization and data sparsification. To reduce the communication overhead further, a wireless FL algorithm based on 1-bit compressive sensing was proposed. In the uplink of the wireless FL architecture, the client first recorded the update parameters of its local model, including the update amplitude and trend. Then, the amplitude and trend information was sparsified, and the threshold required for the update was determined. Finally, 1-bit compressive sensing was applied to the update trend information, thereby compressing the uplink data. On this basis, the data size was further compressed by setting a dynamic threshold. Experimental results on the MNIST dataset show that the 1-bit compressive sensing process with a dynamic threshold achieves the same results as lossless transmission. It reduces the amount of model parameter data that a client needs to transmit in the FL uplink to 1/25 of that of the standard FL process without this method, and, when the global model is trained to the same level, it reduces the total amount of data uploaded by the users to 2/11 and the transmission energy consumption to 1/10 of the original.
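As a rough illustration of the uplink compression described above, the sketch below implements the client-side steps on a flattened parameter vector: the update amplitude and trend are separated, a dynamic threshold (approximated here as a top-k magnitude cutoff) sparsifies the trend, and the sparse trend is encoded with 1-bit compressive sensing, so that only the signs of random Gaussian projections are transmitted. The function name, the keep/measurement ratios, and the single-scalar rescaling are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def compress_update(w_local, w_global, keep_ratio=0.04, m_ratio=0.25, rng=None):
    """Minimal sketch of the client-side uplink compression (assumed design).

    w_local, w_global : 1-D numpy arrays of model parameters.
    keep_ratio        : fraction of entries kept after sparsification
                        (stands in for the dynamic threshold).
    m_ratio           : number of 1-bit measurements per original entry.
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = w_local - w_global            # local model update
    amplitude = np.abs(delta)             # update amplitude
    trend = np.sign(delta)                # update trend (+1 / -1 / 0)

    # Dynamic threshold (illustrative): keep only the largest-magnitude entries.
    k = max(1, int(keep_ratio * delta.size))
    threshold = np.partition(amplitude, -k)[-k]
    mask = amplitude >= threshold
    sparse_trend = np.where(mask, trend, 0.0)  # sparsified trend vector

    # 1-bit compressive sensing of the sparse trend:
    # only the signs of the random projections are sent over the uplink.
    m = max(1, int(m_ratio * delta.size))
    phi = rng.standard_normal((m, delta.size)) / np.sqrt(m)
    y_bits = np.sign(phi @ sparse_trend)       # 1 bit per measurement

    # A single scalar amplitude (here, the mean magnitude of the kept
    # entries) lets the server rescale the recovered trend.
    scale = amplitude[mask].mean() if mask.any() else 0.0
    return y_bits, scale
```

On the server side, the sparse trend would be recovered from the transmitted sign bits with a standard 1-bit compressive sensing reconstruction method (for example, binary iterative hard thresholding) and rescaled before aggregation; the paper's exact recovery procedure is not reproduced here.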

Key words: Federated Learning (FL), wireless channel, quantization coding, compressive sensing, communication overhead
