Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (5): 1589-1594.DOI: 10.11772/j.issn.1001-9081.2024050704

• Advanced computing •

Node collaboration mechanism for quality optimization of hierarchical federated learning models under energy consumption constraints

Yazhou FAN1,2, Zhuo LI1,2()   

  1. Beijing Key Laboratory of Internet Culture and Digital Dissemination Research (Beijing Information Science and Technology University), Beijing 100101, China
    2. School of Computer Science, Beijing Information Science and Technology University, Beijing 100101, China
  • Received:2024-05-30 Revised:2024-09-25 Accepted:2024-09-26 Online:2024-10-09 Published:2025-05-10
  • Contact: Zhuo LI
  • About author: FAN Yazhou, born in 1997, M. S. candidate. His research interests include edge computing.
    LI Zhuo, born in 1983, Ph. D., professor, CCF member. His research interests include mobile wireless networks and distributed computing.

  • Supported by:
    National Key Research and Development Program of China (2022YFF0604502); Beijing Natural Science Foundation (4232024)

Abstract:

The massive data generated at the network edge can be used to train global models through Federated Learning (FL), making the combination of edge computing and federated learning a key technology for reducing network energy consumption. In Hierarchical Federated Learning (HFL), differences in the local data volume and data quality of edge devices directly affect the quality of the global HFL model. To address this issue, a Node Cooperation Algorithm under Transmission Energy Consumption Constraint (NCATTECC) was proposed to solve the global model quality optimization problem. The problem was proved to be Non-deterministic Polynomial-hard (NP-hard), and the proposed algorithm was proved to achieve an approximation ratio of (1-1/e). Specifically, node collaboration enabled more high-quality nodes to participate in training without exceeding the transmission energy limit. Simulation results on the widely used CIFAR-10 and FashionMNIST datasets show that, when training with the selected nodes, the proposed algorithm improves model accuracy by 4.47% and 6.64% compared with FedAvg (Federated Averaging), and by 3.47% and 4.58% compared with Fed-CBS (Federated Class-balanced Sampling), respectively.
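The (1-1/e) approximation ratio stated in the abstract is characteristic of greedy maximization of a monotone submodular objective under a budget constraint. The sketch below illustrates that general pattern only: the quality-gain function, energy costs, and node names are illustrative placeholders, not the paper's actual NCATTECC formulation.

```python
# Hedged sketch: cost-aware greedy node selection under a transmission-energy
# budget. Each step picks the node with the best marginal quality gain per
# unit of energy, among nodes that still fit in the budget. The quality and
# cost functions here are toy placeholders, not the paper's definitions.

def greedy_node_selection(nodes, quality_gain, energy_cost, budget):
    """Select nodes maximizing marginal quality gain per unit energy,
    without exceeding the total transmission-energy budget."""
    selected = []
    remaining = set(nodes)
    used = 0.0
    while remaining:
        best, best_ratio = None, 0.0
        for n in remaining:
            cost = energy_cost(n)
            if cost <= 0 or used + cost > budget:
                continue  # node does not fit in the remaining budget
            ratio = quality_gain(selected, n) / cost
            if ratio > best_ratio:
                best, best_ratio = n, ratio
        if best is None:
            break  # no remaining node fits the budget
        selected.append(best)
        used += energy_cost(best)
        remaining.remove(best)
    return selected

# Toy example: gain shrinks as more nodes are selected (diminishing returns).
data = {"a": 100, "b": 80, "c": 60, "d": 10}   # hypothetical local data sizes
cost = {"a": 5.0, "b": 2.0, "c": 2.0, "d": 1.0}  # hypothetical energy costs
gain = lambda sel, n: data[n] / (1 + len(sel))

if __name__ == "__main__":
    print(greedy_node_selection(list(data), gain, cost.__getitem__, budget=5.0))
```

Under these placeholder numbers the greedy rule skips the expensive node "a" in favor of cheaper nodes whose combined gain per unit energy is higher, which is the intuition behind letting more high-quality nodes participate within the energy limit.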

Key words: Hierarchical Federated Learning (HFL), Device-to-Device (D2D) communication, node cooperation, model quality optimization, energy consumption limit

