Journal of Computer Applications, 2025, Vol. 45, Issue (3): 715-724. DOI: 10.11772/j.issn.1001-9081.2024030322

• Frontier research and typical applications of large models •

Federated parameter-efficient fine-tuning technology for large model based on pruning

Hui ZENG1,2, Shiyu XIONG1,2, Yongzheng DI1,2, Hongzhou SHI1

  1. Beijing Key Laboratory of Mobile Computing and Pervasive Device (Institute of Computing Technology, Chinese Academy of Sciences), Beijing 100190, China
    2. School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, China
  • Received: 2024-03-13 Revised: 2024-05-26 Accepted: 2024-05-29 Online: 2024-07-24 Published: 2025-03-10
  • Contact: Hongzhou SHI
  • About author: ZENG Hui, born in 1998, M. S. candidate. His research interests include federated learning and foundation model fine-tuning.
    XIONG Shiyu, born in 1999, M. S. candidate. Her research interests include object detection and localization, and federated learning.
    DI Yongzheng, born in 2001, M. S. candidate. His research interests include object detection and localization, and federated learning.
  • Supported by:
    National Key Research and Development Program of China(2018YFB1004705)

Federated parameter-efficient fine-tuning technology for large models based on pruning

Hui ZENG1,2, Shiyu XIONG1,2, Yongzheng DI1,2, Hongzhou SHI1

  1. Beijing Key Laboratory of Mobile Computing and Pervasive Device (Institute of Computing Technology, Chinese Academy of Sciences), Beijing 100190, China
    2. School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, China
  • Corresponding author: Hongzhou SHI
  • About author: ZENG Hui (1998—), male, born in Yudu, Jiangxi, M. S. candidate. His main research interests include federated learning and large model fine-tuning.
    XIONG Shiyu (1999—), female, born in Chongqing, M. S. candidate, CCF member. Her main research interests include object detection and localization, and federated learning.
    DI Yongzheng (2001—), male, born in Kaifeng, Henan, M. S. candidate. His main research interests include object detection and localization, and federated learning.
  • Supported by:
    National Key Research and Development Program of China (2018YFB1004705)

Abstract:

With the continuously increasing importance of data privacy, fine-tuning a Pre-trained Foundational Model (PFM) for downstream tasks has become increasingly challenging, leading to the emergence of federated learning research based on PFM. However, PFM poses significant challenges to federated learning systems, especially in terms of local computation and communication. Therefore, corresponding solutions were proposed for the two main stages of federated learning, local computation and aggregation communication: a local efficient fine-tuning mode and a ring-shaped local aggregation mode. In the first mode, a model pruning algorithm based on Parameter-Efficient Fine-Tuning (PEFT) was employed to reduce local computation and communication costs. In the second mode, the centralized aggregation method was replaced with a distributed local aggregation scheme to enhance communication efficiency during the aggregation stage. Experimental results demonstrate that the proposed federated parameter-efficient fine-tuning framework for large models performs well in terms of both final performance and efficiency.

Key words: federated learning, large model, fine-tuning, Parameter-Efficient Fine-Tuning (PEFT), model pruning

Abstract (Chinese):

With the continuously increasing importance of data privacy, fine-tuning a pre-trained foundational model (PFM) for downstream tasks has become increasingly difficult, which has driven research on federated learning based on PFM. However, PFM poses significant challenges to federated learning systems, especially in terms of local computation and communication. Therefore, corresponding solutions were proposed for the two main stages of federated learning, local computation and aggregation communication: a local efficient fine-tuning mode and a ring-shaped local aggregation mode. The local efficient fine-tuning mode adopts a model pruning algorithm based on Parameter-Efficient Fine-Tuning (PEFT) to reduce local computation and communication overhead; the ring-shaped local aggregation mode replaces centralized aggregation with a distributed local aggregation method to improve communication efficiency in the aggregation stage. Experimental results show that the proposed federated parameter-efficient fine-tuning framework for large models performs well in terms of both final performance and efficiency.

Key words (Chinese): federated learning, large model, fine-tuning, Parameter-Efficient Fine-Tuning (PEFT), model pruning
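To make the two modes described in the abstract concrete, below is a minimal Python sketch, not the authors' implementation. The function names (prune_adapter_by_magnitude, ring_aggregate), the use of simple magnitude pruning on LoRA-style adapter tensors, and the sequential summation around the ring are assumptions introduced purely for illustration; the paper's actual PEFT pruning criterion and ring-aggregation protocol may differ.

import numpy as np

def prune_adapter_by_magnitude(adapter, keep_ratio):
    """Keep only the largest-magnitude entries of each adapter tensor
    (a magnitude-pruning stand-in for the PEFT-based pruning step)."""
    pruned = {}
    for name, weight in adapter.items():
        flat = np.abs(weight).ravel()
        k = max(1, int(keep_ratio * flat.size))
        # The k-th largest absolute value serves as the pruning threshold.
        threshold = np.partition(flat, flat.size - k)[flat.size - k]
        pruned[name] = weight * (np.abs(weight) >= threshold)
    return pruned

def ring_aggregate(client_adapters):
    """Average pruned adapters by passing a running sum around a ring of
    clients instead of having every client upload to a central server."""
    total = {name: tensor.copy() for name, tensor in client_adapters[0].items()}
    for adapter in client_adapters[1:]:  # one hop per remaining client in the ring
        for name in total:
            total[name] += adapter[name]
    return {name: tensor / len(client_adapters) for name, tensor in total.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Four clients, each holding a small LoRA-style adapter (shapes are illustrative).
    clients = [
        {"lora_A": rng.normal(size=(8, 64)), "lora_B": rng.normal(size=(64, 8))}
        for _ in range(4)
    ]
    pruned = [prune_adapter_by_magnitude(adapter, keep_ratio=0.3) for adapter in clients]
    global_adapter = ring_aggregate(pruned)
    print({name: tensor.shape for name, tensor in global_adapter.items()})

In this sketch, only the retained (non-zero) adapter entries would need to be exchanged between ring neighbours, which illustrates where the computation and communication savings of the two modes come from.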

CLC Number: