Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (10): 3221-3230. DOI: 10.11772/j.issn.1001-9081.2024101505

• Cyberspace Security •

Survey of federated learning based on differential privacy

Shufen ZHANG1,2,3, Benjian TANG1,2,3(), Zikun TIAN1,2,3, Xiaoyang QING1,2,3

  1. College of Sciences, North China University of Science and Technology, Tangshan Hebei 063210, China
    2. Hebei Provincial Key Laboratory of Data Science and Application (North China University of Science and Technology), Tangshan Hebei 063210, China
    3. Tangshan Key Laboratory of Data Science (North China University of Science and Technology), Tangshan Hebei 063210, China
  • Received: 2024-10-23 Revised: 2025-06-04 Accepted: 2025-06-09 Online: 2025-06-13 Published: 2025-10-10
  • Corresponding author: Benjian TANG
  • About author: ZHANG Shufen, born in 1972, female, native of Tangshan, Hebei, M. S., professor, CCF senior member. Her research interests include cloud computing, intelligent information processing, data security, and privacy protection.
    TANG Benjian, born in 1999, male, native of Ma'anshan, Anhui, M. S. candidate, CCF member. His research interests include data security and privacy protection. E-mail: tangbenjian@stu.ncst.edu.cn
    TIAN Zikun, born in 2001, male, native of Baoding, Hebei, M. S. candidate. His research interests include machine learning and deep learning.
    QING Xiaoyang, born in 1999, male, native of Shangqiu, Henan, M. S. candidate. His research interests include machine learning and deep learning.
  • Supported by:
    National Natural Science Foundation of China (U20A20179)

Survey of federated learning based on differential privacy

Shufen ZHANG1,2,3, Benjian TANG1,2,3(), Zikun TIAN1,2,3, Xiaoyang QING1,2,3   

  1. College of Sciences, North China University of Science and Technology, Tangshan Hebei 063210, China
    2. Hebei Provincial Key Laboratory of Data Science and Application (North China University of Science and Technology), Tangshan Hebei 063210, China
    3. Tangshan Key Laboratory of Data Science (North China University of Science and Technology), Tangshan Hebei 063210, China
  • Received: 2024-10-23 Revised: 2025-06-04 Accepted: 2025-06-09 Online: 2025-06-13 Published: 2025-10-10
  • Contact: Benjian TANG
  • About author: ZHANG Shufen, born in 1972, M. S., professor. Her research interests include cloud computing, intelligent information processing, data security, and privacy protection.
    TANG Benjian, born in 1999, M. S. candidate. His research interests include data security and privacy protection.
    TIAN Zikun, born in 2001, M. S. candidate. His research interests include machine learning and deep learning.
    QING Xiaoyang, born in 1999, M. S. candidate. His research interests include machine learning and deep learning.
  • Supported by:
    National Natural Science Foundation of China (U20A20179)

Abstract:

With the rapid development of artificial intelligence, the risk of user privacy leakage is becoming increasingly serious. Differential privacy is a key privacy protection technique that prevents the disclosure of personal information by introducing noise into data, while Federated Learning (FL) allows models to be trained jointly without exchanging data, thereby protecting data security. In recent years, the combined use of differential privacy and FL has made it possible to give full play to the advantages of both: differential privacy ensures privacy protection during data use, while FL improves the generalization ability and efficiency of the model through distributed training. Focusing on the privacy and security issues of FL, firstly, the latest research progress of FL based on differential privacy, including different differential privacy mechanisms, FL algorithms, and application scenarios, was summarized and compared systematically; secondly, the application approaches of differential privacy in FL, including data aggregation, gradient descent, and model training, were discussed with emphasis, and the advantages and disadvantages of the various techniques were analyzed; finally, the current challenges and development directions of this field were summarized in detail.

Keywords: federated learning, differential privacy, data aggregation, gradient descent, model training

Abstract:

With the rapid development of artificial intelligence, the risk of user privacy disclosure is becoming increasingly serious. Differential privacy is a key privacy protection technology that prevents personal information leakage by introducing noise into data, while Federated Learning (FL) allows models to be trained jointly without exchanging data, thereby protecting data security. In recent years, differential privacy and FL have been used together to give full play to their respective advantages: differential privacy ensures privacy protection in the process of data use, while FL improves the generalization ability and efficiency of the model through distributed training. Aiming at the privacy and security problems of FL, firstly, the latest research progress of FL based on differential privacy was summarized and compared systematically, including different differential privacy mechanisms, FL algorithms, and application scenarios. Secondly, special attention was paid to the application approaches of differential privacy in FL, including data aggregation, gradient descent, and model training, and the advantages and disadvantages of the various techniques were analyzed. Finally, the existing challenges and development directions of this field were summarized in detail.
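
To make the aggregation-level use of differential privacy mentioned above concrete, the following is a minimal, illustrative sketch rather than the method of any specific surveyed work: it assumes a FedAvg-style server that clips each client's flattened model update and applies a Gaussian mechanism before averaging; the function name dp_federated_average and the parameters clip_norm and noise_multiplier are hypothetical.

import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # Illustrative sketch: aggregate client updates under a Gaussian mechanism.
    # client_updates: list of 1-D NumPy arrays, one flattened update per client.
    # clip_norm: L2 bound C on each client update, which bounds the sensitivity.
    # noise_multiplier: ratio sigma / C controlling the privacy-utility trade-off.
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Scale each update so that its L2 norm is at most clip_norm.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    # Add Gaussian noise calibrated to the clipping bound to the summed updates.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=clipped[0].shape)
    # The server releases only the noisy average, never the raw client updates.
    return noisy_sum / len(client_updates)

# Hypothetical usage: three clients, each holding a 4-parameter model update.
updates = [np.array([0.2, -0.1, 0.4, 0.0]),
           np.array([0.1, 0.3, -0.2, 0.1]),
           np.array([-0.3, 0.2, 0.1, 0.2])]
print(dp_federated_average(updates))

In this sketch the noise scale sigma = noise_multiplier * clip_norm follows the usual Gaussian-mechanism calibration; in a real deployment the cumulative (ε, δ) privacy cost over training rounds would additionally be tracked with a privacy accountant.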

Key words: Federated Learning (FL), differential privacy, data aggregation, gradient descent, model training

CLC number: