Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (2): 445-457.DOI: 10.11772/j.issn.1001-9081.2025020146

• Cyber security •

Detection and defense mechanism for poisoning attacks to federated learning

Qi ZHONG1,2,3, Shufen ZHANG1,2,3, Zhenbo ZHANG1,2,3, Yinlong JIAN1,2,3, Zhongrui JING1,2,3

  1. College of Sciences, North China University of Science and Technology, Tangshan Hebei 063210, China
    2.Hebei Province Key Laboratory of Data Science and Application (North China University of Science and Technology),Tangshan Hebei 063210,China
    3.Tangshan Key Laboratory of Data Science (North China University of Science and Technology),Tangshan Hebei 063210,China
  • Received:2025-02-17 Revised:2025-03-19 Accepted:2025-03-24 Online:2025-04-24 Published:2026-02-10
  • Contact: Shufen ZHANG
  • About author:ZHONG Qi, born in 1999, M. S. candidate, CCF member. Her research interests include data security and privacy protection.
    ZHANG Shufen, born in 1972, M. S., professor, CCF senior member. Her research interests include cloud computing, data security, and privacy protection. Email: zhsf@ncst.edu.cn
    ZHANG Zhenbo, born in 1999, M. S. candidate, CCF member. His research interests include data security and privacy protection.
    JIAN Yinlong, born in 2001, M. S. candidate, CCF member. His research interests include data security and privacy protection.
    JING Zhongrui, born in 2000, M. S. candidate, CCF member. His research interests include data security and privacy protection.
  • Supported by:
    National Natural Science Foundation of China(U20A20179)


Abstract:

To counter malicious clients in federated learning that compromise the reliability of the global model by uploading malicious updates, a poisoning attack detection and defense algorithm for federated learning, FedDyna, was proposed. First, an anomalous-client detection scheme was designed that uses the historical standard deviations of cosine similarity and Euclidean distance to screen for abnormal updates preliminarily, combined with a multi-view model evaluation mechanism to further identify suspicious clients. Second, an adaptive adjustment strategy was proposed that gradually reduces the participation weights of anomalous clients according to a weight adjustment factor, until their malicious updates are removed from the model training process. The defense performance of FedDyna under different attack scenarios was evaluated on the EMNIST and CIFAR-10 datasets, and the algorithm was compared with existing state-of-the-art defense algorithms. Experimental results show that, at a fixed attack frequency and compared with the Scope algorithm, FedDyna achieves the best results against three attack types, namely Projected Gradient Descent (PGD), Model Replacement (MR), and PGD+MR, reducing the Attack Success Rate (ASR) by 1.07 and 0.53, 1.49 and 1.45, and 10.55 and 1.25 percentage points on the two datasets, respectively. On the EMNIST dataset under the Cosine Constraint Attack (CCA), FedDyna's ASR performance declines slightly, but it still achieves the second-best result. In addition, when evaluated against the comparison algorithms across different attacker pools, FedDyna's ASR is optimal under most conditions and second-best under the remaining ones. Notably, under varying attack intensities, FedDyna achieves an average global Model Accuracy (MA) of up to 98.5%.
These results indicate that FedDyna is significantly robust against poisoning attacks across different attack scenarios and can detect and eliminate poisoned models effectively.
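The two-stage mechanism summarized in the abstract, a statistical screen on update similarity followed by gradual down-weighting of flagged clients, can be sketched as follows. This is an illustrative NumPy reconstruction, not the paper's implementation: the reference direction (the mean update), the threshold `k`, the shrink `factor`, and the removal `cutoff` are hypothetical choices, per-round standard deviations stand in for the paper's historical statistics, and the multi-view model evaluation step is omitted.

```python
import numpy as np

def flag_suspicious(updates, k=2.0):
    """Preliminary anomaly screen: compare each client's flattened update
    to the mean update via cosine similarity and Euclidean distance, and
    flag clients deviating by more than k standard deviations on either
    metric. (k=2.0 and the mean reference are illustrative choices.)"""
    ref = updates.mean(axis=0)
    cos = updates @ ref / (np.linalg.norm(updates, axis=1)
                           * np.linalg.norm(ref) + 1e-12)
    dist = np.linalg.norm(updates - ref, axis=1)
    flags = np.zeros(len(updates), dtype=bool)
    for metric in (cos, dist):
        mu, sd = metric.mean(), metric.std()
        flags |= np.abs(metric - mu) > k * sd
    return flags

def adjust_weights(weights, flags, factor=0.5, cutoff=0.05):
    """Adaptive adjustment: shrink the aggregation weight of each flagged
    client by `factor` every round; weights that fall below `cutoff` are
    zeroed, removing the client from aggregation. Remaining weights are
    renormalised to sum to 1."""
    w = weights.copy()
    w[flags] *= factor
    w[w < cutoff] = 0.0
    s = w.sum()
    return w / s if s > 0 else w

# Usage sketch: nine honest clients push in one direction, one poisoner
# pushes hard the opposite way; the poisoner is flagged and, over a few
# rounds, its weight shrinks to zero.
updates = np.array([[1.0, 0.0]] * 9 + [[-5.0, 0.0]])
flags = flag_suspicious(updates)
weights = np.full(10, 0.1)
for _ in range(2):
    weights = adjust_weights(weights, flags)
```

Applying the shrink over successive rounds, rather than discarding a client on the first flag, matches the abstract's "gradual" reduction and limits the damage from a one-off false positive on an honest client.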

Key words: federated learning, poisoning attack, anomaly detection, multi-view model evaluation, adaptive adjustment

