Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (3): 798-808. DOI: 10.11772/j.issn.1001-9081.2025030357
Enkang XI1,2,3, Jing FAN1,2,3, Yadong JIN1,2,3, Hua DONG1,2,3, Hao YU1,2,3, Yihang SUN1,2,3
Received: 2025-04-08
Revised: 2025-05-12
Accepted: 2025-05-16
Online: 2025-05-27
Published: 2026-03-10
Contact: Jing FAN
About author: XI Enkang, born in 2000 in Zaozhuang, Shandong, M. S. candidate, CCF member. His research interests include federated learning and privacy security.
Abstract:
As a new form of distributed machine learning, federated learning shows promise for breaking down data silos and protecting privacy, yet it faces potential privacy and security threats. This article therefore systematically reviews frontier research on privacy and security in federated learning: it explains the basic concepts and workflow of federated learning, and classifies its privacy and security issues on the basis of state-of-the-art results. First, privacy threats in federated learning are analyzed and the corresponding privacy protection methods are summarized. Second, security threats are surveyed and defenses against the corresponding attacks are introduced. Finally, open challenges for federated learning are discussed, and, regarding the application of Large Language Models (LLMs) such as ChatGPT and DeepSeek in federated learning, the computational-efficiency bottlenecks and privacy-leakage challenges brought by LLMs are further explored.
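The federated learning workflow reviewed here, in which clients train locally and a central server aggregates their model updates without ever seeing the raw data, can be sketched minimally as follows. This is an illustrative FedAvg-style weighted average over a one-dimensional linear model, not the implementation of any surveyed scheme; the data, learning rate, and round counts are assumptions for demonstration.

```python
# Minimal FedAvg sketch: each client trains on its own private data,
# and only model parameters (never raw samples) reach the server.

def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local training round by per-sample gradient descent."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y      # prediction error on one sample
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def fedavg(client_updates, client_sizes):
    """Server-side aggregation: average weights, weighted by dataset size."""
    total = sum(client_sizes)
    w = sum(u[0] * n for u, n in zip(client_updates, client_sizes)) / total
    b = sum(u[1] * n for u, n in zip(client_updates, client_sizes)) / total
    return (w, b)

# Two clients whose private data both follow the rule y = 2x + 1.
clients = [
    [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)],
    [(3.0, 7.0), (4.0, 9.0)],
]
global_model = (0.0, 0.0)
for _ in range(50):                    # communication rounds
    updates = [local_update(global_model, d) for d in clients]
    global_model = fedavg(updates, [len(d) for d in clients])
```

After the rounds complete, the global model approaches the shared underlying rule even though neither client ever exposed its data.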
Enkang XI, Jing FAN, Yadong JIN, Hua DONG, Hao YU, Yihang SUN. Review of threats faced by federated learning in privacy and security field[J]. Journal of Computer Applications, 2026, 46(3): 798-808.
| Survey | Privacy threat definition/classification | Security threat definition/classification | Privacy protection classification/comparison | Security defense classification/comparison | Recency/sufficiency of literature |
|---|---|---|---|---|---|
| This paper | Clear definitions, fine-grained classification covering concrete threat scenarios | Explicit distinction between model-related and data-related attacks | Detailed analysis of protection methods for client-side, server-side, and collusion scenarios | Defenses organized by attack type, with their limitations discussed | Cites the latest literature, covering emerging directions such as quantum-computing threats |
| Ref. [ | Relatively simple classification; privacy and security threats not clearly distinguished; multi-party collusion not mentioned | Classified by the security triad; attack types not refined | Techniques listed without comparison, with emphasis on encryption | Defense techniques listed without classification; limitations not discussed | Literature up to 2023; frontier challenges not covered |
| Ref. [ | Relatively simple classification; privacy and security threats not clearly distinguished; multi-party collusion not mentioned | Divided into poisoning attacks, adversarial attacks, etc., without a systematic taxonomy | Techniques listed without comparison, with emphasis on encryption | Defense techniques listed without classification; limitations not discussed | Literature up to 2020; frontier challenges not covered |
| Ref. [ | Clear definitions, fine-grained classification; multi-party collusion discussed but not in depth | Relatively fine-grained classification with a proposed grading of attack strength | Only generic techniques discussed, without concrete application scenarios | Defenses organized by attack type, with their efficiency discussed | Literature up to 2023; frontier challenges not covered |
| Ref. [ | Relatively simple classification; privacy and security threats not clearly distinguished; multi-party collusion not mentioned | Relatively coarse classification, not systematic | Techniques listed without comparison, with emphasis on client-side noise mechanisms | Defense techniques listed without classification; limitations not discussed | Literature up to 2023; frontier challenges not covered |
Tab. 1 Comparison between this paper and other federated learning surveys
| Malicious adversary | Threat type | Reference | Advantages | Disadvantages |
|---|---|---|---|---|
| Data upload path | Extraction and reconstruction[ | Ref. [ | Lightweight implementation, easy to deploy | Noise reduces model accuracy |
| | | Ref. [ | Improves communication efficiency, reduces encryption overhead | Sparse compression loses information; high encryption complexity |
| | Privacy inference[ | Ref. [ | Balances privacy and model performance | Two-stage training adds computational latency |
| | | Ref. [ | Batched gradient encryption, markedly better communication efficiency | Depends on specific hardware, limited generality |
| | | Ref. [ | No noise addition needed, low loss of model accuracy | Increases client-side computation |
| | | Ref. [ | Stronger resistance to inference attacks | Manual parameter tuning, poor adaptability |
| | Server-side privacy leakage[ | Ref. [ | Decentralized verification, resists server attacks | Blockchain consensus adds system latency |
| | | Ref. [ | High security, decentralized aggregation | Requires matching client computing power |
| | | Ref. [ | Supports efficient verification on edge devices | Depends on a trusted execution environment |
| Model interaction process | Client-side GAN attack[ | Ref. [ | Highly targeted defense against GAN attacks | Feature obfuscation affects model training |
| | | Ref. [ | Lightweight defense, less information shared | Excessive compression loses key updates |
| | | Ref. [ | Dynamic tuning balances privacy and performance | Requires training a GAN locally, heavier client computation |
| | Multi-party collusion[ | Ref. [ | Multi-dimensional defense against collusion attacks | High computational overhead, reduced efficiency |
| | | Ref. [ | Supports tracing multi-party collusion | Requires predefined watermark rules, inefficient at scale |
Tab. 2 Privacy threats faced by federated learning and comparison of advantages/disadvantages of corresponding protection methods
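To make the client-side noise mechanisms compared in Tab. 2 concrete, the sketch below clips a client's model update to a bounded L2 norm and then adds Gaussian noise before upload, in the style of differentially private SGD. The clip norm and noise multiplier are illustrative assumptions, not parameters taken from any cited method.

```python
import math
import random

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, seed=None):
    """Clip a client's update to a bounded L2 norm, then add Gaussian noise.

    Bounding the norm limits any single client's influence on the aggregate;
    the added noise masks individual gradients against reconstruction and
    inference attacks, at the cost of some model accuracy.
    """
    rng = random.Random(seed)
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [v * scale for v in update]
    sigma = noise_multiplier * clip_norm   # noise scaled to the clip bound
    return [v + rng.gauss(0.0, sigma) for v in clipped]

# A raw update of norm 5 is clipped to norm 1, then perturbed before upload.
noisy = privatize_update([3.0, 4.0], clip_norm=1.0, noise_multiplier=0.5, seed=0)
```

The privacy/utility trade-off noted throughout Tab. 2 appears here directly: a larger noise multiplier hides more about the client's data but degrades the aggregated model more.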
| Threat category | Threat subtype | Attack/Defense | Core mechanism | Reference | Advantages | Disadvantages |
|---|---|---|---|---|---|---|
| Model-related | Model poisoning | Attack | Untargeted/targeted poisoning | Ref. [ | Highly stealthy; main-task accuracy preserved | Depends on model structure; high attack cost |
| | | | | Ref. [ | Uses noise to evade detection, highly stealthy | Depends on specific aggregation rules, limited generalization |
| | | Defense | Anomaly detection | Ref. [ | Improves robustness; works well for tree models | Increases model complexity, slows training |
| | | | Encrypted verification | Ref. [ | Detects malicious gradients; combines privacy protection with robustness | Relies on gradient-distribution assumptions; weak against new attacks |
| | Free-riding | Attack | Resource theft | Ref. [ | Complete theoretical framework with a proposed disguise strategy | Theoretical analysis only, no practical validation |
| | | | | Ref. [ | Introduces free-rider attacks on blockchain-based FL | No defense proposed, problem analysis only |
| | | Defense | Contribution evaluation | Ref. [ | Detects non-contributing nodes, improves fairness | Relies on client cooperation, higher communication overhead |
| | | | Weight-evolution frequency detection | Ref. [ | Real-time detection, lightweight scheme | Thresholds hard to tune; may misjudge normal nodes |
| | Backdoor poisoning | Attack | Model poisoning | Ref. [ | Dynamically adjusted triggers, flexible attacks | Depends on aggregation rules; fluctuating success rate |
| | | | | Ref. [ | Systematic summary of backdoor attacks and defenses | Performance trade-offs; dynamic detection difficult |
| | | Defense | Model distillation | Ref. [ | Attack success rate reduced to 5%; accuracy improved by 2% | Heavy blockchain overhead; depends on a clean dataset |
| | | | Attention mechanism | Ref. [ | Removes backdoor patterns without accuracy loss; attack success rate reduced to 3% | Heavy blockchain overhead; depends on a clean dataset |
| | Model theft | Attack | Model stealing | Ref. [ | Black-box API attack, broadly applicable | Efficiency limited by the API; depends on interface interaction |
| | | | | Ref. [ | Targets split architectures, highly stealthy | Requires client-server cooperation; limited scenarios |
| | | | | Ref. [ | Supports data theft from large language models | High attack cost |
| | | Defense | Parameter encryption | Ref. [ | Fine-grained protection, effective for heterogeneous data | High server-side aggregation complexity, limited scalability |
| | | | Noise addition | Ref. [ | Dynamic model protection with reduced computation | Relies on parameter-sensitivity analysis |
| | | | Variational encoder | Ref. [ | Lightweight defense; convergence delay < 10% | Noise must be balanced against model performance |
| Data-related | Data poisoning | Attack | Label flipping/data pollution | Ref. [ | Simple and effective; detectable at low malicious rates | Relies on data-distribution assumptions; weak in high dimensions |
| | | | | Ref. [ | Bilevel-programming optimization, marked attack effect | Needs global model information; high attack cost |
| | | Defense | PCA detection | Ref. [ | Lightweight defense, effective at low malicious participation rates | Effectiveness degrades on high-dimensional data |
| | | | Encrypted verification | Ref. [ | Protects data at the source; preserves data integrity | Relies on a trusted server; centralization risk |
| | GAN attack | Attack | Generating adversarial perturbations | Ref. [ | Server-side generation of adversarial samples, highly destructive | Requires server privileges; limited attack scenarios |
| | | | | Ref. [ | Effective cross-modal attack, specific to vertical FL | Requires training auxiliary models; heavy computation |
| | | Defense | Clean-label training | Ref. [ | Simple and effective; first choice for small and medium models | Depends on clean datasets; high storage cost |
| | | | DAE anomaly detection | Ref. [ | Real-time detection of anomalous data with a pre-trained DAE model | Weak generalization to novel attacks |
Tab. 3 Security threats faced by federated learning and corresponding defense methods
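Several defenses in Tab. 3 replace plain averaging with a robust aggregation rule so that a few poisoned updates cannot dominate the global model. The sketch below uses a coordinate-wise median as a generic stand-in; it is not a specific method from the table, only an assumed minimal example of the idea.

```python
def median_aggregate(updates):
    """Coordinate-wise median of client updates.

    Unlike the mean, an outlier in any single coordinate cannot drag the
    aggregate arbitrarily far, so a minority of poisoned clients is bounded.
    """
    dim = len(updates[0])
    agg = []
    for i in range(dim):
        col = sorted(u[i] for u in updates)
        n = len(col)
        mid = n // 2
        agg.append(col[mid] if n % 2 else (col[mid - 1] + col[mid]) / 2)
    return agg

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
poisoned = honest + [[100.0, -100.0]]   # one malicious client's update
robust = median_aggregate(poisoned)     # stays near [1.0, 1.0]
```

With plain averaging the malicious update would shift the aggregate by roughly 25 per coordinate here; the median keeps it close to the honest consensus, which is the property the anomaly-detection and robust-aggregation defenses in Tab. 3 build on.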
| [1] | LIM W Y B, LUONG N C, HOANG D T, et al. Federated learning in mobile edge networks: a comprehensive survey [J]. IEEE Communications Surveys and Tutorials, 2020, 22(3): 2031-2063. |
| [2] | CUSTERS B, SEARS A M, DECHESNE F, et al. EU personal data protection in policy and practice [M]. The Hague: T.M.C. Asser Press, 2019. |
| [3] | CAHILL K F, HARRIS D J, BROWNE M, et al. California Consumer Privacy Act: potential impact and key takeaways [J]. Intellectual Property and Technology Law Journal, 2018, 30(12): 11-18. |
| [4] | McMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data [C]// Proceedings of the 20th Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2017: 1273-1282. |
| [5] | ZHOU Y S, GAO J K, ZUO X J, et al. Secure federated learning scheme based on adaptive Byzantine defense [J]. Journal on Communications, 2024, 45(8): 166-179. |
| [6] | MU X T, CHENG K, SONG A X, et al. Privacy-preserving federated learning resistant to Byzantine attacks [J]. Chinese Journal of Computers, 2024, 47(4): 842-861. |
| [7] | XU W T, WANG B J, ZHU L X, et al. Multi-party co-governance prevention strategy for horizontal federated learning backdoors [J]. Computer Science, 2024, 51(11A): No.240100176. |
| [8] | XIONG S Q, HE D J, WANG Z D, et al. Review of federated learning and its security and privacy protection [J]. Computer Engineering, 2024, 50(5): 1-15. |
| [9] | ZHOU J, FANG G Y, WU N. Survey on security and privacy-preserving in federated learning [J]. Journal of Xihua University (Natural Science Edition), 2020, 39(4): 9-17. |
| [10] | XIAO X, TANG Z, XIAO B, et al. A survey on privacy and security issues in federated learning [J]. Chinese Journal of Computers, 2023, 46(5): 1019-1044. |
| [11] | CHEN X B, REN Z Q, ZHANG H Y. Review on security threats and defense measures in federated learning [J]. Journal of Computer Applications, 2024, 44(6): 1663-1672. |
| [12] | ZHU L, LIU Z, HAN S. Deep leakage from gradients [C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 14774-14784. |
| [13] | FREDRIKSON M, JHA S, RISTENPART T. Model inversion attacks that exploit confidence information and basic countermeasures [C]// Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2015: 1322-1333. |
| [14] | YIN H, MALLYA A, VAHDAT A, et al. See through gradients: image batch recovery via GradInversion [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 16332-16341. |
| [15] | YANG J, ZHENG J, ZHANG Z, et al. Security of federated learning for cloud-edge intelligence collaborative computing [J]. International Journal of Intelligent Systems, 2022, 37(11): 9290-9308. |
| [16] | XU X, LIU P, WANG W, et al. CGIR: conditional generative instance reconstruction attacks against federated learning [J]. IEEE Transactions on Dependable and Secure Computing, 2023, 20(6): 4551-4563. |
| [17] | SUN Y, YAN Y, CUI J, et al. Review of deep gradient inversion attacks and defenses in federated learning [J]. Journal of Electronics and Information Technology, 2024, 46(2): 428-442. |
| [18] | FU J, HONG Y, LING X, et al. Differentially private federated learning: a systematic review [EB/OL]. [2025-02-20]. |
| [19] | YANG W, BAI Y, RAO Y, et al. Privacy-preserving federated learning with homomorphic encryption and sparse compression[C]// Proceedings of the 4th International Conference on Computer Communication and Artificial Intelligence. Piscataway: IEEE, 2024: 192-198. |
| [20] | SHOKRI R, STRONATI M, SONG C, et al. Membership inference attacks against machine learning models [C]// Proceedings of the 2017 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2017: 3-18. |
| [21] | HU H, SALCIC Z, SUN L, et al. Source inference attacks in federated learning [C]// Proceedings of the 2021 IEEE International Conference on Data Mining. Piscataway: IEEE, 2021: 1102-1107. |
| [22] | SURI A, KANANI P, MARATHE V J, et al. Subject membership inference attacks in federated learning [EB/OL]. [2024-10-02]. |
| [23] | WANG Q, YIN H, CHEN T, et al. Fast-adapting and privacy-preserving federated recommender system [J]. The VLDB Journal, 2022, 31(5): 877-896. |
| [24] | MIRONOV I. Rényi differential privacy[C]// Proceedings of the IEEE 30th Computer Security Foundations Symposium. Piscataway: IEEE, 2017: 263-275. |
| [25] | ZHANG C, LI S, XIA J, et al. BatchCrypt: efficient homomorphic encryption for cross-silo federated learning [C]// Proceedings of the 2020 USENIX Annual Technical Conference. Berkeley: USENIX Association, 2020: 493-506. |
| [26] | AREVALO C A, NOORBAKHSH S L, DONG Y, et al. Task-agnostic privacy-preserving representation learning for federated learning against attribute inference attacks [C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 10909-10917. |
| [27] | XU Y, YIN M, FANG M, et al. Robust federated learning mitigates client-side training data distribution inference attacks[C]// Companion Proceedings of the ACM Web Conference 2024. New York: ACM, 2024: 798-801. |
| [28] | MELIS L, SONG C, DE CRISTOFARO E, et al. Exploiting unintended feature leakage in collaborative learning [C]// Proceedings of the 2019 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2019: 691-706. |
| [29] | PILLUTLA K, KAKADE S M, HARCHAOUI Z. Robust aggregation for federated learning [J]. IEEE Transactions on Signal Processing, 2022, 70: 1142-1154. |
| [30] | YIN X, ZHU Y, HU J. A comprehensive survey of privacy-preserving federated learning: a taxonomy, review, and future directions [J]. ACM Computing Surveys, 2022, 54(6): No.131. |
| [31] | MIAO Y, LIU Z, LI H, et al. Privacy-preserving Byzantine-robust federated learning via blockchain systems [J]. IEEE Transactions on Information Forensics and Security, 2022, 17: 2848-2861. |
| [32] | ZHAO J, ZHU H, WANG F, et al. PVD-FL: a privacy-preserving and verifiable decentralized federated learning framework [J]. IEEE Transactions on Information Forensics and Security, 2022, 17: 2059-2073. |
| [33] | ZHOU H, YANG G, HUANG Y, et al. Privacy-preserving and verifiable federated learning framework for edge computing [J]. IEEE Transactions on Information Forensics and Security, 2023, 18: 565-580. |
| [34] | HAYES J, MELIS L, DANEZIS G, et al. LOGAN: membership inference attacks against generative models [EB/OL]. [2024-08-10]. |
| [35] | SONG M, WANG Z, ZHANG Z, et al. Analyzing user-level privacy attack against federated learning [J]. IEEE Journal on Selected Areas in Communications, 2020, 38(10): 2430-2444. |
| [36] | LUO X, ZHANG X. Exploiting defenses against GAN-based feature inference attacks in federated learning [J]. ACM Transactions on Knowledge Discovery from Data, 2025, 19(3): No.78. |
| [37] | CAO H, ZHU Y, REN Y, et al. Prevention of GAN-based privacy inferring attacks towards federated learning [C]// Proceedings of the 2022 EAI International Conference on Collaborative Computing: Networking, Applications and Worksharing, LNICST 461. Cham: Springer, 2022: 39-54. |
| [38] | WU H, SHI L, YE J, et al. The client-level GAN-based data reconstruction attack and defense in clustered federated learning[C]// Proceedings of the 2024 International Conference on Wireless Artificial Intelligent Computing Systems and Applications, LNCS 14997. Cham: Springer, 2025: 466-478. |
| [39] | YANG R, AU M H, LAI J, et al. Collusion resistant watermarking schemes for cryptographic functionalities [C]// Proceedings of the 2019 International Conference on the Theory and Application of Cryptology and Information Security, LNCS 11921. Cham: Springer, 2019: 371-398. |
| [40] | XIAO X, TANG Z, LI C, et al. SCA: Sybil-based collusion attacks of IIoT data poisoning in federated learning [J]. IEEE Transactions on Industrial Informatics, 2023, 19(3): 2608-2618. |
| [41] | ZHANG F, HUANG H, CHEN Z, et al. Robust and privacy-preserving federated learning with distributed additive encryption against poisoning attacks [J]. Computer Networks, 2024, 245: No.110383. |
| [42] | LUO Y, LI Y, QIN S, et al. Copyright protection framework for federated learning models against collusion attacks [J]. Information Sciences, 2024, 680: No.121161. |
| [43] | ZHOU X, XU M, WU Y, et al. Deep model poisoning attack on federated learning [J]. Future Internet, 2021, 13(3): No.73. |
| [44] | YANG M, CHENG H, CHEN F, et al. Model poisoning attack in differential privacy-based federated learning [J]. Information Sciences, 2023, 630: 158-172. |
| [45] | XIE L, LIU J, LU S, et al. An efficient learning framework for federated XGBoost using secret sharing and distributed optimization [J]. ACM Transactions on Intelligent Systems and Technology, 2022, 13(5): No.77. |
| [46] | YAZDINEJAD A, DEHGHANTANHA A, KARIMIPOUR H, et al. A robust privacy-preserving federated learning model against model poisoning attacks [J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 6693-6708. |
| [47] | FRABONI Y, VIDAL R, LORENZI M. Free-rider attacks on model aggregation in federated learning [C]// Proceedings of the 24th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2021: 1846-1854. |
| [48] | ZHANG L, QIN S, FENG G, et al. Decentralized federated learning under free-riders: credibility analysis [C]// Proceedings of the 2024 IEEE Conference on Computer Communications Workshops. Piscataway: IEEE, 2024: 1-6. |
| [49] | WANG J, CHANG X, MIŠIĆ J, et al. PASS: a parameter audit-based secure and fair federated learning scheme against free-rider attack [J]. IEEE Internet of Things Journal, 2024, 11(1): 1374-1384. |
| [50] | CHEN J, LI M, LIU T, et al. Rethinking the defense against free-rider attack from the perspective of model weight evolving frequency [J]. Information Sciences, 2024, 668: No.120527. |
| [51] | HUANG A. Dynamic backdoor attacks against federated learning [EB/OL]. [2024-12-04]. |
| [52] | LIU J L, GUO Y M, LAO M R, et al. Survey of backdoor attack and defense algorithms based on federated learning [J]. Journal of Computer Research and Development, 2024, 61(10): 2607-2626. |
| [53] | WAN C, WANG Y, XU J, et al. Research on privacy protection in federated learning combining distillation defense and blockchain[J]. Electronics, 2024, 13(4): No.679. |
| [54] | ZHANG J, ZHU C, GE C, et al. BadCleaner: defending backdoor attacks in federated learning via attention-based multi-teacher distillation [J]. IEEE Transactions on Dependable and Secure Computing, 2024, 21(5): 4559-4573. |
| [55] | TRAMÈR F, ZHANG F, JUELS A, et al. Stealing machine learning models via prediction APIs [C]// Proceedings of the 25th USENIX Security Symposium. Berkeley: USENIX Association, 2016: 601-618. |
| [56] | LI J, RAKIN A S, CHEN X, et al. Model extraction attacks on split federated learning [EB/OL]. [2024-10-13]. |
| [57] | DAI C, LU L, ZHOU P. Stealing training data from large language models in decentralized training through activation inversion attack[C]// Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2025: 14539-14551. |
| [58] | ISSA W, MOUSTAFA N, TURNBULL B, et al. RVE-PFL: robust variational encoder-based personalized federated learning against model inversion attacks [J]. IEEE Transactions on Information Forensics and Security, 2024, 19: 3772-3787. |
| [59] | JIN W, YAO Y, HAN S, et al. FedML-HE: an efficient homomorphic-encryption-based privacy-preserving federated learning system [EB/OL]. [2024-08-17]. |
| [60] | KWON D H, CHOI B J. Noise-based global model protection for defending against model theft attacks in federated learning [C]// Proceedings of the 2024 Annual Conference of KIPS. Seoul: Korea Information Processing Society, 2024: 709-710. |
| [61] | TOLPEGIN V, TRUEX S, GURSOY M E, et al. Data poisoning attacks against federated learning systems [C]// Proceedings of the 25th European Symposium on Research in Computer Security, LNCS 12308. Cham: Springer, 2020: 480-501. |
| [62] | SUN G, CONG Y, DONG J, et al. Data poisoning attacks on federated machine learning [J]. IEEE Internet of Things Journal, 2022, 9(13): 11365-11375. |
| [63] | JODAYREE M, HE W, JANICKI R. Preventing image data poisoning attacks in federated machine learning by an encrypted verification key [J]. Procedia Computer Science, 2023, 225: 2723-2732. |
| [64] | WANG Z, SONG M, ZHANG Z, et al. Beyond inferring class representatives: user-level privacy leakage from federated learning[C]// Proceedings of the 2019 IEEE Conference on Computer Communications. Piscataway: IEEE, 2019: 2512-2520. |
| [65] | CHEN X, ZAN D, LI W, et al. A GAN-based data poisoning framework against anomaly detection in vertical federated learning[C]// Proceedings of the 2024 IEEE International Conference on Communications. Piscataway: IEEE, 2024: 3982-3987. |
| [66] | PSYCHOGYIOS K, VELIVASSAKI T H, BOUROU S, et al. GAN-driven data poisoning attacks and their mitigation in federated learning systems [J]. Electronics, 2023, 12(8): No.1805. |
| [67] | YANG J, ZHANG W, GUO Z, et al. TrustDFL: a blockchain-based verifiable and trusty decentralized federated learning framework [J]. Electronics, 2023, 13(1): No.86. |
| [68] | GURUNG D, POKHREL S R, LI G. Performance analysis and evaluation of post quantum secure blockchained federated learning[J]. Computer Networks, 2024, 255: No.110849. |
| [69] | KOUTSOUBIS N, YILMAZ Y, RAMACHANDRAN R P, et al. Privacy preserving federated learning in medical imaging with uncertainty estimation [EB/OL]. [2024-10-04]. |
| [70] | GUPTA S, SUTAR V, SINGH V, et al. FedAlign: federated domain generalization with cross-client feature alignment [EB/OL]. [2025-03-19]. |