Journal of Computer Applications (《计算机应用》) ›› 2025, Vol. 45 ›› Issue (10): 3221-3230. DOI: 10.11772/j.issn.1001-9081.2024101505
• Cyberspace Security •

Survey of federated learning based on differential privacy
Shufen ZHANG1,2,3, Benjian TANG1,2,3, Zikun TIAN1,2,3, Xiaoyang QING1,2,3

Received: 2024-10-23
Revised: 2025-06-04
Accepted: 2025-06-09
Online: 2025-06-13
Published: 2025-10-10
Contact: Benjian TANG
About author: ZHANG Shufen, born in 1972, M. S., professor, senior member of CCF. Her research interests include cloud computing, intelligent information processing, data security, and privacy protection.
Abstract: With the rapid development of artificial intelligence, the risk of user privacy leakage grows increasingly severe. Differential privacy is a key privacy-preserving technique that prevents the disclosure of personal information by injecting noise into data, while federated learning (FL) allows models to be trained jointly without exchanging data, protecting data security. In recent years, combining differential privacy with FL has made it possible to exploit the strengths of both: differential privacy guarantees privacy protection during data use, while FL improves model generalization and efficiency through distributed training. Addressing the privacy and security issues of FL, this survey first systematically summarizes and compares the latest research progress on differential-privacy-based FL, covering different differential privacy mechanisms, FL algorithms, and application scenarios; it then focuses on how differential privacy is applied within FL — including data aggregation, gradient descent, and model training — and analyzes the advantages and disadvantages of each technique; finally, it summarizes in detail the current challenges and future directions of this field.
Citation: ZHANG Shufen, TANG Benjian, TIAN Zikun, QING Xiaoyang. Survey of federated learning based on differential privacy[J]. Journal of Computer Applications, 2025, 45(10): 3221-3230.
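The abstract highlights gradient descent as one of the main points where differential privacy enters FL training. As a minimal, self-contained sketch in the style of DP-SGD by Abadi et al. [8] — not the specific algorithm of any single surveyed paper — the following Python function performs per-example gradient clipping followed by calibrated Gaussian noise; the batch size, clipping bound, and noise multiplier are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray, lr: float = 0.1,
                clip_norm: float = 1.0, noise_multiplier: float = 1.0) -> np.ndarray:
    """One DP-SGD update: clip each example's gradient, average, add noise.

    per_example_grads: shape (batch_size, num_params).
    Returns the noisy update to subtract from the model weights.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # L2-clip each per-example gradient so its sensitivity is at most clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean_grad = clipped.mean(axis=0)
    # Gaussian noise scaled to the clipped sensitivity, averaged over the batch.
    sigma = noise_multiplier * clip_norm / per_example_grads.shape[0]
    return lr * (mean_grad + np.random.normal(0.0, sigma, size=mean_grad.shape))

# Example: a batch of 32 per-example gradients for a 100-parameter model.
grads = np.random.randn(32, 100)
update = dp_sgd_step(grads)
```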
Tab. 1 Comparison of research schemes

| Scheme | Privacy goal | Core optimization focus | Computational complexity | Communication overhead | Attack resistance | Typical layer | Strengths | Weaknesses | Application scenarios |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Differential privacy | Data indistinguishability | Noise control and budget allocation | O(n) (linear) | Low (only noise parameters transmitted) | Resists membership inference attacks | Data/model layer | Quantifiable privacy strength | Reduced model accuracy | Medical image analysis |
| Homomorphic encryption | Privacy of data in transit | Improving computational efficiency | O(n log n) (polynomial) | High (ciphertexts must be transmitted) | Resists eavesdropping and man-in-the-middle attacks | Transport layer | End-to-end computation over ciphertexts | Very high computational cost | Joint financial risk control |
| Secure multi-party computation | Privacy of multi-party inputs | Reducing communication overhead | O(n²) (interactive) | Very high (frequent multi-party interaction) | Resists collusion attacks | Protocol layer | No centralization risk | Significant communication latency | Cross-enterprise business data analysis |
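The "quantifiable privacy strength" that sets differential privacy apart in Tab. 1 comes from calibrating noise to a query's sensitivity and privacy budget ε. A minimal sketch, assuming a simple counting query with sensitivity 1 (both the query and the ε value are illustrative):

```python
import numpy as np

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """epsilon-DP release: Laplace noise with scale b = sensitivity / epsilon."""
    return true_answer + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Counting query over a toy dataset: adding or removing one record changes
# the count by at most 1, so sensitivity = 1.
records = [23, 35, 41, 29, 52]
print(laplace_mechanism(true_answer=len(records), sensitivity=1.0, epsilon=0.5))
```

A smaller ε forces a larger noise scale, which is exactly the accuracy-for-privacy trade-off listed under "Weaknesses" above.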
Tab. 2 Three types of security threats and their defense measures in federated learning

| Type | Attack methods | Description | Defense measures |
| --- | --- | --- | --- |
| Poisoning | Data poisoning; model poisoning | In poisoning attacks, malicious participants manipulate a machine learning model's predictions by attacking the training dataset during training or retraining | Parameter detection; robust principal component regression |
| Adversarial | Adversarial attacks; adversarial example attacks | Adversarial attacks craft malicious input samples that cause the model to output wrong results with high confidence | Adversarial training; data compression; adversarial example detection |
| Privacy | Model extraction attacks; model inversion attacks | In model extraction, the attacker exploits the model's inputs and outputs to steal its parameters and hyperparameters; in model inversion, the attacker analyzes the model's output behavior and input-output pairs to infer its internal structure, parameters, or training data | Differential privacy; homomorphic encryption |
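To make the differential privacy defense row of Tab. 2 concrete at the FL client level, the sketch below clips a client's model update and perturbs it with Gaussian noise before upload, so that neither the server nor an eavesdropper sees the exact update that model inversion would exploit; the clipping bound and noise multiplier are illustrative assumptions, not values from any surveyed scheme.

```python
import numpy as np

def privatize_client_update(update: np.ndarray, clip_norm: float = 1.0,
                            noise_multiplier: float = 1.1) -> np.ndarray:
    """Bound the update's L2 sensitivity by clipping, then add Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = noise_multiplier * clip_norm  # noise scale tied to the clipped sensitivity
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

local_delta = np.random.randn(10)  # stand-in for a flattened local model delta
print(privatize_client_update(local_delta))
```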
Tab. 3 Comparison of adaptive algorithms and traditional SGD on the WRN-28-10 network structure (unit: %)

| Optimizer | Test accuracy | Top-1 accuracy | Top-5 accuracy |
| --- | --- | --- | --- |
| SGD | 82.95±0.28 | 61.33±0.11 | 83.52±0.14 |
| SWA | 80.16±0.16 | 62.32±0.13 | 84.23±0.05 |
| Adam(W) | 82.95±0.28 | 55.51±0.19 | 79.09±0.33 |
| Padam(W) | 82.37±0.35 | 59.65±0.17 | 81.74±0.16 |
| Gadam | 82.13±0.20 | 60.50±0.19 | 82.56±0.13 |
| GadamX | 83.27±0.11 | 63.04±0.06 | 84.75±0.03 |
Tab. 4 Experimental result comparison of various model optimization methods

| Method | Dataset | Data sources | Iterations | Metrics and results | Notes |
| --- | --- | --- | --- | --- | --- |
| RASE | REFIT | 5 | 10 | Mean absolute error drops from 1 216.4 to 60.9; mean squared error drops from 1.7×10⁶ to 0.3×10⁶ | Compared with baselines BR, Laplace-Fisher, and BR-mallows |
| SCDA | Sensor | 100 | 30 | Error reduced to 0.000 1 | 100 nodes over 30 iterations |
| PPAC | Agents | — | — | Converges to the average in the mean-square sense | Maximum-likelihood estimation of the covariance matrix |
| SPoFC | PAMAP2 | 9 | 10 | Mean squared error reduced by 260 and 100, respectively | Compared with Baseline and EPD (Extreme Point with DJW) |
| GFL-LFF | MNIST, CIFAR-10, LFW | — | 350 | Throughput improved by 2, 3, 5, and 7 percentage points; latency reduced by 1.1, 1.5, 2.3, and 2.7 s | Compared with FedOpt (Federated Optimization), DPPDA (Decentralized Privacy-Preserving Data Aggregation), PPDAFL (Privacy-Preserving Data Aggregation based on FL), and PPDC (Privacy-Preserving Distinct Counting) |
| DPDR | MNIST, CIFAR-10, SVHN | — | 20 | Accuracy improved by 2 percentage points | Compared with DP-SGD |
| BR-DP | Adult | 15 | 1 000 | Higher query acceptance rate under the same privacy budget | Compared with the Laplacian mechanism, subsampled BR-DP mechanism, and Gaussian mechanism |
| | Gowalla | 1 | | | |
| IDP-SGD | MNIST | — | 80 | Accuracy improved by 1.06 and 1.03 percentage points with the Sample and Scale methods, respectively | Uses the Sample and Scale mechanisms, compared with standard DP-SGD |
| | SVHN | | 30 | Accuracy improved by 2.63 and 2.31 percentage points with Sample and Scale, respectively | |
| | CIFAR-10 | | 30 | Accuracy improved by 5.09 and 5.26 percentage points with Sample and Scale, respectively | |
| Gadam, GadamX | CIFAR-10 | — | 100 | 94.88% test accuracy with the VGG-16 architecture | Gadam and GadamX compared with other baseline algorithms (e.g., SGD, Adam, SWA) |
| | CIFAR-100 | | 100 | 77.22% test accuracy with the VGG-16 architecture | |
| | ImageNet | | 50 | 84.75% Top-1 accuracy | |
| | PTB | | 50 | 58.77% test accuracy | |
| Multistage QHM | CIFAR-10 | — | 75 | Test accuracy ranges from 66.0% to 87.3% across initial learning rates | Compares test-set accuracy under different hyperparameter scheduling strategies |
| aSGD | MNIST | — | 100 | 98.5% accuracy; training time reduced by 15% | Compared with traditional SGD |
| RDP-LDA | KOS, NIPS | 2 | 50 | JS divergence used to measure the similarity between two probability distributions | Compares the Gaussian and truncated Gaussian mechanisms for parameter estimation |
| Fed-MPS | CIFAR-10, MNIST, FMNIST | — | — | Selects the parameters that enhance model accuracy, improving the final model | Adds noise only to selected parameters, reducing privacy budget consumption |
| HME, HLR | SNP, CA Housing | — | 100 | Error reduced by 2.3 percentage points | Error of the proposed algorithms compared with the non-private OLS estimator |
| f-DP | SST-2 | — | 3 | Model accuracy improved by 18.3 percentage points | Compares standard noise calibration with direct calibration |
| | CIFAR-10 | — | 100 | Model accuracy improved by 12.8 percentage points | |
| DPAdapter | CIFAR-100 | — | 1 000 | Average accuracy improved by 4.17 percentage points | Compared with SAM (Sharpness-Aware Minimization) under different privacy budgets |
| | CIFAR-10 | | 100 | | |
| | SVHN | | 100 | | |
| | STL-10 | | 100 | | |
| PDP-FL | MNIST | — | — | Classification accuracy improved by 0.7 percentage points | Compared with LDP-Fed (Federated learning with Local Differential Privacy) and GDP-FL (FL with Global Differential Privacy) under different privacy budget scenarios |
| | CIFAR-10 | — | — | Classification accuracy improved by 1.8 percentage points | |
| NbAFL | MNIST | 50 | 25 | Loss value shows a downward trend across different aggregation times | Compared with the non-private method |
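Many of the results above trade accuracy against a privacy budget consumed over tens to thousands of iterations. Under basic sequential composition — the worst-case bound, not the tighter accountants some of these works use — per-round budgets simply add up; a minimal sketch:

```python
def basic_composition(eps_per_round: float, delta_per_round: float,
                      rounds: int) -> tuple[float, float]:
    """Worst-case (epsilon, delta) after `rounds` sequential DP releases."""
    return eps_per_round * rounds, delta_per_round * rounds

# Example: 20 training rounds at (0.1, 1e-6)-DP each -> (2.0, 2e-5)-DP overall.
print(basic_composition(0.1, 1e-6, 20))
```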
Tab. 5 Analysis of model optimization methods

| Method | Motivation | Strengths | Limitations | Suitable scenarios |
| --- | --- | --- | --- | --- |
| RASE | IoT privacy protection | Low complexity | Resource constrained | Sensitive data aggregation |
| SCDA | Privacy-preserving aggregation in self-organizing networks | No central controller needed | Higher communication latency | Resource-constrained environments |
| PPAC | Privacy preservation for distributed consensus | Exact average consensus | Algorithmic complexity | Privacy-preserving average computation |
| SPoFC | Privacy-preserving aggregation of streaming data | Fewer perturbed data points | Degrades when the data stream fluctuates | Wearable-device data aggregation |
| GFL-LFF | Balancing data sharing and privacy | Stronger feature representation | High computational cost, communication latency | Industrial IoT data aggregation |
| DPDR | Improving privacy-budget efficiency | Less noise injection, faster convergence | Relatively complex to implement and tune | Early-stage deep learning training with high privacy demands |
| BR-DP | Dynamic privacy-utility balance | Adapts to multiple DP mechanisms | Increased noise variance | Repeated queries with high privacy requirements |
| IDP-SGD | Personalized privacy requirements | Flexible privacy-budget allocation | Challenge of keeping budgets confidential | Users with widely varying privacy sensitivity |
| Gadam, GadamX | Better generalization performance | Less tuning, better convergence | High computational cost, relies on regularization | Large-scale image classification tasks |
| Multistage QHM | Optimizing hyperparameter schedules | Simplifies tuning and improves generalization | High complexity, extra tuning needed | Hyperparameter optimization for centralized/distributed SGD |
| aSGD | Correcting the bias of traditional SGD | Faster training and higher accuracy | Depends on the ReLU activation function | No extra hyperparameter tuning required |
| RDP-LDA | Stronger privacy for LDA models | Prevents membership inference attacks | Heavy computational demands | LDA model parameter estimation |
| Fed-MPS | Optimization for resource-constrained environments | Less noise, higher efficiency | High computational cost | Resource-constrained environments |
| HME | Trading off privacy and statistical accuracy | Optimal convergence rates | Parameters need tuning | High/low-dimensional differentially private statistical estimation |
| f-DP | Calibrating noise directly to attack risk | Lower noise requirements | Depends on accurate attack-risk assessment | High attack-risk scenarios |
| DPAdapter | Reducing the performance loss caused by noise | More robust parameters | Sensitive to parameter choice | Robustness under noise and transfer learning |
| PDP-FL | Personalized privacy-budget allocation | Supports global privacy quantification | Limited flexibility in privacy tiers | Users with widely varying privacy needs |
| NbAFL | Client-side privacy protection | Flexible K-client scheduling, better efficiency | Excessive noise hurts the model | Privacy-sensitive multi-party collaboration |
References

[1] HARD A, RAO K, MATHEWS R, et al. Federated learning for mobile keyboard prediction[EB/OL]. [2024-10-01].
[2] WU J Y, LI X H. Segmental tailoring federated learning algorithm based on differential privacy[J]. Application Research of Computers, 2024, 41(5): 1532-1537. (in Chinese)
[3] MADNI H A, UMER R M, FORESTI G L. Exploiting data diversity in multi-domain federated learning[J]. Machine Learning: Science and Technology, 2024, 5(2): No.025041.
[4] KONEČNÝ J, McMAHAN H B, YU F X, et al. Federated learning: strategies for improving communication efficiency[EB/OL]. [2024-10-01].
[5] SWEENEY L. k-anonymity: a model for protecting privacy[J]. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2002, 10(5): 557-570.
[6] DWORK C. Differential privacy[C]// Proceedings of the 2006 International Colloquium on Automata, Languages, and Programming, LNCS 4052. Berlin: Springer, 2006: 1-12.
[7] BOGETOFT P, CHRISTENSEN D L, DAMGÅRD I, et al. Secure multiparty computation goes live[C]// Proceedings of the 2009 International Conference on Financial Cryptography and Data Security, LNCS 5628. Berlin: Springer, 2009: 325-343.
[8] ABADI M, CHU A, GOODFELLOW I, et al. Deep learning with differential privacy[C]// Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2016: 308-318.
[9] ZHU L, LIU Z, HAN S. Deep leakage from gradients[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 14774-14784.
[10] PAILLIER P. Public-key cryptosystems based on composite degree residuosity classes[C]// Proceedings of the 1999 International Conference on the Theory and Applications of Cryptographic Techniques, LNCS 1592. Berlin: Springer, 1999: 223-238.
[11] CHEON J H, KIM A, KIM M, et al. Homomorphic encryption for arithmetic of approximate numbers[C]// Proceedings of the 2017 International Conference on the Theory and Applications of Cryptology and Information Security, LNCS 10624. Cham: Springer, 2017: 409-437.
[12] MOHASSEL P, RINDAL P. ABY3: a mixed protocol framework for machine learning[C]// Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2018: 35-52.
[13] GEYER R C, KLEIN T, NABI M. Differentially private federated learning: a client level perspective[EB/OL]. [2024-10-01].
[14] ZHOU P, FANG P, HUI P. Loss tolerant federated learning[EB/OL]. [2024-10-01].
[15] DWORK C, McSHERRY F, NISSIM K, et al. Calibrating noise to sensitivity in private data analysis[C]// Proceedings of the 2006 Theory of Cryptography Conference, LNCS 3876. Berlin: Springer, 2006: 265-284.
[16] WANG D, LONG S. Boosting the accuracy of differentially private in weighted social networks[J]. Multimedia Tools and Applications, 2019, 78(24): 34801-34817.
[17] NISSIM K, RASKHODNIKOVA S, SMITH A. Smooth sensitivity and sampling in private data analysis[C]// Proceedings of the 39th Annual ACM Symposium on Theory of Computing. New York: ACM, 2007: 75-84.
[18] HE Y Z, HU X B, HE J W, et al. Privacy and security issues in machine learning systems: a survey[J]. Journal of Computer Research and Development, 2019, 56(10): 2049-2070. (in Chinese)
[19] BIGGIO B, NELSON B, LASKOV P. Poisoning attacks against support vector machines[C]// Proceedings of the 29th International Conference on Machine Learning. Madison, WI: Omnipress, 2012: 1467-1474.
[20] RUBINSTEIN B I P, NELSON B, HUANG L, et al. ANTIDOTE: understanding and defending against poisoning of anomaly detectors[C]// Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement. New York: ACM, 2009: 1-14.
[21] ZHOU X, XU M, WU Y, et al. Deep model poisoning attack on federated learning[J]. Future Internet, 2021, 13(3): No.73.
[22] HOSSAIN M T, ISLAM S, BADSHA S, et al. DeSMP: differential privacy-exploited stealthy model poisoning attacks in federated learning[C]// Proceedings of the 17th International Conference on Mobility, Sensing and Networking. Piscataway: IEEE, 2021: 167-174.
[23] CAO X, GONG N Z. MPAF: model poisoning attacks to federated learning based on fake clients[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 3395-3403.
[24] ZHANG J, CHEN J, WU D, et al. Poisoning attack in federated learning using generative adversarial nets[C]// Proceedings of the 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/13th IEEE International Conference on Big Data Science and Engineering. Piscataway: IEEE, 2019: 374-380.
[25] JIANG W, LI H, LIU S, et al. A flexible poisoning attack against machine learning[C]// Proceedings of the 2019 IEEE International Conference on Communications. Piscataway: IEEE, 2019: 1-6.
[26] CHEN X, LIU C, LI B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[EB/OL]. [2024-10-01].
[27] BAGDASARYAN E, VEIT A, HUA Y, et al. How to backdoor federated learning[C]// Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2020: 2938-2948.
[28] LIM W Y B, LUONG N C, HOANG D T, et al. Federated learning in mobile edge networks: a comprehensive survey[J]. IEEE Communications Surveys and Tutorials, 2020, 22(3): 2031-2063.
[29] BHAGOJI A N, CHAKRABORTY S, MITTAL P, et al. Analyzing federated learning through an adversarial lens[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 634-643.
[30] WANG H, SREENIVASAN K, RAJPUT S, et al. Attack of the tails: yes, you really can backdoor federated learning[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 16070-16084.
[31] MHAMDI E M EL, GUERRAOUI R, ROUAULT S. The hidden vulnerability of distributed learning in Byzantium[C]// Proceedings of the 35th International Conference on Machine Learning. New York: JMLR.org, 2018: 3521-3530.
[32] FANG M, CAO X, JIA J, et al. Local model poisoning attacks to Byzantine-robust federated learning[C]// Proceedings of the 29th USENIX Security Symposium. Berkeley: USENIX Association, 2020: 1623-1640.
[33] SHEJWALKAR V, HOUMANSADR A. Manipulating the Byzantine: optimizing model poisoning attacks and defenses for federated learning[C]// Proceedings of the 2021 Network and Distributed Systems Security Symposium. Reston, VA: Internet Society, 2021: 1-18.
[34] XIE C, KOYEJO O, GUPTA I. Generalized Byzantine-tolerant SGD[EB/OL]. [2024-10-01].
[35] KURAKIN A, GOODFELLOW I J, BENGIO S. Adversarial examples in the physical world[M]// YAMPOLSKIY R V. Artificial intelligence safety and security. New York: Chapman and Hall/CRC, 2018: 99-112.
[36] PAPERNOT N, McDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]// Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security. New York: ACM, 2017: 506-519.
[37] LI X J, WU G W, YAO L, et al. Progress and future challenges of security attacks and defense mechanisms in machine learning[J]. Journal of Software, 2021, 32(2): 406-423. (in Chinese)
[38] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL]. [2024-10-01].
[39] PARK S, SO J. On the effectiveness of adversarial training in defending against adversarial example attacks for image classification[J]. Applied Sciences, 2020, 10(22): No.8079.
[40] TRAMÈR F, ZHANG F, JUELS A, et al. Stealing machine learning models via prediction APIs[C]// Proceedings of the 25th USENIX Security Symposium. Berkeley: USENIX Association, 2016: 601-618.
[41] ATENIESE G, MANCINI L V, SPOGNARDI A, et al. Hacking smart machines with smarter ones: how to extract meaningful data from machine learning classifiers[J]. International Journal of Security and Networks, 2015, 10(3): 137-150.
[42] KUSHILEVITZ E, MANSOUR Y. Learning decision trees using the Fourier spectrum[J]. SIAM Journal on Computing, 1993, 22(6): 1331-1348.
[43] COHN D, ATLAS L, LADNER R. Improving generalization with active learning[J]. Machine Learning, 1994, 15(2): 201-221.
[44] LOWD D, MEEK C. Adversarial learning[C]// Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2005: 641-647.
[45] FREDRIKSON M, LANTZ E, JHA S, et al. Privacy in pharmacogenetics: an end-to-end case study of personalized warfarin dosing[C]// Proceedings of the 23rd USENIX Security Symposium. Berkeley: USENIX Association, 2014: 17-32.
[46] FREDRIKSON M, JHA S, RISTENPART T. Model inversion attacks that exploit confidence information and basic countermeasures[C]// Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2015: 1322-1333.
[47] LIU C, LI B, VOROBEYCHIK Y, et al. Robust linear regression against training data poisoning[C]// Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2017: 91-102.
[48] BARACALDO N, CHEN B, LUDWIG H, et al. Mitigating poisoning attacks on machine learning models: a data provenance based approach[C]// Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2017: 103-110.
[49] MIYATO T, MAEDA S I, KOYAMA M, et al. Virtual adversarial training: a regularization method for supervised and semi-supervised learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(8): 1979-1993.
[50] LIANG B, LI H, SU M, et al. Detecting adversarial image examples in deep neural networks with adaptive noise reduction[J]. IEEE Transactions on Dependable and Secure Computing, 2021, 18(1): 72-85.
[51] ZANTEDESCHI V, NICOLAE M I, RAWAT A. Efficient defenses against adversarial attacks[C]// Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2017: 39-49.
[52] WU M. Research on adversarial samples attack defense based on PCA[D]. Haikou: Hainan University, 2020: 22-30. (in Chinese)
[53] PAPERNOT N, McDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]// Proceedings of the 2016 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2016: 582-597.
[54] ROSS A S, DOSHI-VELEZ F. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2018: 1660-1669.
[55] MA X, LI B, WANG Y, et al. Characterizing adversarial subspaces using local intrinsic dimensionality[EB/OL]. [2024-10-01].
[56] BONAWITZ K, IVANOV V, KREUTER B, et al. Practical secure aggregation for privacy-preserving machine learning[C]// Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2017: 1175-1191.
[57] HESAMIFARD E, TAKABI H, GHASEMI M. CryptoDL: deep neural networks over encrypted data[EB/OL]. [2024-10-01].
[58] HAO M, LI H, XU G, et al. Towards efficient and privacy-preserving federated deep learning[C]// Proceedings of the 2019 IEEE International Conference on Communications. Piscataway: IEEE, 2019: 1-6.
[59] SUN M, DING X N, CHENG Q. Federated learning scheme based on differential privacy[J]. Computer Science, 2024, 51(6A): No.230600211. (in Chinese)
[60] LIU Y, XIONG L, LIU Y, et al. DPDR: gradient decomposition and reconstruction for differentially private deep learning[EB/OL]. [2024-10-01].
[61] BOENISCH F, MÜHL C, DZIEDZIC A, et al. Have it your way: individualized privacy assignment for DP-SGD[C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2023: 19073-19103.
[62] JIANG B, DU J, SHARMA S, et al. Budget recycling differential privacy[C]// Proceedings of the 2024 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2024: 1028-1046.
[63] OU L, QIN Z, LIAO S, et al. An optimal noise mechanism for cross-correlated IoT data releasing[J]. IEEE Transactions on Dependable and Secure Computing, 2021, 18(4): 1528-1540.
[64] CAO Y, YOSHIKAWA M, XIAO Y, et al. Quantifying differential privacy in continuous data release under temporal correlations[J]. IEEE Transactions on Knowledge and Data Engineering, 2019, 31(7): 1281-1295.
[65] WANG Z, TAO J, ZOU D. RASE: efficient privacy-preserving data aggregation against disclosure attacks for IoTs[EB/OL]. [2024-10-01].
[66] HE J, CAI L, CHENG P, et al. Consensus-based data-privacy preserving data aggregation[J]. IEEE Transactions on Automatic Control, 2019, 64(12): 5222-5229.
[67] MO Y, MURRAY R M. Privacy preserving average consensus[J]. IEEE Transactions on Automatic Control, 2017, 62(2): 753-765.
[68] YANG M, LAM K Y, ZHU T, et al. SPoFC: a framework for stream data aggregation with local differential privacy[J]. Concurrency and Computation: Practice and Experience, 2023, 35(5): No.e7572.
[69] REGAN R, JOSPHINELEELA R, KHAMRUDDIN M, et al. Balancing data privacy and sharing in IIoT: introducing the GFL-LFF aggregation algorithm[J]. Computer Networks, 2024, 247: No.110401.
[70] GRANZIOL D, BASKERVILLE N P, WAN X, et al. Iterative averaging in the quest for best test error[J]. The Journal of Machine Learning Research, 2024, 25(1): 1035-1089.
[71] SUN J, YANG Y, XUN G, et al. Scheduling hyperparameters to improve generalization: from centralized SGD to asynchronous SGD[J]. ACM Transactions on Knowledge Discovery from Data, 2023, 17(2): No.29.
[72] SHI H, YANG N, TANG H, et al. aSGD: stochastic gradient descent with adaptive batch size for every parameter[J]. Mathematics, 2022, 10(6): No.863.
[73] HUANG T, ZHAO S Y, CHEN H, et al. Improving parameter estimation and defensive ability of latent Dirichlet allocation model training under Rényi differential privacy[J]. Journal of Computer Science and Technology, 2022, 37(6): 1382-1397.
[74] JIANG S, WANG X, QUE Y, et al. Fed-MPS: federated learning with local differential privacy using model parameter selection for resource-constrained CPS[J]. Journal of Systems Architecture, 2024, 150: No.103108.
[75] CAI T T, WANG Y, ZHANG L. The cost of privacy: optimal rates of convergence for parameter estimation with differential privacy[J]. The Annals of Statistics, 2021, 49(5): 2825-2850.
[76] KULYNYCH B, GOMEZ J F, KAISSIS G, et al. Attack-aware noise calibration for differential privacy[C]// Proceedings of the 38th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2024: 34868-34901.
[77] WANG Z, ZHU R, ZHOU D, et al. DPAdapter: improving differentially private deep learning through noise tolerance pre-training[C]// Proceedings of the 33rd USENIX Security Symposium. Berkeley: USENIX Association, 2024: 991-1008.
[78] YIN C Y, QU R. Federated learning algorithm based on personalized differential privacy[J]. Journal of Computer Applications, 2023, 43(4): 1160-1168. (in Chinese)
[79] WEI K, LI J, DING M, et al. Federated learning with differential privacy: algorithms and performance analysis[J]. IEEE Transactions on Information Forensics and Security, 2020, 15: 3454-3469.