Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (1): 223-232. DOI: 10.11772/j.issn.1001-9081.2023010088
• Cyberspace Security •

Deep shadow defense scheme of federated learning based on generative adversarial network
Hui ZHOU1,2, Yuling CHEN1,2, Xuewei WANG3, Yangwen ZHANG1,2, Jianjiang HE1,2

Received: 2023-02-06
Revised: 2023-05-15
Accepted: 2023-05-16
Online: 2023-06-06
Published: 2024-01-10
Contact: Yuling CHEN
About author: ZHOU Hui, born in 1999 in Hengyang, Hunan, M. S. candidate, CCF member. His research interests include federated learning and artificial intelligence security.
Abstract:
Federated Learning (FL) enables users to share and exchange data among multiple parties without directly uploading their raw data, effectively reducing the risk of privacy leakage. However, existing studies show that an adversary can still reconstruct the raw data from the shared gradient information. To further protect the privacy of federated learning, a deep shadow defense scheme for federated learning based on Generative Adversarial Network (GAN) was proposed. First, a GAN was used to learn the distribution characteristics of the original real data and to generate substitutable shadow data. Then, the shadow data were used to train a shadow model that replaces the original model, so that the adversary cannot directly obtain the original model trained on real data. Finally, the shadow gradients produced by the shadow data on the shadow model were used in place of the real gradients, so that the adversary cannot obtain the real gradients. Experiments were conducted on the CIFAR10 and CIFAR100 datasets. Compared with five defense schemes (noise addition, gradient clipping, gradient compression, representation perturbation, and local regularization and sparsification), the proposed scheme achieves, on CIFAR10, a Mean Square Error (MSE) 1.18 to 5.34 times that of the compared schemes, a Feature Mean Square Error (FMSE) 4.46 to 1.03×10⁷ times that of the compared schemes, and a Peak Signal-to-Noise Ratio (PSNR) of 49.9% to 90.8% of that of the compared schemes; on CIFAR100, its MSE is 1.04 to 1.06 times, its FMSE 5.93 to 4.24×10³ times, and its PSNR 96.0% to 97.6% of those of the compared schemes. Compared with the existing deep shadow defense method, the proposed scheme takes the adversary's actual attack capability and the difficulties of shadow model training into account, designs a threat model and a shadow model generation algorithm, performs better in both theoretical analysis and experiments, and effectively reduces the privacy leakage risk of federated learning while preserving accuracy.
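To make the defense flow above concrete, the following is a minimal PyTorch-style sketch of one client-side round. It assumes a DCGAN-style generator with a 100-dimensional latent prior and uses the locally trained original model to label the shadow data; these choices, and every name in the snippet, are illustrative assumptions rather than the paper's released implementation.

```python
import torch
import torch.nn as nn

def local_shadow_update(generator: nn.Module, original_model: nn.Module,
                        shadow_model: nn.Module, num_shadow: int = 256,
                        device: str = "cpu"):
    """One hypothetical FedDSD-style local round: share shadow gradients only."""
    criterion = nn.CrossEntropyLoss()

    # 1) Sample shadow data from the GAN generator instead of using real images.
    z = torch.randn(num_shadow, 100, 1, 1, device=device)  # latent prior (assumed shape)
    with torch.no_grad():
        shadow_images = generator(z)
        # 2) Label the shadow data with the locally trained original model
        #    (one plausible labeling rule; the paper may use a different one).
        shadow_labels = original_model(shadow_images).argmax(dim=1)

    # 3) Compute gradients of the shadow model on the shadow data; these shadow
    #    gradients are what gets uploaded in place of the real gradients.
    shadow_model.zero_grad()
    loss = criterion(shadow_model(shadow_images), shadow_labels)
    loss.backward()
    return [p.grad.detach().clone() for p in shadow_model.parameters()]
```

Server-side aggregation would proceed as in ordinary federated learning, since only the uploaded gradients are replaced.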
Hui ZHOU, Yuling CHEN, Xuewei WANG, Yangwen ZHANG, Jianjiang HE. Deep shadow defense scheme of federated learning based on generative adversarial network[J]. Journal of Computer Applications, 2024, 44(1): 223-232.
Tab. 1 Common symbolic variables

| Symbol | Meaning | Symbol | Meaning |
| --- | --- | --- | --- |
|  | Real data |  | Federated learning model |
|  | Generator |  | Parameter weights |
|  | Discriminator |  | Shadow model |
|  | Shadow data |  | Real gradient |
|  | Fake data |  | Shadow gradient |
|  | Real data distribution |  | Fake gradient |
|  | Generated distribution |  |  |
Tab. 2 Parameter settings

| Parameter | Setting |
| --- | --- |
| Original model loss function | CrossEntropy |
| Shadow model loss function | CrossEntropy |
| GAN loss function | Binary CrossEntropy |
| Generator optimizer | Adam (adaptive moment estimation) |
| Discriminator optimizer | Adam (adaptive moment estimation) |
| Optimizer learning rate | lr=0.002 |
| Momentum factors | betas=(0.5, 0.999) |
| Learning rate scheduler of the original and shadow models | StepLR |
| Scheduler step interval | step_size=10 |
| Learning rate decay factor | gamma=0.5 |
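For reference, the settings in Tab. 2 map onto standard PyTorch components roughly as sketched below. The module definitions are placeholders (the paper's actual architectures are not shown on this page), and the optimizer type and base learning rate for the original and shadow classifiers are assumptions.

```python
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

# Placeholder networks; the paper's actual architectures are not reproduced here.
generator = nn.Sequential(nn.Linear(100, 3 * 32 * 32), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(3 * 32 * 32, 1), nn.Sigmoid())
original_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
shadow_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Loss functions listed in Tab. 2.
original_criterion = nn.CrossEntropyLoss()
shadow_criterion = nn.CrossEntropyLoss()
gan_criterion = nn.BCELoss()                 # binary cross-entropy for the GAN

# GAN optimizers: Adam with lr=0.002 and betas=(0.5, 0.999).
g_optimizer = optim.Adam(generator.parameters(), lr=0.002, betas=(0.5, 0.999))
d_optimizer = optim.Adam(discriminator.parameters(), lr=0.002, betas=(0.5, 0.999))

# StepLR schedulers for the original and shadow models: halve the learning rate
# every 10 epochs (step_size=10, gamma=0.5). SGD and its base lr are assumed.
original_optimizer = optim.SGD(original_model.parameters(), lr=0.01)
shadow_optimizer = optim.SGD(shadow_model.parameters(), lr=0.01)
original_scheduler = StepLR(original_optimizer, step_size=10, gamma=0.5)
shadow_scheduler = StepLR(shadow_optimizer, step_size=10, gamma=0.5)
```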
Tab. 3 Defense metrics of FedDSD

| Compared objects | MSE (CIFAR10) | FMSE (CIFAR10) | PSNR/dB (CIFAR10) | MSE (CIFAR100) | FMSE (CIFAR100) | PSNR/dB (CIFAR100) |
| --- | --- | --- | --- | --- | --- | --- |
| Real data vs. shadow data | 0.0005 | 8.5900 | 33.10 | 0.0002 | 14.10 | 26.60 |
| Shadow data vs. fake data | 0.1590 | 0.0181 | 8.00 | 0.1770 | 0.02 | 7.53 |
| Real data vs. fake data | 0.1590 | 8.2800 | 7.99 | 0.1930 | 14.20 | 7.15 |
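The MSE, FMSE, and PSNR values in Tab. 3 (and Tab. 5 below) can be reproduced with metric functions along the following lines. The FMSE definition used here (mean square error between feature embeddings rather than pixels) and the choice of feature extractor are assumptions, since the page does not spell them out.

```python
import torch
import torch.nn.functional as F

def mse(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Pixel-space mean square error between two image batches scaled to [0, 1]."""
    return F.mse_loss(x, y)

def psnr(x: torch.Tensor, y: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB; higher values mean a closer reconstruction."""
    return 10.0 * torch.log10(max_val ** 2 / F.mse_loss(x, y))

def fmse(x: torch.Tensor, y: torch.Tensor, feature_extractor: torch.nn.Module) -> torch.Tensor:
    """Assumed FMSE: mean square error between feature embeddings of the two batches."""
    return F.mse_loss(feature_extractor(x), feature_extractor(y))
```

Under this reading, a higher MSE/FMSE and a lower PSNR between the real data and the adversary's reconstruction indicate stronger privacy protection.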
Tab. 4 Accuracies of FedDSD on CIFAR10 and CIFAR100 datasets (unit: %)

| Dataset | Accuracy on real data | Accuracy on shadow data |
| --- | --- | --- |
| CIFAR10 | 97.87 | 97.22 |
| CIFAR100 | 92.24 | 91.59 |
Tab. 5 Effects of different defense schemes against DLG attacks

| Defense scheme | MSE (CIFAR10) | FMSE (CIFAR10) | PSNR/dB (CIFAR10) | MSE (CIFAR100) | FMSE (CIFAR100) | PSNR/dB (CIFAR100) |
| --- | --- | --- | --- | --- | --- | --- |
| Noise addition | 0.1560 | 0.0285 | 8.07 | 0.1850 | 0.0983 | 7.34 |
| Gradient clipping | 0.0354 | 0.0000* | 14.50 | 0.1900 | 2.3600 | 7.22 |
| Gradient compression | 0.1540 | 0.0012 | 8.12 | 0.1850 | 0.0033 | 7.32 |
| Representation perturbation | 0.1600 | 1.8800 | 7.97 | 0.1900 | 1.3700 | 7.23 |
| Local regularization and sparsification | 0.1400 | 1.7200 | 8.54 | 0.1890 | 1.1500 | 7.24 |
| FedDSD | 0.1890 | 8.3800 | 7.24 | 0.1970 | 14.0000 | 7.05 |
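Tab. 5 measures how well each defense resists DLG-style gradient inversion, in which the attacker optimizes dummy inputs and labels until their gradients match the observed ones. A minimal, illustrative version of that attack loop (not the exact implementation evaluated in the paper) might look as follows.

```python
import torch
import torch.nn as nn

def dlg_reconstruct(model: nn.Module, observed_grads, num_classes: int,
                    shape=(1, 3, 32, 32), steps: int = 300, lr: float = 0.1):
    """Minimal DLG-style reconstruction by gradient matching (illustrative only)."""
    criterion = nn.CrossEntropyLoss()
    dummy_x = torch.randn(shape, requires_grad=True)                  # dummy image
    dummy_y = torch.randn(shape[0], num_classes, requires_grad=True)  # dummy soft label
    optimizer = torch.optim.Adam([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(dummy_x), dummy_y.softmax(dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Minimize the distance between dummy gradients and the observed gradients.
        grad_dist = sum(((dg - og) ** 2).sum()
                        for dg, og in zip(dummy_grads, observed_grads))
        grad_dist.backward()
        optimizer.step()
    return dummy_x.detach()
```

Against FedDSD the observed gradients are shadow gradients, so even a perfect match would recover shadow data rather than the client's real data, which is consistent with the high MSE and low PSNR reported for FedDSD in Tab. 5.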
Tab. 6 Comparison of deep shadow defense method with FedDSD

| Defense measure | Deep shadow defense method | FedDSD |
| --- | --- | --- |
| Based on generative adversarial network | Yes | Yes |
| Threat model designed | No | Yes |
| Adversary attack paradigm considered | No | Yes |
| Shadow model algorithm designed | No | Yes |
| Evaluation metrics designed | No | Yes |
| Experimental validation | No | Yes |