[1] YANG Q, LIU Y, CHEN T, et al. Federated machine learning: concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): No.12.
[2] 肖雄,唐卓,肖斌,等. 联邦学习的隐私保护与安全防御研究综述[J]. 计算机学报, 2023, 46(5): 1019-1044.
XIAO X, TANG Z, XIAO B, et al. A survey on privacy and security issues in federated learning[J]. Chinese Journal of Computers, 2023, 46(5): 1019-1044.
[3] 陈学斌,任志强,张宏扬. 联邦学习中的安全威胁与防御措施综述[J]. 计算机应用, 2024, 44(6): 1663-1672.
CHEN X B, REN Z Q, ZHANG H Y. Review on security threats and defense measures in federated learning[J]. Journal of Computer Applications, 2024, 44(6): 1663-1672.
[4] 顾育豪,白跃彬. 联邦学习模型安全与隐私研究进展[J]. 软件学报, 2023, 34(6): 2833-2864.
GU Y H, BAI Y B. Survey on security and privacy of federated learning models[J]. Journal of Software, 2023, 34(6): 2833-2864.
[5] NGUYEN T D, NGUYEN T, LE NGUYEN P, et al. Backdoor attacks and defenses in federated learning: survey, challenges and future research directions[J]. Engineering Applications of Artificial Intelligence, 2024, 127(Pt A): No.107166.
[6] MAO J, QIAN Y, HUANG J, et al. Object-free backdoor attack and defense on semantic segmentation[J]. Computers & Security, 2023, 132: No.103365.
[7] GONG X, CHEN Y, HUANG H, et al. Coordinated backdoor attacks against federated learning with model-dependent triggers[J]. IEEE Network, 2022, 36(1): 84-90.
[8] ZOU T, LIU Y, KANG Y, et al. Defending batch-level label inference and replacement attacks in vertical federated learning[J]. IEEE Transactions on Big Data, 2024, 10(6): 1016-1027.
[9] ZHOU X, XU M, WU Y, et al. Deep model poisoning attack on federated learning[J]. Future Internet, 2021, 13(3): No.73.
[10] LI T, SAHU A K, TALWALKAR A, et al. Federated learning: challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50-60.
[11] XUE M, HE C, WANG J, et al. One-to-N & N-to-One: two advanced backdoor attacks against deep learning models[J]. IEEE Transactions on Dependable and Secure Computing, 2022, 19(3): 1562-1578.
[12] SAXENA D, CAO J. Generative Adversarial Networks (GANs): challenges, solutions, and future directions[J]. ACM Computing Surveys, 2022, 54(3): No.63.
[13] GOU J, YU B, MAYBANK S J, et al. Knowledge distillation: a survey[J]. International Journal of Computer Vision, 2021, 129(6): 1789-1819.
[14] LeCUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[15] VERBRAEKEN J, WOLTING M, KATZY J, et al. A survey on distributed machine learning[J]. ACM Computing Surveys, 2021, 53(2): No.30.
[16] ALEDHARI M, RAZZAK R, PARIZI R M, et al. Federated learning: a survey on enabling technologies, protocols, and applications[J]. IEEE Access, 2020, 8: 140699-140725.
[17] LI X L. Preconditioned stochastic gradient descent[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(5): 1454-1466.
[18] XUE M, NI S, WU Y, et al. Imperceptible and multi-channel backdoor attack[J]. Applied Intelligence, 2024, 54(1): 1099-1116.
[19] YERLIKAYA F A, BAHTIYAR Ş. Data poisoning attacks against machine learning algorithms[J]. Expert Systems with Applications, 2022, 208: No.118101.
[20] KOH P W, STEINHARDT J, LIANG P. Stronger data poisoning attacks break data sanitization defenses[J]. Machine Learning, 2022, 111(1): 1-47.
[21] LU S, LI R, CHEN X, et al. Defense against local model poisoning attacks to Byzantine-robust federated learning[J]. Frontiers of Computer Science, 2022, 16(6): No.166337.
[22] LI J, LI Z, ZHANG H, et al. Poison attack and poison detection on deep source code processing models[J]. ACM Transactions on Software Engineering and Methodology, 2024, 33(3): No.62.
[23] FUNG C, YOON C J M, BESCHASTNIKH I. Mitigating Sybils in federated learning poisoning[EB/OL]. [2024-07-11].
[24] CHEN Y, SU L, XU J. Distributed statistical machine learning in adversarial settings: Byzantine gradient descent[J]. ACM SIGMETRICS Performance Evaluation Review, 2018, 46(1): 96-96.
[25] PILLUTLA K, KAKADE S M, HARCHAOUI Z, et al. Robust aggregation for federated learning[J]. IEEE Transactions on Signal Processing, 2022, 70: 1142-1154.
[26] BLANCHARD P, EL MHAMDI E M, GUERRAOUI R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 118-128.
[27] MUÑOZ-GONZÁLEZ L, CO K T, LUPU E C, et al. Byzantine-robust federated machine learning through adaptive model averaging[EB/OL]. [2024-07-11].