Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (7): 2123-2131. DOI: 10.11772/j.issn.1001-9081.2024070934
• CCF 39th National Conference of Computer Applications (CCF NCCA 2024) •
Hongyang ZHANG 1,2,3, Shufen ZHANG 1,2,4, Zheng GU 1,2,3
Received:
2024-07-05
Revised:
2024-09-25
Accepted:
2024-09-29
Online:
2025-07-10
Published:
2025-07-10
Contact:
Shufen ZHANG
About author:
ZHANG Hongyang, born in 1999 in Huai'an, Jiangsu, M. S. candidate, CCF member. His research interests include data security and privacy protection.
Abstract:
As a distributed optimization paradigm, federated learning (FL) allows a large number of resource-constrained client nodes to train a model collaboratively without sharing their data. However, conventional federated learning algorithms such as FedAvg usually do not take fairness into sufficient consideration. In real-world scenarios, data distributions are often highly heterogeneous, so the standard aggregation operation may bias the model toward certain clients, causing large disparities in the global model's local performance across clients. To address this problem, FedPF (Federated learning for Personalization and Fairness), a federated learning algorithm for personalization and fairness, was proposed. FedPF aims to reduce ineffective aggregation in federated learning and, by exploiting the correlation between the global model and local models, distributes personalized models among clients, thereby making the distribution of local performance across clients more balanced while preserving the performance of the global model. FedPF was evaluated on the Synthetic, MNIST and CIFAR10 datasets and compared with three federated learning algorithms: FedProx, q-FedAvg and FedAvg. Experimental results show that FedPF achieves improvements in both effectiveness and fairness.
CLC number:
Hongyang ZHANG, Shufen ZHANG, Zheng GU. Federated learning algorithm for personalization and fairness[J]. Journal of Computer Applications, 2025, 45(7): 2123-2131.
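As a rough illustration of the mechanism sketched in the abstract (not the authors' implementation; the paper specifies the actual FedPF procedure), the following Python sketch runs a FedAvg-style aggregation and then derives a per-client personalized model by interpolating between the global and local models, weighted by an assumed cosine-similarity "correlation" between the global update and each client's local update. All names here (`fedpf_like_round`, `mix`, the toy quadratic local objective) are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(u, v, eps=1e-12):
    """Correlation proxy between two flattened model-update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def local_train(model, client, lr=0.1, steps=5):
    # Toy local objective: pull the model toward this client's local optimum.
    for _ in range(steps):
        grad = model - client["optimum"]
        model = model - lr * grad
    return model

def fedpf_like_round(global_model, clients):
    # 1) Each client trains locally from the current global model.
    local_models = [local_train(global_model.copy(), c) for c in clients]

    # 2) Sample-weighted aggregation, as in FedAvg.
    weights = np.array([c["num_samples"] for c in clients], dtype=float)
    weights /= weights.sum()
    new_global = sum(w * m for w, m in zip(weights, local_models))

    # 3) Hypothetical personalization: the more a client's update correlates
    #    with the global update, the larger the share of the global model it keeps.
    personalized = []
    for m in local_models:
        sim = cosine_similarity(new_global - global_model, m - global_model)
        mix = max(0.0, sim)  # negatively correlated clients keep their local model
        personalized.append(mix * new_global + (1.0 - mix) * m)
    return new_global, personalized

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 5
    clients = [{"optimum": rng.normal(size=dim), "num_samples": int(n)}
               for n in rng.integers(10, 100, size=4)]
    g = np.zeros(dim)
    for _ in range(20):
        g, personal_models = fedpf_like_round(g, clients)
    print("global model after 20 rounds:", np.round(g, 3))
```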
Tab. 1 Comparison of experimental results on Synthetic dataset

| Algorithm | TA/% | TL | MVA/% | MVL | SDVA | SDVL |
|---|---|---|---|---|---|---|
| FedAvg | 68.16 | 0.891 | 70.09 | 0.842 | 0.324 | 0.719 |
| FedProx | 69.27 | 0.919 | 70.98 | 0.910 | 0.326 | 1.156 |
| q-FedAvg | 51.83 | 1.613 | 53.92 | 1.588 | 0.387 | 0.604 |
| FedPF | 80.16 | 0.720 | 82.25 | 0.675 | 0.170 | 0.281 |
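In these tables, the SDVA and SDVL columns are the dispersion measures behind the fairness claim: a smaller standard deviation of per-client accuracy (or loss) indicates a more uniform performance distribution across clients. Assuming MVA/MVL and SDVA/SDVL are plain means and standard deviations over per-client test results (an assumption; the paper gives the exact definitions), they could be computed as follows:

```python
import numpy as np

def fairness_summary(per_client_acc, per_client_loss):
    """Mean and standard deviation of per-client metrics
    (assumed reading of the MVA/MVL and SDVA/SDVL columns)."""
    acc = np.asarray(per_client_acc, dtype=float)
    loss = np.asarray(per_client_loss, dtype=float)
    return {"MVA": acc.mean(), "SDVA": acc.std(),
            "MVL": loss.mean(), "SDVL": loss.std()}

# Example with four hypothetical clients
print(fairness_summary([0.82, 0.79, 0.85, 0.80], [0.61, 0.72, 0.55, 0.66]))
```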
Tab. 2 Comparison of experimental results on MNIST dataset

| Dataset | Algorithm | TA/% | TL | MVA/% | MVL | SDVA | SDVL |
|---|---|---|---|---|---|---|---|
| MNIST (Diversity) | FedAvg | 97.81 | 0.067 | 97.50 | 0.083 | 0.021 | 0.062 |
| | FedProx | 97.06 | 0.087 | 97.01 | 0.101 | 0.022 | 0.062 |
| | q-FedAvg | 91.85 | 0.587 | 90.93 | 0.609 | 0.043 | 0.125 |
| | FedPF | 98.08 | 0.058 | 97.67 | 0.074 | 0.021 | 0.050 |
| MNIST (Dirichlet) | FedAvg | 97.63 | 0.072 | 97.18 | 0.091 | 0.025 | 0.073 |
| | FedProx | 96.09 | 0.112 | 95.35 | 0.136 | 0.076 | 0.199 |
| | q-FedAvg | 89.92 | 0.665 | 88.64 | 0.693 | 0.062 | 0.159 |
| | FedPF | 97.63 | 0.070 | 97.19 | 0.085 | 0.021 | 0.055 |
Tab. 3 Comparison of experimental results on CIFAR10 dataset

| Dataset | Algorithm | TA/% | TL | MVA/% | MVL | SDVA | SDVL |
|---|---|---|---|---|---|---|---|
| CIFAR10 (Diversity) | FedAvg | 59.99 | 1.359 | 59.92 | 1.364 | 0.114 | 0.437 |
| | FedProx | 65.65 | 1.103 | 65.37 | 1.115 | 0.104 | 0.355 |
| | q-FedAvg | 41.95 | 1.649 | 42.99 | 1.645 | 0.100 | 0.130 |
| | FedPF | 64.35 | 1.162 | 64.54 | 1.135 | 0.120 | 0.363 |
| CIFAR10 (Dirichlet) | FedAvg | 41.26 | 1.617 | 40.07 | 1.643 | 0.268 | 0.664 |
| | FedProx | 52.89 | 1.345 | 53.52 | 1.354 | 0.186 | 0.530 |
| | q-FedAvg | 39.49 | 1.641 | 38.90 | 1.671 | 0.160 | 0.223 |
| | FedPF | 58.09 | 1.252 | 57.11 | 1.257 | 0.197 | 0.594 |
[1] YANG F, ZHANG Q, JI X, et al. Machine learning applications in drug repurposing [J]. Interdisciplinary Sciences: Computational Life Sciences, 2022, 14(1): 15-21.
[2] LIU J C, GOETZ J, SEN S, et al. Learning from others without sacrificing privacy: simulation comparing centralized and federated machine learning on mobile health data [J]. JMIR mHealth and uHealth, 2021, 9(3): No.e23728.
[3] TANKARD C. What the GDPR means for businesses [J]. Network Security, 2016, 2016(6): 5-8.
[4] O'HERRIN J K, FOST N, KUDSK K A. Health Insurance Portability Accountability Act (HIPAA) regulations [J]. Annals of Surgery, 2004, 239(6): 772-778.
[5] KONEČNÝ J, McMAHAN H B, RAMAGE D, et al. Federated optimization: distributed machine learning for on-device intelligence [EB/OL]. [2024-04-15].
[6] KONEČNÝ J, McMAHAN H B, YU F X, et al. Federated learning: strategies for improving communication efficiency [EB/OL]. [2024-04-20].
[7] McMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data [C]// Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2017: 1273-1282.
[8] KAIROUZ P, McMAHAN H B, AVENT B, et al. Advances and open problems in federated learning [J]. Foundations and Trends in Machine Learning, 2021, 14(1/2): 1-210.
[9] ZHANG S F, ZHANG H Y, REN Z Q, et al. Survey of fairness in federated learning [J/OL]. [2024-07-11].
[10] ZHOU Z, CHU L, LIU C, et al. Towards fair federated learning [C]// Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2021: 4100-4101.
[11] LI T, HU S, BEIRAMI A, et al. Ditto: fair and robust federated learning through personalization [C]// Proceedings of the 38th International Conference on Machine Learning. New York: JMLR.org, 2021: 6357-6368.
[12] LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks [EB/OL]. [2024-04-20].
[13] LYU L, XU X, WANG Q, et al. Collaborative fairness in federated learning [M]// YANG Q, FAN L, YU H. Federated learning: privacy and incentive. Cham: Springer, 2020: 189-204.
[14] XU X, LYU L. A reputation mechanism is all you need: collaborative fairness and adversarial robustness in federated learning [EB/OL]. [2024-04-25].
[15] YU H, LIU Z, LIU Y, et al. A fairness-aware incentive scheme for federated learning [C]// Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society. New York: ACM, 2020: 393-399.
[16] GENG F X, LI Z, CHEN X. Incentive mechanism design for hierarchical federated learning based on multi-leader Stackelberg game [J]. Journal of Computer Applications, 2023, 43(11): 3551-3558.
[17] YU S J, ZENG H, XIONG S Y, et al. Incentive mechanism for federated learning based on generative adversarial network [J]. Journal of Computer Applications, 2024, 44(2): 344-352.
[18] DWORK C, HARDT M, PITASSI T, et al. Fairness through awareness [C]// Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. New York: ACM, 2012: 214-226.
[19] HARDT M, PRICE E, SREBRO N. Equality of opportunity in supervised learning [C]// Proceedings of the 30th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2016: 3323-3331.
[20] KUSNER M, LOFTUS J, RUSSELL C, et al. Counterfactual fairness [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 4069-4079.
[21] LIANG P P, LIU T, ZIYIN L, et al. Think locally, act globally: federated learning with local and global representations [EB/OL]. [2024-06-25].
[22] DU W, XU D, WU X, et al. Fairness-aware agnostic federated learning [C]// Proceedings of the 2021 SIAM International Conference on Data Mining. Philadelphia, PA: SIAM, 2021: 181-189.
[23] NISHIO T, YONETANI R. Client selection for federated learning with heterogeneous resources in mobile edge [C]// Proceedings of the 2019 IEEE International Conference on Communications. Piscataway: IEEE, 2019: 1-7.
[24] CHAI Z, ALI A, ZAWAD S, et al. TiFL: a tier-based federated learning system [C]// Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing. New York: ACM, 2020: 125-136.
[25] RIBERO M, VIKALO H. Communication-efficient federated learning via optimal client sampling [EB/OL]. [2024-06-20].
[26] WANG K, MATHEWS R, KIDDON C, et al. Federated evaluation of on-device personalization [EB/OL]. [2024-06-25].
[27] GASANOV E, KHALED A, HORVÁTH S, et al. FLIX: a simple and communication-efficient alternative to local methods in federated learning [C]// Proceedings of the 25th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2022: 10374-10421.
[28] DINH C T, TRAN N H, NGUYEN T D. Personalized federated learning with Moreau envelopes [C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 21394-21405.
[29] ARIVAZHAGAN M G, AGGARWAL V K, SINGH A K, et al. Federated learning with personalization layers [EB/OL]. [2024-05-25].
[30] LI A, SUN J, WANG B, et al. LotteryFL: personalized and communication-efficient federated learning with lottery ticket hypothesis on non-IID datasets [EB/OL]. [2024-05-25].
[31] VAHIDIAN S, MORAFAH M, LIN B. Personalized federated learning by structured and unstructured pruning under data heterogeneity [C]// Proceedings of the IEEE 41st International Conference on Distributed Computing Systems Workshops. Piscataway: IEEE, 2021: 27-34.
[32] HUANG T, LIU S, SHEN L, et al. Achieving personalized federated learning with sparse local models [EB/OL]. [2024-06-20].
[33] CHO Y J, WANG J, JOSHI G. Client selection in federated learning: convergence analysis and power-of-choice selection strategies [EB/OL]. [2024-06-24].
[34] CHO Y J, WANG J, JOSHI G. Towards understanding biased client selection in federated learning [C]// Proceedings of the 25th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2022: 10351-10375.
[35] MITZENMACHER M. The power of two choices in randomized load balancing [J]. IEEE Transactions on Parallel and Distributed Systems, 2001, 12(10): 1094-1104.
[36] JOSE M, GIL-LAFUENTE A M. Using the OWA operator in the Minkowski distance [J]. World Academy of Science, Engineering and Technology, 2008, 2(9): 1032-1040.
[37] LeCUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition [J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
[38] KRIZHEVSKY A. Learning multiple layers of features from tiny images [R/OL]. [2024-05-12].
[39] LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks [EB/OL]. [2024-05-25].
[40] LI T, SANJABI M, BEIRAMI A, et al. Fair resource allocation in federated learning [EB/OL]. [2024-06-05].