Journal of Computer Applications, 2025, Vol. 45, Issue (12): 3747-3756. DOI: 10.11772/j.issn.1001-9081.2024121794
• Artificial intelligence •

Federated learning fairness algorithm based on personalized submodel and K-means clustering

Zhongrui JING1,2,3, Xuebin CHEN1,2,3, Yinlong JIAN1,2,3, Qi ZHONG1,2,3, Zhenbo ZHANG1,2,3
Received: 2024-12-20
Revised: 2025-03-13
Accepted: 2025-03-18
Online: 2025-03-27
Published: 2025-12-10
Contact: Xuebin CHEN
About author: JING Zhongrui, born in 2000 in Linfen, Shanxi, M.S. candidate, CCF member. His research interests include data security and privacy protection.
Zhongrui JING, Xuebin CHEN, Yinlong JIAN, Qi ZHONG, Zhenbo ZHANG. Federated learning fairness algorithm based on personalized submodel and K-means clustering[J]. Journal of Computer Applications, 2025, 45(12): 3747-3756.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024121794
| Algorithm | Max accuracy/% (pat) | Correlation coefficient (pat) | Max accuracy/% (dir1.0) | Correlation coefficient (dir1.0) | Max accuracy/% (dir2.0) | Correlation coefficient (dir2.0) | Max accuracy/% (dir3.0) | Correlation coefficient (dir3.0) |
|---|---|---|---|---|---|---|---|---|
| FedAvg | 42.12 | 39.46 | 35.35 | 86.99 | 43.86 | -11.95 | 56.12 | 40.44 |
| FedProx | 41.97 | 40.06 | 35.14 | 87.44 | 43.76 | -12.27 | 56.16 | 41.32 |
| FedAVE | 31.56 | 94.80 | 78.51 | 82.76 | 74.91 | | | |
| FedSAC | 40.37 | 41.33 | 56.23 | 99.62 | | | | |
| FedPSK | 44.46 | 99.32 | 36.59 | 99.01 | 47.64 | 99.74 | 58.86 | |
Tab. 1 Comparison of maximum accuracies and correlation coefficients of fairness measurement of different algorithms on CIFAR-10 dataset
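In these tables, the second value in each column pair is the fairness score: per the caption and the collaborative-fairness literature in the reference list ([3], [18], [19]), it can be read as a correlation coefficient between each client's contribution and the quality of the model that client finally receives. The sketch below only illustrates that reading; the contribution measure, the percentage scaling, and all sample numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fairness_correlation(contributions, final_accuracies):
    """Pearson correlation between each client's contribution score and the
    accuracy of the model it is finally allocated; values near 100 (when
    scaled to a percentage) mean rewards closely track contributions."""
    x = np.asarray(contributions, dtype=float)
    y = np.asarray(final_accuracies, dtype=float)
    return 100.0 * np.corrcoef(x, y)[0, 1]

# Hypothetical example with 5 clients: contributions approximated by
# standalone accuracies, rewards by accuracies of the allocated submodels.
standalone_acc = [0.52, 0.61, 0.58, 0.70, 0.66]
received_acc = [0.55, 0.63, 0.60, 0.72, 0.69]
print(f"{fairness_correlation(standalone_acc, received_acc):.2f}")
```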
| Algorithm | Max accuracy/% (pat) | Correlation coefficient (pat) | Max accuracy/% (dir1.0) | Correlation coefficient (dir1.0) | Max accuracy/% (dir2.0) | Correlation coefficient (dir2.0) | Max accuracy/% (dir3.0) | Correlation coefficient (dir3.0) |
|---|---|---|---|---|---|---|---|---|
| FedAvg | | -37.25 | 76.54 | 50.74 | 77.03 | 66.52 | 90.69 | 62.17 |
| FedProx | 76.57 | -37.47 | 76.35 | 50.72 | 76.97 | 67.86 | 90.73 | 61.94 |
| FedAVE | 74.76 | 77.58 | 72.90 | 98.88 | 82.73 | 72.62 | 83.23 | 81.96 |
| FedSAC | 75.16 | 79.69 | 99.47 | | | | | |
| FedPSK | 76.00 | 99.33 | 82.78 | 99.39 | 99.73 | 93.02 | | |
Tab. 2 Comparison of maximum accuracies and correlation coefficients of fairness measurement of different algorithms on Fashion-MNIST dataset
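The column labels in Tab. 1 and Tab. 2 presumably encode the data partition used in each run: pat for a pathological (label-shard) split and dir1.0, dir2.0, dir3.0 for Dirichlet splits with concentration parameter α = 1.0, 2.0 and 3.0, the convention followed by PFLlib [26]. Below is a minimal sketch of Dirichlet label partitioning under that assumption; client count, sample count and seed are arbitrary placeholders.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients so that, for every class, that
    class's samples are divided according to a Dirichlet(alpha) draw;
    smaller alpha gives more heterogeneous (non-IID) label distributions."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Hypothetical usage: 10 000 samples over 10 classes split across 10 clients.
labels = np.random.randint(0, 10, size=10_000)
parts = dirichlet_partition(labels, num_clients=10, alpha=1.0)
print([len(p) for p in parts])
```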
| Algorithm | Evaluation time/s, neural network 1 (154) | Evaluation time/s, neural network 2 (618) | Evaluation time/s, neural network 3 (1 130) |
|---|---|---|---|
| FedPSK( | 13.06 | 14.06 | 18.15 |
| FedPSK( | 15.76 | 16.73 | 19.57 |
| FedSAC | 99.24 | 436.24 | 758.24 |
Tab. 3 Neuron evaluation time consumption for neural networks of different sizes
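Tab. 3 compares how long FedPSK and FedSAC spend scoring the neurons of three networks of increasing size (the parenthesised numbers are presumably the neuron counts). The exact evaluation procedure is described in the full paper; the sketch below only illustrates the general idea of scoring neurons with a Taylor-style importance estimate in the spirit of [23] and grouping them into importance tiers with K-means [22]. The function name, the score definition and the cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_neurons_by_importance(importance_scores, num_clusters=3, seed=0):
    """Group neurons into importance tiers with K-means.

    importance_scores: one score per neuron, e.g. a Taylor-style estimate
    |activation * gradient| averaged over a batch, as in pruning work.
    Returns one cluster label per neuron, re-indexed so that tier 0 holds
    the least important neurons.
    """
    scores = np.asarray(importance_scores, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=num_clusters, n_init=10, random_state=seed).fit(scores)
    order = np.argsort(km.cluster_centers_.ravel())       # ascending importance
    remap = {int(old): new for new, old in enumerate(order)}
    return np.array([remap[int(label)] for label in km.labels_])

# Hypothetical usage: 154 neurons with random placeholder importance scores.
scores = np.abs(np.random.randn(154))
tiers = cluster_neurons_by_importance(scores, num_clusters=3)
print(np.bincount(tiers))  # number of neurons in each importance tier
```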
| [1] | McMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]// Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2017: 1273-1282. |
| [2] | PAN Z, LI C, YU F, et al. FedLF: layer-wise fair federated learning[C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 14527-14535. |
| [3] | LYU L, XU X, WANG Q, et al. Collaborative fairness in federated learning[M]// YANG Q, FAN L, YU H. Federated learning: privacy and incentive, LNCS 12500. Cham: Springer, 2020: 189-204. |
| [4] | WANG Z, WANG Z, LYU L, et al. FedSAC: dynamic submodel allocation for collaborative fairness in federated learning[C]// Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2024: 3299-3310. |
| [5] | YU H, LIU Z, LIU Y, et al. A fairness-aware incentive scheme for federated learning[C]// Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society. New York: ACM, 2020: 393-399. |
| [6] | SHI Y, YU H, LEUNG C. Towards fairness-aware federated learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(9): 11922-11938. |
| [7] | ZHANG J, LI C, ROBLES-KELLY A, et al. Hierarchically fair federated learning[EB/OL]. [2024-11-11]. |
| [8] | LYU L, YU J, NANDAKUMAR K, et al. Towards fair and privacy-preserving federated deep models[J]. IEEE Transactions on Parallel and Distributed Systems, 2020, 31(11): 2524-2541. |
| [9] | KANG J, XIONG Z, NIYATO D, et al. Incentive design for efficient federated learning in mobile networks: a contract theory approach[C]// Proceedings of the 2019 IEEE VTS Asia Pacific Wireless Communications Symposium. Piscataway: IEEE, 2019: 1-5. |
| [10] | COHEN A I. Contract theory[M]// CLAEYS G. Encyclopedia of modern political thought. Thousand Oaks, CA: CQ Press, 2013: 191-194. |
| [11] | SARIKAYA Y, ERCETIN O. Motivating workers in federated learning: a Stackelberg game perspective[J]. IEEE Networking Letters, 2020, 2(1): 23-27. |
| [12] | FAN Z, FANG H, ZHOU Z, et al. Fair and efficient contribution valuation for vertical federated learning[EB/OL]. [2024-10-12]. |
| [13] | CHENG Q, QU S, LEE J. SHARPNN: shapley value regularized tabular neural network[EB/OL]. [2024-11-16]. |
| [14] | XU X, LYU L, MA X, et al. Gradient driven rewards to guarantee fairness in collaborative machine learning[C]// Proceedings of the 35th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2021: 16104-16117. |
| [15] | TASTAN N, FARES S, AREMU T, et al. Redefining contributions: shapley-driven federated learning [C]// Proceedings of the 33rd International Joint Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 5009-5017. |
| [16] | WAN T, DENG X, LIAO W, et al. Enhancing fairness in federated learning: a contribution-based differentiated model approach[J]. International Journal of Intelligent Systems, 2023, 2023: No.6692995. |
| [17] | TASTAN N, HORVATH S, NANDAKUMAR K. CYCle: choosing your collaborators wisely to enhance collaborative fairness in decentralized learning[EB/OL]. [2025-02-08]. |
| [18] | WANG Z, PENG Z, FAN X, et al. FedAVE: adaptive data value evaluation framework for collaborative fairness in federated learning[J]. Neurocomputing, 2024, 574: No.127227. |
| [19] | RODGERS J L, NICEWANDER W A. Thirteen ways to look at the correlation coefficient[J]. The American Statistician, 1988, 42(1): 59-66. |
| [20] | LI T, SAHU A K, TALWALKAR A, et al. Federated learning: challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50-60. |
| [21] | Zhejiang Juntong Intelligent Technology Company Limited. Backdoor defense method for vertical federated learning based on neuron activation value clustering: 202210146719.0 [P]. 2023-03-18. |
| [22] | HARTIGAN J A, WONG M A. Algorithm AS 136: a K-means clustering algorithm[J]. Journal of the Royal Statistical Society. Series C (Applied Statistics), 1979, 28(1): 100-108. |
| [23] | MOLCHANOV P, MALLYA A, TYREE S, et al. Importance estimation for neural network pruning[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 11256-11264. |
| [24] | KRIZHEVSKY A. Learning multiple layers of features from tiny images[R/OL]. [2024-11-25]. |
| [25] | XIAO H, RASUL K, VOLLGRAF R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms[EB/OL]. [2024-11-25]. |
| [26] | ZHANG J, LIU Y, HUA Y, et al. PFLlib: personalized federated learning algorithm library[J]. Journal of Machine Learning Research, 2025, 26: 1-10. |
| [27] | YU Y, WEI A, KARIMIREDDY S P, et al. TCT: convexifying federated learning using bootstrapped neural tangent kernels[C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 30882-30897. |
| [28] | LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks [J]. Proceedings of Machine Learning and Systems, 2020, 2: 429-450. |