Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (2): 333-342.DOI: 10.11772/j.issn.1001-9081.2021020232
• Artificial Intelligence •

Survey of communication overhead of federated learning

Xinyuan QIU 1,2, Zecong YE 1,2, Xiaolong CUI 2,3, Zhiqiang GAO 2
Received: 2021-02-09
Revised: 2021-04-13
Accepted: 2021-04-20
Online: 2022-02-11
Published: 2022-02-10
Contact: Xiaolong CUI
About author: QIU Xinyuan, born in 1999 in Nanchang, Jiangxi, M.S. candidate. Her research interests include federated learning and deep learning.
Xinyuan QIU, Zecong YE, Xiaolong CUI, Zhiqiang GAO. Survey of communication overhead of federated learning[J]. Journal of Computer Applications, 2022, 42(2): 333-342.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2021020232
| Target accuracy/% | SGD | FedSGD | FedAvg |
|---|---|---|---|
| 80 | 18 000 | 3 750 | 280 |
| 82 | 31 000 | 6 600 | 630 |
| 85 | 99 000 | — | 2 000 |
Tab. 1 Communication rounds of different algorithms at the same target accuracy on CIFAR-10 test set
| Model | E | B | u | Rounds (IID) | Rounds (non-IID) |
|---|---|---|---|---|---|
| FedSGD | 1 | ∞ | 1.0 | 626 | 483 |
| FedAvg | 5 | ∞ | 5.0 | 179 | 1 000 |
| FedAvg | 1 | 50 | 12.0 | 65 | 600 |
| FedAvg | 20 | ∞ | 20.0 | 234 | 672 |
| FedAvg | 1 | 10 | 60.0 | 34 | 350 |
| FedAvg | 5 | 50 | 60.0 | 29 | 334 |
| FedAvg | 20 | 50 | 240.0 | 32 | 426 |
| FedAvg | 5 | 10 | 300.0 | 20 | 229 |
| FedAvg | 20 | 10 | 1 200.0 | 18 | 173 |
Tab. 2 FedSGD and FedAvg communication rounds under 99% target accuracy on MNIST test set[12]
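Tables 2 and 3 vary the number of local epochs E and the local batch size B; doing more local computation per round (larger u, the expected number of local updates per round) generally reduces the number of communication rounds needed. A minimal sketch of one FedAvg round on a toy least-squares objective (the client data, learning rate, and model here are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def local_update(weights, data, E, B, lr=0.1):
    """E local epochs of mini-batch SGD with batch size B on one client
    (toy linear-regression loss; B = len(y) would mimic full-batch FedSGD)."""
    w = weights.copy()
    X, y = data
    n = len(y)
    for _ in range(E):
        idx = np.random.permutation(n)
        for start in range(0, n, B):
            batch = idx[start:start + B]
            grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad
    return w

def fedavg_round(global_w, clients, E, B):
    """One communication round: every client trains locally from the same
    global weights; the server averages results weighted by data size."""
    sizes = [len(y) for _, y in clients]
    total = sum(sizes)
    updates = [local_update(global_w, c, E, B) for c in clients]
    return sum(s / total * w for s, w in zip(sizes, updates))
```

With E = 1 and B equal to the full local dataset, each round sends exactly one gradient step per client, which is the FedSGD row of the tables; raising E or shrinking B trades extra local computation for fewer rounds.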
| Model | E | B | u | Rounds (IID) | Rounds (non-IID) |
|---|---|---|---|---|---|
| FedSGD | 1 | ∞ | 1.0 | 2 488 | 3 906 |
| FedAvg | 1 | 50 | 1.5 | 1 635 | 549 |
| FedAvg | 5 | ∞ | 5.0 | 613 | 597 |
| FedAvg | 1 | 10 | 7.4 | 460 | 164 |
| FedAvg | 5 | 50 | 7.4 | 401 | 152 |
| FedAvg | 5 | 10 | 37.1 | 192 | 41 |
Tab. 3 FedSGD and FedAvg communication rounds under 54% target accuracy on SHAKESPEARE test set[12]
| Model | Dataset | #Parameters | Per-iteration time/s (FedAvg) | Per-iteration time/s (Overlap-FedAvg) |
|---|---|---|---|---|
| MLP | MNIST | 199 210 | 31.20 | 28.85 |
| MnistNet | FMNIST | 1 199 882 | 32.96 | 28.31 |
| MnistNet | EMNIST | 1 199 882 | 47.19 | 42.15 |
| CNNCifar | CIFAR-10 | 878 538 | 48.07 | 45.33 |
| VGGR | CIFAR-10 | 2 440 394 | 64.40 | 49.33 |
| ResNetR | CIFAR-10 | 11 169 162 | 156.88 | 115.31 |
| ResNetR | CIFAR-100 | 11 169 162 | 156.02 | 115.30 |
| Transformer | WIKITEXT-2 | 13 828 478 | 133.19 | 87.90 |
Tab. 4 Comparison of average wall-clock time of Overlap-FedAvg and FedAvg for one iteration[18]
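The speedup in Table 4 comes from hiding communication behind computation: the upload of the model produced in round t runs concurrently with the local training of round t+1, so per-iteration wall-clock time approaches max(compute, transfer) instead of their sum. A minimal sketch of that pipelining idea, not Overlap-FedAvg's actual implementation (the sleeps stand in for real training and network transfer, and all function names are illustrative):

```python
import threading, queue, time

def train_step(model):
    """Simulated local computation for one round."""
    time.sleep(0.01)
    return model + 1

def upload(model, out_q):
    """Simulated network transfer of the updated model."""
    time.sleep(0.01)
    out_q.put(model)

def overlap_rounds(rounds=3):
    """Start uploading round t's model in a background thread while
    round t+1's local training already runs on the main thread."""
    model, out_q = 0, queue.Queue()
    sender = None
    for _ in range(rounds):
        new_model = train_step(model)  # compute overlaps in-flight upload
        if sender is not None:
            sender.join()              # previous round's upload finished
        sender = threading.Thread(target=upload, args=(new_model, out_q))
        sender.start()
        model = new_model
    sender.join()
    return [out_q.get() for _ in range(rounds)]
```

Because each join happens only after the next round's training has run, transfer time is overlapped with compute; the paper's compensation mechanism for the resulting staleness is not modeled here.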
| Method | IID | non-IID | dispatch |
|---|---|---|---|
| FedAvg | 83.75 | 83.41 | 79.94 |
| HDAFL | 83.70 | 80.21 | 44.20 |
| DH | 85.44 | 85.12 | 74.95 |
| DH+GS | 84.30 | 84.44 | 85.62 |
Tab. 5 Comparison of model precision of DH+GS and other algorithms on TTC dataset[46]
[1] YANG Q, LIU Y, CHEN T, et al. Federated machine learning: concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): No.12. 10.1145/3298981
[2] KAIROUZ P, MCMAHAN H B, AVENT B, et al. Advances and open problems in federated learning[EB/OL]. (2021-03-09) [2021-03-26]. 10.1561/2200000083
[3] LI T, SAHU A K, TALWALKAR A, et al. Federated learning: challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50-60. 10.1109/msp.2020.2975749
[4] WANG J Z, KONG L W, HUANG Z C, et al. Research advances on privacy protection of federated learning[J]. Big Data Research, 2021, 7(3): 130-149. 10.1145/3469968.3469985
[5] CHEN B, CHENG X, ZHANG J L, et al. Survey of security and privacy in federated learning[J]. Journal of Nanjing University of Aeronautics and Astronautics, 2020, 52(5): 675-684. 10.16356/j.1005
[6] ZHOU J, FANG G Y, WU N. Survey on security and privacy-preserving in federated learning[J]. Journal of Xihua University (Natural Science Edition), 2020, 39(4): 9-17. 10.12198/j.issn.1673-159X.3607
[7] LI L, FAN Y X, TSE M, et al. A review of applications in federated learning[J]. Computers and Industrial Engineering, 2020, 149: No.106854. 10.1016/j.cie.2020.106854
[8] LIU G, ZHAO L J, CHEN Q Y, et al. Summary of principles and applications of federated learning in 5G cloud-edge collaboration scenarios[J]. Telecom World, 2020, 27(7): 50-52. 10.3969/j.issn.1006-4222.2020.07.026
[9] WANG Y S. A survey on federated learning for data sharing and exchange[J]. Unmanned Systems Technology, 2019, 2(6): 58-62. 10.1109/icicas48597.2019.00162
[10] WANG J Z, KONG L W, HUANG Z C, et al. Research review of federated learning algorithms[J]. Big Data Research, 2020, 6(6): 64-82.
[11] SHI D, LI L, CHEN R, et al. Towards energy efficient federated learning over 5G+ mobile devices[EB/OL]. (2021-01-13) [2021-01-26]. 10.1155/2021/2537546
[12] MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]// Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2017: 1273-1282.
[13] ALISTARH D, GRUBIC D, LI J Z, et al. QSGD: communication-efficient SGD via gradient quantization and encoding[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017: 1707-1718.
[14] KONEČNÝ J. Stochastic, distributed and federated optimization for machine learning[D/OL]. (2017-07-04) [2021-01-26].
[15] KONEČNÝ J, MCMAHAN H B, YU F X, et al. Federated learning: strategies for improving communication efficiency[EB/OL]. (2017-10-30) [2021-01-26].
[16] LI T, SAHU A K, ZAHEER M, et al. Federated optimization for heterogeneous networks[EB/OL]. [2021-01-26].
[17] SHI W Q, ZHOU S, NIU Z S, et al. Joint device scheduling and resource allocation for latency constrained wireless federated learning[J]. IEEE Transactions on Wireless Communications, 2021, 20(1): 453-467. 10.1109/twc.2020.3025446
[18] ZHOU Y H, YE Q, LV J C. Communication-efficient federated learning with compensated Overlap-FedAvg[J]. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(1): 192-205. 10.1109/tpds.2021.3090331
[19] KONEČNÝ J, MCMAHAN H B, RAMAGE D, et al. Federated optimization: distributed machine learning for on-device intelligence[EB/OL]. (2016-10-08) [2021-01-26].
[20] DINH C T, TRAN N H, NGUYEN M N H, et al. Federated learning over wireless networks: convergence analysis and resource allocation[J]. IEEE/ACM Transactions on Networking, 2021, 29(1): 398-409. 10.1109/tnet.2020.3035770
[21] SATTLER F, WIEDEMANN S, MÜLLER K R, et al. Robust and communication-efficient federated learning from non-IID data[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(9): 3400-3413. 10.1109/tnnls.2019.2944481
[22] LI L, SHI D, HOU R H, et al. To talk or to work: flexible communication compression for energy efficient federated learning over heterogeneous mobile edge devices[EB/OL]. (2020-12-22) [2021-01-26]. 10.1109/infocom42981.2021.9488839
[23] HE Y, ZENK M, FRITZ M. CosSGD: nonlinear quantization for communication-efficient federated learning[EB/OL]. (2020-12-15) [2021-01-26].
[24] KARIMIREDDY S P, REBJOCK Q, STICH S, et al. Error feedback fixes signSGD and other gradient compression schemes[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 3252-3261.
[25] LIN Y J, HAN S, MAO H Z, et al. Deep gradient compression: reducing the communication bandwidth for distributed training[EB/OL]. (2020-06-23) [2021-01-26].
[26] BERNSTEIN J, WANG Y X, AZIZZADENESHELI K, et al. signSGD: compressed optimisation for non-convex problems[C]// Proceedings of the 35th International Conference on Machine Learning. New York: JMLR.org, 2018: 560-569.
[27] CHEN R, LI L, XUE K P, et al. To talk or to work: energy efficient federated learning over mobile devices via the weight quantization and 5G transmission co-design[EB/OL]. (2020-12-21) [2021-01-26].
[28] CHANG W T, TANDON R. Communication efficient federated learning over multiple access channels[EB/OL]. (2020-01-23) [2021-01-26].
[29] AJI A F, HEAFIELD K. Sparse communication for distributed gradient descent[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2017: 440-445. 10.18653/v1/d17-1045
[30] WANGNI J Q, WANG J L, LIU J, et al. Gradient sparsification for communication-efficient distributed optimization[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2018: 1306-1316. 10.1145/3301326.3301347
[31] WU H D, WANG P. Fast-convergent federated learning with adaptive weighting[J]. IEEE Transactions on Cognitive Communications and Networking, 2021, 7(4): 1078-1088. 10.1109/tccn.2021.3084406
[32] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. (2015-03-09) [2021-01-26].
[33] JEONG E, OH S, KIM H, et al. Communication-efficient on-device machine learning: federated distillation and augmentation under non-IID private data[EB/OL]. (2018-11-28) [2021-01-26]. 10.1109/mis.2020.3028613
[34] RAHBAR A, PANAHI A, BHATTACHARYYA C, et al. On the unreasonable effectiveness of knowledge distillation: analysis in the kernel regime — long version[EB/OL]. (2020-09-25) [2021-01-26].
[35] PHUONG M, LAMPERT C. Towards understanding knowledge distillation[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 5142-5151. 10.1109/iccv.2019.00144
[36] SATTLER F, MARBAN A, RISCHKE R, et al. Communication-efficient federated distillation[EB/OL]. (2020-12-01) [2021-01-26]. 10.1109/tnse.2021.3081748
[37] YANG K, JIANG T, SHI Y M, et al. Federated learning based on over-the-air computation[C]// Proceedings of the 2019 IEEE International Conference on Communications. Piscataway: IEEE, 2019: 1-6. 10.1109/icc.2019.8761429
[38] NISHIO T, YONETANI R. Client selection for federated learning with heterogeneous resources in mobile edge[C]// Proceedings of the 2019 IEEE International Conference on Communications. Piscataway: IEEE, 2019: 1-7. 10.1109/icc.2019.8761315
[39] YOSHIDA N, NISHIO T, MORIKURA M, et al. Hybrid-FL for wireless networks: cooperative learning mechanism using non-IID data[C]// Proceedings of the 2020 IEEE International Conference on Communications. Piscataway: IEEE, 2020: 1-7. 10.1109/icc40277.2020.9149323
[40] HUANG T S, LIN W W, LI K Q, et al. Stochastic client selection for federated learning with volatile clients[EB/OL]. (2020-11-17) [2021-01-26]. 10.1109/tpds.2020.3040887
[41] XIA S H, ZHU J Y, YANG Y H, et al. Fast convergence algorithm for analog federated learning[C]// Proceedings of the 2021 IEEE Conference on Computer Communications. Piscataway: IEEE, 2021: 1-6. 10.1109/icc42927.2021.9500875
[42] TANG S H, ZHANG C, OBANA S. Multi-slot over-the-air computation in fading channels[EB/OL]. (2020-10-23) [2021-01-26]. 10.1109/access.2021.3070901
[43] HU C, JIANG J, WANG Z. Decentralized federated learning: a segmented gossip approach[EB/OL]. [2021-01-26]. 10.3390/electronics9030440
[44] BOUACIDA N, HOU J H, ZANG H, et al. Adaptive federated dropout: improving communication efficiency and generalization for federated learning[C]// Proceedings of the 2021 IEEE Conference on Computer Communications Workshops. Piscataway: IEEE, 2021: 1-6. 10.1109/infocomwkshps51825.2021.9484526
[45] CALDAS S, KONEČNÝ J, MCMAHAN H B, et al. Expanding the reach of federated learning by reducing client resource requirements[EB/OL]. (2019-01-08) [2021-01-26].
[46] LI D W, CHANG Q L, PANG L X, et al. More industry-friendly: federated learning with high efficient design[EB/OL]. (2020-12-16) [2021-01-26].
[47] TRAN N H, BAO W, ZOMAYA A, et al. Federated learning over wireless networks: optimization model design and analysis[C]// Proceedings of the 2019 IEEE Conference on Computer Communications. Piscataway: IEEE, 2019: 1387-1395. 10.1109/infocom.2019.8737464
[48] AMIRI M M, GÜNDÜZ D. Federated learning over wireless fading channels[J]. IEEE Transactions on Wireless Communications, 2020, 19(5): 3546-3557. 10.1109/twc.2020.2974748