To resolve the contradiction between the demand for data sharing and the requirements of privacy protection, federated learning was proposed. As a distributed machine learning paradigm, federated learning requires a large number of model parameters to be exchanged between the participants and the central server, resulting in high communication overhead. At the same time, federated learning is increasingly deployed on mobile devices with limited communication bandwidth and limited power, and the constrained network bandwidth and the rapidly growing number of clients make the communication bottleneck worse. To address the communication bottleneck of federated learning, the basic workflow of federated learning was analyzed first; then, from a methodological perspective, three mainstream classes of methods, based respectively on reducing the frequency of model updates, model compression, and client selection, as well as special methods such as model partition, were introduced, and an in-depth comparative analysis of specific optimization schemes was carried out. Finally, trends in research on communication overhead in federated learning were summarized, and future directions were discussed.
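The communication pattern described above (parameter exchange each round, with local training used to reduce update frequency) can be illustrated with a minimal FedAvg-style sketch. This is a hypothetical, simplified setup on synthetic linear-regression data, not the scheme of any specific paper surveyed here; all function names and parameters (e.g. `local_update`, `fedavg_round`, the number of local epochs) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, epochs, lr=0.1):
    """Run several gradient-descent steps on a linear model locally.

    More local epochs per round means fewer communication rounds are
    needed overall -- the core idea behind update-frequency reduction.
    """
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients, epochs):
    """One communication round: broadcast the global model, train
    locally on each client, then aggregate by data-weighted averaging."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(w_global.copy(), X, y, epochs))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Synthetic data: the true model w* = [1, -2], split across 3 clients
# so that raw data never leaves a client -- only parameters are exchanged.
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(20):                          # 20 communication rounds
    w = fedavg_round(w, clients, epochs=5)   # 5 local steps per round

print(np.round(w, 2))  # converges toward the true model [1, -2]
```

With 5 local epochs per round, only 20 rounds of parameter exchange are needed here; running a single local step per round would require roughly five times as many communication rounds for comparable accuracy, which is the trade-off the frequency-reduction methods in this survey exploit.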