Federated learning is a distributed machine learning framework that emphasizes privacy protection, but it faces significant challenges from statistical heterogeneity. Statistical heterogeneity arises from differences in data distribution across participating nodes, which can lead to biased model updates, degraded global-model performance, and unstable convergence. To address these problems, the main issues caused by statistical heterogeneity were first analyzed in detail, including inconsistent feature distributions, imbalanced label distributions, asymmetric data sizes, and varying data quality. Secondly, a systematic review of existing solutions to statistical heterogeneity in federated learning was provided, covering local correction, clustering methods, client selection optimization, aggregation strategy adjustment, data sharing, knowledge distillation, and decoupling optimization, together with an evaluation of their advantages, disadvantages, and applicable scenarios. Finally, future research directions were discussed, such as device computing capacity awareness, model heterogeneity adaptation, optimization of privacy and security mechanisms, and enhancement of cross-task transferability, thereby providing a reference for addressing statistical heterogeneity in practical applications.
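As an illustration of the label-distribution imbalance the abstract refers to, the following sketch (not part of the paper; the function name and parameters are hypothetical) simulates a common way such non-IID conditions are produced in federated learning experiments: partitioning a labeled dataset across clients with a Dirichlet prior, where a small concentration parameter `alpha` gives each client a heavily skewed label distribution.

```python
# Illustrative sketch only: simulating statistically heterogeneous (non-IID)
# client data via a Dirichlet split over class labels. Small alpha -> each
# client sees only a few classes; large alpha -> split approaches IID.
import numpy as np

def dirichlet_label_split(labels, n_clients, alpha, seed=0):
    """Partition sample indices across clients with per-class Dirichlet shares."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportion of class c assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ix, dtype=int) for ix in client_indices]

labels = np.repeat(np.arange(10), 100)  # 10 classes, 100 samples each
parts = dirichlet_label_split(labels, n_clients=5, alpha=0.1)
```

With `alpha=0.1`, each client's label histogram is dominated by a handful of classes, reproducing the imbalanced label distributions and asymmetric data sizes that cause the biased updates discussed above.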