The massive data generated at the edge can be used to train global models through Federated Learning (FL), making the combination of edge computing and federated learning a key technology for reducing network energy consumption. In Hierarchical Federated Learning (HFL), differences in the amount and quality of local data across edge devices directly affect the quality of the global model. To address this issue, a Node Cooperation Algorithm under Transmission Energy Consumption Constraint (NCATTECC) was proposed to solve the global model quality optimization problem, which was proved to be Non-deterministic Polynomial-hard (NP-hard); the proposed algorithm was also proved to achieve an approximation ratio of (1-1/e). Specifically, node collaboration enabled more high-quality nodes to participate in training without exceeding the energy consumption limit. Simulation results on the widely used CIFAR-10 and FashionMNIST datasets show that, when training with the selected nodes, the proposed algorithm improves model accuracy by 4.47% and 6.64% over FedAvg (Federated Averaging), and by 3.47% and 4.58% over Fed-CBS (Federated Class-balanced Sampling), respectively.
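The (1-1/e) approximation ratio quoted above is characteristic of greedy selection for monotone submodular objectives under a budget constraint. As an illustrative sketch only (NCATTECC's actual procedure is not described here, and the node qualities and energy costs below are hypothetical), a budget-constrained greedy node selection can look like this:

```python
# Hedged sketch: greedy node selection under a transmission-energy budget.
# This is NOT the authors' NCATTECC algorithm, only the generic greedy
# scheme associated with (1 - 1/e)-style approximation guarantees.

def greedy_select(nodes, budget):
    """Pick nodes to maximize total quality without exceeding the
    energy budget (quality is assumed additive in this toy example).

    nodes: list of (name, quality, energy_cost) tuples.
    Returns (selected node names, total energy used).
    """
    selected, used = [], 0.0
    # Consider nodes in decreasing quality-per-unit-energy order.
    for name, quality, energy in sorted(
            nodes, key=lambda n: n[1] / n[2], reverse=True):
        if used + energy <= budget:
            selected.append(name)
            used += energy
    return selected, used

# Hypothetical edge nodes: (name, data quality score, transmission energy).
nodes = [("A", 0.9, 3.0), ("B", 0.5, 1.0), ("C", 0.8, 2.0), ("D", 0.3, 2.5)]
chosen, energy_used = greedy_select(nodes, budget=4.0)
```

With the toy values above, the greedy pass admits B and C (ratios 0.50 and 0.40) and rejects A and D, which would exceed the 4.0 energy budget.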