To address data heterogeneity, the tendency of gradient-based training to stall in local optima, and the high computational and communication overhead of federated learning, a hybrid "key-edge screening, early-stopping genetic evolution, local fine-tuning" training framework, called KB-GA-KAN, was developed for the Kolmogorov-Arnold Network (KAN). First, key edges on each client were selected dynamically according to kernel-function amplitude and activation sensitivity, and only the kernel coefficients of these edges were evolved genetically, enabling a global search for good initial solutions. Then, an early-stopping criterion was introduced, and collaborative optimization was achieved by combining the evolution with local Stochastic Gradient Descent (SGD). Experimental results on five Non-Independent and Identically Distributed (Non-IID) datasets demonstrate that, compared with a KAN trained purely by gradient descent, KB-GA-KAN raises test accuracy by an average of 1.34% and reduces the number of convergence rounds by 42%, while improving robustness in heterogeneous scenarios at a slight additional computational cost. Visualizations of the kernel functions further confirm that KB-GA-KAN enhances model interpretability. KB-GA-KAN thus offers a new route to balancing accuracy, convergence speed, and computational cost in SGD-based KAN training under privacy-restricted conditions.
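To make the three-stage pipeline concrete, the following is a minimal single-client sketch, not the authors' implementation. It assumes RBF edge functions, a hypothetical amplitude-times-sensitivity scoring rule, truncation selection with uniform crossover and elitism, and a patience-based early stop; the names `edge_scores`, `evolve`, and `sgd` and all hyperparameters are illustrative.

```python
# Minimal KB-GA-KAN-style training loop on one client (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
K, D = 8, 4                                   # basis size, input dims (edges)
centers = np.linspace(-2, 2, K)               # RBF centers on the input range
width = 0.5

def basis(X):
    """RBF features: (N, D) inputs -> (N, D, K) activations per edge."""
    return np.exp(-((X[..., None] - centers) ** 2) / (2 * width ** 2))

def predict(C, X):
    """Sum of per-edge kernel functions; linear in coefficients C (D, K)."""
    return np.einsum('ndk,dk->n', basis(X), C)

def mse(C, X, y):
    return float(np.mean((predict(C, X) - y) ** 2))

def edge_scores(C, X):
    """Assumed scoring rule: kernel amplitude x mean activation sensitivity."""
    Phi = basis(X)                                    # (N, D, K)
    phi = np.einsum('ndk,dk->nd', Phi, C)             # per-edge outputs
    dPhi = Phi * (-(X[..., None] - centers) / width ** 2)
    dphi = np.einsum('ndk,dk->nd', dPhi, C)           # d(phi)/dx per edge
    return np.max(np.abs(phi), 0) * np.mean(np.abs(dphi), 0)

def evolve(C, X, y, edges, pop=30, gens=200, patience=15, sigma=0.3):
    """GA over the selected key edges' coefficients only, with early stopping."""
    flat = C[edges].ravel()
    P = flat + sigma * rng.standard_normal((pop, flat.size))
    best, stall = np.inf, 0
    for _ in range(gens):
        fit = np.empty(pop)
        for p in range(pop):
            Cp = C.copy()
            Cp[edges] = P[p].reshape(len(edges), K)
            fit[p] = mse(Cp, X, y)
        order = np.argsort(fit)
        if fit[order[0]] < best - 1e-6:
            best, stall = fit[order[0]], 0
        else:
            stall += 1
            if stall >= patience:                     # early-stopping criterion
                break
        elite = P[order[: pop // 2]]                  # truncation selection
        mates = elite[rng.integers(0, len(elite), (pop, 2))]
        mask = rng.random((pop, flat.size)) < 0.5     # uniform crossover
        P = np.where(mask, mates[:, 0], mates[:, 1])
        P += sigma * rng.standard_normal(P.shape)     # Gaussian mutation
        P[0] = elite[0]                               # elitism: keep best found
    C = C.copy()
    C[edges] = elite[0].reshape(len(edges), K)
    return C

def sgd(C, X, y, lr=0.05, steps=300):
    """Local fine-tuning: exact gradient, since the model is linear in C."""
    Phi = basis(X)
    for _ in range(steps):
        r = np.einsum('ndk,dk->n', Phi, C) - y
        C = C - lr * 2 * np.einsum('n,ndk->dk', r, Phi) / len(y)
    return C

# Toy client data: the target depends on only two of the four inputs.
X = rng.uniform(-2, 2, (256, D))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

C = 0.1 * rng.standard_normal((D, K))
edges = np.argsort(edge_scores(C, X))[-2:]            # keep the top-m key edges
C = evolve(C, X, y, edges)                            # global search (GA)
C = sgd(C, X, y)                                      # local refinement (SGD)
print(f"final client MSE: {mse(C, X, y):.4f}")
```

Restricting the genetic search to the screened edges keeps the evolved coefficient vector small, which is what bounds the extra computational cost the abstract refers to; in a federated round, only the fine-tuned coefficients would then be aggregated server-side.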