Hybrid optimization framework for improving Kolmogorov-Arnold network in federated learning
Zhi JIANG, Xuebin CHEN, Changyin LUO, Ziye ZHEN
Journal of Computer Applications    2026, 46 (4): 1023-1033.   DOI: 10.11772/j.issn.1001-9081.2025050536
Abstract

To address data heterogeneity, the tendency of gradient training to converge to local optima, and the high computational and communication overhead in federated learning, a hybrid training framework of “key-edge screening, early-stopping genetic evolution, and local fine-tuning” was developed for the Kolmogorov-Arnold Network (KAN), named KB-GA-KAN. First, key edges were selected dynamically on each client according to kernel-function amplitude and activation sensitivity, and only the kernel coefficients of these edges were evolved genetically, enabling a global search for good initial solutions. Then, an early-stopping criterion was introduced, and collaborative optimization was achieved by combining the genetic evolution with local Stochastic Gradient Descent (SGD). Experimental results on five Non-Independent and Identically Distributed (Non-IID) datasets demonstrate that, compared with a KAN framework trained purely by gradients, KB-GA-KAN raises test accuracy by an average of 1.34% and reduces the number of convergence rounds by 42%, while improving robustness in heterogeneous scenarios at a slight additional computational cost. Visualizations of the kernel functions further confirm that KB-GA-KAN enhances model interpretability. Thus, KB-GA-KAN offers a new route to balancing accuracy, convergence speed, and computational cost for KAN training under privacy-restricted conditions.
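The key-edge screening step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the scoring rule (kernel-coefficient amplitude multiplied by activation sensitivity) and the function name `select_key_edges` are assumptions chosen to illustrate the idea of ranking edges and evolving only the top fraction.

```python
import numpy as np

def select_key_edges(coeffs, sensitivities, keep_ratio=0.2):
    """Score each edge and keep only the top fraction for genetic evolution.

    coeffs:        (n_edges, n_coeffs) kernel coefficients per KAN edge
    sensitivities: (n_edges,) activation-sensitivity estimate per edge
    Returns the indices of the highest-scoring edges.

    Hypothetical scoring rule: per-edge kernel amplitude times
    activation sensitivity; the abstract names these two factors
    but does not give the exact formula.
    """
    amplitude = np.abs(coeffs).max(axis=-1)   # per-edge kernel amplitude
    scores = amplitude * sensitivities        # combined importance score
    k = max(1, int(keep_ratio * scores.size))
    return np.argsort(scores)[::-1][:k]       # top-k edge indices

rng = np.random.default_rng(0)
coeffs = rng.normal(size=(10, 8))   # 10 edges, 8 kernel coefficients each
sens = rng.uniform(size=10)         # per-edge activation sensitivity
key_edges = select_key_edges(coeffs, sens, keep_ratio=0.3)
print(sorted(key_edges.tolist()))
```

Only the kernel coefficients of the returned edges would then be encoded into the genetic population, keeping the evolutionary search space (and thus the extra computational cost) small relative to the full network.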
