《计算机应用》 (Journal of Computer Applications) official website ›› 2026, Vol. 46 ›› Issue (4): 1023-1033. DOI: 10.11772/j.issn.1001-9081.2025050536

• Artificial Intelligence •

Hybrid optimization framework for improving Kolmogorov-Arnold network in federated learning

Zhi JIANG1, Xuebin CHEN1, Changyin LUO1,2, Ziye ZHEN1

  1. College of Sciences, North China University of Science and Technology, Tangshan, Hebei 063210, China
    2. School of Mathematics and Statistics, Ningxia University, Yinchuan, Ningxia 750021, China
  • Received: 2025-05-16; Revised: 2025-06-27; Accepted: 2025-07-15; Online: 2025-08-01; Published: 2026-04-10
  • Corresponding author: Xuebin CHEN
  • About the authors: JIANG Zhi, born in 2000 in Qingdao, Shandong, M.S. candidate, CCF member. His research interests include federated learning and machine learning.
    LUO Changyin, born in 1994 in Ankang, Shaanxi, Ph.D. candidate, CCF member. His research interests include federated learning and machine learning.
    ZHEN Ziye, born in 2001 in Dingzhou, Hebei, M.S. candidate. His research interests include radial basis function interpolation, multivariate splines, and deep learning.
  • Supported by:
    National Natural Science Foundation of China (U20A20179)

Hybrid optimization framework for improving Kolmogorov-Arnold network in federated learning

Zhi JIANG1, Xuebin CHEN1, Changyin LUO1,2, Ziye ZHEN1

  1. College of Sciences, North China University of Science and Technology, Tangshan, Hebei 063210, China
    2. School of Mathematics and Statistics, Ningxia University, Yinchuan, Ningxia 750021, China
  • Received: 2025-05-16; Revised: 2025-06-27; Accepted: 2025-07-15; Online: 2025-08-01; Published: 2026-04-10
  • Contact: Xuebin CHEN
  • About the authors: JIANG Zhi, born in 2000, M.S. candidate. His research interests include federated learning and machine learning.
    LUO Changyin, born in 1994, Ph.D. candidate. His research interests include federated learning and machine learning.
    ZHEN Ziye, born in 2001, M.S. candidate. His research interests include radial basis function interpolation, multivariate splines, and deep learning.
  • Supported by:
    National Natural Science Foundation of China(U20A20179)

Abstract:

To address data heterogeneity, the tendency of gradients to fall into local optima, and the high computation-communication overhead in federated learning, a hybrid training framework of "key edge screening - early-stopping genetic evolution - local fine-tuning", named KB-GA-KAN, is proposed for the Kolmogorov-Arnold Network (KAN). First, key edges are selected dynamically on each client according to kernel function amplitude and activation sensitivity, and only the kernel coefficients of these edges are evolved genetically, performing a global search for good initial solutions. Then, an early-stopping criterion is introduced, and collaborative optimization is achieved by combining the evolution with local Stochastic Gradient Descent (SGD). Experimental results on five Non-Independent and Identically Distributed (Non-IID) datasets show that, compared with a purely gradient-trained KAN, KB-GA-KAN improves test accuracy by an average of 1.34% under the same number of communication rounds, reduces the number of convergence rounds by 42%, and improves robustness in heterogeneous scenarios at only a slight additional computational cost. Visualizations of the kernel functions further confirm that KB-GA-KAN enhances model interpretability. Thus, KB-GA-KAN offers a new route for efficient SGD-based KAN training under privacy-restricted conditions that balances accuracy, convergence speed, and computational cost.

Key words: federated learning, Kolmogorov-Arnold Network (KAN), Genetic Algorithm (GA), early stopping mechanism

Abstract:

To address data heterogeneity, the tendency of gradients to fall into local optima, and the high computational and communication overhead in federated learning, a hybrid training framework of "key edge screening - early-stopping genetic evolution - local fine-tuning", called KB-GA-KAN, was developed for the Kolmogorov-Arnold Network (KAN). First, key edges on each client were selected dynamically according to kernel function amplitude and activation sensitivity, and only the kernel coefficients of these edges were evolved genetically, enabling a global search for good initial solutions. Then, an early-stopping criterion was introduced, and collaborative optimization was achieved by combining the evolution with local Stochastic Gradient Descent (SGD). Experimental results on five Non-Independent and Identically Distributed (Non-IID) datasets demonstrate that, compared with a purely gradient-trained KAN, KB-GA-KAN improves test accuracy by an average of 1.34% under the same number of communication rounds, reduces the number of convergence rounds by 42%, and improves robustness in heterogeneous scenarios at only a slight additional computational cost. Visualizations of the kernel functions further confirm that KB-GA-KAN enhances model interpretability. Thus, KB-GA-KAN offers a new route for efficient SGD-based KAN training under privacy-restricted conditions that balances accuracy, convergence speed, and computational cost.
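The screening-evolution-early-stopping pipeline summarized in the abstract can be sketched in a minimal, illustrative form. Everything below (the toy loss, the parameter values, and names such as `select_key_edges`) is an assumption made for illustration, not the paper's actual implementation:

```python
# Illustrative sketch: score edges by kernel amplitude x activation
# sensitivity, keep the top-k "key" edges, then evolve only their kernel
# coefficients with a genetic algorithm that stops early when progress stalls.
import random

random.seed(0)

def select_key_edges(amplitudes, sensitivities, k):
    """Pick indices of the k edges with the largest |amplitude| * sensitivity."""
    scores = [abs(a) * s for a, s in zip(amplitudes, sensitivities)]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

def toy_loss(coeffs):
    """Stand-in for a client's local loss; minimized at coeffs == target."""
    target = [0.5, -0.3, 0.8]
    return sum((c - t) ** 2 for c, t in zip(coeffs, target))

def evolve_key_coeffs(init, pop_size=20, gens=100, patience=10, sigma=0.2):
    """GA over the key edges' kernel coefficients with early stopping."""
    pop = [[c + random.gauss(0, sigma) for c in init] for _ in range(pop_size)]
    best, best_loss, stall = init[:], toy_loss(init), 0
    for _ in range(gens):
        pop.sort(key=toy_loss)
        if toy_loss(pop[0]) < best_loss - 1e-6:
            best, best_loss, stall = pop[0][:], toy_loss(pop[0]), 0
        else:
            stall += 1
            if stall >= patience:  # early-stopping criterion: no recent gain
                break
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))          # one-point crossover
            child = [c + random.gauss(0, sigma * 0.5)  # Gaussian mutation
                     for c in a[:cut] + b[cut:]]
            children.append(child)
        pop = parents + children
    return best, best_loss
```

In the framework described by the abstract, the coefficients returned by this stage would then seed local SGD fine-tuning on each client; restricting the genetic search to the key edges is what keeps the evolved search space, and hence the extra computation, small.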

Key words: federated learning, Kolmogorov-Arnold Network (KAN), Genetic Algorithm (GA), early stopping mechanism

CLC number: