Journal of Computer Applications
HAN Yuchen1,2,3, XU Fenglei1, LYU Fan4, YAO Rui5, HU Fuyuan1,2,3*
Abstract: Task-free online continual learning is a task-agnostic, autonomous machine learning paradigm in which the model is updated dynamically by adapting continuously to new data while mitigating catastrophic forgetting. Existing online continual learning methods typically pursue model accuracy at the expense of computational efficiency, so that in high-velocity stream scenarios training lags behind the stream and the model cannot respond to its changes in time. To address this problem, an efficiency-performance co-optimized online sparse continual learning framework was proposed, which breaks through the bottleneck of conventional methods via a bidirectional sparse adaptive regulation mechanism. First, a dynamic sparse topology optimization framework driven by parameter importance measurement was designed, in which parameter sensitivity analysis was incorporated to realize unstructured parameter pruning. Second, a memory-efficiency bi-objective optimization model was established, in which the computational budget was allocated dynamically according to an online estimate of the class distribution, achieving optimal resource configuration. Finally, a gradient decoupling optimization strategy was constructed, in which gradient masking was employed to optimize old and new knowledge bidirectionally, accelerating model updates while preserving the integrity of the knowledge topology. The proposed method shows significant advantages on the CIFAR-10, CIFAR-100, and Mini-ImageNet benchmarks. On CIFAR-10 with a memory capacity of 100, it improves Average Online Accuracy (AOA) and Test Accuracy (TA) by 3.51% and 5.00% on average over the Experience Replay (ER) baseline; on CIFAR-100 it gains 2.59% in AOA and 1.55% in TA; and on Mini-ImageNet the two metrics improve by 3.52% and 0.13%, respectively. Visualization shows that the framework captures the underlying concept-drift patterns in the data stream while maintaining real-time responsiveness. Ablation studies confirm the synergistic effect of the components and show that the dynamic sparsity mechanism offers a way past the fundamental tension of continual learning. The proposed method overcomes the traditional trade-off between computational efficiency and model performance, establishing a new paradigm for online continual learning systems in open environments.
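The abstract outlines three mechanisms: sensitivity-driven unstructured pruning, a compute/replay budget steered by an online class-distribution estimate, and gradient masking over old and new knowledge. The sketch below shows one plausible way these pieces could fit into a single ER-style online step. It is a minimal illustration under stated assumptions, not the authors' implementation: the first-order sensitivity score |w·∂L/∂w|, the top-k mask, the entropy-scaled budget, and every function name (sensitivity_scores, build_sparsity_mask, replay_budget, masked_update) are hypothetical.

```python
# Hypothetical sketch of the three mechanisms named in the abstract:
# sensitivity-scored unstructured pruning, a replay budget driven by an
# online class-distribution estimate, and gradient masking during updates.
import torch
import torch.nn.functional as F


def sensitivity_scores(model):
    """First-order sensitivity |w * dL/dw| per parameter (a common proxy).
    Call after loss.backward() so gradients are populated."""
    return {
        name: (p.detach() * p.grad.detach()).abs()
        for name, p in model.named_parameters()
        if p.grad is not None
    }


def build_sparsity_mask(scores, keep_ratio=0.2):
    """Keep the top-`keep_ratio` most sensitive weights; zero-mask the rest."""
    masks = {}
    for name, s in scores.items():
        k = max(1, int(keep_ratio * s.numel()))
        # k-th largest value = (numel - k + 1)-th smallest value.
        threshold = s.flatten().kthvalue(s.numel() - k + 1).values
        masks[name] = (s >= threshold).float()
    return masks


def replay_budget(class_counts, base_budget=32):
    """Scale the per-step replay budget with the estimated class imbalance:
    a near-uniform stream keeps the base budget, a skewed or drifting stream
    (low entropy) gets up to twice as much replay."""
    probs = class_counts.float() / class_counts.sum().clamp(min=1)
    entropy = -(probs * probs.clamp(min=1e-8).log()).sum()
    max_entropy = torch.log(torch.tensor(float(len(class_counts))))
    return int(base_budget * (2.0 - entropy / max_entropy))


def masked_update(model, optimizer, new_batch, replay_batch, masks):
    """One online step: joint loss on new + replayed data, with gradients
    restricted to the sparse subnetwork selected by `masks`."""
    x_new, y_new = new_batch
    x_old, y_old = replay_batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_new), y_new) \
         + F.cross_entropy(model(x_old), y_old)
    loss.backward()
    for name, p in model.named_parameters():
        if p.grad is not None and name in masks:
            p.grad.mul_(masks[name])  # gradient masking: pruned weights stay frozen
    optimizer.step()
    return loss.item()
```

Under these assumptions, masking gradients rather than deleting weights leaves the dense network intact while restricting each online update to the currently sensitive subnetwork, which is one way to reconcile the update speed and accuracy the abstract claims.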
Key words: Continual Learning (CL), sparsity, online learning, parameter masking, computational efficiency
CLC Number: TP391.4
HAN Yuchen, XU Fenglei, LYU Fan, YAO Rui, HU Fuyuan. Task-free online sparse continual learning method for high-speed data streams[J]. Journal of Computer Applications, DOI: 10.11772/j.issn.1001-9081.2025040452.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2025040452