Journal of Computer Applications


Orthogonal constraint and attention-guided augmentation for class-incremental learning

  

  • Received: 2025-11-10  Revised: 2026-02-03  Accepted: 2026-02-06  Online: 2026-02-13  Published: 2026-02-13


廖名燕, 张法全*, 吴逸阳, 沈满德, 陈昊

  1. School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan 430200, China
  • Corresponding author: 张法全

Abstract: In class-incremental learning, overcoming catastrophic forgetting is a core challenge. Replay-based methods store and replay a small buffer of old samples to mitigate forgetting, but the scarcity of old-class samples can lead to overfitting and reduce the model's generalization ability. To address this, an Orthogonal Constraint and Attention-Guided Augmentation method (OCAGA) was proposed. On the one hand, an Attention-Guided Data Augmentation (AGDA) mechanism was designed to increase the quantity and diversity of old-class samples, alleviating the imbalance between new and old classes. On the other hand, an Orthogonal Projection Loss (OPL) was introduced, which explicitly exploited class information to pull same-class samples closer and push different-class samples apart in the embedding space, thereby enhancing feature discriminability. Experimental results on the CIFAR-100 and ImageNet-100 datasets show that OCAGA outperforms the incremental Classifier and Representation Learning (iCaRL) method, improving final accuracy by 2.55 and 4.12 percentage points, respectively, validating its effectiveness in mitigating catastrophic forgetting and promoting knowledge retention.
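The abstract does not give the exact formulation of the orthogonal projection loss; as a rough illustration of the idea it describes (same-class embeddings pulled together, different-class embeddings pushed toward orthogonality), the following is a minimal NumPy sketch under the assumption that similarity is measured as cosine similarity on L2-normalized embeddings. The function name and exact weighting of the two terms are illustrative, not the paper's definition.

```python
import numpy as np

def orthogonal_projection_loss(features, labels):
    """Sketch of an OPL-style orthogonality term (illustrative, not the paper's exact loss).

    features: (N, D) array of embeddings; labels: (N,) integer class ids.
    The term is minimized when same-class pairs have cosine similarity 1
    and different-class pairs are orthogonal (cosine similarity 0).
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize rows
    sim = f @ f.T                                    # pairwise cosine similarities
    same = labels[:, None] == labels[None, :]        # same-class pair mask
    np.fill_diagonal(same, False)                    # exclude self-pairs
    diff = ~same
    np.fill_diagonal(diff, False)
    s = sim[same].mean() if same.any() else 1.0      # mean intra-class similarity
    d = np.abs(sim[diff]).mean() if diff.any() else 0.0  # mean inter-class |similarity|
    return (1.0 - s) + d                             # 0 when s -> 1 and d -> 0
```

For example, two identical same-class pairs lying on orthogonal axes (e.g. classes along `[1, 0]` and `[0, 1]`) yield a loss of exactly 0, while a fully entangled batch yields a positive value, matching the intuition of pulling same-class samples together and pushing different-class samples apart.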

Key words: class-incremental learning, deep learning, data augmentation, orthogonal constraint, catastrophic forgetting


