Fast adversarial training method based on data augmentation and label noise
Yifei SONG, Yi LIU
Journal of Computer Applications    2024, 44 (12): 3798-3807.   DOI: 10.11772/j.issn.1001-9081.2023121835

Adversarial Training (AT) is an effective defense for protecting classification models against adversarial attacks. However, the high computational cost of generating strong adversarial samples during training leads to substantial extra training time. To overcome this limitation, Fast Adversarial Training (FAT) based on single-step attacks has been explored. Previous work has improved FAT from different perspectives, such as sample initialization, loss regularization, and training strategies, but Catastrophic Overfitting (CO) is still encountered under large perturbation budgets. Therefore, an FAT method based on data augmentation and label noise was proposed. Firstly, multiple image transformations were applied to the original samples and random noise was introduced to implement data augmentation. Secondly, a small amount of label noise was injected into the training labels. Thirdly, the augmented data were used to generate adversarial samples for model training. Finally, the label noise rate was adjusted adaptively according to the adversarial robustness test results. Comprehensive experimental results on the CIFAR-10 and CIFAR-100 datasets show that, compared to the FGSM-MEP (Fast Gradient Sign Method with prior from the Momentum of all Previous Epoch) method, the proposed method improves accuracy under AA (AutoAttack) by 4.63 and 5.38 percentage points on the two datasets respectively under large perturbation budgets. The experimental results demonstrate that the proposed method effectively alleviates the catastrophic overfitting problem under large perturbation budgets and significantly enhances the adversarial robustness of the model.
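The four steps of the abstract (augment, inject label noise, generate single-step adversarial samples, adapt the noise rate) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function names, noise scales, flip-based augmentation, and the simple accuracy-driven adaptation rule are all assumptions, and NumPy arrays stand in for a real model and training loop (the gradient is assumed to be supplied by the model's loss backward pass).

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, noise_scale=0.05):
    # Step 1 (sketch): random horizontal flip plus uniform random noise,
    # clipped back to the valid pixel range [0, 1].
    if rng.random() < 0.5:
        x = x[:, ::-1]
    return np.clip(x + rng.uniform(-noise_scale, noise_scale, x.shape), 0.0, 1.0)

def inject_label_noise(labels, num_classes, rate):
    # Step 2 (sketch): flip a fraction `rate` of labels to a different class.
    labels = labels.copy()
    n_flip = int(len(labels) * rate)
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    # Adding an offset in [1, num_classes-1] guarantees the label changes.
    labels[idx] = (labels[idx] + rng.integers(1, num_classes, n_flip)) % num_classes
    return labels

def fgsm(x, grad, eps):
    # Step 3 (sketch): single-step FGSM perturbation along the sign of the
    # loss gradient `grad` (assumed to come from the model), clipped to [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def adapt_noise_rate(rate, robust_acc, prev_acc, step=0.02, max_rate=0.2):
    # Step 4 (assumed rule): raise the label-noise rate when robust accuracy
    # drops (a symptom of catastrophic overfitting), lower it when it recovers.
    if robust_acc < prev_acc:
        return min(rate + step, max_rate)
    return max(rate - step, 0.0)
```

In a full training loop, each batch would pass through `augment` and `inject_label_noise` before `fgsm` generates the adversarial samples used for the update, and `adapt_noise_rate` would run once per robustness evaluation.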
