Clean-label multi-backdoor attack method based on feature regulation and color separation
Yingchun TANG, Rong HUANG, Shubo ZHOU, Xueqin JIANG
Journal of Computer Applications    2026, 46 (1): 124-134.   DOI: 10.11772/j.issn.1001-9081.2024121776
Abstract

To address the lack of stealthiness and flexibility in traditional backdoor attacks, a clean-label multi-backdoor attack method based on feature regulation and color separation was proposed, in which a poisoning network was trained to embed triggers under an information hiding framework. Firstly, image edges were used as the trigger; a feature regulation strategy was designed, and adversarial perturbation together with a surrogate model were employed to assist the training of the poisoning network and enhance the significance of the trigger features. Secondly, a color separation strategy was proposed to color the trigger, giving it distinguishable colors in RGB space, and a one-hot target confidence corresponding to each color was set to guide training, thereby ensuring the distinguishability of the trigger features. To verify the effectiveness of the proposed method, experiments were conducted on 3 datasets (CIFAR-10, ImageNet-10, and GTSRB) and 5 models. The results show that in the single-backdoor scenario, the proposed method achieves an Attack Success Rate (ASR) above 98% on all three datasets, outperforming the second-best method by 7.94, 1.70, and 8.61 percentage points, respectively; in the multi-backdoor scenario, it achieves an ASR above 90% on the ImageNet-10 dataset, outperforming the second-best method by 36.63 percentage points on average. The results of ablation experiments verify the rationality of the feature regulation and color separation strategies as well as the contributions of the adversarial perturbation and the surrogate model, and the results of the multi-backdoor experiments demonstrate the flexibility of the proposed attack method.
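The core idea of the attack can be illustrated with a minimal sketch: extract the image's own edges as the trigger pattern and tint them with a per-backdoor RGB color before blending them back into the image. This is our own toy approximation for intuition only; all names (`COLOR_MAP`, `edge_map`, `apply_colored_trigger`, the blend factor `alpha`, and the gradient threshold) are assumptions, and the paper instead trains a poisoning network to embed the trigger imperceptibly under an information hiding framework.

```python
import numpy as np

# One distinguishable RGB color per backdoor (assumed values, not from the paper).
COLOR_MAP = {
    0: (1.0, 0.0, 0.0),  # backdoor 0 -> red-tinted edges
    1: (0.0, 1.0, 0.0),  # backdoor 1 -> green-tinted edges
    2: (0.0, 0.0, 1.0),  # backdoor 2 -> blue-tinted edges
}

def edge_map(img):
    """Crude gradient-magnitude edge map of an HxWx3 float image in [0, 1]."""
    gray = img.mean(axis=2)
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    return ((gx + gy) > 0.1).astype(np.float32)   # binary edge mask

def apply_colored_trigger(img, backdoor_id, alpha=0.2):
    """Blend the image's own edges, tinted with the backdoor's color, into the image."""
    edges = edge_map(img)[..., None]              # HxWx1 mask
    color = np.asarray(COLOR_MAP[backdoor_id])    # (3,) RGB tint
    trigger = edges * color                       # colored edge layer
    return np.clip((1.0 - alpha) * img + alpha * trigger, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((32, 32, 3)).astype(np.float32)    # stand-in for a CIFAR-10 image
poisoned = apply_colored_trigger(x, backdoor_id=1)
print(poisoned.shape)
```

Because each backdoor's edge trigger carries a distinct color, the corresponding one-hot target confidences remain separable during training, which is what lets multiple backdoors coexist in one model.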
