Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (1): 124-134.DOI: 10.11772/j.issn.1001-9081.2024121776
• Cyber security •
Yingchun TANG1, Rong HUANG1,2, Shubo ZHOU1,2, Xueqin JIANG1,2
Received: 2024-12-17
Revised: 2025-03-31
Accepted: 2025-04-07
Online: 2026-01-10
Published: 2026-01-10
Contact: Rong HUANG
About author: TANG Yingchun, born in 2000, M.S. candidate. His research interests include backdoor attacks.
Yingchun TANG, Rong HUANG, Shubo ZHOU, Xueqin JIANG. Clean-label multi-backdoor attack method based on feature regulation and color separation[J]. Journal of Computer Applications, 2026, 46(1): 124-134.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024121776
| Attack method | CIFAR-10 ASR(↑) | CIFAR-10 MAL(↓) | ImageNet-10 ASR(↑) | ImageNet-10 MAL(↓) | GTSRB ASR(↑) | GTSRB MAL(↓) |
|---|---|---|---|---|---|---|
| BadNets | 100.00 | 1.04 | 100.00 | 0.20 | 100.00 | 0.08 |
| INK | 99.92 | 0.01 | 98.48 | 0.12 | 97.81 | 0.08 |
| BadNets* | 57.51 | 80.20 | 0.10 | 47.80 | 0.83 | |
| INK* | 55.20 | 0.01 | 71.20 | 0.15 | 40.37 | 0.23 |
| LCBA | 0.43 | 94.90 | 0.57 | 77.03 | 0.20 | |
| Refool | 83.61 | 0.80 | 95.99 | 0.49 | 0.66 | |
| DA | 82.22 | 0.09 | 88.32 | 0.05 | 70.21 | |
| Inv | 89.32 | 0.80 | 0.10 | 86.01 | 0.43 | |
| SIG | 45.80 | 0.12 | 50.45 | -0.10 | 89.43 | 0.48 |
| HTBA | 67.87 | 0.10 | 63.00 | 61.22 | 0.58 | |
| SAA | 84.60 | 0.13 | 78.51 | 0.08 | 81.55 | 0.23 |
| Proposed method | 99.15 | -0.98 | 99.90 | 0.02 | 98.33 | 0.30 |
Tab. 1 Quantitative single-backdoor performance comparison of different attack methods
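The page reports ASR and MAL values but does not spell out how they are computed. A minimal sketch under the conventional definitions (ASR as the percentage of triggered inputs classified as the attacker's target class, MAL as the clean-accuracy drop of the backdoored model); the function names and example numbers are illustrative, not from the paper:

```python
def attack_success_rate(preds_on_triggered, target_label):
    """ASR: percentage of triggered inputs (true label != target class)
    that the backdoored model classifies as the attacker's target."""
    preds = list(preds_on_triggered)
    return 100.0 * sum(p == target_label for p in preds) / len(preds)

def model_accuracy_loss(benign_clean_acc, backdoored_clean_acc):
    """MAL: clean-accuracy drop of the backdoored model relative to a
    benignly trained model; a negative value means accuracy improved."""
    return benign_clean_acc - backdoored_clean_acc

# Hypothetical predictions on five triggered inputs, target class 3
asr = attack_success_rate([3, 3, 3, 1, 3], target_label=3)  # 80.0
mal = model_accuracy_loss(92.5, 91.8)
```

Under these definitions, a higher ASR (↑) and a lower MAL (↓) are both better for the attacker, which matches the column annotations in Tab. 1.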
| Attack method | Avg-ASR | Max-ASR | Min-ASR |
|---|---|---|---|
| SIG | 28.08 | 42.19 | 13.97 |
| HTBA | 46.12 | 59.80 | 32.44 |
| SAA | 57.97 | 64.54 | 51.40 |
| Proposed method | 94.60 | 97.60 | 91.60 |
Tab. 2 Quantitative dual-backdoor performance comparison of different attack methods on ImageNet-10 dataset
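For a multi-backdoor attack, each target class has its own ASR; the Avg/Max/Min columns above summarize that per-target distribution. A small sketch of the presumed aggregation (the function name is illustrative):

```python
def aggregate_asr(per_target_asr):
    """Summarize a multi-backdoor attack by the mean, best and worst
    per-target attack success rates, as in the Avg/Max/Min-ASR columns."""
    vals = list(per_target_asr)
    return {"Avg-ASR": sum(vals) / len(vals),
            "Max-ASR": max(vals),
            "Min-ASR": min(vals)}

# e.g. a dual-backdoor attack with per-target ASRs of 97.6% and 91.6%
stats = aggregate_asr([97.6, 91.6])
```

A small gap between Max-ASR and Min-ASR indicates that the injected backdoors are balanced rather than one trigger dominating training.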
| Poisoning rate/% | CIFAR-10 ASR | CIFAR-10 MAL | ImageNet-10 ASR | ImageNet-10 MAL | GTSRB ASR | GTSRB MAL |
|---|---|---|---|---|---|---|
| 0.5 | 95.24 | -1.45 | 95.10 | 0.00 | 48.37 | 0.01 |
| 1.0 | 95.46 | -0.88 | 96.20 | 96.93 | 0.40 | |
| 2.0 | 99.15 | 0.20 | 98.33 | | | |
| 5.0 | 0.01 | 100.00 | 0.40 | 99.63 | 0.65 | |
| 10.0 | 100.00 | 6.49 | 100.00 | 10.09 | 0.35 | |
Tab. 3 Performance comparison of proposed method under different poisoning rates
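The poisoning rate in Tab. 3 is the fraction of the training set that the attacker modifies. A minimal sketch of how that subset is typically chosen under the clean-label convention assumed by this paper's title (only images that already carry the target label are perturbed, so no labels are flipped); the function name and parameters are illustrative:

```python
import random

def select_poison_indices(labels, target_class, poison_rate, seed=0):
    """Pick the training indices to poison. In the clean-label setting
    only samples already labeled with the target class are modified;
    `poison_rate` is the poisoned fraction of the whole training set."""
    rng = random.Random(seed)
    candidates = [i for i, y in enumerate(labels) if y == target_class]
    n_poison = int(round(poison_rate * len(labels)))
    if n_poison > len(candidates):
        raise ValueError("target class has too few samples for this rate")
    return rng.sample(candidates, n_poison)

labels = [0] * 500 + [1] * 500              # toy two-class label vector
idx = select_poison_indices(labels, target_class=1, poison_rate=0.02)
# 2% of 1000 samples -> 20 indices, all drawn from class 1
```

Because the candidate pool is restricted to the target class, the usable poisoning rate is capped by that class's share of the training set, which is one reason clean-label attacks are usually evaluated at low rates such as 0.5%-10%.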
| Surrogate model | Single-backdoor ASR | Multi-backdoor Avg-ASR | Multi-backdoor Max-ASR | Multi-backdoor Min-ASR |
|---|---|---|---|---|
| × | 97.1 | 49.47 | 74.55 | 24.38 |
| √ | 99.9 | 96.10 | 97.60 | 94.60 |
Tab. 4 Impact of surrogate model on performance of proposed method
[1] ZHANG S, GONG Y H, WANG J J. The development of deep convolution neural network and its applications on computer vision [J]. Chinese Journal of Computers, 2019, 42(3): 453-482.
[2] QIU H, YU B, GONG D, et al. SynFace: face recognition with synthetic data [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 10860-10870.
[3] YUAN L, CHEN Y, CUI G, et al. Revisiting out-of-distribution robustness in NLP: benchmarks, analysis, and LLMs evaluations [C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2023: 58478-58507.
[4] CHEN J, TAM D, RAFFEL C, et al. An empirical survey of data augmentation for limited data learning in NLP [J]. Transactions of the Association for Computational Linguistics, 2023, 11: 191-211.
[5] LENG Y, TAN X, ZHU L, et al. FastCorrect: fast error correction with edit alignment for automatic speech recognition [C]// Proceedings of the 35th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2021: 21708-21719.
[6] KHEDDAR H, HEMIS M, HIMEUR Y. Automatic speech recognition using advanced deep learning approaches: a survey [J]. Information Fusion, 2024, 109: No.102422.
[7] FAN Y, WU B, LI T, et al. Sparse adversarial attack via perturbation factorization [C]// Proceedings of the 2020 European Conference on Computer Vision. Cham: Springer, 2020: 35-50.
[8] WEI H, TANG H, JIA X, et al. Physical adversarial attack meets computer vision: a decade survey [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(12): 9797-9817.
[9] MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Universal adversarial perturbations [C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 86-94.
[10] SHAFAHI A, HUANG W R, NAJIBI M, et al. Poison frogs! targeted clean-label poisoning attacks on neural networks [C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2018: 6106-6116.
[11] CHEN J Y, ZOU J F, SU M M, et al. Poisoning attack and defense on deep learning model: a survey [J]. Journal of Cyber Security, 2020, 5(4): 14-29.
[12] CARLINI N, JAGIELSKI M, CHOQUETTE-CHOO C A, et al. Poisoning web-scale training datasets is practical [C]// Proceedings of the 2024 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2024: 407-425.
[13] LIANG J, HAO X Y, CHEN Y L. Poisoning attack toward visual classification model [J]. Journal of Computer Applications, 2023, 43(2): 467-473.
[14] DU W, LIU G S. A survey of backdoor attack in deep learning [J]. Journal of Cyber Security, 2022, 7(3): 1-16.
[15] ZHAO S, MA X, ZHENG X, et al. Clean-label backdoor attacks on video recognition models [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 14431-14440.
[16] GU T, LIU K, DOLAN-GAVITT B, et al. BadNets: evaluating backdooring attacks on deep neural networks [J]. IEEE Access, 2019, 7: 47230-47244.
[17] NGUYEN A T, TRAN A T. WaNet: imperceptible warping-based backdoor attack [EB/OL]. [2024-11-19].
[18] ZHONG N, QIAN Z, ZHANG X. Imperceptible backdoor attack: from input space to feature representation [C]// Proceedings of the 31st International Joint Conference on Artificial Intelligence. California: ijcai.org, 2022: 1736-1742.
[19] LI S, XUE M, ZHAO B Z H, et al. Invisible backdoor attacks on deep neural networks via steganography and regularization [J]. IEEE Transactions on Dependable and Secure Computing, 2021, 18(5): 2088-2105.
[20] ZHANG J, CHEN D, HUANG Q, et al. Poison Ink: robust and invisible backdoor attack [J]. IEEE Transactions on Image Processing, 2022, 31: 5691-5705.
[21] LI Y, LI Y, WU B, et al. Invisible backdoor attack with sample-specific triggers [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 16443-16452.
[22] MA B, ZHAO C, WANG D, et al. DIHBA: dynamic, invisible and high attack success rate boundary backdoor attack with low poison ratio [J]. Computers and Security, 2023, 129: No.103212.
[23] TURNER A, TSIPRAS D, MĄDRY A. Label-consistent backdoor attacks [EB/OL]. [2024-11-19].
[24] LIU Y, MA X, BAILEY J, et al. Reflection backdoor: a natural backdoor attack on deep neural networks [C]// Proceedings of the 2020 European Conference on Computer Vision, LNCS 12355. Cham: Springer, 2020: 182-199.
[25] XU C, LIU W, ZHENG Y, et al. An imperceptible data augmentation based blackbox clean-label backdoor attack on deep neural networks [J]. IEEE Transactions on Circuits and Systems I: Regular Papers, 2023, 70(12): 2011-5024.
[26] NING R, LI J, XIN C, et al. Invisible poison: a blackbox clean label backdoor attack to deep neural networks [C]// Proceedings of the 2021 IEEE Conference on Computer Communications. Piscataway: IEEE, 2021: 1-10.
[27] XUE M, HE C, WANG J, et al. One-to-N & N-to-One: two advanced backdoor attacks against deep learning models [J]. IEEE Transactions on Dependable and Secure Computing, 2022, 19(3): 1562-1578.
[28] BARNI M, KALLAS K, TONDI B. A new backdoor attack in CNNs by training set corruption without label poisoning [C]// Proceedings of the 2019 IEEE International Conference on Image Processing. Piscataway: IEEE, 2019: 101-105.
[29] SAHA A, SUBRAMANYA A, PIRSIAVASH H. Hidden trigger backdoor attacks [C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2020: 11957-11965.
[30] SOURI H, FOWL L, CHELLAPPA R, et al. Sleeper agent: scalable hidden trigger backdoors for neural networks trained from scratch [C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 19165-19178.
[31] TRAN B, LI J, MADRY A. Spectral signatures in backdoor attacks [C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2018: 8011-8021.
[32] GAO Y, XU C, WANG D, et al. STRIP: a defence against Trojan attacks on deep neural networks [C]// Proceedings of the 35th Annual Computer Security Applications Conference. New York: ACM, 2019: 113-125.
[33] WANG B, YAO Y, SHAN S, et al. Neural Cleanse: identifying and mitigating backdoor attacks in neural networks [C]// Proceedings of the 2019 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2019: 707-723.
[34] DOAN B G, ABBASNEJAD E, RANASINGHE D C. Februus: input purification defense against Trojan attacks on deep neural network systems [C]// Proceedings of the 36th Annual Computer Security Applications Conference. New York: ACM, 2020: 897-912.
[35] LIU K, DOLAN-GAVITT B, GARG S. Fine-Pruning: defending against backdooring attacks on deep neural networks [C]// Proceedings of the 2018 International Symposium on Attacks, Intrusions, and Defenses, LNCS 11050. Cham: Springer, 2018: 273-294.
[36] ZENG Y, CHEN S, PARK W, et al. Adversarial unlearning of backdoors via implicit hypergradient [EB/OL]. [2024-11-11].
[37] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 618-626.
[38] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric [C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 586-595.
[39] LUO N, LI Y, WANG Y, et al. Enhancing clean label backdoor attack with two-phase specific triggers [EB/OL]. [2024-10-19].
[40] KRIZHEVSKY A. Learning multiple layers of features from tiny images [R/OL]. [2024-11-10].
[41] STALLKAMP J, SCHLIPSING M, SALMEN J, et al. The German Traffic Sign Recognition Benchmark: a multi-class classification competition [C]// Proceedings of the 2011 International Joint Conference on Neural Networks. Piscataway: IEEE, 2011: 1453-1460.
[42] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database [C]// Proceedings of the 2009 Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2009: 248-255.
[43] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. [2024-11-10].
[44] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 International Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[45] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks [C]// Proceedings of the 2017 Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2261-2269.
[46] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions [C]// Proceedings of the 2015 International Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1-9.
[47] LOSHCHILOV I, HUTTER F. SGDR: stochastic gradient descent with warm restarts [EB/OL]. [2024-11-19].