Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (6): 1807-1815. DOI: 10.11772/j.issn.1001-9081.2023060774
Special Issue: Artificial Intelligence
• Artificial intelligence •
Received: 2023-06-19
Revised: 2023-08-15
Accepted: 2023-08-23
Online: 2023-09-11
Published: 2024-06-10
Contact: Jinfu WU, Yi LIU
About author: WU Jinfu, born in 1998, M. S. candidate. His research interests include deep learning, image classification, adversarial examples.
LIU Yi, born in 1976, Ph. D., professor. His research interests include cloud computing security, internet of things security, mobile computing.
Jinfu WU, Yi LIU. Fast adversarial training method based on random noise and adaptive step size[J]. Journal of Computer Applications, 2024, 44(6): 1807-1815.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2023060774
Symbol | Description
---|---
 | Projection operator, used to project (clip) adversarial examples whose perturbation exceeds the budget back into the allowed range
 | Gradient operator, used together with the loss function to compute the gradient of the current example with respect to the target network; the subscript indicates the variable the gradient is taken with respect to
 | Parameter-update symbol, denoting the update of the target network's parameters during training

Tab.1 Description of symbols
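The three operations in Tab.1 (projection of out-of-budget perturbations, input gradients combined with the loss function, and parameter updates) are the building blocks of single-step adversarial training. Below is a minimal PyTorch-style sketch of one training batch that shows how they fit together; it is not the authors' exact algorithm, and `model`, `optimizer`, `epsilon` and `step_size` are illustrative placeholders.

```python
# Minimal sketch of one single-step adversarial training batch, illustrating
# the operations listed in Tab.1. NOT the paper's exact method.
import torch
import torch.nn.functional as F

def single_step_at_batch(model, optimizer, x, y, epsilon=8/255, step_size=8/255):
    # random initialization of the perturbation (uniform noise in [-eps, eps])
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)

    # gradient operator: gradient of the loss w.r.t. the perturbed input
    loss = F.cross_entropy(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]

    # one FGSM step, then projection: clip the perturbation back into the
    # budget and the perturbed image into the valid pixel range
    delta = delta.detach() + step_size * grad.sign()
    delta = torch.clamp(delta, -epsilon, epsilon)
    x_adv = torch.clamp(x + delta, 0.0, 1.0)

    # parameter update: train the target network on the adversarial batch
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```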
Constant c | Clean examples/% | FGSM/% | PGD-50/%
---|---|---|---
0.003 5 | 80.67 | 58.88 | 49.15
0.004 3 | 81.62 | 60.80 | 48.57
0.005 0 | 82.27 | 62.07 | 48.25
0.010 0 | 84.13 | 66.26 | 45.80

Tab.2 Robustness test results on CIFAR-10 dataset with different constant values c
Base step size γ/c | Clean examples/% | FGSM/% | PGD-50/%
---|---|---|---
4.5/255 | 83.81 | 65.33 | 46.47
5.0/255 | 82.61 | 62.98 | 47.36
5.5/255 | 81.62 | 60.80 | 48.57
6.0/255 | 80.45 | 58.80 | 48.80

Tab.3 Robustness test results on CIFAR-10 dataset with different base step sizes γ/c
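The exact adaptive step-size rule is defined in the paper's method section and is not reproduced on this page. One reading consistent with the notation, in the spirit of ATAS [25], is that the per-example step shrinks as an accumulated input-gradient magnitude grows, with the constant c acting as a stabilizer so that γ/c is the step taken when the accumulated term is near zero; this matches the trend in Tab.2, where a larger c gives higher clean accuracy but lower PGD-50 robustness. The sketch below is written under that assumption only, and `grad_accum`, `gamma`, `c` and `beta` are assumed names.

```python
# Hedged sketch of an ATAS-style adaptive step size. This is an assumed form
# consistent with the notation "base step size gamma/c", NOT the paper's exact rule.
import torch

def adaptive_step_size(grad, grad_accum, gamma, c, beta=0.9):
    # per-example mean absolute input gradient (one scalar per image)
    g_mag = grad.abs().mean(dim=(1, 2, 3), keepdim=True)
    # running estimate of the gradient magnitude, updated across epochs
    grad_accum = beta * grad_accum + (1 - beta) * g_mag
    # step shrinks for examples whose gradients have grown large; when the
    # accumulated term is near zero the step equals the base step gamma / c
    step = gamma / (c + grad_accum)
    return step, grad_accum
```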
Noise parameter | Clean examples/% | FGSM/% | PGD-50/%
---|---|---|---
0 (final) | 84.79 | 97.03 | 0.00
0 (best) | 67.61 | 43.69 | 38.17
1 | 81.62 | 60.80 | 48.57
2 | 80.43 | 60.35 | 48.21

Tab.4 Robustness test results on CIFAR-10 dataset with different levels of noise
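The noise parameter in Tab.4 controls the magnitude of the random noise used to initialize the perturbation before the single gradient step. The exact scaling used by the authors may differ; the sketch below assumes, as in N-FGSM [24], that the noise is drawn uniformly from [-k·ε, k·ε], where k is the noise parameter.

```python
# Hedged sketch of the role of the noise parameter k in Tab.4: k = 0 means no
# random start; otherwise the noise range is assumed to be [-k*eps, k*eps].
import torch

def init_perturbation(x, epsilon=8/255, k=1):
    if k == 0:
        # zero initialization: prone to catastrophic overfitting
        return torch.zeros_like(x)
    return torch.empty_like(x).uniform_(-k * epsilon, k * epsilon)
```

In Tab.4, the row with noise parameter 0 (final) shows 97.03% accuracy against FGSM but 0.00% against PGD-50, the typical signature of catastrophic overfitting; with noise parameter 1 or 2 the random start keeps PGD-50 robustness above 48%.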
Dataset | Method type | AT method | Clean examples/% | FGSM/% | PGD-10/% | PGD-50/% | C&W/% | AA/% | Training time/h
---|---|---|---|---|---|---|---|---|---
CIFAR-10 | Multi-step AT | PGD-10-AT | 80.18 | 59.23 | 50.72 | 49.12 | 48.09 | 45.97 | 1.61
 | | TRADES | 80.72 | 60.39 | 51.40 | 50.09 | 48.03 | 47.82 | 1.82
 | | LAS-AT | 81.27 | 60.83 | 51.98 | 50.43 | 49.83 | 47.87 | 2.49
 | Fast AT | Free-AT | 78.63 | 51.32 | 40.51 | 39.01 | 39.27 | 36.35 | 0.43
 | | FGSM-RS | 84.42 | 60.45 | 48.23 | 46.11 | 43.92 | 43.39 | 0.29
 | | FGSM-GA | 81.70 | 59.10 | 49.11 | 47.24 | 46.96 | 43.50 | 0.88
 | | N-FGSM | 80.37 | 59.79 | 49.53 | 48.06 | 47.03 | 45.11 | 0.29
 | | ATAS | 83.02 | 58.21 | 46.95 | 44.89 | 45.21 | 42.51 | 0.36
 | | Predicted attack step size | 83.97 | 59.47 | 48.39 | 46.64 | 45.05 | 43.67 | 0.37
 | | Proposed method | 81.62 | 60.80 | 50.22 | 48.57 | 47.58 | 45.46 | 0.33
CIFAR-100 | Multi-step AT | PGD-10-AT | 54.11 | 30.11 | 28.07 | 27.22 | 24.87 | 23.07 | 1.78
 | | TRADES | 52.87 | 29.16 | 26.21 | 25.61 | 23.12 | 22.04 | 1.95
 | | LAS-AT | 55.15 | 31.72 | 29.22 | 28.33 | 25.81 | 24.27 | 2.51
 | Fast AT | Free-AT | 50.54 | 27.51 | 19.60 | 18.59 | 16.33 | 15.12 | 0.44
 | | FGSM-RS | 59.32 | 33.80 | 26.37 | 24.26 | 21.02 | 19.77 | 0.29
 | | FGSM-GA | 56.32 | 32.94 | 27.12 | 25.82 | 23.76 | 22.10 | 0.88
 | | N-FGSM | 54.37 | 32.43 | 27.44 | 26.32 | 23.89 | 22.47 | 0.29
 | | ATAS | 56.87 | 31.87 | 26.32 | 25.21 | 22.33 | 21.05 | 0.37
 | | Predicted attack step size | 54.12 | 32.80 | 27.34 | 26.54 | 22.32 | 21.22 | 0.37
 | | Proposed method | 55.30 | 33.33 | 27.86 | 26.99 | 24.32 | 22.83 | 0.33

Tab.5 Comparison of classification accuracy of clean examples, robust accuracy of adversarial examples and model training time among different AT methods
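For reference, the PGD-k robust-accuracy columns in Tab.5 are typically obtained by running a standard l∞ PGD attack [7] with a random start for k steps (e.g. 50 for PGD-50) and measuring accuracy on the resulting adversarial examples. The sketch below illustrates this evaluation; the attack settings (step size `alpha`, single restart) are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch of a PGD-k robustness evaluation (e.g. PGD-50 in Tab.5).
import torch
import torch.nn.functional as F

def pgd_robust_accuracy(model, loader, epsilon=8/255, alpha=2/255, steps=50, device="cuda"):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # random start inside the epsilon-ball
        delta = torch.empty_like(x).uniform_(-epsilon, epsilon)
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
            grad = torch.autograd.grad(loss, delta)[0]
            # signed gradient ascent step followed by projection onto the budget
            delta = torch.clamp(delta.detach() + alpha * grad.sign(), -epsilon, epsilon)
        with torch.no_grad():
            pred = model(torch.clamp(x + delta, 0, 1)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total
```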
1 | HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. |
2 | HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2261-2269. |
3 | TIAN Y, PEI K, JANA S, et al. DeepTest: automated testing of deep-neural-network-driven autonomous cars[C]// Proceedings of the 40th International Conference on Software Engineering. New York: ACM, 2018: 303-314. |
4 | FAYJIE A R, HOSSAIN S, OUALID D, et al. Driverless car: autonomous driving using deep reinforcement learning in urban environment[C]// Proceedings of the 2018 15th International Conference on Ubiquitous Robots. Piscataway: IEEE, 2018: 896-901. |
5 | DENG Y, BAO F, KONG Y, et al. Deep direct reinforcement learning for financial signal representation and trading [J]. IEEE Transactions on Neural Networks and Learning Systems, 2017, 28(3): 653-664. |
6 | SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks [EB/OL]. (2014-02-19) [2023-04-10]. |
7 | MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks [EB/OL]. (2019-09-04) [2023-04-12]. |
8 | SHAFAHI A, NAJIBI M, GHIASI A, et al. Adversarial training for free![C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2019: 3358-3369. |
9 | WONG E, RICE L, KOLTER J Z, et al. Fast is better than free: revisiting adversarial training [EB/OL]. [2023-04-20]. |
10 | GOODFELLOW I J, SHLENS J, SZEGEDY C, et al. Explaining and harnessing adversarial examples [EB/OL]. (2015-03-20) [2023-03-25]. |
11 | ANDRIUSHCHENKO M, FLAMMARION N. Understanding and improving fast adversarial training[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2020: 16048-16059. |
12 | KIM H, LEE W, LEE J. Understanding catastrophic overfitting in single-step adversarial training[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2021, 35(9): 8119-8127. |
13 | CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// Proceedings of the 2017 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2017: 39-57. |
14 | CROCE F, HEIN M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks[C]// Proceedings of the 37th International Conference on Machine Learning. New York: JMLR.org, 2020, 119: 2206-2216. |
15 | KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world [EB/OL]. (2017-02-11) [2023-04-23]. |
16 | DONG Y, LIAO F, PANG T, et al. Discovering adversarial examples with momentum [EB/OL]. (2018-03-22) [2023-05-03]. |
17 | LIN J, SONG C, HE K, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks [EB/OL]. (2020-02-03) [2023-05-03]. |
18 | WANG X, HE K. Enhancing the transferability of adversarial attacks through variance tuning[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 1924-1933. |
19 | CROCE F, HEIN M. Minimally distorted adversarial examples with a fast adaptive boundary attack[C]// Proceedings of the 37th International Conference on Machine Learning. New York: JMLR.org, 2020: 2196-2205. |
20 | ANDRIUSHCHENKO M, CROCE F, FLAMMARION N, et al. Square attack: a query-efficient black-box adversarial attack via random search[C]// Proceedings of the 16th European Conference on Computer Vision. Berlin: Springer, 2020: 484-501. |
21 | ZHANG S S, ZUO X, LIU J W. The problem of the adversarial examples in deep learning [J]. Chinese Journal of Computers, 2019, 42(8): 1886-1904. |
22 | JIANG Y, ZHANG L G. Survey of adversarial attacks and defense methods for deep learning model [J]. Computer Engineering, 2021, 47(1): 1-11. |
23 | CHEN M X, ZHANG Z Y, JI S L, et al. Survey of research progress on adversarial examples in images [J]. Computer Science, 2022, 49(2): 92-106. |
24 | DE JORGE P, BIBI A, VOLPI R, et al. Make some noise: reliable and efficient single-step adversarial training [EB/OL]. [2023-05-30]. |
25 | HUANG Z, FAN Y, LIU C, et al. Fast adversarial training with adaptive step size [EB/OL]. [2023-05-07]. |
26 | Guangdong University of Technology, PCI Technology Group Corporation Limited. An adversarial training method and system based on predicted attack step size: CN202310481128.3 [P]. 2023-07-25. |
27 | CAI Q-Z, LIU C, SONG D. Curriculum adversarial training[C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2018: 3740-3747. |
28 | WANG Y, MA X, BAILEY J, et al. On the convergence and robustness of adversarial training[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 6586-6595. |
29 | ZHANG J, XU X, HAN B, et al. Attacks which do not kill training make adversarial learning stronger[C]// Proceedings of the 37th International Conference on Machine Learning. New York: JMLR.org, 2020: 11258-11267. |
30 | QIAN N. On the momentum term in gradient descent learning algorithms [J]. Neural Networks, 1999, 12(1):145-151. |
31 | RICE L, WONG E, KOLTER J Z. Overfitting in adversarially robust deep learning[C]// Proceedings of the 37th International Conference on Machine Learning. New York: JMLR.org, 2020: 8093-8104. |
32 | ZHANG H, YU Y, JIAO J, et al. Theoretically principled trade-off between robustness and accuracy[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 7472-7482. |
33 | JIA X, ZHANG Y, WU B, et al. LAS-AT: adversarial training with learnable attack strategy [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 13388-13398. |