

Low-cost adversarial example defense algorithm based on example preprocessing

CHEN Xiao1,2, CHANG Yan1,2, WANG Danchen3, ZHANG Shibin1,2

  1. School of Cybersecurity, Chengdu University of Information Technology
  2. Sichuan Provincial Key Laboratory of Advanced Cryptography and System Security (Chengdu University of Information Technology)
  3. Sichuan Digital Economy Research Center
  • Received: 2023-09-12  Revised: 2023-11-09  Online: 2024-01-10  Published: 2024-01-10
  • Corresponding author: CHANG Yan
  • About author: CHEN Xiao, born in 1999 in Jintang, Sichuan, M.S. candidate. His research interests include adversarial examples. CHANG Yan, born in 1979 in Alxa Left Banner, Inner Mongolia, Ph.D., professor, CCF member. Her research interests include quantum computing and information security. WANG Danchen, born in 1983 in Lanzhou, Gansu, Ph.D. Her research interests include data security, new infrastructure, and the digital economy. ZHANG Shibin, born in 1971 in Chongqing, Ph.D., professor, CCF member. His research interests include quantum computing and information security.
  • Supported by:
    National Natural Science Foundation of China (62272068); Key Research and Development Support Plan of Chengdu (2021-YF09-00114-GX)

Abstract: To defend as far as possible against the various existing attacks on artificial intelligence algorithms, especially artificial neural networks, while keeping the extra overhead low, the Rattan algorithm based on example preprocessing was proposed. Examples were preprocessed by three operations, cropping the unimportant parts of the image, unifying neighboring pixel values, and rescaling the image, so as to destroy adversarial perturbations and generate new examples that pose less of a threat to the model, thereby maintaining high recognition accuracy. Experimental results show that the Rattan algorithm defends, at lower cost than comparable algorithms, against adversarial attacks on the MNIST and CIFAR10 datasets and on the squeezenet1_1, mnasnet1_3 and mobilenet_v3_large neural network models, with accuracy on the defended examples no lower than 88.50%; at the same time, it does not noticeably reduce accuracy on clean examples, and both its defense effect and defense cost are better than those of the comparison algorithms.

Key words: attack, defense, image scaling, image cropping, pixel value unification
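
The three preprocessing operations named in the abstract (cropping low-importance image regions, unifying neighboring pixel values, and rescaling) can be illustrated with a minimal Python sketch of a generic input-transformation defense. This is not the paper's actual Rattan implementation; the function name preprocess, the crop_margin and block parameters, the patch-mean smoothing, and the bilinear interpolation are all assumptions chosen for illustration (they happen to fit 28x28 MNIST and 32x32 CIFAR10 inputs).

import numpy as np
from PIL import Image

def preprocess(image: np.ndarray, crop_margin: int = 2, block: int = 2) -> np.ndarray:
    """image: uint8 array, H x W (grayscale) or H x W x C; returns an array rescaled back to H x W."""
    h, w = image.shape[:2]

    # 1. Crop a thin border, assumed here to carry little class-relevant information.
    cropped = image[crop_margin:h - crop_margin, crop_margin:w - crop_margin]

    # 2. Unify neighboring pixel values: replace each non-overlapping block x block
    #    patch by its mean, smoothing out small adversarial perturbations.
    ch, cw = cropped.shape[:2]
    ch, cw = ch - ch % block, cw - cw % block
    patches = cropped[:ch, :cw].reshape(ch // block, block, cw // block, block, -1)
    means = patches.mean(axis=(1, 3), keepdims=True)
    unified = np.broadcast_to(means, patches.shape).reshape(ch, cw, -1)

    # 3. Rescale back to the original (model input) size before classification.
    resized = Image.fromarray(unified.astype(np.uint8).squeeze()).resize((w, h), Image.BILINEAR)
    return np.asarray(resized)

Under such a scheme, a defended prediction is obtained by running the classifier on preprocess(x) instead of x; the abstract reports that, with the paper's method, accuracy on defended examples stays at or above 88.50% on its benchmarks.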

CLC number: