Low-cost adversarial example defense algorithm based on example preprocessing
Xiao CHEN, Yan CHANG, Danchen WANG, Shibin ZHANG
Journal of Computer Applications    2024, 44 (9): 2756-2762.   DOI: 10.11772/j.issn.1001-9081.2023091249

To defend artificial intelligence algorithms, and artificial neural networks in particular, against existing attacks while keeping the additional cost low, the rattan algorithm based on example preprocessing was proposed. The examples were preprocessed by cropping the unimportant parts of the image, normalizing neighboring pixel values, and scaling the image; this destroys the adversarial perturbation and generates new examples that pose less threat to the model while preserving high recognition accuracy. Experimental results show that, at lower overhead than similar algorithms, the rattan algorithm can defend neural network models such as squeezenet1_1, mnasnet1_3, and mobilenet_v3_large against some adversarial attacks on the MNIST and CIFAR10 datasets, with a minimum post-defense example accuracy of 88.50%. At the same time, it does not reduce accuracy excessively on clean examples, and its defense effect and defense cost are better than those of the comparison algorithms against attacks such as Fast Gradient Sign Method (FGSM) and Momentum Iterative Method (MIM).
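The paper's implementation details are not reproduced in this listing, so the following is only a minimal sketch of the three preprocessing steps the abstract names: cropping an unimportant border region, normalizing each pixel against its neighbors (here approximated by local mean smoothing), and rescaling the image. All function names, the margin, kernel size, and scale factor are assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

def crop_border(img, margin=2):
    # Crop a thin border, assumed to carry little class-relevant information
    h, w = img.shape
    return img[margin:h - margin, margin:w - margin]

def smooth_neighbors(img, k=3):
    # Replace each pixel by the mean of its k x k neighborhood,
    # which averages out small, high-frequency adversarial perturbations
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def rescale(img, factor=0.5):
    # Downscale then upscale with nearest-neighbor indexing,
    # discarding fine detail that the perturbation may live in
    h, w = img.shape
    sh, sw = max(1, int(h * factor)), max(1, int(w * factor))
    ys = np.clip((np.arange(sh) / factor).astype(int), 0, h - 1)
    xs = np.clip((np.arange(sw) / factor).astype(int), 0, w - 1)
    small = img[np.ix_(ys, xs)]
    ys2 = np.clip((np.arange(h - 0) * factor).astype(int), 0, sh - 1)[:h]
    xs2 = np.clip((np.arange(w - 0) * factor).astype(int), 0, sw - 1)[:w]
    # Map back to the cropped size so the model sees a fixed input shape
    return small[np.ix_(np.clip((np.arange(h) * factor).astype(int), 0, sh - 1),
                        np.clip((np.arange(w) * factor).astype(int), 0, sw - 1))]

def preprocess(img):
    # Full pipeline: crop -> neighborhood smoothing -> rescale
    return rescale(smooth_neighbors(crop_border(img)))
```

A grayscale example (e.g. a 28x28 MNIST-style image with values in [0, 1]) passed through `preprocess` comes back at the cropped size with the same value range, ready to feed to the classifier; color images would apply the same steps per channel.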
