Cross-model universal perturbation generation method based on geometric relationship
Jici ZHANG, Chunlong FAN, Cailong LI, Xuedong ZHENG
Journal of Computer Applications    2023, 43 (11): 3428-3435.   DOI: 10.11772/j.issn.1001-9081.2022111677
Abstract

Adversarial attacks add carefully designed perturbations to the input samples of neural network models so that the models output wrong results with high confidence. Research on adversarial attacks mainly targets single-model scenarios; attacks on multiple models are usually realized through cross-model transfer attacks, and universal cross-model attack methods remain little studied. By analyzing the geometric relationship among multi-model attack perturbations, the orthogonality of the adversarial directions of different models, as well as the orthogonality between the adversarial direction and the decision boundary of a single model, was clarified, and a universal cross-model attack algorithm with a corresponding optimization strategy was designed accordingly. The proposed algorithm was verified by multi-angle cross-model adversarial attacks on the CIFAR-10 and SVHN datasets with six common neural network models. Experimental results show that the attack success rate of the algorithm in the given experimental scenario is 1.0, with the L2-norm of the perturbation no greater than 0.9. Compared with cross-model transfer attacks, the proposed algorithm improves the average attack success rate on the six models by up to 57% and has better universality.
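The geometric intuition behind the orthogonality argument can be illustrated with a toy sketch (this is an illustration only, not the paper's algorithm): when two linear "models" have orthogonal decision boundaries, the per-model adversarial directions are orthogonal, so their sum perturbs a sample across both boundaries at once, yielding a single cross-model perturbation with a small L2-norm. All names and values below are hypothetical.

```python
import numpy as np

# Toy sketch: two linear "models" with orthogonal decision
# boundaries w . x = 0. The adversarial direction for each model
# is along its boundary normal w / ||w||.
w1 = np.array([1.0, 0.0])   # model 1 boundary normal (hypothetical)
w2 = np.array([0.0, 1.0])   # model 2 boundary normal (orthogonal)

x = np.array([0.5, 0.5])    # sample classified positive by both models

def margin(w, x):
    """Signed distance of x to the boundary w . x = 0 (||w|| = 1 here)."""
    return float(w @ x)

def adv_step(w, x, eps=1e-3):
    """Minimal perturbation pushing x just across the boundary of w."""
    d = w / np.linalg.norm(w)
    return -(margin(w, x) + eps) * d

# Because the two adversarial directions are orthogonal, each step
# leaves the other model's margin unchanged, so their sum crosses
# BOTH boundaries at once: a cross-model perturbation.
delta = adv_step(w1, x) + adv_step(w2, x)
x_adv = x + delta

print(margin(w1, x_adv) < 0, margin(w2, x_adv) < 0)  # → True True
print(np.linalg.norm(delta))                         # small L2-norm
```

The key design point mirrored here is that orthogonal adversarial components do not interfere with each other, so a universal perturbation can be assembled from per-model components without inflating its norm in any single model's adversarial direction.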
