Journal of Computer Applications, 2023, Vol. 43, Issue (10): 3093-3098. DOI: 10.11772/j.issn.1001-9081.2022091468

• Artificial Intelligence •


Text adversarial example generation method based on BERT model

Yuhang LI, Yuli YANG, Yao MA, Dan YU, Yongle CHEN

  1. College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi 030600, China
  • Received: 2022-10-08  Revised: 2023-02-19  Accepted: 2023-02-23  Online: 2023-04-17  Published: 2023-10-10
  • Contact: Yongle CHEN
  • About author: LI Yuhang, born in 1998 in Linfen, Shanxi, M. S. candidate, CCF member. His research interests include artificial intelligence.
    YANG Yuli, born in 1979 in Linfen, Shanxi, Ph. D., lecturer, CCF member. Her research interests include trusted cloud service computing and blockchain.
    MA Yao, born in 1982 in Taiyuan, Shanxi, Ph. D., lecturer, CCF member. His research interests include Web security.
    YU Dan, born in 1988 in Beijing, Ph. D., CCF member. Her research interests include wireless sensor networks and the Internet of Things.
  • Supported by:
    Basic Research Program of Shanxi Province (20210302123131)


Abstract:

Aiming at the problem that existing adversarial example generation methods require a large number of queries to the target model, which leads to poor attack performance, a Text Adversarial Example Generation Method based on the BERT (Bidirectional Encoder Representations from Transformers) model (TAEGM) was proposed. Firstly, an attention mechanism was adopted to locate the keywords that significantly influence the classification results, without querying the target model. Secondly, word-level perturbations of the keywords were performed by the BERT model to generate candidate adversarial examples. Finally, the candidate examples were clustered, and the adversarial examples were selected from the clusters with greater influence on the classification results. Experimental results on the Yelp Reviews, AG News, and IMDB Review datasets show that, compared with CLARE (ContextuaLized AdversaRial Example generation model), the method with the second-best attack Success Rate (SR), TAEGM reduces the Query Counts (QC) to the target model by 62.3% and the time consumption by 68.6% on average, while maintaining the SR of the adversarial attacks. Furthermore, additional experimental results verify that the adversarial examples generated by TAEGM not only have good transferability, but can also improve the robustness of the model through adversarial training.
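To make the three-stage workflow described above concrete, the following is a minimal sketch, assuming a HuggingFace bert-base-uncased masked language model and scikit-learn's k-means. The function names, the [CLS]-attention scoring rule, the simplification of perturbing BERT tokens rather than whole words, and the cluster-ranking heuristic (one query per cluster representative, against a target_model callable assumed to return the victim model's confidence in the original label) are illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch of the three TAEGM stages; names and heuristics are
# illustrative assumptions, not the authors' code.
import torch
from sklearn.cluster import KMeans
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased", output_attentions=True)
mlm.eval()


def keyword_positions(text, n_keywords=3):
    """Step 1: locate keywords via BERT self-attention, with no target-model query."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = mlm(**enc)
    # Average attention over all layers and heads, then take the weight each
    # token receives from the [CLS] position as its importance score.
    att = torch.stack(out.attentions).mean(dim=(0, 2))[0, 0]
    special = {tokenizer.cls_token_id, tokenizer.sep_token_id}
    ranked = [i for i in att.argsort(descending=True).tolist()
              if enc["input_ids"][0, i].item() not in special]
    return enc["input_ids"][0], ranked[:n_keywords]


def candidate_examples(text, top_k=8, n_keywords=3):
    """Step 2: mask each keyword and let the BERT MLM propose substitutions.

    Simplification: operates on BERT tokens rather than full words.
    """
    ids, positions = keyword_positions(text, n_keywords)
    candidates = []
    for pos in positions:
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = mlm(input_ids=masked.unsqueeze(0)).logits[0, pos]
        for sub in logits.topk(top_k).indices.tolist():
            new_ids = ids.clone()
            new_ids[pos] = sub
            candidates.append(tokenizer.decode(new_ids, skip_special_tokens=True))
    return candidates


def select_adversarial(candidates, target_model, n_clusters=4):
    """Step 3: cluster candidates, pick examples from the most influential cluster."""
    def embed(s):
        enc = tokenizer(s, return_tensors="pt", truncation=True)
        return mlm.bert(**enc).last_hidden_state[0, 0]  # [CLS] vector

    with torch.no_grad():
        vecs = torch.stack([embed(c) for c in candidates]).numpy()
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(vecs)
    # One target-model query per cluster representative: keep the cluster whose
    # representative most reduces confidence in the original label.
    best = min(range(n_clusters),
               key=lambda k: target_model(candidates[list(labels).index(k)]))
    return [c for c, label in zip(candidates, labels) if label == best]
```

A design point worth noting in this reading of the method: only the final selection stage touches the victim model, so the query count scales with the number of clusters rather than with the number of candidate examples, which is consistent with the QC reduction reported in the abstract.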

Key words: adversarial example, attention mechanism, BERT (Bidirectional Encoder Representations from Transformers), adversarial attack, clustering algorithm
