《计算机应用》 (Journal of Computer Applications) ›› 2022, Vol. 42 ›› Issue (9): 2732-2741. DOI: 10.11772/j.issn.1001-9081.2021071339
Jiaxuan WEI, Shikang DU, Zhixuan YU, Ruisheng ZHANG
Received: 2021-07-26
Revised: 2021-09-29
Accepted: 2021-10-08
Online: 2021-10-25
Published: 2022-09-10
Contact: Jiaxuan WEI
About author: DU Shikang, born in 1997 in Wuwei, Gansu, M. S. candidate. His research interests include adversarial machine learning.
Supported by:
Abstract:
In the study of image classification tasks in deep learning, the phenomenon of adversarial attacks poses a severe challenge to the secure application of deep learning models and has attracted extensive attention from researchers. Firstly, focusing on the adversarial attack technologies used to generate adversarial perturbations in deep learning, the important white-box adversarial attack algorithms for image classification tasks were introduced in detail, and the advantages and disadvantages of each attack algorithm were analyzed. Then, the application status of white-box adversarial attack technologies was described from three real-world application scenarios: mobile terminals, face recognition and autonomous driving. Furthermore, some typical white-box adversarial attack algorithms were selected to carry out comparative experiments against different target models, and the experimental results were analyzed. Finally, white-box adversarial attack technologies were summarized and their promising research directions were discussed.
CLC number:
魏佳璇, 杜世康, 于志轩, 张瑞生. 图像分类中的白盒对抗攻击技术综述[J]. 计算机应用, 2022, 42(9): 2732-2741.
Jiaxuan WEI, Shikang DU, Zhixuan YU, Ruisheng ZHANG. Review of white-box adversarial attack technologies in image classification[J]. Journal of Computer Applications, 2022, 42(9): 2732-2741.
Tab. 1 Summary of adversarial attack algorithms

| Attack algorithm | Perturbation norm | Attack type | Attack strength | Advantages | Disadvantages |
| --- | --- | --- | --- | --- | --- |
| L-BFGS[ |  | Single-step | *** | Adversarial examples transfer well; the first adversarial attack algorithm to be proposed | Requires a large amount of time to optimize hyperparameters |
| C&W[ |  | Iterative | ***** | Strong attack capability against most distillation-defended models, with small perturbations | Low attack efficiency; finding suitable hyperparameters is time-consuming |
| FGSM[ |  | Single-step | *** | Very efficient generation; perturbations have good transferability | Uses a single gradient computation, so the perturbation is relatively large |
| I-FGSM[ |  | Iterative | **** | Multi-step iterative generation gives strong attack capability | Prone to overfitting; poor transferability |
| PGD[ |  | Iterative | ***** | Stronger attack capability than I-FGSM | Poor transferability |
| MI-FGSM[ |  | Iterative | **** | Good attack capability and transferability; faster convergence | Weaker attack capability than PGD |
| DeepFool[ |  | Iterative | **** | Precisely computed perturbations are smaller | No targeted attack capability |
| UAPs[ |  | Iterative | **** | Generated universal adversarial perturbations transfer well | Cannot guarantee the attack success rate on specific data points |
| ATN[ |  | Iterative | **** | Can attack multiple target models simultaneously; adversarial examples are more diverse | Training the generator network requires finding suitable hyperparameters |
| UAN[ |  | Iterative | **** | Fast perturbation generation; stronger attack capability than UAPs | Training the generative model takes some time |
| AdvGAN[ |  | Iterative | **** | Generated adversarial examples are visually very similar to real samples | The adversarial training process is unstable |
| JSMA[ |  | Iterative | *** | Generated adversarial examples are highly similar to real samples | Generated adversarial examples do not transfer |
| One-pixel attack[ |  | Iterative | ** | Can attack by modifying only a single pixel | Computationally expensive; only suitable for small-size datasets |
| stAdv[ | — | Iterative | *** | Effective against adversarial-training defenses | Only effective against models with specific defense strategies |
| BPDA[ |  | Iterative | *** | Effectively attacks models defended with obfuscated gradients | Only targets obfuscated-gradient defenses |
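The single-step versus iterative distinction in Tab. 1 is easiest to see in code. Below is a minimal, illustrative PyTorch sketch of FGSM and PGD; it is not the implementation evaluated in this paper, and `model`, `loss_fn`, the inputs `x`, labels `y`, and the hyperparameters `eps`, `alpha`, `steps` are assumed placeholders with pixel values in [0, 1].

```python
import torch


def fgsm(model, loss_fn, x, y, eps):
    """Single-step FGSM: one gradient computation, each pixel moved by eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the sign of the input gradient to increase the loss, then keep pixels valid.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()


def pgd(model, loss_fn, x, y, eps, alpha=0.01, steps=40):
    """Iterative PGD: repeated small steps, projected back into the eps L-inf ball.

    The random start inside the eps ball used by the original PGD is omitted for brevity.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Projection: each pixel stays within [x - eps, x + eps] and within [0, 1].
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

The sketch also shows why PGD tends to be the stronger attack at the same eps budget: FGSM spends the whole budget in one gradient step, while PGD takes many small steps and re-projects, following the loss surface more closely.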
Tab. 2 Accuracy of adversarial examples classification on CIFAR10 validation set (unit: %)

| Algorithm | CNN | ResNet34 | VGG19 |
| --- | --- | --- | --- |
| C&W | 17.16 | 14.98 | 31.27 |
| PGD | 1.50 | 0.00 | 6.39 |
| DeepFool | 9.94 | 4.50 | 5.64 |
| UAN | 16.38 | 13.65 | 28.35 |
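Tab. 2 (and Tab. 3 below) report the target model's classification accuracy on the generated adversarial examples, so lower values indicate a stronger attack. As a rough illustration of how such numbers can be obtained — an assumed evaluation setup, not the authors' code — the loop might look like the following, where `attack_fn` and `loader` are hypothetical placeholders:

```python
import torch


def adversarial_accuracy(model, attack_fn, loader, device="cpu"):
    """Classification accuracy (%) of `model` on adversarial examples produced by `attack_fn`."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:                        # e.g. the CIFAR10 or MNIST validation set
        x, y = x.to(device), y.to(device)
        x_adv = attack_fn(model, x, y)         # any white-box attack bound to fixed hyperparameters
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)  # predictions on the perturbed inputs
        correct += (pred == y).sum().item()
        total += y.numel()
    return 100.0 * correct / total             # percentage, as reported in Tab. 2 and Tab. 3
```

For example, `attack_fn = lambda m, xb, yb: pgd(m, torch.nn.functional.cross_entropy, xb, yb, eps=0.1)` would reuse the PGD sketch above.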
Tab. 3 Accuracy of adversarial examples classification on MNIST validation set (unit: %)

| Algorithm | eps = 0.05 | eps = 0.1 | eps = 0.2 | eps = 0.3 |
| --- | --- | --- | --- | --- |
| FGSM | 82.16 | 45.59 | 18.74 | 9.80 |
| I-FGSM | 65.12 | 9.70 | 0.78 | 0.69 |
| PGD | 63.71 | 9.15 | 0.74 | 0.63 |
| MI-FGSM | 66.55 | 12.83 | 1.07 | 0.77 |
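The eps values in Tab. 3 are the L∞ perturbation budgets of the FGSM family (with MNIST pixels scaled to [0, 1]), which is why accuracy drops sharply as eps grows from 0.05 to 0.3. In the standard notation of Goodfellow et al. [14] and Kurakin et al. [25] — reproduced here for reference, not from this paper's own text — the single-step and iterative updates are:

$$x^{adv} = x + \epsilon \cdot \mathrm{sign}\big(\nabla_x J(\theta, x, y)\big)$$

$$x^{adv}_{t+1} = \mathrm{Clip}_{x,\epsilon}\big\{\, x^{adv}_t + \alpha \cdot \mathrm{sign}\big(\nabla_x J(\theta, x^{adv}_t, y)\big) \,\big\}, \qquad x^{adv}_0 = x$$

where $J$ is the training loss, $\alpha$ is the per-step size, and $\mathrm{Clip}_{x,\epsilon}$ projects each pixel back into $[x - \epsilon, x + \epsilon]$, so the total perturbation never exceeds eps in the L∞ norm.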
1 | TABELINI L, BERRIEL R, PAIXÃO T M, et al. PolyLaneNet: lane estimation via deep polynomial regression[C]// Proceedings of the 25th International Conference on Pattern Recognition. Piscataway: IEEE, 2021: 6150-6156. 10.1109/icpr48806.2021.9412265 |
2 | SUN Y F, XU Q, LI Y L, et al. Perceive where to focus: learning visibility-aware part-level features for partial person re-identification[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 393-402. 10.1109/cvpr.2019.00048 |
3 | DAHL G E, STOKES J W, DENG L, et al. Large-scale malware classification using random projections and neural networks[C]// Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2013: 3422-3426. 10.1109/icassp.2013.6638293 |
4 | GROSSE K, PAPERNOT N, MANOHARAN P, et al. Adversarial perturbations against deep neural networks for malware classification[EB/OL]. (2016-06-16) [2021-06-15].. 10.7551/mitpress/10761.003.0012 |
5 | DU M, JIA R X, SONG D. Robust anomaly detection and backdoor attack detection via differential privacy[EB/OL]. (2019-11-16) [2021-06-15].. |
6 | SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL]. (2014-02-19) [2021-06-15].. |
7 | DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]// Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2009: 248-255. 10.1109/cvpr.2009.5206848 |
8 | HE K M, ZHANG X Y, REN S Q, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. 10.1109/cvpr.2016.90 |
9 | CARLINI N, MISHRA P, VAIDYA T, et al. Hidden voice commands[C]// Proceedings of the 25th USENIX Security Symposium. Berkeley: USENIX Association, 2016: 513-530. |
10 | SHARIF M, BHAGAVATULA S, BAUER L, et al. Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition[C]// Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. New York: ACM, 2016: 1528-1540. 10.1145/2976749.2978392 |
11 | EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning models[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 1625-1634. 10.1109/cvpr.2018.00175 |
12 | CARLINI N. A complete list of all (arXiv) adversarial example papers[EB/OL]. (2019-06-15) [2021-06-15].. |
13 | CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// Proceedings of the 2017 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2017: 39-57. 10.1109/sp.2017.49 |
14 | GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. (2015-03-20) [2021-06-15].. |
15 | PAPERNOT N, McDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]// Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security. New York: ACM, 2017: 506-519. 10.1145/3052973.3053009 |
16 | TRAMÈR F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: attacks and defenses[EB/OL]. (2020-04-26) [2021-06-15].. |
17 | MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Universal adversarial perturbations[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 86-94. 10.1109/cvpr.2017.17 |
18 | HAYES J, DANEZIS G. Learning universal adversarial perturbations with generative models[C]// Proceedings of the 2018 IEEE Symposium on Security and Privacy Workshops. Piscataway: IEEE, 2018: 43-49. 10.1109/spw.2018.00015 |
19 | 陈岳峰,毛潇锋,李裕宏,等. AI安全——对抗样本技术综述与应用[J]. 信息安全研究, 2019, 5(11): 1000-1007. 10.3969/j.issn.2096-1057.2019.11.009 |
CHEN Y F, MAO X F, LI Y H, et al. AI security — Research and application on adversarial example[J]. Journal of Information Security Research, 2019, 5(11): 1000-1007. 10.3969/j.issn.2096-1057.2019.11.009 | |
20 | 潘文雯,王新宇,宋明黎,等. 对抗样本生成技术综述[J]. 软件学报, 2020, 31(1): 67-81. |
PAN W W, WANG X Y, SONG M L, et al. Survey on generating adversarial examples[J]. Journal of Software, 2020, 31(1): 67-81. | |
21 | 张玉清,董颖,柳彩云,等. 深度学习应用于网络空间安全的现状、趋势与展望[J]. 计算机研究与发展, 2018, 55(6): 1117-1142. 10.7544/issn1000-1239.2018.20170649 |
ZHANG Y Q, DONG Y, LIU C Y, et al. Situation, trends and prospects of deep learning applied to cyberspace security[J]. Journal of Computer Research and Development, 2018, 55(6): 1117-1142. 10.7544/issn1000-1239.2018.20170649 | |
22 | PAPERNOT N, McDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]// Proceedings of the 2016 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2016: 582-597. 10.1109/sp.2016.41 |
23 | LIU D C, NOCEDAL J. On the limited memory BFGS method for large scale optimization[J]. Mathematical Programming, 1989, 45(1/2/3): 503-528. 10.1007/bf01589116 |
24 | KINGMA D P, BA J L. Adam: a method for stochastic optimization[EB/OL]. (2017-01-30) [2021-06-15].. |
25 | KURAKIN A, GOODFELLOW I J, BENGIO S. Adversarial examples in the physical world[EB/OL]. (2017-02-11) [2021-06-15].. 10.1201/9781351251389-8 |
26 | MĄDRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. (2019-09-04) [2021-06-15].. |
27 | DONG Y P, LIAO F Z, PANG T Y, et al. Boosting adversarial attacks with momentum[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 9185-9193. 10.1109/cvpr.2018.00957 |
28 | XIE C H, ZHANG Z S, ZHOU Y Y, et al. Improving transferability of adversarial examples with input diversity[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2725-2734. 10.1109/cvpr.2019.00284 |
29 | DONG Y P, PANG T Y, SU H, et al. Evading defenses to transferable adversarial examples by translation-invariant attacks[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4307-4316. 10.1109/cvpr.2019.00444 |
30 | KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial machine learning at scale[EB/OL]. (2017-02-11) [2021-06-15].. 10.1201/9781351251389-8 |
31 | POLYAK B T. Some methods of speeding up the convergence of iteration methods[J]. USSR Computational Mathematics and Mathematical Physics, 1964, 4(5): 1-17. 10.1016/0041-5553(64)90137-5 |
32 | SUTSKEVER I, MARTENS J, DAHL G, et al. On the importance of initialization and momentum in deep learning[C]// Proceedings of the 30th International Conference on Machine Learning. New York: JMLR.org, 2013: 1139-1147. |
33 | MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 2574-2582. 10.1109/cvpr.2016.282 |
34 | BALUJA S, FISCHER I. Adversarial transformation networks: learning to generate adversarial examples[EB/OL]. (2017-03-28) [2021-06-15].. 10.1609/aaai.v32i1.11672 |
35 | XIAO C, LI B, ZHU J, et al. Generating adversarial examples with adversarial networks[C]// Proceedings of the 27th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2018: 3905-3911. 10.24963/ijcai.2018/543 |
36 | JOHNSON J, ALAHI A, LI F F. Perceptual losses for real-time style transfer and super-resolution[C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9906. Cham: Springer, 2016: 694-711. |
37 | GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 2672-2680. |
38 | ISOLA P, ZHU J Y, ZHOU T H, et al. Image-to-image translation with conditional adversarial networks[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 5967-5976. 10.1109/cvpr.2017.632 |
39 | PAPERNOT N, McDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]// Proceedings of the 2016 IEEE European Symposium on Security and Privacy. Piscataway: IEEE, 2016: 372-387. 10.1109/eurosp.2016.36 |
40 | SIMONYAN K, VEDALDI A, ZISSERMAN A. Deep inside convolutional networks: visualising image classification models and saliency maps[EB/OL]. (2014-04-19) [2021-06-15].. |
41 | SU J W, VARGAS D V, SAKURAI K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828-841. 10.1109/tevc.2019.2890858 |
42 | STORN R, PRICE K. Differential evolution — a simple and efficient heuristic for global optimization over continuous spaces[J]. Journal of Global Optimization, 1997, 11(4): 341-359. 10.1023/a:1008202821328 |
43 | LECUN Y, CORTES C, BURGES C J C. The MNIST database of handwritten digits[DB/OL]. [2021-06-15].. |
44 | XIAO C W, ZHU J Y, LI B, et al. Spatially transformed adversarial examples[EB/OL]. (2018-01-09) [2021-06-15].. |
45 | ATHALYE A, CARLINI N, WAGNER D. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples[C]// Proceedings of the 35th International Conference on Machine Learning. New York: JMLR.org, 2018: 274-283. |
46 | GUO Y, WEI X X, WANG G Q, et al. Meaningful adversarial stickers for face recognition in physical world[EB/OL]. (2021-04-14) [2021-06-15].. |
47 | KURAKIN A. Objects detection machine learning TensorFlow demo[EB/OL]. [2021-06-15].. |
48 | SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 2818-2826. 10.1109/cvpr.2016.308 |
49 | KRIZHEVSKY A. Learning multiple layers of features from tiny images[EB/OL]. (2009-04-08) [2021-06-15].. |
50 | LE T, WANG S H, LEE D. MALCOM: generating malicious comments to attack neural fake news detection models[C]// Proceedings of the 2020 IEEE International Conference on Data Mining. Piscataway: IEEE, 2020: 282-291. 10.1109/icdm50108.2020.00037 |
51 | NIE Y X, WILLIAMS A, DINAN E, et al. Adversarial NLI: a new benchmark for natural language understanding[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics, 2020: 4885-4901. 10.18653/v1/2020.acl-main.441 |
52 | ŻELASKO P, JOSHI S, SHAO Y W, et al. Adversarial attacks and defenses for speech recognition systems[EB/OL]. (2021-03-31) [2021-06-15].. |
53 | CHEN Y X, ZHANG J S, YUAN X J, et al. SoK: a modularized approach to study the security of automatic speech recognition systems[J]. ACM Transactions on Privacy and Security, 2022, 25(3): No.17. 10.1145/3510582 |