Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (12): 3798-3807.DOI: 10.11772/j.issn.1001-9081.2023121835
• Artificial Intelligence •
Yifei SONG, Yi LIU
Received: 2024-01-02
Revised: 2024-03-14
Accepted: 2024-03-18
Online: 2024-03-28
Published: 2024-12-10
Contact: Yi LIU
About author: SONG Yifei, born in 2000 in Putian, Fujian, is an M.S. candidate. His research interests include deep learning, adversarial examples, and image processing.
Yifei SONG, Yi LIU. Fast adversarial training method based on data augmentation and label noise[J]. Journal of Computer Applications, 2024, 44(12): 3798-3807.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2023121835
The FGSM through AA columns report adversarial robust accuracy/% under the corresponding attack.

| Perturbation budget | Dataset | Method type | Method | Clean accuracy/% | FGSM | PGD-10 | PGD-50 | C&W | APGD-CE | AA | Training time/h |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 16/255 | CIFAR-10 | Multi-step AT | PGD-AT | 65.21 | 46.13 | 40.55 | 33.67 | 30.59 | 33.03 | 26.25 | 5.14 |
| 16/255 | CIFAR-10 | Single-step AT | FGSM-RS | 50.11 | 38.28 | 26.22 | 20.37 | 18.96 | 19.40 | 14.88 | 1.39 |
| 16/255 | CIFAR-10 | Single-step AT | GradAlign | 58.30 | 40.86 | 33.08 | 25.64 | 22.56 | 23.94 | 17.01 | 3.05 |
| 16/255 | CIFAR-10 | Single-step AT | ATAS | 63.98 | 43.57 | 31.39 | 28.15 | 26.18 | 25.97 | 21.09 | 1.58 |
| 16/255 | CIFAR-10 | Single-step AT | N-FGSM | 62.50 | 41.99 | 34.22 | 27.67 | 25.43 | 25.11 | 20.40 | 1.39 |
| 16/255 | CIFAR-10 | Single-step AT | FGSM-MEP | 56.70 | 38.06 | 33.01 | 26.83 | 20.74 | 26.29 | 17.70 | 2.08 |
| 16/255 | CIFAR-10 | Single-step AT | Random noise and adaptive step size | 62.71 | 41.83 | 34.64 | 28.06 | 25.92 | 25.87 | 20.99 | 1.62 |
| 16/255 | CIFAR-10 | Single-step AT | Predicted attack step size | 63.52 | 41.48 | 34.25 | 27.66 | 25.31 | 25.56 | 20.40 | 1.77 |
| 16/255 | CIFAR-10 | Single-step AT | Proposed method | 62.88 | 43.03 | 37.47 | 28.66 | 28.47 | 27.59 | 22.33 | 1.90 |
| 16/255 | CIFAR-100 | Multi-step AT | PGD-AT | 40.48 | 24.51 | 21.40 | 17.39 | 14.99 | 17.02 | 12.58 | 5.16 |
| 16/255 | CIFAR-100 | Single-step AT | FGSM-RS | 30.19 | 15.33 | 12.71 | 9.79 | 8.42 | 9.51 | 6.90 | 1.39 |
| 16/255 | CIFAR-100 | Single-step AT | GradAlign | 31.88 | 15.73 | 12.59 | 9.75 | 8.23 | 9.47 | 6.62 | 3.08 |
| 16/255 | CIFAR-100 | Single-step AT | ATAS | 55.63 | 30.35 | 15.31 | 8.49 | 10.97 | 6.30 | 5.05 | 1.59 |
| 16/255 | CIFAR-100 | Single-step AT | N-FGSM | 37.91 | 20.59 | 17.11 | 14.34 | 12.25 | 13.71 | 9.50 | 1.39 |
| 16/255 | CIFAR-100 | Single-step AT | FGSM-MEP | 42.60 | 17.07 | 12.92 | 8.91 | 7.25 | 8.56 | 5.58 | 2.10 |
| 16/255 | CIFAR-100 | Single-step AT | Random noise and adaptive step size | 39.12 | 20.97 | 17.53 | 14.65 | 12.67 | 14.09 | 9.86 | 1.64 |
| 16/255 | CIFAR-100 | Single-step AT | Predicted attack step size | 38.94 | 20.78 | 17.30 | 14.39 | 12.33 | 13.84 | 9.48 | 1.78 |
| 16/255 | CIFAR-100 | Single-step AT | Proposed method | 40.13 | 22.85 | 19.42 | 14.85 | 13.33 | 14.26 | 10.96 | 1.91 |
| 8/255 | CIFAR-10 | Single-step AT | GradAlign | 81.76 | 59.10 | 49.21 | 46.89 | 47.46 | 44.64 | 43.45 | 3.05 |
| 8/255 | CIFAR-10 | Single-step AT | ATAS | 81.93 | 58.19 | 47.01 | 44.93 | 45.30 | 42.77 | 42.51 | 1.58 |
| 8/255 | CIFAR-10 | Single-step AT | N-FGSM | 80.57 | 59.68 | 49.57 | 48.09 | 47.08 | 45.99 | 45.10 | 1.39 |
| 8/255 | CIFAR-10 | Single-step AT | Random noise and adaptive step size | 81.62 | 60.80 | 50.22 | 48.57 | 47.58 | 46.22 | 45.46 | 1.62 |
| 8/255 | CIFAR-10 | Single-step AT | Predicted attack step size | 83.97 | 59.47 | 48.39 | 46.64 | 45.05 | 43.98 | 43.67 | 1.77 |
| 8/255 | CIFAR-10 | Single-step AT | Proposed method | 81.64 | 62.50 | 51.11 | 49.49 | 50.04 | 49.34 | 47.02 | 1.90 |
| 8/255 | CIFAR-100 | Single-step AT | GradAlign | 54.35 | 30.94 | 22.92 | 22.20 | 21.20 | 19.22 | 18.88 | 3.08 |
| 8/255 | CIFAR-100 | Single-step AT | ATAS | 56.87 | 31.87 | 26.32 | 25.21 | 22.33 | 21.87 | 21.05 | 1.59 |
| 8/255 | CIFAR-100 | Single-step AT | N-FGSM | 54.37 | 32.43 | 27.44 | 26.32 | 23.89 | 22.86 | 22.47 | 1.39 |
| 8/255 | CIFAR-100 | Single-step AT | Random noise and adaptive step size | 55.30 | 33.33 | 27.86 | 26.99 | 24.32 | 23.07 | 22.83 | 1.64 |
| 8/255 | CIFAR-100 | Single-step AT | Predicted attack step size | 54.12 | 32.80 | 27.34 | 26.54 | 22.32 | 21.36 | 21.22 | 1.78 |
| 8/255 | CIFAR-100 | Single-step AT | Proposed method | 56.47 | 36.98 | 29.12 | 28.36 | 26.45 | 28.21 | 24.21 | 1.91 |

Tab. 1 Robustness test results and model training time using ResNet18 under perturbation budgets of 16/255 and 8/255
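All of the single-step methods compared in Tab. 1 build on the FGSM step x' = x + ε·sign(∇ₓL). A minimal sketch on a toy NumPy logistic model (illustrative only — the compared methods train deep networks in a framework such as PyTorch; the function names here are hypothetical):

```python
import numpy as np

def logistic_loss(x, y, w, b):
    """loss = log(1 + exp(-y * (w.x + b))), with label y in {-1, +1}."""
    return np.log1p(np.exp(-y * (w @ x + b)))

def fgsm_example(x, y, w, b, eps):
    """One FGSM step: x' = clip(x + eps * sign(grad_x loss), 0, 1)."""
    margin = y * (w @ x + b)
    grad_x = -y * w / (1.0 + np.exp(margin))  # analytic d(loss)/d(x)
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)
```

With the table's larger budget ε = 16/255, the perturbation stays inside the L∞ ball around the input while the loss can only increase for this convex toy loss.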
The FGSM and AA columns report adversarial robust accuracy/% under the corresponding attack.

| Noise level | Clean accuracy/% | FGSM | AA |
|---|---|---|---|
| 0 (final) | 72.37 | 81.30 | 0.01 |
| 0 (best) | 45.21 | 26.47 | 11.05 |
| 1 | 55.37 | 39.19 | 19.25 |
| 2 | 62.88 | 43.03 | 22.33 |
| 3 | 59.93 | 40.79 | 21.88 |

Tab. 2 Robustness test results of proposed method under different noise levels
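One reading of the noise level k in Tab. 2 — an assumption on our part, in the spirit of N-FGSM-style augmentation — is the scale of uniform random noise added to each input before the attack step; the k = 0 (final) row then shows the classic catastrophic-overfitting signature (FGSM accuracy above clean accuracy, near-zero AA accuracy). A hedged sketch:

```python
import numpy as np

def noisy_augment(x, eps, k, rng):
    """Add uniform noise in [-k*eps, k*eps] to the input before the
    single-step attack (assumed interpretation of the noise level k)."""
    noise = rng.uniform(-k * eps, k * eps, size=x.shape)
    return np.clip(x + noise, 0.0, 1.0)
```

Because clipping only moves values back toward the valid range, the augmented input stays within k·ε of the original in the L∞ sense.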
The FGSM and AA columns report adversarial robust accuracy/% under the corresponding attack.

| Attack step size | Clean accuracy/% | FGSM | AA |
|---|---|---|---|
|  | 78.89 | 44.62 | 14.26 |
|  | 71.31 | 44.28 | 18.93 |
|  | 62.88 | 43.03 | 22.33 |
|  | 55.85 | 39.46 | 21.54 |

Tab. 3 Robustness test results of proposed method under different attack step sizes
The FGSM and AA columns report adversarial robust accuracy/% under the corresponding attack.

| Maximum label noise rate | Clean accuracy/% | FGSM | AA |
|---|---|---|---|
| 0.3 | 63.53 | 42.93 | 21.63 |
| 0.4 | 62.88 | 43.03 | 22.33 |
| 0.5 | 62.70 | 42.91 | 22.01 |
| 0.6 | 62.56 | 42.87 | 21.45 |

Tab. 4 Robustness test results of proposed method under different maximum label noise rates
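For reference, symmetric label noise at rate r can be injected by re-drawing the labels of a random r-fraction of the training set. A minimal sketch — uniform replacement is our assumption, and the paper's exact corruption rule may differ:

```python
import numpy as np

def inject_label_noise(labels, num_classes, noise_rate, rng):
    """Replace a `noise_rate` fraction of labels with uniformly random
    classes (symmetric label noise; illustrative, not the paper's exact rule)."""
    noisy = labels.copy()
    n_noisy = int(round(noise_rate * len(labels)))
    idx = rng.choice(len(labels), size=n_noisy, replace=False)
    noisy[idx] = rng.integers(0, num_classes, size=n_noisy)
    return noisy
```

Note that a uniformly redrawn label can coincide with the original, so the fraction of labels actually changed is slightly below `noise_rate`.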
The FGSM and AA columns report adversarial robust accuracy/% under the corresponding attack.

| Label noise enhancement rate | Clean accuracy/% | FGSM | AA |
|---|---|---|---|
| 0.00 | 63.23 | 42.58 | 21.51 |
| 0.01 | 62.88 | 43.03 | 22.33 |
| 0.02 | 62.32 | 42.22 | 21.60 |
| 0.03 | 61.52 | 42.14 | 21.63 |

Tab. 5 Robustness test results of proposed method under different label noise enhancement rates
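A plausible way the two knobs swept in Tab. 4 and Tab. 5 interact — stated here as an assumption, since the schedule itself is not reproduced on this page — is a per-epoch linear ramp of the label-noise rate, capped at the maximum:

```python
def noise_rate_schedule(epoch, enhancement_rate, max_rate, base_rate=0.0):
    """Noise rate grows by `enhancement_rate` each epoch until it reaches
    `max_rate` (illustrative schedule; parameter names are hypothetical)."""
    return min(base_rate + epoch * enhancement_rate, max_rate)
```

Under the best settings in the tables (enhancement rate 0.01, maximum 0.4), such a ramp would saturate after 40 epochs.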