Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (11): 3428-3435. DOI: 10.11772/j.issn.1001-9081.2022111677
Special Issue: Artificial Intelligence
• Artificial Intelligence •
Cross-model universal perturbation generation method based on geometric relationship
Jici ZHANG, Chunlong FAN, Cailong LI, Xuedong ZHENG
Received: 2022-11-11
Revised: 2023-04-06
Accepted: 2023-04-11
Online: 2023-05-08
Published: 2023-11-10
Contact: Chunlong FAN
About author: ZHANG Jici, born in 1998 in Haicheng, Liaoning, M. S. candidate, CCF member. Her research interests include deep learning and adversarial attacks.
CLC Number:
Jici ZHANG, Chunlong FAN, Cailong LI, Xuedong ZHENG. Cross-model universal perturbation generation method based on geometric relationship[J]. Journal of Computer Applications, 2023, 43(11): 3428-3435.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022111677
| Model category | | Mode1 | Mode2 | Mode3 | Mode4 |
| --- | --- | --- | --- | --- | --- |
| Same model under different training methods | MDN | DenseNet1 | DenseNet2 | DenseNet3 | DenseNet4 |
| | MGN | GoogleNet1 | GoogleNet2 | GoogleNet3 | GoogleNet4 |
| | MMN | MobileNet1 | MobileNet2 | MobileNet3 | MobileNet4 |
| | MNN | NiN1 | NiN2 | NiN3 | NiN4 |
| | MVGG | VGG1 | VGG2 | VGG3 | VGG4 |
| | MRN | ResNet1 | ResNet2 | ResNet3 | ResNet4 |

Each row (MDN–MRN) groups the same architecture trained under four different methods; each column (Mode1–Mode4) groups different architectures trained under the same method.
Tab. 1 Model training methods
| Dataset | Cross-model category | η_cross | | SSIM | PSNR/dB | Total iterations |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | MDN | 1.0 | 0.470 | 0.994 | 43.962 | 1 540 |
| | MGN | 1.0 | 0.347 | 0.997 | 46.452 | 1 376 |
| | MMN | 1.0 | 0.308 | 0.997 | 47.711 | 1 707 |
| | MNN | 1.0 | 0.413 | 0.995 | 45.037 | 1 688 |
| | MVGG | 1.0 | 0.451 | 0.994 | 44.154 | 1 498 |
| | MRN | 1.0 | 0.472 | 0.994 | 43.750 | 1 487 |
| SVHN | MDN | 1.0 | 0.877 | 0.972 | 38.120 | 1 649 |
| | MGN | 1.0 | 0.765 | 0.978 | 39.263 | 1 465 |
| | MMN | 1.0 | 0.687 | 0.982 | 40.246 | 2 218 |
| | MNN | 1.0 | 0.596 | 0.985 | 41.549 | 2 025 |
| | MVGG | 1.0 | 0.632 | 0.983 | 41.131 | 1 291 |
| | MRN | 1.0 | 0.670 | 0.981 | 40.714 | 1 343 |
Tab. 2 Cross-model attack performance of Algorithm 2 on the same model under different training methods
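Tables 2 and 3 report, for each cross-model group, the fooling rate η_cross together with the perceptual quality of the perturbed images (SSIM and PSNR in dB) and the total number of iterations. The sketch below shows one way such numbers can be computed for a universal perturbation evaluated over a model pool; `models`, `loader` and `v` are placeholders for the reader's own setup (not the paper's code), the "all models fooled" criterion is only one plausible reading of η_cross, and scikit-image ≥ 0.19 is assumed for the `channel_axis` argument.

```python
import numpy as np
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

@torch.no_grad()
def evaluate_uap(models, loader, v, device="cpu"):
    """Fooling rate, mean SSIM and mean PSNR of a universal perturbation v
    over a pool of models (hedged sketch, not the paper's implementation)."""
    fooled, total, ssim_vals, psnr_vals = 0, 0, [], []
    for x, _ in loader:
        x = x.to(device)
        x_adv = torch.clamp(x + v.to(device), 0.0, 1.0)  # keep pixels in [0, 1]
        # Count a sample as fooled when every model in the pool changes its label.
        flips = torch.ones(x.size(0), dtype=torch.bool, device=device)
        for m in models:
            flips &= m(x).argmax(dim=1) != m(x_adv).argmax(dim=1)
        fooled += int(flips.sum())
        total += x.size(0)
        # Perceptual quality of the perturbed images (CHW -> HWC for skimage).
        for a, b in zip(x.cpu().numpy(), x_adv.cpu().numpy()):
            a, b = a.transpose(1, 2, 0), b.transpose(1, 2, 0)
            ssim_vals.append(structural_similarity(a, b, data_range=1.0, channel_axis=-1))
            psnr_vals.append(peak_signal_noise_ratio(a, b, data_range=1.0))
    return fooled / total, float(np.mean(ssim_vals)), float(np.mean(psnr_vals))
```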
| Dataset | Cross-model category | η_cross | | SSIM | PSNR/dB | Total iterations |
| --- | --- | --- | --- | --- | --- | --- |
| CIFAR10 | Mode1 | 1.0 | 0.500 | 0.993 | 43.325 | 1 741 |
| | Mode2 | 1.0 | 0.483 | 0.993 | 43.555 | 1 767 |
| | Mode3 | 1.0 | 0.576 | 0.991 | 41.971 | 1 896 |
| | Mode4 | 1.0 | 0.562 | 0.992 | 42.246 | 1 872 |
| SVHN | Mode1 | 1.0 | 0.881 | 0.971 | 38.037 | 1 642 |
| | Mode2 | 1.0 | 0.886 | 0.971 | 37.932 | 1 600 |
| | Mode3 | 1.0 | 0.895 | 0.970 | 37.776 | 1 718 |
| | Mode4 | 1.0 | 0.896 | 0.971 | 37.787 | 1 624 |
Tab. 3 Cross-model attack performance of Algorithm 2 on different models under the same training method
| Dataset | Cross-model category | η_cross | | SSIM | PSNR/dB |
| --- | --- | --- | --- | --- | --- |
| CIFAR10 | MDN | 1.0 | 0.407 | 0.995 | 45.322 |
| | MGN | 1.0 | 0.288 | 0.998 | 48.057 |
| | MMN | 1.0 | 0.282 | 0.998 | 48.524 |
| | MNN | 1.0 | 0.378 | 0.996 | 45.850 |
| | MVGG | 1.0 | 0.379 | 0.996 | 45.684 |
| | MRN | 1.0 | 0.388 | 0.996 | 45.410 |
| SVHN | MDN | 1.0 | 0.776 | 0.977 | 39.559 |
| | MGN | 1.0 | 0.671 | 0.983 | 40.715 |
| | MMN | 1.0 | 0.638 | 0.984 | 41.173 |
| | MNN | 1.0 | 0.553 | 0.987 | 42.451 |
| | MVGG | 1.0 | 0.537 | 0.987 | 42.701 |
| | MRN | 1.0 | 0.567 | 0.986 | 42.424 |
Tab. 4 Cross-model attack performance of the algorithms on the same model under different training methods, with L2-norm optimization
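Tables 4 and 5 evaluate the algorithms with an additional L2-norm optimization of the perturbation. A common ingredient of such norm-constrained generation is projecting the perturbation back onto an L2 ball after each update; the snippet below is only a generic illustration of that step, and the radius `eps` is an arbitrary example value rather than one taken from the paper.

```python
import torch

def project_l2(v: torch.Tensor, eps: float) -> torch.Tensor:
    """Rescale a perturbation so that its L2 norm does not exceed eps."""
    norm = v.flatten().norm(p=2)
    return v if norm <= eps else v * (eps / norm)

# Example: constrain a random 3x32x32 perturbation to ||v||_2 <= 1.0.
v = project_l2(torch.randn(3, 32, 32), eps=1.0)
```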
| Dataset | Cross-model category | η_cross | | SSIM | PSNR/dB |
| --- | --- | --- | --- | --- | --- |
| CIFAR10 | Mode1 | 1.0 | 0.444 | 0.994 | 44.335 |
| | Mode2 | 1.0 | 0.430 | 0.995 | 44.597 |
| | Mode3 | 1.0 | 0.516 | 0.993 | 42.927 |
| | Mode4 | 1.0 | 0.505 | 0.993 | 43.204 |
| SVHN | Mode1 | 1.0 | 0.800 | 0.975 | 39.094 |
| | Mode2 | 1.0 | 0.802 | 0.976 | 39.014 |
| | Mode3 | 1.0 | 0.810 | 0.975 | 38.835 |
| | Mode4 | 1.0 | 0.815 | 0.975 | 38.834 |
Tab. 5 Cross-model attack performance of the algorithms on different models under the same training method, with L2-norm optimization
| Source model | Comparison algorithm | DenseNet121 | GoogleNet | MobileNet | NiN | VGG11 | ResNet18 | Average attack success rate |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DenseNet121 | SINIFGSM | 0.969* | 0.742 | 0.714 | 0.529 | 0.638 | 0.642 | 0.706 |
| | VMIFGSM | 0.999* | 0.857 | 0.810 | 0.645 | 0.749 | 0.768 | 0.805 |
| | VNIFGSM | 0.999* | 0.839 | 0.806 | 0.628 | 0.740 | 0.762 | 0.796 |
| GoogleNet | SINIFGSM | 0.640 | 0.988* | 0.677 | 0.406 | 0.512 | 0.478 | 0.617 |
| | VMIFGSM | 0.762 | 1.000* | 0.717 | 0.457 | 0.571 | 0.558 | 0.678 |
| | VNIFGSM | 0.749 | 1.000* | 0.714 | 0.452 | 0.568 | 0.557 | 0.673 |
| MobileNet | SINIFGSM | 0.546 | 0.591 | 0.992* | 0.377 | 0.486 | 0.515 | 0.585 |
| | VMIFGSM | 0.628 | 0.656 | 1.000* | 0.415 | 0.543 | 0.584 | 0.638 |
| | VNIFGSM | 0.624 | 0.669 | 1.000* | 0.442 | 0.546 | 0.573 | 0.642 |
| NiN | SINIFGSM | 0.471 | 0.445 | 0.496 | 0.907* | 0.473 | 0.422 | 0.536 |
| | VMIFGSM | 0.585 | 0.545 | 0.550 | 0.973* | 0.563 | 0.525 | 0.624 |
| | VNIFGSM | 0.584 | 0.555 | 0.553 | 0.971* | 0.572 | 0.548 | 0.631 |
| VGG11 | SINIFGSM | 0.688 | 0.647 | 0.676 | 0.545 | 0.989* | 0.659 | 0.701 |
| | VMIFGSM | 0.793 | 0.742 | 0.769 | 0.654 | 0.996* | 0.772 | 0.788 |
| | VNIFGSM | 0.778 | 0.727 | 0.749 | 0.632 | 0.996* | 0.761 | 0.774 |
| ResNet18 | SINIFGSM | 0.638 | 0.568 | 0.658 | 0.471 | 0.661 | 0.980* | 0.663 |
| | VMIFGSM | 0.775 | 0.699 | 0.764 | 0.577 | 0.752 | 0.994* | 0.756 |
| | VNIFGSM | 0.765 | 0.680 | 0.762 | 0.581 | 0.741 | 0.997* | 0.754 |

Columns DenseNet121–ResNet18 give the attack success rate on each target model; entries marked * are the cases where the target model is the source model.
Tab. 6 Attack success rates of the comparison algorithms on the CIFAR10 dataset across six common models
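Table 6 measures transferability: adversarial examples are crafted on a source model with SINIFGSM, VMIFGSM or VNIFGSM and then replayed against every target model. The sketch below shows the general shape of such an evaluation loop; it uses a plain single-step FGSM perturbation purely as a stand-in for the stronger attacks compared in the table, `source`, `targets` and `loader` are placeholders, and "success" is taken as the untargeted criterion that the adversarial example is misclassified by the target.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step L-infinity perturbation, used here only as a simple
    stand-in for the transfer attacks compared in Table 6."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return torch.clamp(x + eps * x.grad.sign(), 0.0, 1.0).detach()

@torch.no_grad()
def transfer_success_rates(source, targets, loader, eps=8 / 255, device="cpu"):
    """Fraction of examples misclassified by each target model when the
    adversarial examples are crafted on the source model."""
    wrong, total = [0] * len(targets), 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        with torch.enable_grad():              # gradients needed only for crafting
            x_adv = fgsm(source, x, y, eps)
        total += x.size(0)
        for i, t in enumerate(targets):
            wrong[i] += int((t(x_adv).argmax(dim=1) != y).sum())
    return [w / total for w in wrong]          # one success rate per target model
```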
[1] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. (2015-03-20) [2022-12-16].
[2] MĄDRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. (2019-09-04) [2022-12-16]. DOI: 10.48550/arXiv.1706.06083.
[3] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 2574-2582. DOI: 10.1109/cvpr.2016.282.
[4] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// Proceedings of the 2017 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2017: 39-57. DOI: 10.1109/sp.2017.49.
[5] SU J, VARGAS D V, SAKURAI K. One pixel attack for fooling deep neural networks[J]. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828-841. DOI: 10.1109/tevc.2019.2890858.
[6] CHEN P Y, ZHANG H, SHARMA Y, et al. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]// Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. New York: ACM, 2017: 15-26. DOI: 10.1145/3128572.3140448.
[7] LI Y, LI L, WANG L, et al. NATTACK: learning the distributions of adversarial examples for an improved black-box attack on deep neural networks[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 3866-3876. DOI: 10.48550/arXiv.1905.00441.
[8] MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Universal adversarial perturbations[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 86-94. DOI: 10.1109/cvpr.2017.17.
[9] ZHANG C, BENZ P, IMTIAZ T, et al. CD-UAP: class discriminative universal adversarial perturbation[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2020: 6754-6761. DOI: 10.1609/aaai.v34i04.6154.
[10] MOPURI K R, GANESHAN A, BABU R V. Generalizable data-free objective for crafting universal adversarial perturbations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(10): 2452-2465. DOI: 10.1109/tpami.2018.2861800.
[11] MOPURI K R, GARG U, BABU R V. Fast feature fool: a data independent approach to universal adversarial perturbations[C]// Proceedings of the 2017 British Machine Vision Conference. Durham: BMVA Press, 2017: No.30. DOI: 10.5244/c.31.30.
[12] MOPURI K R, UPPALA P K, BABU R V. Ask, acquire, and attack: data-free UAP generation using class impressions[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11213. Cham: Springer, 2018: 20-35.
[13] WU L, ZHU Z, TAI C, et al. Understanding and enhancing the transferability of adversarial examples[EB/OL]. (2018-02-27) [2022-12-16].
[14] LI Y, ZHANG Y, ZHANG R, et al. Generative transferable adversarial attack[C]// Proceedings of the 3rd International Conference on Video and Image Processing. New York: ACM, 2019: 84-89. DOI: 10.1145/3376067.3376112.
[15] XIE C, ZHANG Z, ZHOU Y, et al. Improving transferability of adversarial examples with input diversity[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2725-2734. DOI: 10.1109/cvpr.2019.00284.
[16] LIN J, SONG C, HE K, et al. Nesterov accelerated gradient and scale invariance for adversarial attacks[EB/OL]. [2022-12-16].
[17] WANG G, YAN H, WEI X. Improving adversarial transferability with spatial momentum[EB/OL]. [2022-12-16]. DOI: 10.1007/978-3-031-18907-4_46.
[18] WANG X, HE K. Enhancing the transferability of adversarial attacks through variance tuning[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 1924-1933. DOI: 10.1109/cvpr46437.2021.00196.
[19] LIU Y, CHEN X, LIU C, et al. Delving into transferable adversarial examples and black-box attacks[EB/OL]. [2022-12-16].
[20] WASEDA F, NISHIKAWA S, LE T N, et al. Closer look at the transferability of adversarial examples: how they fool different models differently[C]// Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision. Piscataway: IEEE, 2023: 1360-1368. DOI: 10.1109/wacv56688.2023.00141.
[21] HE Z, WANG W, XUAN X, et al. A new ensemble method for concessively targeted multi-model attack[EB/OL]. [2022-12-16].
[22] WU F, GAZO R, HAVIAROVA E, et al. Efficient project gradient descent for ensemble adversarial attack[EB/OL]. [2022-12-16]. DOI: 10.48550/arXiv.1906.03333.
[23] ILYAS A, SANTURKAR S, TSIPRAS D, et al. Adversarial examples are not bugs, they are features[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2019: 125-136.
[24] SHAMIR A, MELAMED O, BENSHMUEL O. The dimpled manifold model of adversarial examples in machine learning[EB/OL]. [2022-12-16].
[25] KNUTH D E. The Art of Computer Programming, Volume 3: Sorting and Searching[M]. Reading, MA: Addison-Wesley, 1973.
[26] KRIZHEVSKY A. Learning multiple layers of features from tiny images[R/OL]. [2022-12-16].
[27] NETZER Y, WANG T, COATES A, et al. Reading digits in natural images with unsupervised feature learning[EB/OL]. [2022-12-16].
[28] LIN M, CHEN Q, YAN S. Network in network[EB/OL]. [2022-12-16].
[29] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. [2022-12-16].
[30] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778. DOI: 10.1109/cvpr.2016.90.
[31] HUANG G, LIU Z, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2261-2269. DOI: 10.1109/cvpr.2017.243.
[32] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 1-9. DOI: 10.1109/cvpr.2015.7298594.
[33] HOWARD A G, ZHU M, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. [2022-12-16]. DOI: 10.48550/arXiv.1704.04861.