Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (10): 3091-3100. DOI: 10.11772/j.issn.1001-9081.2024101407
• Artificial Intelligence •
Received: 2024-10-09
Revised: 2024-12-22
Accepted: 2024-12-27
Online: 2025-01-06
Published: 2025-10-10
Contact: Zhiyu GONG
About author: GONG Zhiyu, born in 2000 in Huairen, Shanxi, M. S. candidate. His research interests include neural networks and deep learning.
Abstract: Aiming at the problem that Residual Networks (ResNet) are easily affected by unknown heavy-tailed noise in image classification, which lowers recognition accuracy, a Multi-distribution Heavy-Tailed Noise Adaptive ResNet (MHTNA-ResNet) model was proposed. Firstly, to suppress the influence of heavy-tailed noise on the final prediction, a Multi-distribution Heavy-Tailed Noise Adaptive layer (MHTNA) was designed; this layer uses several heavy-tailed distributions to create noise templates that perturb the clean training data, so that ResNet learns to recognize heavy-tailed noisy images through training. Secondly, MHTNA is trained adaptively: maximum likelihood estimation is used to solve for the updated noise-template parameters, and the noise templates are regenerated from the solved parameters, so that the injected noise always follows heavy-tailed distributions. Finally, MHTNA is masked at test time, and the test images are attacked with heavy-tailed noise to examine the model's noise resistance. Experimental results show that, under heavy-tailed noise attacks, the classification accuracy of the proposed model on the CIFAR10, CIFAR100 and MINI-ImageNet datasets is higher than that of the PRIME model by 3.86, 7.10 and 5.46 percentage points on average, respectively. Therefore, the proposed model can effectively improve the robustness of ResNet against heavy-tailed noise interference.
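As a rough illustration of the noise-template idea described in the abstract, the sketch below perturbs a training batch with noise drawn from one of the four heavy-tailed distributions used in the experiments (Cauchy, Student-t, Laplace, Erlang). It is a minimal sketch under assumed conventions (NumPy/PyTorch, additive noise, pixel values in [0, 255], illustrative parameter values), not the authors' implementation; all function names are hypothetical.

```python
import numpy as np
import torch

def heavy_tailed_noise(size, rng=None):
    """Sample a noise template from one randomly chosen heavy-tailed
    distribution (Cauchy, Student-t, Laplace or Erlang); parameter
    symbols follow the paper's tables, concrete values are illustrative."""
    rng = rng or np.random.default_rng()
    dist = rng.choice(["cauchy", "student_t", "laplace", "erlang"])
    if dist == "cauchy":                       # scale gamma
        noise = 5.0 * rng.standard_cauchy(size=size)
    elif dist == "student_t":                  # degrees of freedom v
        noise = rng.standard_t(df=0.5, size=size)
    elif dist == "laplace":                    # scale sigma
        noise = rng.laplace(loc=0.0, scale=10.0, size=size)
    else:                                      # Erlang: Gamma with integer shape b, rate a
        noise = rng.gamma(shape=3, scale=1.0 / 0.1, size=size)
    return torch.from_numpy(noise).float()

def perturb_batch(images):
    """Additively perturb a clean training batch (pixel range [0, 255])
    with a heavy-tailed noise template and clip back into range."""
    noise = heavy_tailed_noise(tuple(images.shape))
    return torch.clamp(images + noise, 0.0, 255.0)

clean = torch.rand(8, 3, 32, 32) * 255.0       # dummy CIFAR-sized batch
noisy = perturb_batch(clean)
```

During training, templates like these would be regenerated whenever the estimated distribution parameters change, so that the injected noise keeps following a heavy-tailed distribution.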
Zhiyu GONG, Shitong WANG. Learning method of residual network for heavy-tailed noisy image classification[J]. Journal of Computer Applications, 2025, 45(10): 3091-3100.
Tab. 1 Network structures of ResNet18 and ResNet50

| Layer | ResNet18 | ResNet50 |
| --- | --- | --- |
| conv1_x | 7×7, 64, stride=2 | 7×7, 64, stride=2 |
| | 3×3 max pool, stride=2 | 3×3 max pool, stride=2 |
| conv2_x | | |
| conv3_x | | |
| conv4_x | | |
| conv5_x | | |
| Other | average pool, fc, Softmax | average pool, fc, Softmax |
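Tab. 1 lists the standard ResNet18/ResNet50 stage layout (the per-stage residual block configurations did not survive the page layout). A minimal sketch, assuming the torchvision reference implementations (torchvision ≥ 0.13) are used as backbones; the helper name build_backbone is hypothetical.

```python
import torch
from torchvision.models import resnet18, resnet50

def build_backbone(name: str, num_classes: int) -> torch.nn.Module:
    """Standard torchvision backbones matching the stage layout in Tab. 1
    (7x7/64 stride-2 conv -> 3x3 stride-2 max pool -> conv2_x..conv5_x
    residual stages -> global average pool -> fully connected layer)."""
    if name == "resnet18":
        return resnet18(weights=None, num_classes=num_classes)
    if name == "resnet50":
        return resnet50(weights=None, num_classes=num_classes)
    raise ValueError(f"unsupported backbone: {name}")

model = build_backbone("resnet18", num_classes=10)   # e.g. CIFAR10
logits = model(torch.randn(2, 3, 224, 224))          # (batch, num_classes)
probs = torch.softmax(logits, dim=1)                 # Softmax applied at inference
```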
Tab. 2 Ablation experimental results (accuracy/% under heavy-tailed noise attacks with different distributions and parameters)

| Method | Clean | Cauchy γ=5 | Cauchy γ=10 | Cauchy γ=15 | Student-t v=0.5 | Student-t v=0.4 | Student-t v=0.3 | Laplace σ=10 | Laplace σ=15 | Laplace σ=30 | Erlang a=0.100,b=3 | Erlang a=0.050,b=3 | Erlang a=0.045,b=3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18 | 95.19 | 23.79 | 15.31 | 13.17 | 17.01 | 14.03 | 11.78 | 53.79 | 28.24 | 14.31 | 43.05 | 18.59 | 17.20 |
| M | 94.12 | 88.54 | 71.81 | 51.23 | 87.48 | 75.76 | 46.21 | 90.63 | 83.49 | 50.91 | 86.28 | 60.16 | 49.35 |
| MHTNA-M | 90.18 | 76.67 | 58.91 | 90.17 | 82.79 | 60.54 | 90.98 | 85.15 | 53.76 | 87.13 | 63.27 | 53.51 | |
| A | 93.19 | 91.57 | 78.94 | 58.71 | 90.38 | 83.23 | 57.41 | 92.17 | 87.83 | 50.39 | 89.37 | 59.35 | 52.87 |
| MHTNA-A | 93.27 | 81.23 | 62.03 | 84.71 | 61.37 | 51.88 | 89.93 | 61.46 | 54.16 | | | | |
| MA | 93.79 | 91.73 | 91.02 | 92.41 | 88.41 | | | | | | | | |
| MHTNA-MA | 93.65 | 92.68 | 86.15 | 73.29 | 91.59 | 88.63 | 71.59 | 92.87 | 89.45 | 66.72 | 91.25 | 70.52 | 64.93 |
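The abstract states that during training MHTNA re-estimates the noise-template parameters by maximum likelihood and regenerates the templates from the new estimates. The sketch below illustrates only that estimation step for two of the distributions, under the assumption that zero-location noise residuals are available: the Laplace scale σ has a closed-form MLE (mean absolute deviation), while the Cauchy scale γ is obtained numerically with SciPy. It is not the paper's exact update rule; all function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import cauchy

def laplace_scale_mle(residuals: np.ndarray, loc: float = 0.0) -> float:
    """Closed-form MLE of the Laplace scale: the mean absolute deviation."""
    return float(np.mean(np.abs(residuals - loc)))

def cauchy_scale_mle(residuals: np.ndarray, loc: float = 0.0) -> float:
    """Numerical MLE of the Cauchy scale (no closed form exists)."""
    def nll(g):
        return -np.sum(cauchy.logpdf(residuals, loc=loc, scale=g))
    return float(minimize_scalar(nll, bounds=(1e-3, 1e3), method="bounded").x)

# Re-estimate the scale from the current noise template, then regenerate
# a fresh template from the updated parameter so the injected noise keeps
# following a heavy-tailed distribution.
rng = np.random.default_rng(0)
old_template = rng.laplace(scale=10.0, size=10_000)
sigma_hat = laplace_scale_mle(old_template)            # close to 10
new_template = rng.laplace(scale=sigma_hat, size=10_000)
```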
Tab. 3 Test results under non-heavy-tailed noise attacks (accuracy/% under non-heavy-tailed noise attacks with different distributions and parameters)

| Network | MHTNA | Clean | Gaussian v=5 | Gaussian v=10 | Gaussian v=20 | Pepper r=0.005 | Pepper r=0.010 | Pepper r=0.050 | Poisson p=0.2 | Poisson p=0.5 | Poisson p=1.0 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18 | w/o | 95.19 | 90.18 | 72.77 | 29.45 | 86.88 | 70.56 | 25.94 | 53.79 | 28.24 | 14.31 |
| ResNet18 | w/ | 93.65 | 94.18 | 92.22 | 79.21 | 90.57 | 83.62 | 43.70 | 93.48 | 90.83 | 79.98 |
| ResNet50 | w/o | 75.33 | 60.46 | 36.30 | 11.98 | 66.37 | 51.35 | 6.97 | 71.38 | 55.76 | 32.01 |
| ResNet50 | w/ | 75.31 | 75.03 | 69.17 | 45.86 | 72.39 | 67.68 | 26.85 | 73.25 | 64.39 | 44.63 |
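Tab. 3 probes generalization by attacking test images with non-heavy-tailed noise (Gaussian, pepper, Poisson). The sketch below shows one plausible way to generate such corruptions, assuming images in [0, 255]; the parameter names v, r and p mirror the table headers, but their exact mapping to noise strength (e.g. whether v is a standard deviation or a variance) is an assumption, not taken from the paper.

```python
import numpy as np

def gaussian_attack(img: np.ndarray, v: float, rng) -> np.ndarray:
    """Additive zero-mean Gaussian noise with standard deviation v."""
    return np.clip(img + rng.normal(0.0, v, img.shape), 0, 255)

def pepper_attack(img: np.ndarray, r: float, rng) -> np.ndarray:
    """Set a fraction r of pixel positions to 0 (pepper noise)."""
    out = img.copy()
    mask = rng.random(img.shape[:2]) < r      # one mask shared across channels
    out[mask] = 0
    return out

def poisson_attack(img: np.ndarray, p: float, rng) -> np.ndarray:
    """Poisson (shot) noise whose strength grows with p - one plausible
    parameterization, consistent with the accuracy trend in Tab. 3."""
    shot = rng.poisson(img) - img             # zero-mean shot noise
    return np.clip(img + p * shot, 0, 255)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3)).astype(np.float64)
noisy = gaussian_attack(img, v=10, rng=rng)
```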
Tab. 4 Test results of various methods on CIFAR10 dataset (accuracy/% under heavy-tailed noise attacks with different distributions and parameters)

| Method | Clean | Cauchy γ=1.0 | Cauchy γ=1.5 | Cauchy γ=5.0 | Student-t v=1.0 | Student-t v=0.8 | Student-t v=0.5 | Laplace σ=5 | Laplace σ=10 | Laplace σ=15 | Erlang a=0.5,b=3 | Erlang a=0.5,b=10 | Erlang a=0.1,b=3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LAS-AT | 85.66 | 82.98 | 81.19 | 71.95 | 82.76 | 79.41 | 63.93 | 84.95 | 83.02 | 79.30 | 85.62 | 85.04 | 81.27 |
| DAJAT | 85.71 | 83.82 | 82.87 | 74.71 | 83.79 | 81.20 | 66.99 | 85.18 | 83.45 | 79.63 | 85.55 | 85.18 | 82.34 |
| PD | 89.76 | 88.14 | 86.92 | 78.33 | 88.10 | 85.57 | 69.09 | 89.39 | 87.28 | 83.49 | 89.63 | 89.08 | 85.78 |
| NoL | 94.17 | 89.59 | 86.09 | 55.06 | 89.62 | 80.74 | 32.20 | 93.61 | 86.72 | 71.15 | 80.06 | | |
| PRIME | 92.96 | 92.06 | 92.74 | 92.15 | | | | | | | | | |
| MHTNA | 92.68 | 92.93 | 91.87 | 93.07 | 93.51 | 91.59 | 92.87 | 89.45 | 93.41 | 92.87 | 91.25 | | |
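Tabs. 4-6 report the accuracy of already-trained models on corrupted test sets (the MHTNA layer itself is bypassed at test time, per the abstract). A minimal evaluation-loop sketch, assuming a PyTorch model and DataLoader; evaluate_under_attack and attack_fn are hypothetical names.

```python
import torch

@torch.no_grad()
def evaluate_under_attack(model, loader, attack_fn, device="cpu"):
    """Accuracy (%) of a trained model on test images corrupted by a given
    noise attack; `attack_fn` maps a clean image batch to its noisy version."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        noisy = attack_fn(images)                 # e.g. Cauchy noise with gamma=1.0
        preds = model(noisy).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total
```

Pairing this loop with the noise generators sketched earlier would reproduce the kind of per-distribution, per-parameter accuracy grid shown in the tables.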
Tab. 5 Test results of various methods on CIFAR100 dataset (accuracy/% under heavy-tailed noise attacks with different distributions and parameters)

| Method | Clean | Cauchy γ=1.0 | Cauchy γ=1.5 | Cauchy γ=5.0 | Student-t v=1.0 | Student-t v=0.8 | Student-t v=0.5 | Laplace σ=5 | Laplace σ=10 | Laplace σ=15 | Erlang a=0.5,b=3 | Erlang a=0.5,b=10 | Erlang a=0.1,b=3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LAS-AT | 67.31 | 60.40 | 56.08 | 28.58 | 60.95 | 48.97 | 14.59 | 66.87 | 62.46 | 51.00 | 67.30 | 66.89 | 58.78 |
| DAJAT | 68.74 | 63.36 | 60.09 | 39.08 | 63.22 | 54.73 | 23.92 | 68.03 | 64.42 | 55.19 | 68.56 | 67.77 | 61.08 |
| PD | 65.72 | 62.84 | 61.44 | 47.33 | 63.22 | 58.92 | 35.88 | 65.42 | 63.27 | 57.14 | 65.67 | 65.21 | 60.66 |
| NoL | 28.71 | 70.83 | 59.62 | 28.47 | 64.26 | 53.61 | 53.17 | | | | | | |
| PRIME | 74.59 | 69.05 | 67.04 | 74.15 | 73.93 | 70.29 | | | | | | | |
| MHTNA | 75.31 | 75.57 | 75.32 | 68.79 | 75.23 | 75.16 | 69.47 | 75.04 | 72.29 | 67.64 | 75.37 | 74.81 | 69.97 |
Tab. 6 Test results of various methods on MINI-ImageNet dataset (accuracy/% under heavy-tailed noise attacks with different distributions and parameters)

| Method | Clean | Cauchy γ=1.0 | Cauchy γ=1.5 | Cauchy γ=5.0 | Student-t v=1.0 | Student-t v=0.8 | Student-t v=0.5 | Laplace σ=5 | Laplace σ=10 | Laplace σ=15 | Erlang a=0.5,b=3 | Erlang a=0.5,b=10 | Erlang a=0.1,b=3 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LAS-AT | 71.43 | 67.50 | 65.23 | 50.62 | 67.53 | 60.13 | 27.61 | 65.97 | 61.37 | 58.05 | 67.74 | 67.23 | 57.43 |
| DAJAT | 71.61 | 68.27 | 64.04 | 51.58 | 67.92 | 59.97 | 28.16 | 67.16 | 61.81 | 58.63 | 67.28 | 66.87 | 60.31 |
| PD | 73.24 | 70.37 | 68.39 | 54.21 | 71.62 | 63.87 | 30.18 | 70.86 | 67.84 | 63.81 | 71.53 | 70.91 | 63.37 |
| NoL | 75.87 | 43.26 | 73.98 | 60.21 | 31.52 | 72.37 | 62.83 | 77.27 | 58.73 | | | | |
| PRIME | 79.36 | 73.26 | 79.58 | 78.89 | | | | | | | | | |
| MHTNA | 81.43 | 81.41 | 81.12 | 68.08 | 81.26 | 79.93 | 48.69 | 81.1 | 80.15 | 74.23 | 80.64 | 80.18 | 77.64 |
[1] JI C Q, GAO Z Y, QIN J, et al. Review of image classification algorithms based on convolutional neural network[J]. Journal of Computer Applications, 2022, 42(4): 1044-1049.
[2] YANG H G, CHEN J J, XU M F. Bilinear involution neural network for image classification of fundus diseases[J]. Journal of Computer Applications, 2023, 43(1): 259-264.
[3] JIANG H, DIAO Z, SHI T, et al. A review of deep learning-based multiple-lesion recognition from medical images: classification, detection and segmentation[J]. Computers in Biology and Medicine, 2023, 157: No.106726.
[4] MALL P K, SINGH P K, SRIVASTAV S, et al. A comprehensive review of deep neural networks for medical image processing: recent developments and future opportunities[J]. Healthcare Analytics, 2023, 4: No.100216.
[5] CHAN L, LI S, BAI Q, et al. Review of image classification algorithms based on convolutional neural networks[J]. Remote Sensing, 2021, 13(22): No.4712.
[6] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[7] XU W, FU Y L, ZHU D. ResNet and its application to medical image processing: research progress and challenges[J]. Computer Methods and Programs in Biomedicine, 2023, 240: No.107660.
[8] DONG M Y, YAN D Q. Detection algorithm of audio scene sound replacement falsification based on ResNet[J]. Journal of Computer Applications, 2022, 42(6): 1724-1728.
[9] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. [2024-09-19].
[10] MOMENY M, LATIF A M, SARRAM M A, et al. A noise robust convolutional neural network for image classification[J]. Results in Engineering, 2021, 10: No.100225.
[11] ZHANG R Y, WANG S T. Multi-classification recognition method applied to facial image based on distribution characteristic of heavy-tailed noise[J]. Journal of Electronics and Information Technology, 2012, 34(3): 523-528.
[12] BAI L. A new nonconvex approach for image restoration with Gamma noise[J]. Computers and Mathematics with Applications, 2019, 77(10): 2627-2639.
[13] WANG X Y, DING Y M. A filtering algorithm for removing Cauchy noise in images[J]. Acta Mathematica Scientia, 2018, 38(4): 823-832.
[14] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL]. [2024-09-19].
[15] CROCE F, ANDRIUSHCHENKO M, SEHWAG V, et al. RobustBench: a standardized adversarial robustness benchmark[EB/OL]. [2024-09-19].
[16] MODAS A, RADE R, ORTIZ-JIMÉNEZ G, et al. PRIME: a few primitives can boost robustness to common corruptions[C]// Proceedings of the 2022 European Conference on Computer Vision, LNCS 13685. Cham: Springer, 2022: 623-640.
[17] HENDRYCKS D, DIETTERICH T. Benchmarking neural network robustness to common corruptions and perturbations[EB/OL]. [2024-09-19].
[18] MINTUN E, KIRILLOV A, XIE S. On interaction between augmentations and corruptions in natural corruption robustness[C]// Proceedings of the 35th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2021: 3571-3583.
[19] PANDA P, ROY K. Implicit adversarial data augmentation and robustness with noise-based learning[J]. Neural Networks, 2021, 141: 120-132.
[20] CAO Y D, LIU H Y, JIA X, et al. Overview of image quality assessment method based on deep learning[J]. Computer Engineering and Applications, 2021, 57(23): 27-36.
[21] MAHARANA K, MONDAL S, NEMADE B. A review: data pre-processing and data augmentation techniques[J]. Global Transitions Proceedings, 2022, 3(1): 91-99.
[22] GU S, CHUNG F L, WANG S. A novel deep fuzzy classifier by stacking adversarial interpretable TSK fuzzy sub-classifiers with smooth gradient information[J]. IEEE Transactions on Fuzzy Systems, 2020, 28(7): 1369-1382.
[23] ZHOU T, ISHIBUCHI H, WANG S. Stacked-structure-based hierarchical Takagi-Sugeno-Kang fuzzy classification through feature augmentation[J]. IEEE Transactions on Emerging Topics in Computational Intelligence, 2017, 1(6): 421-436.
[24] WANG S, WANG J, CHUNG F L. Kernel density estimation, kernel methods, and fast learning in large data sets[J]. IEEE Transactions on Cybernetics, 2014, 44(1): 1-20.
[25] NARAYANAN D, SHOEYBI M, CASPER J, et al. Efficient large-scale language model training on GPU clusters using Megatron-LM[C]// Proceedings of the 2021 International Conference for High Performance Computing, Networking, Storage and Analysis. New York: ACM, 2021: No.58.
[26] KRIZHEVSKY A. Learning multiple layers of features from tiny images[R/OL]. [2024-03-06].
[27] CAI Q, PAN Y, YAO T, et al. Memory matching networks for one-shot image recognition[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4080-4088.
[28] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[29] JIA X, ZHANG Y, WU B, et al. LAS-AT: adversarial training with learnable attack strategy[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 13388-13398.
[30] ADDEPALLI S, JAIN S, BABU R V. Efficient and effective augmentation strategy for adversarial training[C]// Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2022: 1488-1501.
[31] SEHWAG V, MAHLOUJIFAR S, HANDINA T, et al. Robust learning meets generative models: can proxy distributions improve adversarial robustness?[EB/OL]. [2024-09-20].