Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (3): 674-684. DOI: 10.11772/j.issn.1001-9081.2022020198
• Artificial intelligence •
Boyi FU1,2, Yuncong PENG1,2, Xin LAN1,2, Xiaolin QIN1,2
Received: 2022-02-22
Revised: 2022-05-18
Accepted: 2022-05-26
Online: 2022-08-16
Published: 2023-03-10
Contact: Xiaolin QIN
About author: FU Boyi, born in 1998, M. S. candidate, CCF member. Her research interests include label noise, image semantic understanding, and object detection.
Supported by:
CLC Number:
Boyi FU, Yuncong PENG, Xin LAN, Xiaolin QIN. Survey of label noise learning algorithms based on deep learning[J]. Journal of Computer Applications, 2023, 43(3): 674-684.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022020198
| Algorithm | r=20% | r=40% | r=50% | r=70% | r=80% |
| --- | --- | --- | --- | --- | --- |
| CE-loss | 0.868 | — | 0.794 | — | 0.629 |
| Mixup | 0.956 | — | 0.871 | — | 0.716 |
| MentorNet | 0.849 | 0.644 | — | 0.300 | — |
| Co-Teaching | 0.850 | 0.823 | 0.814 | 0.621 | 0.608 |
| Co-Teaching+ | 0.895 | — | 0.857 | — | 0.674 |
| Meta-Learning | 0.923 | — | 0.893 | — | 0.774 |
| CleanLab | 0.908 | 0.871 | — | 0.410 | — |
| DivideMix | 0.961 | — | 0.946 | — | 0.932 |

Tab. 1 Comparison of test accuracy on CIFAR-10 under different noise ratios r
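MentorNet, Co-Teaching, and Co-Teaching+ in Tab. 1 all exploit the small-loss criterion: deep networks tend to fit correctly labelled samples before noisy ones, so low-loss samples are more likely to be clean. Below is a minimal PyTorch-style sketch of one Co-Teaching update, not the authors' released code; the function name and the fixed `forget_rate` are illustrative, and in practice the forget rate is ramped up gradually towards the estimated noise rate r.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, forget_rate=0.2):
    """One hypothetical Co-Teaching update on a mini-batch (x, y)."""
    # Each network keeps the (1 - forget_rate) fraction of the batch
    # that it fits with the smallest loss, treating those as "clean".
    n_keep = int((1.0 - forget_rate) * y.size(0))

    with torch.no_grad():  # the selection itself needs no gradients
        loss1 = F.cross_entropy(net1(x), y, reduction="none")
        loss2 = F.cross_entropy(net2(x), y, reduction="none")
    idx1 = torch.argsort(loss1)[:n_keep]  # net1's small-loss picks
    idx2 = torch.argsort(loss2)[:n_keep]  # net2's small-loss picks

    # Cross-update: each network trains on its *peer's* picks, so the
    # two networks do not simply confirm their own selection errors.
    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()
    opt2.step()
```

The cross-update is the point of the design: starting from different initializations, the two networks make different mistakes, and exchanging small-loss picks keeps either one from reinforcing its own errors (Co-Teaching+ additionally restricts the exchange to samples on which the two networks disagree).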
| Algorithm | Top-1 | Top-5 |
| --- | --- | --- |
| CleanLab | 0.6740 | 0.8868 |
| Co-Teaching | 0.6148 | 0.8470 |
| MentorNet | 0.5780 | 0.7992 |
| DivideMix | 0.7520 | 0.9084 |
| Conv.net (r=100%) | — | 0.8200 |
| Conv.net (r=10%) | — | 0.8190 |
| bottom-up (r=10%) | — | 0.8350 |
| bottom-up (r=20%) | — | 0.8340 |

Tab. 2 Top-1 and Top-5 accuracies on ImageNet-2012 test set
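Tab. 2 reports Top-1 and Top-5 accuracy: a prediction counts as correct under Top-k when the true class appears among the k highest-scoring outputs. A short sketch of how such metrics are typically computed (the helper name `topk_accuracy` is ours, for illustration only):

```python
import torch

def topk_accuracy(logits, targets, ks=(1, 5)):
    """Fraction of samples whose true label is in the top-k predictions."""
    # Indices of the max(ks) highest-scoring classes, shape (batch, max_k).
    _, pred = logits.topk(max(ks), dim=1)
    # Broadcast-compare each row of predictions against the true label.
    correct = pred.eq(targets.view(-1, 1))
    # Top-k: correct if the label appears in the first k columns.
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}
```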
| Algorithm | r=20% | r=40% | r=50% | r=60% | r=80% | r=90% |
| --- | --- | --- | --- | --- | --- | --- |
| CE-Loss | 0.4700 | 0.3434 | — | 0.1937 | 0.0734 | — |
| MAE-Loss | 0.3333 | 0.2656 | — | 0.1226 | 0.0201 | — |
| SCE-Loss | 0.4732 | 0.3387 | — | 0.1879 | 0.0728 | — |
| Taylor-Loss | 0.5911 | 0.5099 | — | 0.3831 | 0.1596 | — |
| BT-Loss | 0.7330 | 0.6255 | 0.5780 | — | — | — |
| Mixup | 0.6780 | — | 0.5730 | — | 0.3080 | 0.1460 |
| Co-Teaching+ | 0.6560 | — | 0.5180 | — | 0.2790 | 0.1370 |
| Meta-Learning | 0.6850 | — | 0.5920 | — | 0.4240 | 0.1950 |
| DivideMix | 0.7730 | — | 0.4760 | — | 0.6020 | 0.3150 |

Tab. 3 Comparison of test accuracy on CIFAR-100 under different noise ratios r
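Among the robust losses compared in Tab. 3, SCE-Loss (symmetric cross entropy) adds to the standard cross entropy a reverse cross entropy term, in which the undefined log 0 arising from the one-hot label is clamped to a negative constant A. A hedged PyTorch-style sketch of that formulation follows; the weights `alpha` and `beta` and the choice A = -4 are illustrative defaults, not the settings behind the numbers above.

```python
import torch
import torch.nn.functional as F

def sce_loss(logits, targets, alpha=0.1, beta=1.0, A=-4.0):
    """Symmetric cross entropy: alpha * CE + beta * reverse CE."""
    ce = F.cross_entropy(logits, targets)

    # Reverse CE: -sum_k p(k|x) * log q(k), with q the one-hot label
    # distribution, so log q(k) is 0 on the labelled class and A elsewhere.
    pred = F.softmax(logits, dim=1)
    onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
    log_label = torch.where(onehot > 0,
                            torch.zeros_like(onehot),    # log 1 = 0
                            torch.full_like(onehot, A))  # log 0 -> A
    rce = -(pred * log_label).sum(dim=1).mean()
    return alpha * ce + beta * rce
```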
| Algorithm | Adaptability | High noise | Training cost | Hyperparameter sensitivity | Weak regularization |
| --- | --- | --- | --- | --- | --- |
| CleanLab | Y | Y | High | N | N |
| DivideMix | Y | Y | High | N | N |
| Sample reweighting | Y | N | Low | ★ | Y |
| MLNT | Y | Y | High | N | N |
| BT-Loss | Y | Y | Low | Y | N |
| Taylor-Loss | Y | Y | Low | Y | N |
| Robust architecture | N | N | Medium | N | Y |
| Model regularization | Y | ★ | Low | ★ | Y |
| Self-training | Y | ★ | High | N | N |
| Co-training | Y | ★ | High | N | N |
| MentorNet | N | Y | High | N | N |
| Label smoothing | Y | N | Low | ★ | Y |
| Meta pseudo labels | Y | Y | High | N | N |
| (Virtual) adversarial training | Y | Y | High | N | N |

Tab. 4 Comparison of attributes of various algorithms
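As an example of the low-training-cost regularizers in Tab. 4, label smoothing replaces the one-hot target with a mixture of the hard label and the uniform distribution, so the network never fits a possibly mislabelled sample with full confidence. A minimal sketch, assuming a tunable smoothing factor `eps` (the name and default value are ours):

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, targets, eps=0.1):
    """Cross entropy against a smoothed target distribution."""
    n_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    # Smoothed target: (1 - eps) mass on the labelled class, eps spread
    # uniformly over all classes (each row still sums to 1).
    smooth = torch.full_like(log_probs, eps / n_classes)
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps + eps / n_classes)
    return -(smooth * log_probs).sum(dim=1).mean()
```

With eps = 0 this reduces to ordinary cross entropy, which is a convenient sanity check on the implementation.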
1 | LIN T Y, MAIRE M, BELONGIE S, et al. Microsoft COCO: common objects in context[C]// Proceedings of the 2014 European Conference on Computer Vision, LNCS 8693. Cham: Springer, 2014: 740-755. |
2 | DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]// Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2009: 248-255. 10.1109/cvpr.2009.5206848 |
3 | RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252. 10.1007/s11263-015-0816-y |
4 | SAMBASIVAN N, KAPANIA S, HIGHFILL H, et al. “Everyone wants to do the model work, not the data work”: data cascades in high-stakes AI[C]// Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2021: No.39. 10.1145/3411764.3445518 |
5 | NORTHCUTT C G, ATHALYE A, MUELLER J. Pervasive label errors in test sets destabilize machine learning benchmarks[EB/OL]. (2021-11-07)[2021-11-23].. |
6 | ROLNICK D, VEIT A, BELONGIE S, et al. Deep learning is robust to massive label noise[EB/OL]. (2018-02-26) [2021-12-03].. |
7 | DRORY A, RATZON O, AVIDAN S, et al. The resistance to label noise in K-NN and DNN depends on its concentration[EB/OL]. (2020-12-03) [2021-11-20].. |
8 | FRENAY B, VERLEYSEN M. Classification in the presence of label noise: a survey[J]. IEEE Transactions on Neural Networks and Learning Systems, 2014, 25(5): 845-869. 10.1109/tnnls.2013.2292894 |
9 | SUKHBAATAR S, BRUNA J, PALURI M, et al. Training convolutional networks with noisy labels[EB/OL]. (2015-04-10) [2021-11-24].. |
10 | ZHANG C Y, BENGIO S, HARDT M, et al. Understanding deep learning (still) requires rethinking generalization[J]. Communications of the ACM, 2021, 64(3): 107-115. 10.1145/3446776 |
11 | SHORTEN C, KHOSHGOFTAAR T M. A survey on image data augmentation for deep learning[J]. Journal of Big Data, 2019, 6: No.60. 10.1186/s40537-019-0197-0 |
12 | SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15: 1929-1958. |
13 | CROWLEY J L. Pattern recognition and machine learning [EB/OL]. [2022-03-18]. . |
14 | XIA X B, LIU T L, HAN B, et al. Part-dependent label noise: towards instance-dependent label noise[C/OL]// Proceedings of the 34th Conference on Neural Information Processing Systems. [2021-11-22].. |
15 | CHENG H, ZHU Z W, LI X Y, et al. Learning with instance-dependent label noise: a sample sieve approach[EB/OL]. (2021-03-22) [2021-11-22].. |
16 | GHOSH A, MANWANI N, SASTRY P S. Making risk minimization tolerant to label noise[J]. Neurocomputing, 2015, 160: 93-107. 10.1016/j.neucom.2014.09.081 |
17 | LI S K, GE S M, HUA Y Y, et al. Coupled-view deep classifier learning from multiple noisy annotators[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2020: 4667-4674. 10.1609/aaai.v34i04.5898 |
18 | LI S K, LIU T L, TAN J Y, et al. Trustable co-label learning from multiple noisy annotators[J]. IEEE Transactions on Multimedia, 2021(Early Access):1-1. 10.1109/tmm.2021.3137752 |
19 | LIU W, JIANG Y G, LUO J B, et al. Noise resistant graph ranking for improved web image search[C]// Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2011: 849-856. 10.1109/cvpr.2011.5995315 |
20 | ARPIT D, JASTRZĘBSKI S, BALLAS N, et al. A closer look at memorization in deep networks[C]// Proceedings of the 34th International Conference on Machine Learning. New York: JMLR.org, 2017: 233-242. |
21 | ZHOU M Y, WU J, LIU Y P, et al. DaST: data-free substitute training for adversarial attacks[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 231-240. 10.1109/cvpr42600.2020.00031 |
22 | KARIMI D, DOU H R, WARFIELD S K, et al. Deep learning with noisy labels: exploring techniques and remedies in medical image analysis[J]. Medical Image Analysis, 2020, 65: No.101759. 10.1016/j.media.2020.101759 |
23 | HARTONO P, HASHIMOTO S. Learning from imperfect data[J]. Applied Soft Computing, 2007, 7(1): 353-363. 10.1016/j.asoc.2005.07.005 |
24 | BRODLEY C E, FRIEDL M A. Identifying mislabeled training data[J]. Journal of Artificial Intelligence Research, 1999, 11: 131-167. 10.1613/jair.606 |
25 | MARCUS M P, SANTORINI B, MARCINKIEWICZ M A. Building a large annotated corpus of English: the Penn Treebank[J]. Computational Linguistics, 1993, 19(2): 313-330. |
26 | SNOW R, O’CONNOR B, JURAFSKY D, et al. Cheap and fast — but is it good? evaluating non-expert annotations for natural language tasks[C]// Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2008: 254-263. 10.3115/1613715.1613751 |
27 | SCULLEY D, CORMACK G V. Filtering email spam in the presence of noisy user feedback[C/OL]// Proceedings of the 5th Conference on Email and Anti-Spam. [2021-11-25].. |
28 | RÄTSCH G, ONODA T, MÜLLER K R. Soft margins for AdaBoost[J]. Machine Learning, 2001, 42(3): 287-320. 10.1023/a:1007618119488 |
29 | SAFAVIAN S R, LANDGREBE D. A survey of decision tree classifier methodology[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1991, 21(3): 660-674. 10.1109/21.97458 |
30 | DAWID A P, SKENE A M. Maximum likelihood estimation of observer error-rates using the EM algorithm[J]. Journal of the Royal Statistical Society: Series C (Applied Statistics), 1979, 28(1): 20-28. 10.2307/2346806 |
31 | ZHANG Z H, JIANG G X, WANG W J. Label noise filtering method based on dynamic probability sampling[J]. Journal of Computer Applications, 2021, 41(12): 3485-3491. |
32 | CHEN Q Q, WANG W J, JIANG G X. Label noise filtering based on the data distribution[J]. Journal of Tsinghua University (Science and Technology), 2019, 59(4): 262-269. |
33 | MENG X C, JIANG G X, WANG W J. A method of label noise cleaning based on active learning[J]. Journal of Shaanxi Normal University (Natural Science Edition), 2020, 48(2): 9-16. |
34 | CHEN Q, YANG M, WEI P F. Reweighting semi-supervised classification for noisy labels[J]. Journal of Yantai University (Natural Science and Engineering Edition), 2019, 32(3): 205-209. |
35 | NORTHCUTT C G, JIANG L, CHUANG I L. Confident learning: estimating uncertainty in dataset labels[J]. Journal of Artificial Intelligence Research, 2021, 70: 1373-1411. 10.1613/jair.1.12125 |
36 | LI J N, SOCHER R, HOI S C H. DivideMix: learning with noisy labels as semi-supervised learning[EB/OL]. (2020-02-18) [2021-11-24].. |
37 | REN M Y, ZENG W Y, YANG B, et al. Learning to reweight examples for robust deep learning[C]// Proceedings of the 35th International Conference on Machine Learning. New York: JMLR.org, 2018: 4334-4343. |
38 | SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 2818-2826. 10.1109/cvpr.2016.308 |
39 | CHOROWSKI J, JAITLY N. Towards better decoding and language model integration in sequence to sequence models[C]// Proceedings of the Interspeech 2017. [S.l.]: International Speech Communication Association, 2017: 523-527. 10.21437/interspeech.2017-343 |
40 | VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017:6000-6010. |
41 | ZOPH B, VASUDEVAN V, SHLENS J, et al. Learning transferable architectures for scalable image recognition[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 8697-8710. 10.1109/cvpr.2018.00907 |
42 | REAL E, AGGARWAL A, HUANG Y P, et al. Regularized evolution for image classifier architecture search[C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2019: 4780-4789. 10.1609/aaai.v33i01.33014780 |
43 | HUANG Y P, CHENG Y L, BAPNA A, et al. GPipe: efficient training of giant neural networks using pipeline parallelism[C/OL]// Proceedings of the 33rd Conference on Neural Information Processing Systems. [2021-11-22].. |
44 | ARAZO E, ORTEGO D, ALBERT P, et al. Unsupervised label noise modeling and loss correction[C]// Proceedings of the 2019 International Conference on Machine Learning. New York: PMLR, 2019: 312-321. |
45 | LUKASIK M, BHOJANAPALLI S, MENON A K, et al. Does label smoothing mitigate label noise?[C]// Proceedings of the 37th International Conference on Machine Learning. New York: JMLR.org, 2020: 6448-6458. 10.18653/v1/2020.emnlp-main.405 |
46 | YU M C, MU J P, CAI J, et al. Noisy label classification learning based on relabeling method[J]. Computer Science, 2020, 47(6): 79-84. 10.11896/jsjkx.190600041 |
47 | HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. (2015-03-09) [2021-11-24].. |
48 | CHO J H, HARIHARAN B. On the efficacy of knowledge distillation[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 4793-4801. 10.1109/iccv.2019.00489 |
49 | XIE Q Z, LUONG M T, HOVY E, et al. Self-training with noisy student improves ImageNet classification[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 10684-10695. 10.1109/cvpr42600.2020.01070 |
50 | YANG C L, XIE L X, QIAO S Y, et al. Training deep neural networks in generations: a more tolerant teacher educates better students[C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2019: 5628-5635. 10.1609/aaai.v33i01.33015628 |
51 | YIM J, JOO D, BAE J, et al. A gift from knowledge distillation: fast optimization, network minimization and transfer learning[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 7130-7138. 10.1109/cvpr.2017.754 |
52 | PHAM H, DAI Z H, XIE Q Z, et al. Meta pseudo labels[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 11552-11563. 10.1109/cvpr46437.2021.01139 |
53 | WANG X L, XUE L. Review on label noise learning algorithms[J]. Computer Systems and Applications, 2021, 30(1): 10-18. |
54 | PATRINI G, ROZZA A, MENON A K, et al. Making deep neural networks robust to label noise: a loss correction approach[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2233-2241. 10.1109/cvpr.2017.240 |
55 | GHOSH A, KUMAR H, SASTRY P S. Robust loss functions under label noise for deep neural networks[C]// Proceedings of the 31st AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2017: 1919-1925. 10.1609/aaai.v31i1.10894 |
56 | WANG Y S, MA X J, CHEN Z Y, et al. Symmetric cross entropy for robust learning with noisy labels[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 322-330. 10.1109/iccv.2019.00041 |
57 | ZHANG Z L, SABUNCU M R. Generalized cross entropy loss for training deep neural networks with noisy labels[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2018: 8792-8802. |
58 | MA X J, HUANG H X, WANG Y S, et al. Normalized loss functions for deep learning with noisy labels[C]// Proceedings of the 37th International Conference on Machine Learning. New York: JMLR.org, 2020: 6543-6553. |
59 | FENG L, SHU S L, LIN Z Y, et al. Can cross entropy loss be robust to label noise?[C]// Proceedings of the 29th International Joint Conferences on Artificial Intelligence. California: ijcai.org, 2020: 2206-2212. 10.24963/ijcai.2020/305 |
60 | AMID E, WARMUTH M K, ANIL R, et al. Robust bi-tempered logistic loss based on Bregman divergences[EB/OL]. (2019-09-23) [2021-11-20].. |
61 | AMID E, WARMUTH M K, SRINIVASAN S. Two-temperature logistic regression based on the Tsallis divergence[C]// Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics. New York: JMLR.org, 2019: 2388-2396. |
62 | ZHOU X, LIU X M, WANG C Y, et al. Learning with noisy labels via sparse regularization[C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 72-81. 10.1109/iccv48922.2021.00014 |
63 | GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[EB/OL]. (2015-03-20) [2021-12-04].. |
64 | MIYATO T, DAI A M, GOODFELLOW I. Adversarial training methods for semi-supervised text classification[EB/OL]. (2021-11-16) [2021-12-04].. |
65 | MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[EB/OL]. (2019-09-04) [2021-12-04].. 10.48550/arXiv.1706.06083 |
66 | SHAFAHI A, NAJIBI M, GHIASI A, et al. Adversarial training for free![C/OL]// Proceedings of the 33rd Conference on Neural Information Processing Systems. [2021-11-22].. 10.1609/aaai.v34i04.6017 |
67 | ZHANG D H, ZHANG T Y, LU Y P, et al. You only propagate once: accelerating adversarial training via maximal principle[C/OL]// Proceedings of the 33rd Conference on Neural Information Processing Systems. [2021-11-22].. |
68 | ZHU C, CHENG Y, GAN Z, et al. FreeLB: enhanced adversarial training for natural language understanding[EB/OL]. (2020-04-23) [2021-11-24].. |
69 | MIYATO T, MAEDA S, KOYAMA M, et al. Virtual adversarial training: a regularization method for supervised and semi-supervised learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(8): 1979-1993. 10.1109/tpami.2018.2858821 |
70 | SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[EB/OL]. (2014-02-19) [2021-11-24].. |
71 | MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 2574-2582. 10.1109/cvpr.2016.282 |
72 | CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// Proceedings of the 2017 IEEE Symposium on Security and Privacy. Piscataway: IEEE, 2017: 39-57. 10.1109/sp.2017.49 |
73 | BARANDELA R, GASCA E. Decontamination of training samples for supervised pattern recognition methods[C]// Proceedings of the 2000 Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), LNCS 1876. Berlin: Springer, 2000: 621-630. |
74 | GUYON I, MATIĆ N, VAPNIK V. Discovering informative patterns and data cleaning[M]// FAYYAD U M, PIATETSKY-SHAPIRO G, SMYTH P, et al. Advances in Knowledge Discovery and Data Mining. Menlo Park, CA: AAAI Press, 1996: 181-203. |
75 | SUKHBAATAR S, FERGUS R. Learning from noisy labels with deep neural networks[EB/OL]. [2021-11-24].. 10.1109/tnnls.2022.3152527 |
76 | GOLDBERGER J, BEN-REUVEN E. Training deep neural-networks using a noise adaptation layer[EB/OL]. [2021-11-24].. |
77 | JIANG Z L, SILOVSKY J, SIU M H, et al. Learning from noisy labels with noise modeling network[EB/OL]. (2020-05-01) [2021-11-24].. |
78 | HAN B, YAO J C, NIU G, et al. Masking: a new perspective of noisy supervision[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2018: 5841-5851. |
79 | GOODFELLOW I J, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. Cambridge: MIT Press, 2014: 2672-2680. |
80 | LI J N, WONG Y, ZHAO Q, et al. Learning to learn from noisy labeled data[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 5046-5054. 10.1109/cvpr.2019.00519 |
81 | JINDAL I, NOKLEBY M, CHEN X W. Learning deep networks from noisy labels with Dropout regularization[C]// Proceedings of the IEEE 16th International Conference on Data Mining. Piscataway: IEEE, 2016: 967-972. 10.1109/icdm.2016.0121 |
82 | JIANG L, ZHOU Z Y, LEUNG T, et al. MentorNet: learning data-driven curriculum for very deep neural networks on corrupted labels[C]// Proceedings of the 35th International Conference on Machine Learning. New York: JMLR.org, 2018: 2304-2313. |
83 | LI M, ZHOU Z H. SETRED: self-training with editing[C]// Proceedings of the 2005 Pacific-Asia Conference on Knowledge Discovery and Data Mining, LNCS 3518. Berlin: Springer, 2005: 611-621. |
84 | BLUM A, MITCHELL T. Combining labeled and unlabeled data with co-training[C]// Proceedings of the 11th Annual Conference on Computational Learning Theory. New York: ACM, 1998: 92-100. 10.1145/279943.279962 |
85 | YU X R, HAN B, YAO J C, et al. How does disagreement help generalization against label corruption?[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 7164-7173. |
86 | ZHANG H Y, CISSE M, DAUPHIN Y N, et al. mixup: Beyond empirical risk minimization[EB/OL]. (2018-04-27) [2021-12-04].. |