[1] SAMBASIVAN N, KAPANIA S, HIGHFILL H, et al. "Everyone wants to do the model work, not the data work": data cascades in high-stakes AI[C]// Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2021: No.39.
[2] ZHANG H, CISSE M, DAUPHIN Y N, et al. mixup: beyond empirical risk minimization[EB/OL]. [2023-10-30].
[3] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15: 1929-1958.
[4] 张增辉, 姜高霞, 王文剑. 基于动态概率抽样的标签噪声过滤方法[J]. 计算机应用, 2021, 41(12): 3485-3491.
ZHANG Z H, JIANG G X, WANG W J. Label noise filtering method based on dynamic probability sampling[J]. Journal of Computer Applications, 2021, 41(12): 3485-3491.
[5] 魏翔, 王靖杰, 张顺利, 等. ReLSL:基于可靠标签选择与学习的半监督学习算法[J]. 计算机学报, 2022, 45(6): 1147-1160.
WEI X, WANG J J, ZHANG S L, et al. ReLSL: reliable label selection and learning based algorithm for semi-supervised learning[J]. Chinese Journal of Computers, 2022, 45(6): 1147-1160.
[6] ZHANG Y, ZHENG S, WU P, et al. Learning with feature-dependent label noise: a progressive approach[EB/OL]. [2023-09-05].
[7] 余游, 冯林, 王格格, 等. 一种基于伪标签的半监督少样本学习模型[J]. 电子学报, 2019, 47(11): 2284-2291.
YU Y, FENG L, WANG G G, et al. A few-shot learning model based on semi-supervised with pseudo label[J]. Acta Electronica Sinica, 2019, 47(11): 2284-2291.
[8] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]// Proceedings of the 34th International Conference on Machine Learning. New York: JMLR.org, 2017: 1126-1135.
[9] 伏博毅, 彭云聪, 蓝鑫, 等. 基于深度学习的标签噪声学习算法综述[J]. 计算机应用, 2023, 43(3): 674-684.
FU B Y, PENG Y C, LAN X, et al. Survey of label noise learning algorithms based on deep learning[J]. Journal of Computer Applications, 2023, 43(3): 674-684.
[10] PATRINI G, ROZZA A, MENON A K, et al. Making deep neural networks robust to label noise: a loss correction approach[C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 2233-2241.
[11] HAN B, YAO Q, YU X, et al. Co-teaching: robust training of deep neural networks with extremely noisy labels[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2018: 8536-8546.
[12] SUKHBAATAR S, FERGUS R. Learning from noisy labels with deep neural networks[EB/OL]. [2023-12-11].
[13] HENDRYCKS D, MAZEIKA M, WILSON D, et al. Using trusted data to train deep networks on labels corrupted by severe noise[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2018: 10477-10486.
[14] LI Y, YANG J, SONG Y, et al. Learning from noisy labels with distillation[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 1928-1936.
[15] SHU J, XIE Q, YI L, et al. Meta-weight-net: learning an explicit mapping for sample weighting[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2019: 1919-1930.
[16] ZHENG G, AWADALLAH A H, DUMAIS S. Meta label correction for noisy label learning[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2021: 11053-11061.
[17] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[18] ZHANG C, BENGIO S, HARDT M, et al. Understanding deep learning (still) requires rethinking generalization[J]. Communications of the ACM, 2021, 64(3): 107-115.
[19] LIU S, NILES-WEED J, RAZAVIAN N, et al. Early-learning regularization prevents memorization of noisy labels[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2020: 20331-20342.
[20] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2024-01-08].
[21] XIAO T, XIA T, YANG Y, et al. Learning from massive noisy labeled data for image classification[C]// Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2015: 2691-2699.
[22] ZHANG Z, SABUNCU M R. Generalized cross entropy loss for training deep neural networks with noisy labels[C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2018: 8792-8802.
[23] WU Y, SHU J, XIE Q, et al. Learning to purify noisy labels via meta soft label corrector[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2021: 10388-10396.
[24] REED S E, LEE H, ANGUELOV D, et al. Training deep neural networks on noisy labels with bootstrapping[EB/OL]. [2023-11-30].