Journal of Computer Applications official website ›› 2025, Vol. 45 ›› Issue (4): 1095-1103. DOI: 10.11772/j.issn.1001-9081.2023121852
Yiqin YAN1, Chuan LUO1(), Tianrui LI2, Hongmei CHEN2
Received:
2024-01-09
Revised:
2024-03-13
Accepted:
2024-03-18
Online:
2024-04-28
Published:
2025-04-10
Contact:
Chuan LUO
About author:
YAN Yiqin, born in 1998, M.S. candidate. His research interests include machine learning and computer vision.
Abstract:
To address the low classification accuracy of few-shot learning models under domain shift, a cross-domain few-shot image classification model named ReViT (Relation ViT), based on the relation network and ViT (Vision Transformer), was proposed. First, ViT was introduced as the feature extractor, and a pre-trained deep neural network was used to overcome the limited representational capacity of shallow networks. Second, a shallow convolutional network served as a task adapter to improve knowledge transfer, and a nonlinear classifier was built on the relation network and a channel attention mechanism. Then, the features of the extractor and the adapter were fused to strengthen the model's generalization ability. Finally, a four-stage "pre-training, meta-learning, fine-tuning, meta-testing" strategy was adopted to train the model, effectively combining transfer learning with meta-learning and further improving the cross-domain classification performance of ReViT. Experimental results with average classification accuracy as the metric show that ReViT performs well on cross-domain few-shot classification. Specifically, compared with the second-best model, ReViT improves accuracy by 5.82 and 1.71 percentage points in the in-domain and out-of-domain scenarios of Meta-Dataset, respectively; on the three sub-problems of the BCDFSL (Broader study of Cross-Domain Few-Shot Learning) benchmark, namely EuroSAT (European SATellite data), CropDisease, and ISIC (International Skin Imaging Collaboration), it improves the 5-way 5-shot accuracy by 1.00, 1.54, and 2.43 percentage points, the 5-way 20-shot accuracy by 0.13, 0.97, and 3.40 percentage points, and the 5-way 50-shot accuracy on CropDisease by 0.36 percentage points. ReViT therefore maintains good accuracy on image classification tasks with scarce samples.
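The abstract's nonlinear classifier combines a relation network (learned similarity between a query embedding and each class prototype) with squeeze-and-excitation style channel attention. The following is a minimal NumPy sketch of those two ideas only, not the paper's architecture; all function names, weight shapes, and dimensions are illustrative assumptions.

```python
import numpy as np


def se_channel_attention(features, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative sketch)."""
    # Squeeze: average each channel over the token axis -> one value per channel
    s = features.mean(axis=0)
    # Excitation: bottleneck MLP (ReLU, then sigmoid) produces a gate per channel
    gate = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ s, 0.0))))
    # Scale: reweight every channel of the input by its gate
    return features * gate


def relation_scores(query, prototypes, w):
    """Relation-network style scoring (illustrative sketch): concatenate the
    query embedding with each class prototype and score the pair with a small
    nonlinear module; the sigmoid keeps each score in (0, 1)."""
    scores = []
    for proto in prototypes:
        pair = np.concatenate([query, proto])
        hidden = np.maximum(w["fc1"] @ pair, 0.0)  # ReLU hidden layer
        scores.append(float(1.0 / (1.0 + np.exp(-(w["fc2"] @ hidden)))))
    return scores
```

In the sketch, the class with the highest relation score would be the prediction; the real model learns `w1`, `w2`, and the relation-module weights end to end.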
Yiqin YAN, Chuan LUO, Tianrui LI, Hongmei CHEN. Cross-domain few-shot classification model based on relation network and Vision Transformer[J]. Journal of Computer Applications, 2025, 45(4): 1095-1103.
| Model | ChestX 5-shot | ChestX 20-shot | ChestX 50-shot | ISIC 5-shot | ISIC 20-shot | ISIC 50-shot | EuroSAT 5-shot | EuroSAT 20-shot | EuroSAT 50-shot | CropDisease 5-shot | CropDisease 20-shot | CropDisease 50-shot |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ProtoNet | 24.05 | 28.21 | 29.32 | 39.57 | 49.50 | 51.99 | 73.29 | 82.27 | 80.48 | 79.72 | 88.15 | 90.81 |
| SPFSL | 27.13 | 31.57 | 34.17 | 43.78 | 54.06 | 57.86 | 89.18 | 93.08 | 96.06 | 95.06 | 97.25 | 97.77 |
| STARTUP | 26.94 | 33.19 | 36.91 | 47.22 | 58.63 | 64.16 | 82.29 | 89.26 | 91.99 | 93.02 | 97.51 | 98.45 |
| CHEF | 24.72 | 29.71 | 31.25 | 41.26 | 54.30 | 60.86 | 74.15 | 83.31 | 86.55 | 86.87 | 94.78 | 96.77 |
| ReViT(B) | 25.05 | 27.64 | 31.33 | 48.38 | 58.34 | 62.29 | 81.95 | 88.04 | 89.12 | 96.32 | 97.66 | 98.06 |
| ReViT(A) | 23.87 | 26.13 | 28.43 | 48.28 | 59.21 | 61.16 | 83.01 | 87.23 | 88.49 | 96.60 | 98.48 | 98.81 |
| ReViT(D) | 25.54 | 31.29 | 33.60 | 49.65 | 62.03 | 63.04 | 90.18 | 93.21 | 93.94 | 94.38 | 97.76 | 98.16 |
Tab. 1 Average classification accuracy on BCDFSL dataset / % (all settings are 5-way)
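The average accuracies reported above come from sampling many N-way K-shot episodes and averaging per-episode accuracy. As a minimal sketch of that evaluation protocol (using a nearest-prototype classifier on synthetic embeddings; the function name, data layout, and classifier are illustrative assumptions, not the paper's method):

```python
import numpy as np


def episode_accuracy(data, n_way=5, k_shot=5, n_query=15, rng=None):
    """One N-way K-shot episode: sample n_way classes, build a prototype from
    k_shot support samples per class, then classify n_query held-out samples
    per class by nearest prototype (Euclidean distance)."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(sorted(data), size=n_way, replace=False)
    protos, queries, labels = [], [], []
    for i, c in enumerate(classes):
        x = data[c][rng.permutation(len(data[c]))]   # shuffle this class
        protos.append(x[:k_shot].mean(axis=0))       # support -> prototype
        queries.append(x[k_shot:k_shot + n_query])   # held-out queries
        labels.append(np.full(len(queries[-1]), i))
    p, q, y = np.stack(protos), np.concatenate(queries), np.concatenate(labels)
    dist = ((q[:, None, :] - p[None, :, :]) ** 2).sum(axis=-1)
    return float((dist.argmin(axis=1) == y).mean())
```

Benchmark numbers are the mean of this quantity over hundreds of sampled episodes, often reported with a 95% confidence interval.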
| Model | INet | Olt | AC | CUB | DT | QD | Fg | Flr | TS | MCC | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ProtoNet | 67.01 | 44.50 | 79.56 | 71.14 | 67.01 | 65.18 | 64.88 | 40.26 | 86.85 | 46.48 | 63.29 |
| ITA | 57.35 | 94.96 | 87.91 | 85.91 | 76.74 | 82.01 | 67.40 | 92.18 | 83.55 | 55.75 | 78.07 |
| CTX | 60.30 | 87.91 | 85.58 | 93.93 | 73.15 | 71.73 | 65.89 | 91.50 | 73.98 | 63.11 | 76.71 |
| SPFSL | 67.51 | 85.91 | 80.30 | 81.67 | 87.80 | 72.84 | 60.03 | 94.69 | 87.17 | 58.92 | 77.61 |
| ReViT(B) | 79.14 | 91.21 | 91.43 | 93.95 | 87.96 | 80.17 | 76.90 | 93.86 | 74.38 | 69.93 | 83.89 |
| ReViT(A) | 78.17 | 92.95 | 88.77 | 93.77 | 86.28 | 79.24 | 77.07 | 94.30 | 75.74 | 78.17 | 83.23 |
| ReViT(D) | 71.46 | 91.44 | 77.67 | 89.21 | 84.57 | 76.28 | 74.25 | 95.21 | 68.37 | 64.95 | 79.34 |
Tab. 2 Average classification accuracy in multi-domain scenarios on Meta-Dataset / % (in-domain: INet-Flr; out-of-domain: TS, MCC)
| Model | INet | Olt | AC | CUB | DT | QD | Fg | Flr | TS | MCC | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ProtoNet | 50.50 | 59.98 | 53.10 | 68.79 | 66.56 | 48.96 | 39.71 | 85.27 | 47.12 | 41.00 | 56.10 |
| ITA | 63.72 | 82.58 | 80.13 | 83.35 | 79.58 | 70.96 | 51.27 | 94.04 | 81.71 | 61.72 | 74.91 |
| CTX | 62.76 | 82.21 | 79.49 | 80.63 | 75.57 | 72.68 | 51.58 | 95.34 | 82.65 | 59.90 | 74.28 |
| SPFSL | 76.69 | 81.42 | 80.33 | 84.38 | 86.87 | 75.43 | 55.93 | 95.14 | 89.68 | 65.01 | 79.09 |
| ReViT(B) | 79.25 | 89.47 | 78.60 | 93.87 | 79.84 | 79.03 | 55.50 | 94.00 | 86.67 | 71.75 | 80.80 |
| ReViT(A) | 78.75 | 88.91 | 75.21 | 93.97 | 79.14 | 77.49 | 58.10 | 94.72 | 88.00 | 66.24 | 80.05 |
| ReViT(D) | 71.82 | 89.53 | 67.36 | 87.07 | 79.68 | 72.19 | 66.28 | 95.50 | 81.62 | 65.46 | 77.65 |
Tab. 3 Average classification accuracy in cross-domain scenarios on Meta-Dataset / % (in-domain: INet; out-of-domain: others)
| Learning rate | ReViT(B) | ReViT(D) | ReViT(A) |
|---|---|---|---|
| 0.100 | 58.34 | 61.97 | 59.17 |
| 0.050 | 58.31 | 61.94 | 59.23 |
| 0.010 | 58.34 | 62.03 | 59.21 |
| 0.005 | 58.26 | 62.05 | 59.22 |
| 0.001 | 58.35 | 61.99 | 59.16 |
Tab. 4 Experimental results of learning rate sensitivity (classification accuracy / %)
| Transformer | Adapter | Relation network | Fine-tuning | Accuracy/% |
|---|---|---|---|---|
|  | √ | √ | √ | 59.87 |
| √ |  | √ | √ | 53.91 |
| √ | √ |  | √ | 58.26 |
| √ | √ | √ |  | 41.77 |
| √ | √ | √ | √ | 62.03 |
Tab. 5 Ablation experimental results