Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (2): 354-361. DOI: 10.11772/j.issn.1001-9081.2024020212
• Artificial Intelligence •
Sheng YANG 1,2,3, Yan LI 1,2
Received: 2024-03-04
Revised: 2024-04-23
Accepted: 2024-04-24
Online: 2024-06-04
Published: 2025-02-10
Contact: Yan LI
About author: YANG Sheng, born in 1999 in Shangrao, Jiangxi, M. S. candidate. His research interests include model compression and knowledge distillation.
Sheng YANG, Yan LI. Contrastive knowledge distillation method for object detection[J]. Journal of Computer Applications, 2025, 45(2): 354-361.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024020212
| Dataset | Number of classes | Training samples | Validation samples |
| --- | --- | --- | --- |
| COCO2014 | 80 | 82 783 | 40 540 |
| Pascal VOC | 20 | 16 551 | 4 953 |
Tab. 1 COCO2014 and Pascal VOC datasets
| Method | Dataset | Model | AP50 | mAP |
| --- | --- | --- | --- | --- |
| Teacher | COCO2014 | YOLOv5x | 51.2 | 33.3 |
| Teacher | Pascal VOC | YOLOv5x | 77.9 | 55.8 |
| Baseline | COCO2014 | YOLOv5l | 48.7 | 31.2 |
| Baseline | COCO2014 | YOLOv5m | 44.9 | 27.9 |
| Baseline | Pascal VOC | YOLOv5m | 74.2 | 51.1 |
| Baseline | Pascal VOC | YOLOv5s | 69.8 | 45.5 |
| FGFI[24] | COCO2014 | YOLOv5l | 49.3 | 31.6 |
| FGFI[24] | COCO2014 | YOLOv5m | 45.3 | 28.0 |
| FGFI[24] | Pascal VOC | YOLOv5m | 74.7 | 51.6 |
| FGFI[24] | Pascal VOC | YOLOv5s | 70.4 | 45.9 |
| TADF[25] | COCO2014 | YOLOv5l | 49.3 | 31.6 |
| TADF[25] | COCO2014 | YOLOv5m | 45.3 | 28.1 |
| TADF[25] | Pascal VOC | YOLOv5m | 75.2 | 51.6 |
| TADF[25] | Pascal VOC | YOLOv5s | 70.2 | 45.6 |
| DeFeat[26] | COCO2014 | YOLOv5l | 49.4 | 31.6 |
| DeFeat[26] | COCO2014 | YOLOv5m | 45.5 | 28.2 |
| DeFeat[26] | Pascal VOC | YOLOv5m | 75.0 | 51.5 |
| DeFeat[26] | Pascal VOC | YOLOv5s | 70.4 | 45.8 |
| CKD | COCO2014 | YOLOv5l | 50.4 | 32.3 |
| CKD | COCO2014 | YOLOv5m | 45.9 | 28.5 |
| CKD | Pascal VOC | YOLOv5m | 74.9 | 51.9 |
| CKD | Pascal VOC | YOLOv5s | 70.5 | 46.4 |
Tab. 2 Comparison of YOLOv5 results on COCO2014 and Pascal VOC datasets
| Method | Backbone | AP50 | mAP |
| --- | --- | --- | --- |
| Teacher | Res101 | 67.2 | 67.2 |
| Baseline | Res18 | 55.5 | 55.5 |
| LD[27] | Res18 | 55.4 | 55.4 |
| CKD | Res18 | 61.1 | 61.1 |
Tab. 3 Comparison of GFocal results on Pascal VOC dataset
| Method | Model | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| Baseline | YOLOv5s | 80.5 | 64.5 | 71.6 |
| FGFI | YOLOv5s | 81.1 | 64.7 | 72.0 |
| TADF | YOLOv5s | 80.2 | 65.4 | 72.0 |
| DeFeat | YOLOv5s | 80.8 | 64.8 | 71.9 |
| CKD | YOLOv5s | 82.0 | 65.9 | 73.1 |
Tab. 4 Comparison of precision, recall, and F1 results on Pascal VOC dataset
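As a consistency check, the F1 column is the harmonic mean of precision and recall; for the CKD row, F1 = 2 × 82.0 × 65.9 / (82.0 + 65.9) ≈ 73.1, which matches the value reported in Tab. 4.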
| Method | Model | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Teacher | YOLOv5x | 57.8 | 62.2 | 51.6 | 37.3 | 33.4 | 67.7 | 60.4 | 73.9 | 38.7 | 56.9 |
| Baseline | YOLOv5s | 49.7 | 51.6 | 37.4 | 28.5 | 21.0 | 59.4 | 51.1 | 60.8 | 26.9 | 46.0 |
| CKD | YOLOv5s | 50.5 | 52.4 | 37.7 | 29.4 | 21.2 | 60.5 | 51.2 | 62.3 | 28.5 | 46.1 |
| Improvement | | 0.8 | 0.8 | 0.3 | 0.9 | 0.2 | 1.1 | 0.1 | 1.5 | 1.6 | 0.1 |

| Method | Model | table | dog | horse | motorbike | person | plant | sheep | sofa | train | tv |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Teacher | YOLOv5x | 53.7 | 66.7 | 67.5 | 60.2 | 53.7 | 31.1 | 54.6 | 59.2 | 69.1 | 60.8 |
| Baseline | YOLOv5s | 45.2 | 54.0 | 56.6 | 50.0 | 44.4 | 22.6 | 44.4 | 50.2 | 60.3 | 50.3 |
| CKD | YOLOv5s | 47.0 | 54.7 | 57.5 | 52.0 | 44.4 | 24.2 | 44.9 | 51.5 | 61.1 | 50.2 |
| Improvement | | 1.8 | 0.7 | 0.9 | 2.0 | 0 | 1.8 | 0.5 | 1.3 | 0.8 | -0.1 |
Tab. 5 Comparison of per-class results on Pascal VOC dataset
| Method | Model | Training time/h | Inference speed/FPS |
| --- | --- | --- | --- |
| Baseline | YOLOv5m | 19.73 | 31.6 |
| FGFI | YOLOv5m | 28.00 | 31.6 |
| TADF | YOLOv5m | 27.98 | 31.6 |
| DeFeat | YOLOv5m | 27.98 | 31.6 |
| CKD | YOLOv5m | 31.23 | 31.6 |
Tab. 6 Comparison of training time and inference speed on Pascal VOC dataset
| Method | Model | AP50 | mAP |
| --- | --- | --- | --- |
| CKD + data augmentation | YOLOv5s | 4.0 | 1.1 |
| CKD | YOLOv5s | 70.5 | 46.4 |
Tab. 7 Comparison of CKD results with and without data augmentation
| Negative samples | Model | AP50 | mAP |
| --- | --- | --- | --- |
| Without | YOLOv5l | 48.7 | 31.2 |
| With | YOLOv5l | 50.4 | 32.3 |
Tab. 8 Comparison of results with and without negative samples
| Projection layer | Model | AP50 | mAP |
| --- | --- | --- | --- |
| Without | YOLOv5s | 61.4 | 36.9 |
| With | YOLOv5s | 70.5 | 46.4 |
Tab. 9 Comparison of results with and without projection layer
| Parameter update method | Model | AP50 | mAP |
| --- | --- | --- | --- |
| w/o grad | YOLOv5s | 50.4 | 32.3 |
| w/ grad | YOLOv5s | 49.8 | 32.0 |
| w/o grad, shared | YOLOv5s | 50.0 | 32.1 |
Tab. 10 Comparison of results with different parameter update methods for the projection layer
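Tabs. 8-10 ablate the main pieces of the contrastive branch: in-batch negative samples, the projection layer, and whether the projection parameters receive gradients. The snippet below is a minimal PyTorch sketch of an InfoNCE-style contrastive distillation loss in the spirit of CRD[28] and InfoNCE[39]; it is not the paper's released code, and the class name, feature shapes, embedding dimension, and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveDistillLoss(nn.Module):
    """InfoNCE-style contrastive distillation between teacher and student features.

    Positives are teacher/student features of the same region; the other regions
    in the batch act as negatives (cf. Tab. 8). Both branches pass through a
    projection layer (cf. Tab. 9); `teacher_grad` switches between the "w/ grad"
    and "w/o grad" update modes compared in Tab. 10 (one possible reading).
    """
    def __init__(self, s_dim, t_dim, embed_dim=128, temperature=0.07, teacher_grad=False):
        super().__init__()
        self.proj_s = nn.Linear(s_dim, embed_dim)  # student projection layer
        self.proj_t = nn.Linear(t_dim, embed_dim)  # teacher projection layer
        self.temperature = temperature
        self.teacher_grad = teacher_grad

    def forward(self, feat_s, feat_t):
        # feat_s: (N, s_dim) student region features; feat_t: (N, t_dim) teacher features
        z_s = F.normalize(self.proj_s(feat_s), dim=1)
        z_t = self.proj_t(feat_t)
        if not self.teacher_grad:
            z_t = z_t.detach()                     # stop-gradient ("w/o grad")
        z_t = F.normalize(z_t, dim=1)
        logits = z_s @ z_t.t() / self.temperature  # (N, N) similarity matrix
        targets = torch.arange(feat_s.size(0), device=feat_s.device)
        # Diagonal entries are positive pairs; off-diagonal ones are in-batch negatives.
        return F.cross_entropy(logits, targets)

# Illustrative usage with random tensors standing in for pooled detector features.
loss_fn = ContrastiveDistillLoss(s_dim=256, t_dim=640)
loss = loss_fn(torch.randn(32, 256), torch.randn(32, 640))
```

Under this reading, `teacher_grad=False` keeps the teacher-side projection fixed, which corresponds to the best-performing "w/o grad" row in Tab. 10.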
[1] ZOU Z, CHEN K, SHI Z, et al. Object detection in 20 years: a survey[J]. Proceedings of the IEEE, 2023, 111(3): 257-276.
[2] HAN S, POOL J, TRAN J, et al. Learning both weights and connections for efficient neural networks[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems — Volume 1. Cambridge: MIT Press, 2015: 1135-1143.
[3] GONG C, LU Y, DAI S R, et al. Ultra-low loss quantization method for deep neural network compression[J]. Journal of Software, 2021, 32(8): 2391-2407.
[4] RASTEGARI M, ORDONEZ V, REDMON J, et al. XNOR-Net: ImageNet classification using binary convolutional neural networks[C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9908. Cham: Springer, 2016: 525-542.
[5] HAN S, MAO H, DALLY W J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding[EB/OL]. [2024-01-13].
[6] LI R, WANG Y, LIANG F, et al. Fully quantized network for object detection[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 2805-2814.
[7] ALVAREZ J M, SALZMANN M. Learning the number of neurons in deep networks[C]// Proceedings of the 30th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2016: 2270-2278.
[8] DENTON E, ZAREMBA W, BRUNA J, et al. Exploiting linear structure within convolutional networks for efficient evaluation[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 1269-1277.
[9] ZHANG X, ZOU J, HE K, et al. Accelerating very deep convolutional networks for classification and detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(10): 1943-1955.
[10] WEN W, WU C, WANG Y, et al. Learning structured sparsity in deep neural networks[C]// Proceedings of the 30th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2016: 2082-2090.
[11] LIU Z, LI J, SHEN Z, et al. Learning efficient convolutional networks through network slimming[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2755-2763.
[12] LI H, KADAV A, DURDANOVIC I, et al. Pruning filters for efficient ConvNets[EB/OL]. [2024-01-13].
[13] HE Y, ZHANG X, SUN J. Channel pruning for accelerating very deep neural networks[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 1398-1406.
[14] LUO J H, WU J, LIN W. ThiNet: a filter level pruning method for deep neural network compression[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 5068-5076.
[15] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2024-01-13].
[16] ZAGORUYKO S, KOMODAKIS N. Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer[EB/OL]. [2024-01-13].
[17] CHEN Y, WANG N, ZHANG Z. DarkRank: accelerating deep metric learning via cross sample similarities transfer[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2018: 2852-2859.
[18] CHEN G, CHOI W, YU X, et al. Learning efficient object detection models with knowledge distillation[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 742-751.
[19] XU T B, LIU C L. Deep neural network self-distillation exploiting data representation invariance[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(1): 257-269.
[20] CHEN J Y, REN D D, LI W B, et al. Lightweight knowledge distillation for few-shot learning[J]. Journal of Software, 2024, 35(5): 2414-2429.
[21] CHEN W, WILSON J T, TYREE S, et al. Compressing neural networks with the hashing trick[C]// Proceedings of the 32nd International Conference on Machine Learning. New York: JMLR.org, 2015: 2285-2294.
[22] SRINIVAS S, BABU R V. Data-free parameter pruning for deep neural networks[C]// Proceedings of the 2015 British Machine Vision Conference. Durham: BMVA Press, 2015: No.31.
[23] CHEN S, ZHAO Q. Shallowing deep networks: layer-wise pruning based on feature representations[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(12): 3048-3056.
[24] WANG T, YUAN L, ZHANG X, et al. Distilling object detectors with fine-grained feature imitation[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4928-4937.
[25] SUN R, TANG F, ZHANG X, et al. Distilling object detectors with task adaptive regularization[EB/OL]. [2024-01-13].
[26] GUO J, HAN K, WANG Y, et al. Distilling object detectors via decoupled features[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 2154-2164.
[27] ZHENG Z, YE R, WANG P, et al. Localization distillation for dense object detection[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 9397-9406.
[28] TIAN Y, KRISHNAN D, ISOLA P. Contrastive representation distillation[EB/OL]. [2024-01-13].
[29] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 580-587.
[30] GIRSHICK R. Fast R-CNN[C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 1440-1448.
[31] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149.
[32] LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector[C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9905. Cham: Springer, 2016: 21-37.
[33] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 779-788.
[34] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: optimal speed and accuracy of object detection[EB/OL]. [2024-01-13].
[35] GAO J, ZHU Y, LU K. Object detection method based on radar and camera fusion[J]. Journal of Computer Applications, 2021, 41(11): 3242-3250.
[36] BUCILUǍ C, CARUANA R, NICULESCU-MIZIL A. Model compression[C]// Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2006: 535-541.
[37] HE K, FAN H, WU Y, et al. Momentum contrast for unsupervised visual representation learning[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 9726-9735.
[38] WU Z, XIONG Y, YU S X, et al. Unsupervised feature learning via non-parametric instance discrimination[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 3733-3742.
[39] VAN DEN OORD A, LI Y, VINYALS O. Representation learning with contrastive predictive coding[EB/OL]. [2024-01-13].
[40] LI X, WANG W, WU L, et al. Generalized focal loss: learning qualified and distributed bounding boxes for dense object detection[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 21002-21012.