Official website of Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (1): 242-251. DOI: 10.11772/j.issn.1001-9081.2025010058
Yu SANG1,2, Tong GONG1, Chen ZHAO3, Bowen YU1, Siman LI1
Received:2025-01-15
Revised:2025-03-31
Accepted:2025-03-31
Online:2026-01-10
Published:2026-01-10
Corresponding author: Yu SANG
About author: GONG Tong, born in 2000, M. S. candidate. His research interests include cross-domain object detection and semantic segmentation.
Abstract:
Nighttime object detection is limited by low-light conditions and the scarcity of high-quality annotated data, which make target features difficult to extract and keep detection accuracy low. Therefore, a domain-adaptive nighttime object detection method with photometric alignment was proposed. First, a Nighttime Domain-adaptive photometric Alignment (NDA) module was designed to transform labeled daytime source-domain images into corresponding nighttime target-domain images, bridging the gap between the source and target domains through photometric alignment and addressing the difficulty of obtaining accurate nighttime annotations under low-light conditions. Second, a CNN-Transformer hybrid model was adopted as the detector: CSwin Transformer served as the backbone to extract multi-level image features, which were then fed into a Feature Pyramid Network (FPN) to improve the model's ability to detect multi-scale objects. Finally, Outlook attention was introduced to address the loss of image detail caused by insufficient illumination, improving the model's robustness in complex environments with illumination changes and shadows. Experimental results show that on the public BDD100K dataset, the proposed method achieves a mean Average Precision (mAP)@0.5 of 50.0%, 4.2 percentage points higher than the 2PCNet (two-Phase Consistency Network) method; on the public SODA10M dataset, it achieves an mAP@0.5 of 45.4%, 0.9 percentage points higher than the SFA (Sequence Feature Alignment) method.
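The abstract does not spell out how the NDA photometric alignment is computed; the histogram-matching baselines it is later compared against (Tab. 4) are simpler to pin down. Below is a minimal channel-wise histogram-matching sketch of day-to-night photometric alignment — `match_histograms_channelwise` and the toy day/night arrays are illustrative assumptions, not the paper's module:

```python
import numpy as np

def match_histograms_channelwise(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap each channel of `source` so its histogram matches `reference`.

    A minimal stand-in for the RGB/LAB histogram-matching baselines in
    Tab. 4; the paper's NDA module is learned, not a fixed remapping.
    """
    out = np.empty_like(source)
    for c in range(source.shape[-1]):
        src = source[..., c].ravel()
        ref = reference[..., c].ravel()
        # Rank the source pixels and read off their empirical quantiles.
        src_sorted_idx = np.argsort(src)
        quantiles = np.linspace(0.0, 1.0, src.size)
        # Reference quantile function: pixel value at each quantile.
        ref_sorted = np.sort(ref)
        ref_quantiles = np.linspace(0.0, 1.0, ref.size)
        matched = np.interp(quantiles, ref_quantiles, ref_sorted)
        # Scatter the matched values back into the original pixel order.
        channel = np.empty_like(src)
        channel[src_sorted_idx] = matched
        out[..., c] = channel.reshape(source[..., c].shape)
    return out

# Example: align a bright "daytime" image to a dark "nighttime" reference.
rng = np.random.default_rng(0)
day = rng.uniform(120, 255, size=(64, 64, 3))   # bright source image
night = rng.uniform(0, 60, size=(64, 64, 3))    # dark reference image
aligned = match_histograms_channelwise(day, night)
print(aligned.mean() < day.mean())  # the aligned image is darker
```

Matching in LAB rather than RGB space (the stronger baseline in Tab. 4) would apply the same remapping after an RGB-to-LAB conversion, so luminance is aligned independently of chroma.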
Yu SANG, Tong GONG, Chen ZHAO, Bowen YU, Siman LI. Domain-adaptive nighttime object detection method with photometric alignment[J]. Journal of Computer Applications, 2026, 46(1): 242-251.
| Method | Pedestrian | Rider | Car | Truck | Bus | Motorcycle | Bicycle | Traffic-light | Traffic-sign | mAP@0.5 |
|---|---|---|---|---|---|---|---|---|---|---|
| DAFRCNN | 50.4 | 30.3 | 66.3 | 46.8 | 48.3 | 32.6 | 41.4 | 41.0 | 56.2 | 41.3 |
| TDD | 43.1 | 20.7 | 68.4 | 33.3 | 35.6 | 16.5 | 25.9 | 43.1 | 59.5 | 34.6 |
| UMT | 46.5 | 26.1 | 46.8 | 44.0 | 46.3 | 28.2 | 40.2 | 31.6 | 52.7 | 36.2 |
| AT (Adaptive Teacher) | 42.3 | 30.4 | 60.8 | 48.9 | 52.1 | 34.5 | 42.7 | 29.1 | 43.9 | 38.5 |
| CycleGAN | 52.3 | 33.9 | 69.9 | 50.1 | 52.0 | 34.8 | 43.1 | 33.0 | 62.6 | 43.2 |
| ForkGAN | 49.9 | 29.6 | 69.2 | 48.7 | 50.3 | 32.5 | 39.4 | 44.6 | 61.8 | 42.6 |
| Nod | 53.0 | 32.6 | 69.1 | 51.9 | 52.3 | 37.9 | 42.3 | 45.0 | 62.9 | 44.7 |
| 2PCNet | 53.8 | 31.7 | 71.8 | 53.2 | 53.3 | 37.0 | 40.5 | 45.2 | 64.3 | 45.8 |
| Proposed method | 55.6 | 34.3 | 71.9 | 55.6 | 53.3 | 30.8 | 43.6 | 46.0 | 62.1 | 50.0 |
Tab. 1 Comparison of experimental results (per-class AP and mAP@0.5) of different methods on BDD100K dataset (unit: %)
| Method | AP (large) | AP (medium) | AP (small) | mAP@0.5 |
|---|---|---|---|---|
| 2PCNet | 41.7 | 25.8 | 9.1 | 45.8 |
| ISP | 45.7 | 27.2 | 9.2 | 48.8 |
| CoS | 45.9 | 27.9 | 10.2 | 49.4 |
| Proposed method | 52.9 | 30.5 | 9.4 | 50.0 |
Tab. 2 Comparison of experimental results of different methods for large, medium, and small objects on BDD100K dataset (unit: %)
| Method | Pedestrian | Rider | Car | Truck | Tram | mAP@0.5 |
|---|---|---|---|---|---|---|
| DAFRCNN | 49.9 | 29.7 | 47.4 | 32.6 | 47.3 | 42.6 |
| TDD | 44.1 | 26.4 | 45.7 | 33.3 | 25.0 | 35.7 |
| UMT | 43.7 | 27.0 | 46.8 | 29.9 | 39.3 | 37.3 |
| AT | 50.3 | 29.1 | 46.6 | 31.5 | 48.7 | 42.0 |
| AQT (Adversarial Query Transformers) | 36.7 | 20.6 | 64.8 | 35.2 | 32.3 | 37.9 |
| SFA | 43.4 | 36.5 | 68.9 | 36.7 | 36.9 | 44.5 |
| Proposed method | 53.3 | 43.5 | 75.5 | 45.4 | 43.1 | 45.4 |
Tab. 3 Comparison of experimental results (per-class AP and mAP@0.5) of different methods on SODA10M dataset (unit: %)
| Method | mAP@0.5 |
|---|---|
| CNN-Transformer | 47.4 |
| FFT | 48.2 |
| RGB histogram matching | 47.9 |
| LAB histogram matching | 48.4 |
| Nighttime Domain Adaptation (NDA) module | 49.1 |
Tab. 4 Comparison of experimental results of different image transformation methods (unit: %)
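The FFT row above is presumably in the spirit of Fourier domain adaptation (FDA [47]): swap the low-frequency amplitude spectrum of a daytime source image for that of a nighttime target while keeping the source phase. A sketch under that assumption — `fda_transform` and the band ratio `beta` are hypothetical choices, not the paper's implementation:

```python
import numpy as np

def fda_transform(source: np.ndarray, target: np.ndarray, beta: float = 0.05) -> np.ndarray:
    """Replace the low-frequency amplitude of `source` with `target`'s (FDA-style).

    Both inputs are HxW single-channel float arrays; apply per channel for RGB.
    `beta` controls the size of the swapped low-frequency square.
    """
    fft_src = np.fft.fft2(source)
    fft_tgt = np.fft.fft2(target)
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Centre both spectra so low frequencies sit in the middle.
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)
    h, w = source.shape
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    # Swap the central (low-frequency) amplitude band.
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_tgt[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_src = np.fft.ifftshift(amp_src)

    # Recombine the swapped amplitude with the source phase.
    mixed = amp_src * np.exp(1j * phase_src)
    return np.real(np.fft.ifft2(mixed))

# Example: impose a dark target's low-frequency statistics on a bright source.
rng = np.random.default_rng(1)
day = rng.uniform(120, 255, (64, 64))   # bright source
night = rng.uniform(0, 60, (64, 64))    # dark target
stylized = fda_transform(day, night)
```

Because the DC component sits in the swapped band, the output inherits the target's overall brightness while the source phase preserves scene structure, which matches the gap Tab. 4 reports between FFT and the simple RGB remapping.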
| Configuration | AP (large) | AP (medium) | AP (small) | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|---|
| CNN-Transformer | 51.8 | 29.4 | 8.8 | 47.4 | 26.4 |
| CNN-Transformer+Outlook | 52.3 | 29.5 | 9.1 | 48.5 | 26.5 |
| CNN-Transformer+NDA | 52.9 | 29.9 | 8.9 | 49.1 | 26.9 |
| CNN-Transformer+Outlook+NDA | 52.9 | 30.5 | 9.4 | 50.0 | 27.3 |
Tab. 5 Ablation experiment results on BDD100K dataset (unit: %)
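Outlook attention, ablated above, produces attention weights for each pixel's local K×K window directly from a linear map of that pixel, rather than from query-key dot products, which lets it recover fine local detail cheaply. A heavily simplified, single-query NumPy sketch — the real VOLO operator [42] emits K²×K² weights per window and overlap-folds the results, and all names here are illustrative:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def outlook_attention_simplified(x: np.ndarray, w_attn: np.ndarray,
                                 w_v: np.ndarray, k: int = 3) -> np.ndarray:
    """Single-head, single-query sketch of Outlook attention.

    For every pixel, weights over its k x k neighbourhood are produced
    directly by a linear map of that pixel (no query-key dot product),
    then used to aggregate the neighbourhood's value vectors.
    """
    h, w, c = x.shape
    pad = k // 2
    v = x @ w_v                                  # values, (h, w, c)
    v_pad = np.pad(v, ((pad, pad), (pad, pad), (0, 0)))
    attn = softmax(x @ w_attn, axis=-1)          # (h, w, k*k) window weights
    out = np.zeros_like(v)
    for i in range(h):
        for j in range(w):
            window = v_pad[i:i + k, j:j + k].reshape(k * k, c)
            out[i, j] = attn[i, j] @ window      # weighted sum of neighbours
    return out

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 8, 16))            # toy feature map
w_attn = rng.normal(size=(16, 9)) * 0.1    # maps a pixel to its 3x3 weights
w_v = rng.normal(size=(16, 16)) * 0.1      # value projection
y = outlook_attention_simplified(x, w_attn, w_v)
print(y.shape)  # (8, 8, 16)
```

Because the weights come from a single linear layer per pixel, the operator stays dense over fine spatial positions at low cost, which is the property the ablation exploits for dim, low-contrast nighttime detail.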
| [1] | REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. |
| [2] | WANG S. Research towards YOLO-series algorithms: comparison and analysis of object detection models for real-time UAV applications [J]. Journal of Physics: Conference Series, 2021, 1948: No.012021. |
| [3] | DAI X, CHEN Y, YANG J, et al. Dynamic DETR: end-to-end object detection with dynamic attention [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 2968-2977. |
| [4] | CUI Z, LI K, GU L, et al. You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction [C]// Proceedings of the 2022 British Machine Vision Conference. Durham: BMVA Press, 2022: No.238. |
| [5] | YIN X, YU Z, FEI Z, et al. PE-YOLO: pyramid enhancement network for dark object detection [C]// Proceedings of the 2023 International Conference on Artificial Neural Networks, LNCS 14260. Cham: Springer, 2023: 163-174. |
| [6] | CHEN Y, LI W, SAKARIDIS C, et al. Domain adaptive Faster R-CNN for object detection in the wild [C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 3339-3348. |
| [7] | CAI Q, PAN Y, NGO C W, et al. Exploring object relation in mean teacher for cross-domain detection [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 11449-11458. |
| [8] | DENG J, LI W, CHEN Y, et al. Unbiased mean teacher for cross-domain object detection [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 4089-4099. |
| [9] | HE M, WANG Y, WU J, et al. Cross domain object detection by target-perceived dual branch distillation [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 9560-9570. |
| [10] | KENNERLEY M, WANG J G, VEERAVALLI B, et al. 2PCNet: two-phase consistency training for day-to-night unsupervised domain adaptive object detection [C]// Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2023: 11484-11493. |
| [11] | SAKARIDIS C, DAI D, VAN GOOL L. Guided curriculum model adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation [C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 7373-7382. |
| [12] | ZHU J Y, PARK T, ISOLA P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2242-2251. |
| [13] | ZHOU H, JIANG F, LU H. SSDA-YOLO: semi-supervised domain adaptive yolo for cross-domain object detection [J]. Computer Vision and Image Understanding, 2023, 229: No.103649. |
| [14] | GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation [C]// Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 580-587. |
| [15] | HE K, ZHANG X, REN S, et al. Spatial pyramid pooling in deep convolutional networks for visual recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1904-1916. |
| [16] | GIRSHICK R. Fast R-CNN [C]// Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2015: 1440-1448. |
| [17] | LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot MultiBox detector [C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9905. Cham: Springer, 2016: 21-37. |
| [18] | LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 2999-3007. |
| [19] | TIAN Z, SHEN C, CHEN H, et al. FCOS: fully convolutional one-stage object detection [C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 9626-9635. |
| [20] | DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale [EB/OL]. [2025-03-31]. |
| [21] | BEAL J, KIM E, TZENG E, et al. Toward Transformer-based object detection [EB/OL]. [2025-03-31]. |
| [22] | LIU Z, LIN Y, CAO Y, et al. Swin Transformer: hierarchical Vision Transformer using shifted windows [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 9992-10002. |
| [23] | DONG X, BAO J, CHEN D, et al. CSwin Transformer: a general vision transformer backbone with cross-shaped windows [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 12114-12124. |
| [24] | GANIN Y, USTINOVA E, AJAKAN H, et al. Domain-adversarial training of neural networks [J]. Journal of Machine Learning Research, 2016, 17: 1-35. |
| [25] | KHODABANDEH M, VAHDAT A, RANJBAR M, et al. A robust learning approach to domain adaptive object detection [C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 480-490. |
| [26] | LI Y J, DAI X, MA C Y, et al. Cross-domain adaptive teacher for object detection [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 7571-7580. |
| [27] | FUJII K, KERA H, KAWAMOTO K. Adversarially trained object detector for unsupervised domain adaptation [J]. IEEE Access, 2022, 10: 59534-59543. |
| [28] | ZHUANG C, HAN X, HUANG W, et al. iFAN: image-instance full alignment networks for adaptive object detection [C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2020: 13122-13129. |
| [29] | CHEN C, ZHENG Z, DING X, et al. Harmonizing transferability and discriminability for adapting object detectors [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 8866-8875. |
| [30] | ZHENG Z, WU Y, HAN X, et al. ForkGAN: seeing into the rainy night [C]// Proceedings of the 2020 European Conference on Computer Vision, LNCS 12348. Cham: Springer, 2020: 155-170. |
| [31] | WU J, CHEN J, HE M, et al. Target-relevant knowledge preservation for multi-source domain adaptive object detection [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 5291-5300. |
| [32] | CHEN M, CHEN W, YANG S, et al. Learning domain adaptive object detection with probabilistic teacher [C]// Proceedings of the 39th International Conference on Machine Learning. New York: JMLR.org, 2022: 3040-3055. |
| [33] | ZHANG D P, ZHANG Y, LIU Z Y, et al. Domain adaptation and multi-scale feature fusion for cross-domain few-shot object detection [J/OL]. Journal of Beijing University of Aeronautics and Astronautics [2025-03-31]. |
| [34] | HUANG W J, LU Y L, LIN S Y, et al. AQT: adversarial query transformers for domain adaptive object detection [C]// Proceedings of the 31st International Joint Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 972-979. |
| [35] | WANG W, CAO Y, ZHANG J, et al. Exploring sequence feature alignment for domain adaptive detection Transformers [C]// Proceedings of the 29th ACM International Conference on Multimedia. New York: ACM, 2021: 1730-1738. |
| [36] | DENG X, WANG P, LIAN X, et al. NightLab: a dual-level architecture with hardness detection for segmentation at night [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 16917-16927. |
| [37] | DU Y L, WANG M J. Research on pedestrian detection in low-light conditions based on semi-supervised domain adaptation [J]. Journal of Electronic Measurement and Instrumentation, 2024, 38(1): 106-113. |
| [38] | MIAO D L, LIU L, MO Y C, et al. Nighttime low-light image enhancement and object detection based on knowledge distillation [J]. Journal of Applied Optics, 2023, 44(5): 1037-1044. |
| [39] | WEI Y F, ZHANG L H. Nighttime object detection method based on domain adaptation and category contrast [J]. Journal of Network New Media Technology, 2024, 13(4): 16-25. |
| [40] | YUAN J, LE-TUAN A, HAUSWIRTH M, et al. Cooperative students: navigating unsupervised domain adaptation in nighttime object detection [C]// Proceedings of the 2024 IEEE International Conference on Multimedia and Expo. Piscataway: IEEE, 2024: 1-6. |
| [41] | ZHANG Y, ZHANG Y, ZHANG Z, et al. ISP-Teacher: image signal process with disentanglement regularization for unsupervised domain adaptive dark object detection [C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 7387-7395. |
| [42] | YUAN L, HOU Q, JIANG Z, et al. VOLO: Vision Outlooker for visual recognition [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(5): 6575-6586. |
| [43] | ZHENG Z, WANG P, REN D, et al. Enhancing geometric factors in model learning and inference for object detection and instance segmentation [J]. IEEE Transactions on Cybernetics, 2022, 52(8): 8574-8586. |
| [44] | LI X, WANG W, WU L, et al. Generalized focal loss: learning qualified and distributed bounding boxes for dense object detection [C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 21002-21012. |
| [45] | YU F, CHEN H, WANG X, et al. BDD100K: a diverse driving dataset for heterogeneous multitask learning [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2633-2642. |
| [46] | HAN J, LIANG X, XU H, et al. SODA10M: a large-scale 2D self/semi-supervised object detection dataset for autonomous driving [EB/OL]. [2025-02-20]. |
| [47] | YANG Y, SOATTO S. FDA: Fourier domain adaptation for semantic segmentation [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 4084-4094. |