Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (6): 1920-1926. DOI: 10.11772/j.issn.1001-9081.2023060866
• Multimedia Computing and Computer Simulation •
Six degrees of freedom object pose estimation algorithm based on filter learning network

Yaxing BING 1, Yangping WANG 1,2, Jiu YONG 2, Haomou BAI 3
Received: 2023-07-03
Revised: 2023-09-06
Accepted: 2023-09-11
Online: 2023-10-07
Published: 2024-06-10
Contact: Yaxing BING
About author: WANG Yangping, born in 1973 in Dazhou, Sichuan, Ph. D., professor. Her research interests include digital image processing and virtual reality.
Abstract: To address the accuracy and real-time performance of pose estimation for weakly textured objects in complex scenes, a Six-Degrees-of-Freedom (6D) object pose estimation algorithm based on a filter learning network was proposed. First, standard convolutions were replaced with Blueprint Separable Convolutions (BSConv) to reduce the number of model parameters, and the GeLU (Gaussian error Linear Unit) activation function was adopted, which better approximates the normal distribution and thereby improves the performance of the network. Second, an Upsampling Filtering And Encoding Module (UFAEM) was proposed to compensate for the loss of key information during upsampling. Finally, a Global Attention Mechanism (GAM) was proposed to enrich contextual information and extract the information in input feature maps more effectively. Experiments on the public LineMOD, YCB-Video and Occlusion LineMOD datasets show that the proposed algorithm improves accuracy while greatly reducing network parameters. With the number of network parameters reduced by nearly three quarters, under the ADD(-S) metric the proposed algorithm achieves an accuracy about 1.2 percentage points higher than the Dual-Stream algorithm on the LineMOD dataset, about 5.2 percentage points higher than the DenseFusion algorithm on the YCB-Video dataset, and about 6.6 percentage points higher than the Pixel-wise Voting Network (PVNet) algorithm on the Occlusion LineMOD dataset. The experimental results indicate that the proposed algorithm performs well on pose estimation of weakly textured objects and is reasonably robust to pose estimation of occluded objects.
Citation: Yaxing BING, Yangping WANG, Jiu YONG, Haomou BAI. Six degrees of freedom object pose estimation algorithm based on filter learning network[J]. Journal of Computer Applications, 2024, 44(6): 1920-1926.
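The backbone changes described in the abstract (BSConv in place of standard convolution, GeLU in place of ReLU) can be sketched concretely. Below is a minimal PyTorch sketch of the unconstrained BSConv variant (BSConv-U) from reference [16] combined with a GeLU activation; the channel widths, normalization, and placement inside the ResNet18 backbone are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BSConvU(nn.Module):
    """BSConv-U: a 1x1 pointwise convolution followed by a kxk depthwise
    convolution. Parameter count is C_in*C_out + k*k*C_out, versus
    k*k*C_in*C_out for a standard convolution -- roughly a 1/k^2 reduction
    when C_in is large, consistent with the paper's claim of a large cut
    in network parameters."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3, stride: int = 1):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.depthwise = nn.Conv2d(out_ch, out_ch, kernel_size=k, stride=stride,
                                   padding=k // 2, groups=out_ch, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()  # GeLU in place of ReLU, as compared in Tab. 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.depthwise(self.pointwise(x))))

# Example: a 3x3 BSConv block mapping 64 to 128 channels.
block = BSConvU(64, 128)
y = block(torch.randn(1, 64, 120, 160))  # -> torch.Size([1, 128, 120, 160])
```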
Tab. 1 Impact of different modules on model accuracy (%)

| Algorithm | ape | cam | cat | duck | holepuncher | phone | Average |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18 | 43.6 | 86.9 | 79.3 | 52.6 | 81.9 | 92.4 | 72.8 |
| BSConv+GeLU | 55.2 | 89.4 | 85.6 | 67.8 | 83.2 | 93.5 | 79.1 |
| BSConv+GeLU+UFAEM | 85.1 | 94.8 | 90.7 | 84.3 | 88.1 | 95.1 | 89.7 |
| BSConv+GeLU+UFAEM+GAM | 89.7 | 95.6 | 93.1 | 90.8 | 89.7 | 95.9 | 92.5 |
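The GAM row of Tab. 1 adds channel and spatial attention on top of the filtered features. This section does not spell out the block's internals, so the following is only a plausible PyTorch sketch of a global attention block (channel attention via an MLP over the channel dimension, then spatial attention via 7x7 convolutions); the reduction ratio and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class GAM(nn.Module):
    """Hypothetical global attention block: channel reweighting followed by
    spatial reweighting, each produced by a small sub-network."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.channel_mlp = nn.Sequential(          # acts on the channel axis
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels))
        self.spatial = nn.Sequential(              # acts on the spatial axes
            nn.Conv2d(channels, hidden, 7, padding=3),
            nn.BatchNorm2d(hidden), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 7, padding=3),
            nn.BatchNorm2d(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: run the MLP at every spatial position.
        att = self.channel_mlp(x.permute(0, 2, 3, 1))      # (B, H, W, C)
        x = x * torch.sigmoid(att.permute(0, 3, 1, 2))     # reweight channels
        # Spatial attention: reweight positions with the conv sub-network.
        return x * torch.sigmoid(self.spatial(x))
```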
Tab. 2 Impact of different activation functions on model accuracy (%)

| Function | ape | benchvise | cam | can | cat | driller | duck | eggbox | glue | holepuncher | iron | lamp | phone | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sigmoid | 86.7 | 99.8 | 91.2 | 93.1 | 90.8 | 98.4 | 85.1 | 99.5 | 95.2 | 85.2 | 96.3 | 99.1 | 94.0 | 93.4 |
| ReLU | 87.9 | 100.0 | 90.2 | 94.6 | 93.0 | 99.5 | 88.2 | 100.0 | 90.9 | 88.4 | 97.1 | 99.0 | 94.2 | 94.1 |
| GeLU | 89.7 | 100.0 | 95.6 | 96.2 | 93.1 | 99.8 | 90.8 | 100.0 | 98.5 | 89.7 | 99.8 | 99.4 | 95.9 | 96.0 |
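For reference, the GeLU activation compared in Tab. 2 weights each input by the standard normal CDF Φ, which is what the abstract means by "better approximating the normal distribution":

$$\mathrm{GeLU}(x) = x\,\Phi(x) \approx \frac{x}{2}\left[1 + \tanh\!\left(\sqrt{2/\pi}\,\bigl(x + 0.044715\,x^{3}\bigr)\right)\right]$$

Unlike ReLU's hard gate at zero, this gives a smooth, non-monotonic transition near the origin, which matches the accuracy gains seen across all thirteen objects in Tab. 2.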
Tab. 3 Comparison of accuracy results on LineMOD dataset (%)

| Algorithm | ape | benchvise | cam | can | cat | driller | duck | eggbox | glue | holepuncher | iron | lamp | phone | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Real-time | 21.6 | 81.8 | 36.6 | 68.8 | 41.8 | 63.5 | 27.2 | 69.9 | 80.0 | 42.6 | 74.9 | 71.1 | 47.7 | 55.9 |
| PoseCNN | 77.0 | 97.5 | 93.5 | 96.5 | 82.1 | 95.0 | 77.7 | 97.1 | 99.4 | 52.8 | 98.3 | 97.5 | 87.7 | 88.6 |
| DenseFusion | 92.3 | 93.2 | 94.4 | 93.1 | 96.5 | 87.0 | 92.3 | 99.8 | 100.0 | 92.1 | 97.0 | 95.3 | 92.8 | 94.3 |
| Dual-Stream | 91.3 | 93.5 | 94.0 | 94.3 | 95.8 | 92.9 | 94.7 | 99.9 | 99.9 | 92.8 | 95.1 | 94.6 | 94.0 | 94.8 |
| PVNet | 43.6 | 99.9 | 86.9 | 95.5 | 79.3 | 96.4 | 52.6 | 99.2 | 95.7 | 81.9 | 98.9 | 99.3 | 92.4 | 86.3 |
| Proposed | 89.7 | 100.0 | 95.6 | 96.2 | 93.1 | 99.8 | 90.8 | 100.0 | 98.5 | 89.7 | 99.8 | 99.4 | 95.9 | 96.0 |
Tab. 4 Comparison of accuracy results on Occlusion LineMOD dataset (%)

| Algorithm | ape | can | cat | driller | duck | eggbox | glue | holepuncher | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| HybridPose | 20.9 | 75.3 | 24.9 | 70.2 | 27.9 | 52.4 | 53.8 | 54.2 | 47.5 |
| SSPE | 19.2 | 65.1 | 18.9 | 69.0 | 25.3 | 52.0 | 51.4 | 45.6 | 43.3 |
| RePOSE | 31.1 | 80.0 | 25.6 | 73.1 | 43.0 | 51.7 | 54.3 | 53.6 | 51.6 |
| SegDriven | 12.1 | 39.9 | 8.2 | 45.2 | 17.2 | 22.1 | 35.8 | 36.0 | 27.0 |
| PoseCNN | 9.6 | 45.2 | 0.9 | 41.4 | 19.6 | 22.0 | 38.5 | 22.1 | 24.9 |
| PVNet | 15.8 | 63.3 | 16.7 | 65.7 | 25.2 | 50.2 | 49.6 | 39.7 | 40.8 |
| Proposed | 23.1 | 72.6 | 24.4 | 74.4 | 26.0 | 53.1 | 54.5 | 50.7 | 47.4 |
Tab. 5 Comparison of accuracy results on YCB-Video dataset (%)

| Test object | PoseCNN ADD-S | PoseCNN ADD(S) | DenseFusion ADD-S | DenseFusion ADD(S) | Proposed ADD-S | Proposed ADD(S) |
| --- | --- | --- | --- | --- | --- | --- |
| Average | 75.9 | 59.9 | 91.2 | 82.9 | 92.5 | 88.1 |
| 02 master chef can | 83.9 | 50.2 | 95.3 | 70.7 | 95.0 | 76.9 |
| 03 cracker box | 76.9 | 53.1 | 92.5 | 86.9 | 94.2 | 90.2 |
| 04 sugar box | 84.2 | 68.4 | 95.1 | 90.8 | 96.1 | 93.9 |
| 05 tomato soup can | 81.0 | 66.2 | 93.8 | 84.7 | 94.7 | 88.1 |
| 06 mustard bottle | 90.4 | 81.0 | 95.8 | 90.9 | 96.3 | 93.2 |
| 07 tuna fish can | 88.0 | 70.7 | 95.7 | 79.6 | 95.1 | 89.5 |
| 08 pudding box | 79.1 | 62.7 | 94.3 | 89.3 | 93.7 | 88.1 |
| 09 gelatin box | 87.2 | 75.2 | 97.2 | 95.8 | 96.0 | 94.6 |
| 10 potted meat can | 78.5 | 59.5 | 89.3 | 79.6 | 90.2 | 82.0 |
| 11 banana | 86.0 | 72.3 | 90.0 | 76.7 | 93.2 | 91.0 |
| 19 pitcher base | 77.0 | 53.3 | 93.6 | 87.1 | 91.2 | 86.0 |
| 21 bleach cleanser | 71.6 | 50.3 | 94.4 | 87.5 | 95.5 | 86.7 |
| 24 bowl* | 69.6 | 69.6 | 86.0 | 86.0 | 87.8 | 87.8 |
| 25 mug | 78.2 | 58.5 | 95.3 | 83.8 | 96.9 | 92.1 |
| 35 power drill | 72.7 | 55.3 | 92.1 | 83.7 | 95.4 | 92.3 |
| 36 wood block* | 64.3 | 64.3 | 89.5 | 89.5 | 88.7 | 88.7 |
| 37 scissors | 56.9 | 35.8 | 90.1 | 77.4 | 92.1 | 88.1 |
| 40 large marker | 71.7 | 58.3 | 95.1 | 89.1 | 93.2 | 84.0 |
| 51 large clamp* | 50.2 | 50.2 | 71.5 | 71.5 | 79.3 | 79.3 |
| 52 extra large clamp* | 44.1 | 44.1 | 70.2 | 70.2 | 86.0 | 86.0 |
| 61 foam brick* | 88.0 | 88.0 | 92.2 | 92.2 | 92.1 | 92.1 |

Objects marked with * are symmetric; for these, the ADD(S) column is evaluated with the symmetry-aware ADD-S distance, which is why their two columns coincide.
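For readers of Tab. 3-5: under the ADD(-S) convention of references [18] and [20], a pose counts as correct when the (symmetry-aware) mean model-point distance falls below a threshold, commonly 10% of the object diameter on LineMOD. A minimal NumPy sketch of the two distances follows; the function names are illustrative, not from the paper's code.

```python
import numpy as np

def add(R_pred, t_pred, R_gt, t_gt, pts):
    """ADD: mean distance between corresponding model points under the
    predicted and ground-truth poses. pts is an (N, 3) model point cloud."""
    p_pred = pts @ R_pred.T + t_pred
    p_gt = pts @ R_gt.T + t_gt
    return np.linalg.norm(p_pred - p_gt, axis=1).mean()

def add_s(R_pred, t_pred, R_gt, t_gt, pts):
    """ADD-S: for symmetric objects, match each predicted point to its
    nearest ground-truth point before averaging. The pairwise-distance
    matrix costs O(N^2) memory, which is fine for the few-thousand-point
    meshes used by these benchmarks."""
    p_pred = pts @ R_pred.T + t_pred
    p_gt = pts @ R_gt.T + t_gt
    d = np.linalg.norm(p_pred[:, None, :] - p_gt[None, :, :], axis=2)
    return d.min(axis=1).mean()
```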
References

1 LIU Z Y, JIA D. Full convolution neural network model for 6DoF attitude and size estimation [J]. Application Research of Computers, 2023, 40(3): 938-942. (in Chinese)
2 LINDEBERG T. Scale invariant feature transform [J]. Scholarpedia, 2012, 7(5): 10491.
3 BAY H, TUYTELAARS T, VAN GOOL L. SURF: speeded up robust features [C]// Proceedings of the 9th European Conference on Computer Vision. Berlin: Springer, 2006: 404-417.
4 RUBLEE E, RABAUD V, KONOLIGE K, et al. ORB: an efficient alternative to SIFT or SURF [C]// Proceedings of the 2011 International Conference on Computer Vision. Piscataway: IEEE, 2011: 2564-2571.
5 MAIR E, HAGER G D, BURSCHKA D, et al. Adaptive and generic corner detection based on the accelerated segment test [C]// Proceedings of the 11th European Conference on Computer Vision. Berlin: Springer, 2010: 183-196.
6 CALONDER M, LEPETIT V, STRECHA C, et al. BRIEF: binary robust independent elementary features [C]// Proceedings of the 11th European Conference on Computer Vision. Berlin: Springer, 2010: 778-792.
7 KEHL W, MANHARDT F, TOMBARI F, et al. SSD-6D: making RGB-based 3D detection and 6D pose estimation great again [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 1530-1538.
8 HE Y, HUANG H, FAN H, et al. FFB6D: a full flow bidirectional fusion network for 6D pose estimation [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 3002-3012.
9 BUKSCHAT Y, VETTER M. EfficientPose: an efficient, accurate and scalable end-to-end 6D multi object pose estimation approach [EB/OL]. (2020-11-18) [2023-06-08].
10 XU Y, LIN K-Y, ZHANG G, et al. RNNPose: recurrent 6-DoF object pose refinement with robust correspondence field estimation and pose optimization [C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 14860-14870.
11 PENG S, ZHOU X, LIU Y, et al. PVNet: pixel-wise voting network for 6DoF object pose estimation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(6): 3212-3223.
12 SHUAI H. Research on tracking registration of mechanical product assembly augmented reality based on deep learning [D]. Chongqing: Chongqing University of Posts and Telecommunications, 2021: 37-38. (in Chinese)
13 RAD M, LEPETIT V. BB8: a scalable, accurate, robust to partial occlusion method for predicting the 3D poses of challenging objects without using depth [C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 3848-3856.
14 MA K Z, PI J T, XIONG Z B, et al. 6D pose estimation incorporating attentional features for occluded objects [J]. Journal of Computer Applications, 2022, 42(12): 3715-3722. (in Chinese)
15 YANG Z X, YU X, YANG Y. DSC-PoseNet: learning 6DoF object pose estimation via dual-scale consistency [C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 3906-3915.
16 HAASE D, AMTHOR M. Rethinking depthwise separable convolutions: how intra-kernel correlations lead to improved MobileNets [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 14588-14597.
17 ZHAO Z, PENG G, WANG H, et al. Estimating 6D pose from localizing designated surface keypoints [EB/OL]. (2018-12-04) [2023-06-08].
18 HINTERSTOISSER S, LEPETIT V, ILIC S, et al. Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes [C]// Proceedings of the 11th Asian Conference on Computer Vision. Berlin: Springer, 2013: 548-562.
19 TEKIN B, SINHA S N, FUA P. Real-time seamless single shot 6D object pose prediction [C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 292-301.
20 XIANG Y, SCHMIDT T, NARAYANAN V, et al. PoseCNN: a convolutional neural network for 6D object pose estimation in cluttered scenes [EB/OL]. (2018-05-26) [2023-06-08].
21 WANG C, XU D, ZHU Y, et al. DenseFusion: 6D object pose estimation by iterative dense fusion [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 3338-3347.
22 LI Q, HU R, XIAO J, et al. Learning latent geometric consistency for 6D object pose estimation in heavily cluttered scenes [J]. Journal of Visual Communication and Image Representation, 2020, 70: 175-184.
23 SONG C, SONG J, HUANG Q. HybridPose: 6D object pose estimation under hybrid representations [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 428-437.
24 HU Y, FUA P, WANG W, et al. Single-stage 6D object pose estimation [C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2927-2936.
25 IWASE S, LIU X, KHIRODKAR R, et al. RePOSE: fast 6D object pose refinement via deep texture rendering [C]// Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2021: 3283-3292.
26 HU Y, HUGONOT J, FUA P, et al. Segmentation-driven 6D object pose estimation [C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 3380-3389.