Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (12): 3715-3722.DOI: 10.11772/j.issn.1001-9081.2021101840
Special Issue: Artificial Intelligence
Kangzhe MA1, Jiatian PI2, Zhoubing XIONG3, Jia LYU1
Received: 2021-10-28. Revised: 2021-12-06. Accepted: 2021-12-23. Online: 2022-01-04. Published: 2022-12-10.
Contact: Jiatian PI
About author: MA Kangzhe, born in 1996, M. S. candidate. His research interests include deep learning and object pose estimation.
Kangzhe MA, Jiatian PI, Zhoubing XIONG, Jia LYU. 6D pose estimation incorporating attentional features for occluded objects[J]. Journal of Computer Applications, 2022, 42(12): 3715-3722.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2021101840
| Method | Time |
| --- | --- |
| Proposed method (attention-map based) | 11.2 |
| PVNet (RANSAC-voting based) [3] | 22.8 |

Tab. 1 Time consumption comparison of calculating key points
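Tab. 1 contrasts reading keypoints directly off attention maps with PVNet's RANSAC-voting scheme. One common way to recover coordinates from a per-keypoint heatmap, in the spirit of numerical coordinate regression [2], is a soft-argmax (spatial expectation). The sketch below is an illustrative assumption, not the paper's exact readout:

```python
import numpy as np

def keypoints_from_attention_maps(attn):
    """Recover 2D keypoint coordinates from per-keypoint attention maps
    by taking the spatial expectation (soft-argmax) over each map.

    attn: array of shape (K, H, W), non-negative scores per keypoint.
    Returns: array of shape (K, 2) with (x, y) pixel coordinates.
    """
    K, H, W = attn.shape
    # Normalize each map into a probability distribution over pixels.
    flat = attn.reshape(K, -1)
    prob = flat / np.clip(flat.sum(axis=1, keepdims=True), 1e-12, None)
    prob = prob.reshape(K, H, W)
    # Coordinate grids: xs indexes columns (x), ys indexes rows (y).
    ys, xs = np.mgrid[0:H, 0:W]
    x = (prob * xs).sum(axis=(1, 2))
    y = (prob * ys).sum(axis=(1, 2))
    return np.stack([x, y], axis=1)
```

Unlike RANSAC voting, this readout is a single vectorized pass over the maps, which is consistent with the roughly 2x speedup reported in Tab. 1.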
| Method | ape | benchvise | cam | can | cat | driller | duck | eggbox | glue | holepuncher | iron | lamp | phone | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BB8* [13] | 96.6 | 90.1 | 86.0 | 91.2 | 98.8 | 80.9 | 92.2 | 91.0 | 92.3 | 95.3 | 84.8 | 75.8 | 85.3 | 89.3 |
| BB8 [13] | 95.3 | 80.0 | 80.9 | 84.1 | 97.0 | 74.1 | 81.2 | 87.9 | 89.0 | 90.5 | 78.9 | 74.4 | 77.6 | 83.9 |
| YOLO6D [23] | 92.1 | 95.1 | 93.2 | 97.4 | 97.4 | 79.4 | 94.7 | 90.3 | 96.5 | 92.9 | 82.9 | 76.9 | 86.1 | 90.4 |
| PVNet [3] | 99.2 | 99.8 | 99.2 | 99.9 | 99.3 | 96.9 | 98.0 | 99.3 | 98.5 | 100.0 | 99.2 | 98.3 | 99.4 | 99.0 |
| Proposed method | 99.2 | 99.7 | 99.4 | 99.8 | 99.6 | 98.5 | 98.7 | 99.4 | 98.8 | 99.9 | 99.5 | 98.5 | 99.6 | 99.3 |

Tab. 2 Comparison of methods on LINEMOD dataset in terms of 2D projection metric (unit: %)
| Method | ape | benchvise | cam | can | cat | driller | duck | eggbox | glue | holepuncher | iron | lamp | phone | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BB8* [13] | 40.4 | 91.8 | 55.7 | 64.1 | 62.6 | 74.4 | 44.3 | 57.8 | 41.2 | 67.2 | 84.7 | 76.5 | 54.0 | 62.7 |
| HybridPose* [15] | 63.1 | 99.9 | 90.4 | 98.5 | 89.4 | 98.5 | 65.0 | 100.0 | 98.8 | 89.7 | 100.0 | 99.5 | 94.9 | 91.3 |
| DPOD* [11] | 87.7 | 98.5 | 96.1 | 99.7 | 94.7 | 98.8 | 86.3 | 99.9 | 96.8 | 86.9 | 100.0 | 96.8 | 94.7 | 95.2 |
| YOLO6D [23] | 21.6 | 81.8 | 36.6 | 68.8 | 41.8 | 63.5 | 27.2 | 69.6 | 80.0 | 42.6 | 75.0 | 71.1 | 47.7 | 56.0 |
| DPOD [11] | 53.3 | 95.3 | 90.4 | 94.1 | 60.4 | 97.7 | 66.0 | 99.7 | 93.8 | 65.8 | 99.8 | 88.1 | 74.2 | 83.0 |
| PVNet [3] | 43.6 | 99.9 | 86.9 | 95.5 | 79.3 | 96.4 | 52.6 | 99.2 | 95.7 | 81.9 | 98.9 | 99.3 | 92.4 | 86.3 |
| CDPN [10] | 64.4 | 97.8 | 91.7 | 95.9 | 83.8 | 96.2 | 66.8 | 99.7 | 99.6 | 85.8 | 97.9 | 97.9 | 90.8 | 89.9 |
| Proposed method | 68.6 | 99.9 | 88.0 | 97.8 | 86.9 | 98.4 | 68.6 | 100.0 | 98.0 | 89.9 | 99.1 | 100.0 | 91.7 | 91.3 |

Tab. 3 Comparison of methods on LINEMOD dataset in terms of ADD(-S) metric (unit: %)
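The ADD(-S) metric of Tab. 3 follows the standard definition from the LINEMOD benchmark [19]: ADD averages the distance between corresponding model points transformed by the ground-truth and predicted poses, while ADD-S replaces the index-wise correspondence with the closest-point distance for symmetric objects (eggbox, glue); a pose counts as correct when the score is below 10% of the model diameter. A sketch of these standard definitions (names are ours):

```python
import numpy as np

def add_metric(points, R_gt, t_gt, R_p, t_p):
    """ADD: mean distance between corresponding model points under the
    ground-truth and predicted poses."""
    gt = points @ R_gt.T + t_gt
    pred = points @ R_p.T + t_p
    return np.linalg.norm(gt - pred, axis=1).mean()

def add_s_metric(points, R_gt, t_gt, R_p, t_p):
    """ADD-S: for symmetric objects, match each transformed point to its
    *closest* point in the other set instead of its index twin."""
    gt = points @ R_gt.T + t_gt
    pred = points @ R_p.T + t_p
    # Pairwise distances (N, N); O(N^2) memory, fine for subsampled models.
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=2)
    return d.min(axis=1).mean()

def pose_correct(points, diameter, *poses, symmetric=False):
    """A pose counts as correct when ADD(-S) < 10% of the model diameter."""
    m = add_s_metric(points, *poses) if symmetric else add_metric(points, *poses)
    return m < 0.1 * diameter
```

The symmetric variant explains the per-object gaps in the tables: a pose that flips a symmetric object scores poorly under ADD but perfectly under ADD-S.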
| Method | ape | can | cat | driller | duck | eggbox | glue | holepuncher | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Oberweger [14] | 69.9 | 82.6 | 65.1 | 73.8 | 61.4 | 13.1 | 54.9 | 66.4 | 60.9 |
| SegDriven [24] | 59.1 | 59.8 | 46.9 | 59.0 | 42.6 | 11.9 | 16.5 | 63.6 | 44.9 |
| PVNet [3] | 69.1 | 86.1 | 65.1 | 73.1 | 61.4 | 8.4 | 55.4 | 69.8 | 61.1 |
| Proposed method | 64.6 | 87.4 | 61.6 | 80.0 | 60.2 | 5.6 | 52.3 | 79.9 | 61.5 |

Tab. 4 Comparison of methods on Occlusion LINEMOD dataset in terms of 2D projection metric (unit: %)
| Method | ape | can | cat | driller | duck | eggbox | glue | holepuncher | Average |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DPOD* [11] | — | — | — | — | — | — | — | — | 47.3 |
| HybridPose* [15] | 20.9 | 75.3 | 24.9 | 70.2 | 27.9 | 52.4 | 53.8 | 54.2 | 47.5 |
| Oberweger [14] | 17.6 | 53.6 | 3.3 | 62.4 | 19.2 | 25.9 | 39.6 | 21.3 | 30.4 |
| SegDriven [24] | 12.1 | 39.9 | 8.2 | 45.2 | 17.2 | 22.1 | 35.8 | 36.0 | 27.0 |
| DPOD [11] | — | — | — | — | — | — | — | — | 32.8 |
| SSPE [25] | 19.2 | 65.1 | 18.9 | 69.0 | 25.3 | 52.0 | 51.4 | 45.6 | 43.3 |
| PVNet [3] | 15.8 | 63.3 | 16.7 | 65.7 | 25.2 | 50.2 | 49.6 | 39.7 | 40.8 |
| Proposed method | 21.0 | 79.9 | 23.5 | 74.2 | 31.3 | 42.2 | 44.5 | 53.8 | 46.3 |

Tab. 5 Comparison with other methods on Occlusion LINEMOD dataset in terms of ADD(-S) metric (unit: %)
| RANSAC-voting based | Attention-map based | CBAM | 2D projection (LINEMOD) | 2D projection (Occlusion LINEMOD) | ADD(-S) (LINEMOD) | ADD(-S) (Occlusion LINEMOD) |
| --- | --- | --- | --- | --- | --- | --- |
| √ | | | 99.0 | 61.1 | 86.3 | 40.8 |
| | √ | | 99.2 | 60.2 | 90.9 | 44.2 |
| | √ | √ | 99.3 | 61.4 | 91.3 | 46.3 |

Tab. 6 Ablation experiment results (unit: %)
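The last two rows of Tab. 6 isolate the contribution of CBAM [5], which sequentially applies channel attention (avg- and max-pooled descriptors through a shared MLP) and spatial attention. The sketch below is a simplified numpy illustration of that structure, not the paper's implementation: the real spatial branch uses a 7x7 convolution over the concatenated channel-pooled maps, which is reduced here to an elementwise gate, and `w1`/`w2` are stand-ins for the shared-MLP weights.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(x, w1, w2):
    """Simplified CBAM sketch. x: feature map (C, H, W); w1: (C//r, C) and
    w2: (C, C//r) are the shared-MLP weights of the channel branch."""
    # --- Channel attention: squeeze spatial dims by avg and max pooling,
    #     pass both descriptors through the shared MLP, sum, and gate. ---
    avg = x.mean(axis=(1, 2))                                  # (C,)
    mx = x.max(axis=(1, 2))                                    # (C,)
    ca = sigmoid(w2 @ np.maximum(w1 @ avg, 0)
                 + w2 @ np.maximum(w1 @ mx, 0))                # (C,)
    x = x * ca[:, None, None]
    # --- Spatial attention: squeeze channels by avg and max pooling;
    #     the paper's 7x7 conv is simplified to an elementwise gate here. ---
    avg_c = x.mean(axis=0)                                     # (H, W)
    max_c = x.max(axis=0)                                      # (H, W)
    sa = sigmoid(avg_c + max_c)                                # (H, W)
    return x * sa[None, :, :]
```

Both gates lie in (0, 1), so the module reweights features without changing their shape, which is why it can be dropped into the backbone at negligible cost, as the ablation's small but consistent gains suggest.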
1 | QI C R, SU H, MO K C, et al. PointNet: deep learning on point sets for 3D classification and segmentation [C]// Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2017: 77-85. 10.1109/cvpr.2017.16 |
2 | NIBALI A, HE Z, MORGAN S, et al. Numerical coordinate regression with convolutional neural networks [EB/OL]. (2018-05-03) [2021-10-11]. |
3 | PENG S D, LIU Y, HUANG Q X, et al. PVNet: pixel-wise voting network for 6DoF pose estimation[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 4556-4565. 10.1109/cvpr.2019.00469 |
4 | YANG Z X, YU X, YANG Y. DSC-PoseNet: learning 6DoF object pose estimation via dual-scale consistency[C]// Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2021: 3906-3915. 10.1109/cvpr46437.2021.00390 |
5 | WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11211. Cham: Springer, 2018: 3-19. |
6 | XIANG Y, SCHMIDT T, NARAYANAN V, et al. PoseCNN: a convolutional neural network for 6D object pose estimation in cluttered scenes [C/OL]// Proceedings of the 2018 Robotics: Science and Systems [2021-10-11]. 10.15607/rss.2018.xiv.019 |
7 | KEHL W, MANHARDT F, TOMBARI F, et al. SSD-6D: making RGB-based 3D detection and 6D pose estimation great again[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 1530-1538. 10.1109/iccv.2017.169 |
8 | SUNDERMEYER M, MARTON Z C, DURNER M, et al. Implicit 3D orientation learning for 6D object detection from RGB images[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11210. Cham: Springer, 2018: 712-729. |
9 | LIANG D Y, CHEN J H, ZHU Z M, et al. Research on occluded objects 6DoF pose estimation with multi-features and pixel-level fusion[J]. Journal of Frontiers of Computer Science and Technology, 2020, 14(12): 2072-2082 (in Chinese). 10.3778/j.issn.1673-9418.2003041 |
10 | LI Z G, WANG G, JI X Y. CDPN: coordinates-based disentangled pose network for real-time RGB-based 6-DoF object pose estimation[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 7677-7686. 10.1109/iccv.2019.00777 |
11 | ZAKHAROV S, SHUGUROV I, ILIC S. DPOD: 6D pose object detector and refiner[C]// Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 1941-1950. 10.1109/iccv.2019.00203 |
12 | HODAŇ T, BARÁTH D, MATAS J. EPOS: estimating 6D pose of objects with symmetries[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 11700-11709. 10.1109/cvpr42600.2020.01172 |
13 | RAD M, LEPETIT V. BB8: a scalable, accurate, robust to partial occlusion method for predicting the 3D poses of challenging objects without using depth[C]// Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway: IEEE, 2017: 3848-3856. 10.1109/iccv.2017.413 |
14 | OBERWEGER M, RAD M, LEPETIT V. Making deep heatmaps robust to partial occlusions for 3D object pose estimation[C]// Proceedings of the 2018 European Conference on Computer Vision, LNCS 11219. Cham: Springer, 2018: 125-141. |
15 | SONG C, SONG J R, HUANG Q X. HybridPose: 6D object pose estimation under hybrid representations[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 428-437. 10.1109/cvpr42600.2020.00051 |
16 | LI K, HOU Q. Lightweight human pose estimation based on attention mechanism[J]. Journal of Computer Applications, 2022, 42(8): 2407-2414 (in Chinese). |
17 | STEVŠIČ S, HILLIGES O. Spatial attention improves iterative 6D object pose estimation[C]// Proceedings of the 2020 International Conference on 3D Vision. Piscataway: IEEE, 2020: 1070-1078. 10.1109/3dv50981.2020.00117 |
18 | LEPETIT V, MORENO-NOGUER F, FUA P. EPnP: an accurate O(n) solution to the PnP problem[J]. International Journal of Computer Vision, 2009, 81(2): 155-166. 10.1007/s11263-008-0152-6 |
19 | HINTERSTOISSER S, LEPETIT V, ILIC S, et al. Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes[C]// Proceedings of the 2012 Asian Conference on Computer Vision, LNCS 7724. Berlin: Springer, 2013: 548-562. |
20 | BRACHMANN E, KRULL A, MICHEL F, et al. Learning 6D object pose estimation using 3D object coordinates[C]// Proceedings of the 2014 European Conference on Computer Vision, LNCS 8690. Cham: Springer, 2014: 536-551. |
21 | BRACHMANN E, MICHEL F, KRULL A, et al. Uncertainty-driven 6D pose estimation of objects and scenes from a single RGB image[C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 3364-3372. 10.1109/cvpr.2016.366 |
22 | XIAO J X, HAYS J, EHINGER K A, et al. SUN database: Large-scale scene recognition from abbey to zoo[C]// Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2010: 3485-3492. 10.1109/cvpr.2010.5539970 |
23 | TEKIN B, SINHA S N, FUA P. Real-time seamless single shot 6D object pose prediction[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 292-301. 10.1109/cvpr.2018.00038 |
24 | HU Y L, HUGONOT J, FUA P, et al. Segmentation-driven 6D object pose estimation[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 3380-3389. 10.1109/cvpr.2019.00350 |
25 | HU Y L, FUA P, WANG W, et al. Single-stage 6D object pose estimation[C]// Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2020: 2927-2936. 10.1109/cvpr42600.2020.00300 |