Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (1): 260-269. DOI: 10.11772/j.issn.1001-9081.2025010071
• Multimedia computing and computer simulation •
Yi XIONG1, Caiqi WANG1, Ling MEI2, Shiqian WU2
Received: 2025-01-20
Revised: 2025-04-06
Accepted: 2025-04-08
Online: 2026-01-10
Published: 2026-01-10
Contact: Shiqian WU
About author: XIONG Yi, born in 2000 in Jingzhou, Hubei, M. S. candidate. His research interests include intelligent robots.
Yi XIONG, Caiqi WANG, Ling MEI, Shiqian WU. Global feature pose estimation method based on keypoint distance[J]. Journal of Computer Applications, 2026, 46(1): 260-269.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2025010071
| Dataset | Point cloud density/quality | Challenges | Models | Scenes |
|---|---|---|---|---|
| B3R | High | Gaussian noise | 6 | 45 |
| Random Views | High | Occlusion, noise | 6 | 108 |
| UWA | Medium | Occlusion, clutter | 4 | 50 |
| Kinect | Low | Occlusion, clutter | 6 | 16 |
Tab. 1 Characteristics of standard public datasets
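Table 1 lists Gaussian noise as the principal challenge of the B3R benchmark. The exact noise protocol is not restated on this page; such corruption is commonly simulated by adding zero-mean Gaussian perturbations whose standard deviation is expressed as a fraction of the model's mesh resolution (average point spacing). The following minimal sketch, with hypothetical parameter values, illustrates that common convention rather than the paper's own procedure.

```python
import numpy as np

def add_gaussian_noise(points: np.ndarray, sigma: float, seed: int = 0) -> np.ndarray:
    """Return an (N, 3) point cloud perturbed by zero-mean Gaussian noise of std sigma."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)

# Example: noise with standard deviation equal to 0.5x an assumed mesh resolution.
cloud = np.random.default_rng(1).random((1000, 3))   # placeholder point cloud
mesh_resolution = 0.01                               # hypothetical average point spacing
noisy = add_gaussian_noise(cloud, 0.5 * mesh_resolution)
```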
| Method | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 | Scenario 7 | Scenario 8 | Scenario 9 | Scenario 10 | Scenario 11 | Scenario 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FGR | 0.002 7 | 0.265 2 | 0.056 1 | 0.118 6 | 0.874 4 | 0.049 8 | Failed | 0.004 1 | 0.018 7 | Failed | Failed | Failed |
| Teaser++ | 0.001 6 | 0.007 3 | 0.007 5 | 0.669 6 | 0.169 7 | 0.026 9 | Failed | 0.010 5 | 0.020 5 | Failed | Failed | Failed |
| SCVC | 0.001 9 | 0.005 3 | 0.019 5 | 0.023 6 | 0.035 2 | Failed | 0.139 3 | 0.022 5 | Failed | 0.087 1 | 0.067 3 | |
| GROR | 0.000 8 | 0.007 5 | 0.002 3 | 0.061 3 | 0.045 8 | 0.011 9 | Failed | 0.009 6 | 0.177 6 | 0.051 4 | 0.128 8 | |
| SCVCG | 0.003 8 | 0.005 1 | 0.021 7 | | | | | | | | | |
| Proposed method | 0.000 0 | 0.000 0 | 0.001 2 | 0.003 1 | 0.011 4 | 0.005 2 | 0.077 2 | 0.009 8 | 0.148 4 | 0.042 4 | 0.038 2 | |
Tab. 2 Rotation errors of proposed method and compared methods in 12 scenarios
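Tables 2 and 3 report per-scenario rotation and translation errors, but the error definitions themselves are not restated on this page. A widely used convention measures rotation error as the geodesic angle between the estimated and ground-truth rotation matrices; the sketch below illustrates that convention (an assumption here, not a statement of the paper's exact metric or unit).

```python
import numpy as np

def rotation_error(R_est: np.ndarray, R_gt: np.ndarray) -> float:
    """Geodesic angle between two 3x3 rotation matrices, in radians.

    theta = arccos((trace(R_gt^T R_est) - 1) / 2); the argument is clipped to
    [-1, 1] so floating-point drift cannot leave arccos's valid domain.
    """
    cos_theta = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```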
| Method | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 | Scenario 7 | Scenario 8 | Scenario 9 | Scenario 10 | Scenario 11 | Scenario 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FGR | 0.009 7 | 0.261 5 | 0.736 2 | 1.028 6 | 8.924 8 | 0.199 7 | Failed | 1.083 2 | Failed | Failed | Failed | |
| Teaser++ | 0.014 1 | 0.262 2 | 0.073 6 | 2.592 5 | 1.823 0 | 0.049 6 | Failed | 0.543 3 | 0.923 1 | Failed | Failed | Failed |
| SCVC | 0.017 7 | 0.020 7 | 0.100 7 | 0.144 5 | 0.418 6 | 0.048 8 | Failed | 8.600 2 | 1.169 8 | Failed | 7.998 4 | 6.333 7 |
| GROR | 0.012 8 | 0.022 2 | 0.747 4 | 0.331 6 | Failed | 0.515 4 | 9.110 6 | 4.896 4 | 11.648 7 | | | |
| SCVCG | 0.002 2 | 0.039 6 | 0.212 8 | 0.250 0 | | | | | | | | |
| Proposed method | 0.005 4 | 0.007 9 | 0.011 7 | 0.010 3 | 0.099 7 | 0.008 3 | 4.575 9 | 0.442 9 | 0.006 1 | 6.050 5 | 2.102 2 | 4.249 9 |
Tab. 3 Translation errors of proposed method and compared methods in 12 scenarios
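Translation error is likewise commonly reported as the Euclidean distance between the estimated and ground-truth translation vectors, expressed in the dataset's native length unit. A minimal sketch under that assumption:

```python
import numpy as np

def translation_error(t_est: np.ndarray, t_gt: np.ndarray) -> float:
    """Euclidean distance between estimated and ground-truth translation vectors.

    The value inherits the dataset's coordinate unit, so errors are only
    directly comparable within a single benchmark.
    """
    diff = np.asarray(t_est, dtype=float) - np.asarray(t_gt, dtype=float)
    return float(np.linalg.norm(diff))
```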
| Method | Scenario 1 | Scenario 2 | Scenario 3 | Scenario 4 | Scenario 5 | Scenario 6 | Scenario 7 | Scenario 8 | Scenario 9 | Scenario 10 | Scenario 11 | Scenario 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FGR | 0.513 2 | 0.249 6 | 0.324 8 | 0.235 5 | 0.279 2 | 0.213 8 | 0.227 3 | | | | | |
| Teaser++ | 0.312 4 | 0.198 4 | 0.268 3 | 0.263 4 | 0.247 7 | 3.293 2 | 2.934 8 | 3.034 8 | | | | |
| SCVC | 0.782 1 | 0.317 2 | 0.369 2 | 0.475 1 | 0.471 3 | 0.439 2 | 0.544 5 | 0.521 3 | 0.542 9 | 5.462 0 | 5.381 1 | 5.631 7 |
| GROR | 0.335 8 | 0.363 8 | 0.352 8 | 0.322 3 | 0.393 6 | 0.357 6 | 4.187 4 | 3.578 6 | 3.834 3 | | | |
| SCVCG | 0.829 1 | 0.364 8 | 0.408 8 | 0.502 9 | 0.532 8 | 0.500 1 | 0.648 2 | 0.612 8 | 0.589 0 | 5.723 9 | 5.617 3 | 5.837 8 |
| Proposed method | 0.517 4 | 0.273 7 | 0.318 9 | 0.375 4 | 0.405 4 | 0.383 7 | 0.463 7 | 0.464 8 | 0.426 7 | 4.911 0 | 4.012 3 | 4.226 7 |
Tab. 4 Running time of proposed method and compared methods in 12 scenarios
| [1] | GUO J, XING X, QUAN W, et al. Efficient center voting for object detection and 6D pose estimation in 3D point cloud [J]. IEEE Transactions on Image Processing, 2021, 30: 5072-5084. |
| [2] | MEI L, LAI J, FENG Z, et al. From pedestrian to group retrieval via Siamese network and correlation [J]. Neurocomputing, 2020, 412: 447-460. |
| [3] | MEI L, FU M, WANG B, et al. LSN-GTDA: learning symmetrical network via global thermal diffusion analysis for pedestrian trajectory prediction in unmanned aerial vehicle scenarios [J]. Remote Sensing, 2025, 17(1): No.154. |
| [4] | BING Y X, WANG Y P, YONG J, et al. Six degrees of freedom object pose estimation algorithm based on filter learning network [J]. Journal of Computer Applications, 2024, 44(6): 1920-1926. (in Chinese) |
| [5] | WANG Y, XIE J, CHENG J, et al. Review of object pose estimation in RGB images based on deep learning [J]. Journal of Computer Applications, 2023, 43(8): 2546-2555. (in Chinese) |
| [6] | HINTERSTOISSER S, LEPETIT V, ILIC S, et al. Model based training, detection and pose estimation of texture-less 3D objects in heavily cluttered scenes[M]. Berlin: Springer, 2012. |
| [7] | WEN B, YANG W, KAUTZ J, et al. FoundationPose: unified 6D pose estimation and tracking of novel objects [C]// Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2024: 17868-17879. |
| [8] | JIANG H, SALZMANN M, DANG Z, et al. SE (3) diffusion model-based point cloud registration for robust 6D object pose estimation [C]// Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2023: 21285-21297. |
| [9] | CHEN Y, LI Z, LI Q, et al. Pose estimation algorithm based on point pair features using PointNet++ [J]. Complex and Intelligent Systems, 2024, 10(5): 6581-6595. |
| [10] | LIU Q, DING K, ZHANG C, et al. A point cloud matching algorithm based on multiscale point pair features [C]// Proceedings of the 2023 IEEE International Conference on Real-time Computing and Robotics. Piscataway: IEEE, 2023: 953-958. |
| [11] | YU S, ZHAI D H, ZHAN Y, et al. 6-D object pose estimation based on point pair matching for robotic grasp detection [J]. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(7): 11902-11916. |
| [12] | DROST B, ULRICH M, NAVAB N, et al. Model globally, match locally: efficient and robust 3D object recognition [C]// Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2010: 998-1005. |
| [13] | JOHNSON A E, HEBERT M. Using spin images for efficient object recognition in cluttered 3D scenes [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(5): 433-449. |
| [14] | TOMBARI F, SALTI S, DI STEFANO L. Unique shape context for 3D data description [C]// Proceedings of the 2010 ACM Workshop on 3D Object Retrieval. New York: ACM, 2010: 57-62. |
| [15] | TOMBARI F, SALTI S, DI STEFANO L. Unique signatures of histograms for local surface description [C]// Proceedings of the 2010 European Conference on Computer Vision, LNCS 6313. Berlin: Springer, 2010: 356-369. |
| [16] | RUSU R B, BLODOW N, BEETZ M. Fast Point Feature Histograms (FPFH) for 3D registration [C]// Proceedings of the 2009 IEEE International Conference on Robotics and Automation. Piscataway: IEEE, 2009: 3212-3217. |
| [17] | RUSU R B, BRADSKI G, THIBAUX R, et al. Fast 3D recognition and pose using the viewpoint feature histogram [C]// Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2010: 2155-2162. |
| [18] | ALDOMA A, VINCZE M, BLODOW N, et al. CAD-model recognition and 6DOF pose estimation using 3D cues [C]// Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops. Piscataway: IEEE, 2011: 585-592. |
| [19] | HAO W, WANG Y, WEI H N. Semantic segmentation of point cloud scenes based on multi-feature fusion [J]. Journal of Computer Applications, 2023, 43(10): 3202-3208. (in Chinese) |
| [20] | ZHU X C, HE K J, NI N, et al. Rapid calculation method of orthopedic plate fit based on improved iterative closest point algorithm [J]. Journal of Computer Applications, 2021, 41(10): 3033-3039. (in Chinese) |
| [21] | YANG H, SHI J, CARLONE L. TEASER: fast and certifiable point cloud registration [J]. IEEE Transactions on Robotics, 2021, 37(2): 314-333. |
| [22] | YAN L, WEI P, XIE H, et al. A new outlier removal strategy based on reliability of correspondence graph for fast point cloud registration [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(7): 7986-8002. |
| [23] | CURLESS B, LEVOY M. A volumetric method for building complex models from range images [C]// Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques. New York: ACM, 1996: 303-312. |
| [24] | MIAN A S, BENNAMOUN M, OWENS R. Three-dimensional model-based object recognition and segmentation in cluttered scenes [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(10): 1584-1601. |
| [25] | TEJANI A, TANG D, KOUSKOURIDAS R, et al. Latent-Class Hough Forests for 3D object detection and pose estimation [C]// Proceedings of the 2014 European Conference on Computer Vision, LNCS 8694. Cham: Springer, 2014: 462-477. |
| [26] | GUO Y, BENNAMOUN M, SOHEL F, et al. A comprehensive performance evaluation of 3D local feature descriptors [J]. International Journal of Computer Vision, 2016, 116(1): 66-89. |
| [27] | PRAKHYA S M, LIU B, LIN W. B-SHOT: a binary feature descriptor for fast and efficient keypoint matching on 3D point clouds [C]// Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE, 2015: 1929-1934. |
| [28] | ZHOU Q Y, PARK J, KOLTUN V. Fast global registration [C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9906. Cham: Springer, 2016: 766-782. |
| [29] | XING X, LU Z, WANG Y, et al. Efficient single correspondence voting for point cloud registration [J]. IEEE Transactions on Image Processing, 2024, 33: 2116-2130. |
| [30] | MEI L, HE Y, FISHANI F J, et al. Learning domain-adaptive landmark detection-based self-supervised video synchronization for remote sensing panorama [J]. Remote Sensing, 2023, 15(4): No.953. |
| [31] | PAN C, FANG H, ZHANG H, et al. Visual attention-guided weighted Naïve Bayes for behavior intention inference [C]// Proceedings of the 2nd International Conference on Artificial Intelligence, Human-Computer Interaction and Robotics. Piscataway: IEEE, 2023: 569-574. |
| [32] | MEI L, LAI J, XIE X, et al. Illumination-invariance optical flow estimation using weighted regularization transform [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 30(2): 495-508. |