Mobile robot motion estimation based on classified feature points
YIN Jun¹, DONG Lida¹,², CHI Tianyang²
1. Department of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, Zhejiang 310027, China;
2. Institute of Service Engineering, Hangzhou Normal University, Hangzhou, Zhejiang 311121, China
To address the poor real-time performance of traditional motion estimation algorithms in visual navigation systems, a new motion estimation approach for mobile robots based on classified feature points was proposed. The distances between the feature points and the mobile robot were calculated from the feature points' 3-dimensional coordinates, and the feature points were divided into far points and near points accordingly. The far points are sensitive to the robot's rotational movement, so they were used to calculate the rotation matrix; the near points are sensitive to translational motion, so they were used to calculate the translation matrix. When the far points and the near points each made up 30% of the original feature points, the proposed approach achieved accuracy equivalent to RANdom SAmple Consensus (RANSAC) while reducing computing time by 60%. The results demonstrate that, by using classified feature points, the proposed algorithm can effectively reduce computing time while maintaining the accuracy of motion estimation, and that it can meet the real-time requirement even when the number of feature points is large.
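The abstract describes the decoupled scheme only at a high level. The Python sketch below is one plausible reading of it, not the paper's actual implementation: it assumes matched 3D point pairs between two consecutive frames (e.g., from stereo triangulation), classifies them by range using the 30% ratios quoted above, and uses a standard Kabsch/SVD alignment for the rotation step, which the paper does not specify. All function names and parameters are illustrative.

import numpy as np

def classify_indices(points_3d, far_ratio=0.3, near_ratio=0.3):
    # Rank matched feature points by range from the robot origin and return
    # the indices of the farthest and nearest fractions. The 30% ratios match
    # the operating point reported above; the sort-by-range rule is assumed.
    dists = np.linalg.norm(points_3d, axis=1)
    order = np.argsort(dists)
    n = len(points_3d)
    return order[-int(n * far_ratio):], order[:int(n * near_ratio)]

def estimate_rotation(far_prev, far_curr):
    # Far points are nearly unaffected by translation, so aligning their
    # bearing (unit) vectors between frames isolates the rotation.
    # Kabsch/SVD alignment is an assumed choice, not the paper's method.
    p = far_prev / np.linalg.norm(far_prev, axis=1, keepdims=True)
    q = far_curr / np.linalg.norm(far_curr, axis=1, keepdims=True)
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # reflection guard
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # R such that q ~= R p

def estimate_translation(near_prev, near_curr, rotation):
    # Near points respond most strongly to translation. Under the rigid model
    # x_curr = R x_prev + t, averaging the residual after removing R gives t.
    return np.mean(near_curr - near_prev @ rotation.T, axis=0)

# Usage with matched (N, 3) point sets from two consecutive frames:
#   far_idx, near_idx = classify_indices(prev_pts)
#   R = estimate_rotation(prev_pts[far_idx], curr_pts[far_idx])
#   t = estimate_translation(prev_pts[near_idx], curr_pts[near_idx], R)

Decoupling the two estimates in this way is consistent with the reported speedup: each solve runs on roughly 30% of the matched points, rather than iterating RANSAC hypotheses over the full set.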
[1] GUAN L, DONG L, YIN J. Inertial navigation scheme for industrial mobile robot with local precise positioning [J]. Journal of Computer Applications, 2014, 34(4): 1205-1208. (管林波, 董利达, 尹俊. 局域精确定位的工业移动机器人惯性导航方案 [J]. 计算机应用, 2014, 34(4): 1205-1208.)
[2] CHEN M, XIANG Z, LIU J. Assistance localization method for mobile robot based on monocular natural visual landmarks [J]. Journal of Zhejiang University: Engineering Science, 2014, 48(2): 285-291. (陈明芽, 项志宇, 刘济林. 单目视觉自然路标辅助的移动机器人定位方法 [J]. 浙江大学学报: 工学版, 2014, 48(2): 285-291.)
[3] MAHMOOD A, BAIG A, AHSAN Q. Real time localization of mobile robotic platform via fusion of inertial and visual navigation system [C]//ICRAI 2012: Proceedings of the 2012 International Conference on Robotics and Artificial Intelligence, 2012, 32(4): 40-41.
[4] ZHU A, YANG S X. Neurofuzzy-based approach to mobile robot navigation in unknown environments [J]. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 2007, 37(4): 610-621.
[5] PANAHANDEH G, JANSSON M. Vision-aided inertial navigation based on ground plane feature detection [J]. IEEE/ASME Transactions on Mechatronics, 2014, 19(4): 1206-1215.
[6] KIM A, EUSTICE R M. Real-time visual SLAM for autonomous underwater hull inspection using visual saliency [J]. IEEE Transactions on Robotics, 2013, 29(3): 719-733.
[7] LATEGAHN H, STILLER C. Vision-only localization [J]. IEEE Transactions on Intelligent Transportation Systems, 2014, 15(3): 1246-1257.
[8] IVANCSITS C, LEE M-F R. Visual navigation system for small unmanned aerial vehicles [J]. Sensor Review, 2013, 33(3): 267-291.
[9] GUAN X, BAI H. A GPU accelerated real-time self-contained visual navigation system for UAVs [C]//Proceedings of the 2012 International Conference on Information and Automation. Piscataway: IEEE, 2012: 578-581.
[10] BAY H, TUYTELAARS T, van GOOL L. SURF: Speeded Up Robust Features [C]//ECCV 2006: Proceedings of the 9th European Conference on Computer Vision, LNCS 3951. Berlin: Springer-Verlag, 2006: 404-417.
[11] JIANG J, LING S. Parallel voting RANSAC and its implementation on FPGA [J]. Journal of Electronics and Information Technology, 2014, 36(5): 1145-1450. (江洁, 凌思睿. 一种投票式并行RANSAC算法及其FPGA实现 [J]. 电子与信息学报, 2014, 36(5): 1145-1450.)
[12] FISCHLER M A, BOLLES R C. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography [J]. Communications of the ACM, 1981, 24(6): 381-395.
[13] LOWE D G. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.
[14] LU D, ZHOU W, GONG X, et al. Decoupled mobile robot motion estimation based on fusion of visual and inertial measurement unit [J]. Journal of Zhejiang University: Engineering Science, 2012, 46(6): 1021-1026. (路丹晖, 周文晖, 龚小谨, 等. 视觉和IMU融合的移动机器人运动解耦估计 [J]. 浙江大学学报: 工学版, 2012, 46(6): 1021-1026.)
[15] GEIGER A, ZIEGLER J, STILLER C. StereoScan: dense 3D reconstruction in real-time [C]//Proceedings of the 2011 IEEE Intelligent Vehicles Symposium. Piscataway: IEEE, 2011: 963-968.