Journal of Computer Applications ›› 2018, Vol. 38 ›› Issue (4): 1121-1126. DOI: 10.11772/j.issn.1001-9081.2017102394


Improved D-Nets algorithm with matching quality purification

YE Feng, HONG Zheng, LAI Yizong, ZHAO Yuting, XIE Xianzhi   

  1. School of Mechanical & Automotive Engineering, South China University of Technology, Guangzhou Guangdong 510640, China
  • Received: 2017-10-10  Revised: 2017-11-13  Online: 2018-04-10  Published: 2018-04-09
  • Supported by:
    This work was partially supported by the Special Fund for Public Welfare Research and Capacity Building of Guangdong Province (2016A010106005) and the Industry-University-Research Collaboration Project of the Ministry of Education and Guangdong Province (2012A090300013).

  • Corresponding author: YE Feng
  • About the authors: YE Feng, born in 1972 in Gaozhou, Guangdong, Ph. D., associate professor. His research interests include machine vision and intelligent control. HONG Zheng, born in 1994 in Zhanjiang, Guangdong, M. S. candidate. His research interests include image registration and machine vision. LAI Yizong, born in 1972 in Beiliu, Guangxi, M. S., lecturer. His research interests include machine vision. ZHAO Yuting, born in 1995 in Baoding, Hebei, M. S. candidate. His research interests include machine vision. XIE Xianzhi, born in 1992 in Hengyang, Hunan, M. S. candidate. His research interests include machine vision.

Abstract: To address the poor adaptability of feature-based image registration under large affine deformation and in the presence of similar targets, and to reduce time cost, an improved Descriptor-Nets (D-Nets) algorithm based on matching quality purification was proposed. First, feature points were detected by the Features from Accelerated Segment Test (FAST) algorithm and then filtered by combining the Harris corner response function with grid partitioning. Next, on the basis of the computed line descriptors, a hash table was built and voting was performed to obtain coarse matching pairs. Finally, mismatches were eliminated by a purification method based on matching quality. Experiments were carried out on the Mikolajczyk standard image dataset from Oxford University. The results show that the proposed improved D-Nets algorithm achieves an average registration accuracy of 92.2% and an average time cost of 2.48 s under large variations in scale, viewpoint, and illumination. Compared with the Scale-Invariant Feature Transform (SIFT), Affine-SIFT (ASIFT), and original D-Nets algorithms, the improved algorithm attains registration accuracy comparable to that of the original D-Nets while running up to 80 times faster, and it exhibits the best robustness, significantly outperforming SIFT and ASIFT, which makes it well suited for image registration applications.
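
As a rough illustration of the feature-point selection step described in the abstract (FAST detection followed by filtering with the Harris corner response and grid partitioning), the following Python/OpenCV sketch shows one possible implementation. It is not the authors' code; the grid size, per-cell quota, and input file name are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of feature-point selection:
# FAST detection, then keeping only the strongest Harris responses per grid cell
# so that points are spread over the image. grid_size, points_per_cell and the
# file name "graf1.png" are placeholder assumptions.
import cv2
import numpy as np

def select_feature_points(gray, grid_size=8, points_per_cell=10):
    """Detect FAST keypoints, score them with the Harris response,
    and keep the strongest few in each grid cell."""
    fast = cv2.FastFeatureDetector_create()
    keypoints = fast.detect(gray, None)

    # Harris corner response map used to score the FAST keypoints.
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

    h, w = gray.shape
    cell_h, cell_w = h / grid_size, w / grid_size
    cells = {}
    for kp in keypoints:
        x, y = int(kp.pt[0]), int(kp.pt[1])
        cell = (int(y // cell_h), int(x // cell_w))
        cells.setdefault(cell, []).append((harris[y, x], kp))

    # Keep only the highest-response keypoints in each cell.
    selected = []
    for scored in cells.values():
        scored.sort(key=lambda s: s[0], reverse=True)
        selected.extend(kp for _, kp in scored[:points_per_cell])
    return selected

if __name__ == "__main__":
    img = cv2.imread("graf1.png", cv2.IMREAD_GRAYSCALE)  # placeholder test image
    pts = select_feature_points(img)
    print(len(pts), "feature points kept")
```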

Key words: image registration, feature matching, matching purification, feature point, feature descriptor

