Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (8): 2556-2563.DOI: 10.11772/j.issn.1001-9081.2021071178

• Multimedia computing and computer simulation •

Decoupled visual servoing control method based on point and line features

Jinyan LU1, Xiaoke QI2

  1. School of Electrical and Information Engineering, Henan University of Engineering, Zhengzhou, Henan 451191, China
  2. School of Information Management for Law, China University of Political Science and Law, Beijing 102249, China
  • Received: 2021-07-07; Revised: 2021-10-11; Accepted: 2021-10-13; Online: 2021-11-09; Published: 2022-08-10
  • Contact: Xiaoke QI
  • About the authors: LU Jinyan, born in 1985, Ph.D., lecturer. Her research interests include robot control, intelligent control, signal processing, and machine learning.
    QI Xiaoke, born in 1985, Ph.D., associate professor. Her research interests include multimedia, natural language processing, machine learning, and wireless communication.
  • Supported by:
    National Natural Science Foundation of China(62173126);Key Research and Development and Promotion Project of Henan Province (Key Problem of Science and Technology)(202102210187);Doctoral Foundation of Henan University of Engineering(Dkj2018003)



Aiming at the problem of automatic robot alignment, a decoupled visual servoing control method based on point and line features was proposed. In this method, points and lines were used as image features, and the interaction matrices of these features were used to decouple attitude control from position control, thereby realizing six-degree-of-freedom alignment. Firstly, the attitude control law was designed from the line features and their interaction matrix to eliminate the rotational deviation. Then, the position control law was designed from the point features and their interaction matrix to eliminate the positional deviation. Finally, automatic alignment between the robot end-effector and the target was achieved. During the alignment process, the feature depth was estimated online from the executed camera motion and the change of the features before and after that motion. In addition, a supervisor was designed to regulate the camera's motion speed, ensuring that the features always remained within the camera's field of view. Six-degree-of-freedom alignment of a robot on an Eye-in-Hand platform was performed with both the proposed method and the traditional image-based visual servoing method. The proposed method achieved automatic alignment in 16 steps, with a maximum translation error of 3.26 mm and a maximum rotation error of 0.72° at the robot end-effector after alignment. Compared with the baseline method, the proposed method yields a more efficient control process, faster convergence of the control error, and a smaller alignment error. Experimental results show that the proposed method achieves fast and high-precision automatic alignment, improves the autonomy and intelligence of robot operation, and is expected to be applied in fields such as target tracking, picking and positioning, automatic assembly, welding, and service robotics.
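The decoupled control laws described above follow the classical visual servoing form v = -λ L⁺ e, applied separately to the line-feature error (rotation) and the point-feature error (translation). A minimal numerical sketch is given below; the function names, the constant proportional gain, and the diagonal example matrices are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def ibvs_velocity(L, e, lam=0.5):
    """Classical visual servoing law: velocity = -lam * pinv(L) @ e,
    where L is the interaction matrix and e the image-feature error."""
    return -lam * np.linalg.pinv(L) @ e

def decoupled_step(L_line, e_line, L_point, e_point, lam=0.5):
    """One decoupled control step (illustrative):
    angular velocity from the line features, linear velocity from
    the point features, stacked into a 6-DOF camera twist [v; omega]."""
    omega = ibvs_velocity(L_line, e_line, lam)   # 3x1 rotation command
    v = ibvs_velocity(L_point, e_point, lam)     # 3x1 translation command
    return np.concatenate([v, omega])
```

Because rotation and translation are commanded from separate interaction matrices, each sub-error converges without disturbing the other, which is what allows the attitude deviation to be removed first and the position deviation afterwards.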

Key words: visual servoing, visual control, interaction matrix, decoupled control, six-degree-of-freedom alignment

