Journal of Computer Applications ›› 2020, Vol. 40 ›› Issue (3): 854-858. DOI: 10.11772/j.issn.1001-9081.2019071262

• Virtual reality and multimedia computing •

Vehicle information detection based on improved RetinaNet

LIU Ge, ZHENG Yelong, ZHAO Meirong   

  1. State Key Laboratory of Precision Testing Technology and Instruments, Tianjin University, Tianjin 300072, China
  • Received: 2019-07-19  Revised: 2019-09-27  Online: 2020-03-10  Published: 2019-10-25
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (51805367) and the Tianjin Natural Science Foundation (18JCQNJC04800, 18JCZDJC31800).

  • Corresponding author: ZHENG Yelong
  • About the authors: LIU Ge, born in 1994 in Xiangyang, Hubei, M. S. candidate; his research interests include computer vision. ZHENG Yelong, born in 1987 in Wenzhou, Zhejiang, lecturer, Ph. D.; his research interests include machine vision and mechanical measurement. ZHAO Meirong, born in 1967 in Tianjin, professor, Ph. D.; her research interests include machine vision and mechanical measurement.

Abstract: The limited computational power and storage of mobile terminals lead to low accuracy and slow speed in vehicle information detection models. To address this problem, an improved vehicle information detection algorithm based on RetinaNet was proposed. Firstly, a new vehicle information detection framework was developed in which the deep feature information of the Feature Pyramid Network (FPN) module was merged into the shallow feature layers, with MobileNet V3 used as the basic feature extraction network. Secondly, GIoU (Generalized Intersection over Union), a direct evaluation metric of the object detection task, was introduced to guide the localization task. Finally, a dimension clustering algorithm was used to find better anchor sizes and match them to the corresponding feature layers. Compared with the original RetinaNet detection algorithm, the proposed algorithm improves accuracy by 10.2 percentage points on the vehicle information detection dataset. With MobileNet V3 as the basic network, the mean Average Precision (mAP) reaches 97.2% and the single-frame forward inference time reaches 100 ms on ARM v7 devices. The experimental results show that the proposed method can effectively improve the performance of vehicle information detection algorithms on mobile terminals.
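To make the two generic techniques named in the abstract concrete, the following is a minimal, non-authoritative Python sketch: the function names, the choice of k-means with an IoU-based distance, and all defaults are assumptions of this illustration, not details taken from the paper. It computes the GIoU metric (which can guide localization via a loss of the form 1 - GIoU) and clusters ground-truth box dimensions to obtain candidate anchor sizes in the spirit of dimension clustering.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np

def giou(box_a, box_b):
    """Generalized IoU between two boxes given as (x1, y1, x2, y2)."""
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union if union > 0 else 0.0
    # Smallest enclosing box C
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)
    # GIoU = IoU - (area of C not covered by the union) / area of C
    return iou - (area_c - union) / area_c if area_c > 0 else iou

def cluster_anchors(wh, k=9, iters=100):
    """k-means over ground-truth (width, height) pairs using an IoU-based
    assignment, in the spirit of dimension clustering for anchor design."""
    wh = np.asarray(wh, dtype=np.float64)
    centers = wh[np.random.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        # IoU between boxes and centers, assuming a shared top-left corner
        inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
                np.minimum(wh[:, None, 1], centers[None, :, 1])
        union = wh[:, 0:1] * wh[:, 1:2] + \
                centers[None, :, 0] * centers[None, :, 1] - inter
        assign = np.argmax(inter / union, axis=1)
        new_centers = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Sort by area so small anchors can be matched to shallow feature layers
    return centers[np.argsort(centers.prod(axis=1))]
```

Sorting the resulting cluster centres by area reflects the usual convention of assigning small anchors to shallow, high-resolution pyramid levels and large anchors to deep levels; how the paper itself matches anchors to feature layers is not specified here.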

Key words: Convolutional Neural Network (CNN), object detection, dimension clustering, feature fusion, Generalized Intersection over Union (GIoU)


CLC Number: