Journal of Computer Applications ›› 2021, Vol. 41 ›› Issue (10): 2991-2996.DOI: 10.11772/j.issn.1001-9081.2020121908

Special Issue: Multimedia Computing and Computer Simulation


Pedestrian re-identification method based on multi-scale feature fusion

HAN Jiandong, LI Xiaoyu   

  1. School of Computer and Information Technology, Shanxi University, Taiyuan Shanxi 030006, China
  • Received:2020-12-08 Revised:2021-04-08 Online:2021-10-10 Published:2021-07-14
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (62072291), the Research Project of Postgraduate Education Reform in Shanxi Province (2020YJJG030).

  • Corresponding author: HAN Jiandong
  • About the authors: HAN Jiandong, born in 1980 in Wenshui, Shanxi, Ph. D., lecturer. His research interests include machine learning and image processing. LI Xiaoyu, born in 1996 in Linfen, Shanxi, M. S. candidate. Her research interests include computer vision and image processing.

Abstract: Feature extraction in pedestrian re-identification tasks often neglects the scale variation of pedestrian features, so the extracted features are easily affected by the environment and the re-identification accuracy is low. To solve this problem, a pedestrian re-identification method based on multi-scale feature fusion was proposed. Firstly, in the shallow layers of the network, multi-scale pedestrian features were extracted through a mixed pooling operation, which improved the feature extraction capability of the network. Then, a strip pooling operation was added to the residual blocks to extract long-range context information in the horizontal and vertical directions respectively, which avoided interference from irrelevant regions. Finally, after the residual network, dilated convolutions with different dilation rates were used to further preserve multi-scale features, helping the model parse the scene structure flexibly and effectively. Experimental results show that the proposed method achieves a Rank-1 accuracy of 95.9% and a mean Average Precision (mAP) of 88.5% on the Market-1501 dataset, and a Rank-1 accuracy of 90.1% and an mAP of 80.3% on the DukeMTMC-reID dataset. These results indicate that the proposed method retains pedestrian feature information better, thereby improving the accuracy of pedestrian re-identification.
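The three components described in the abstract can be sketched as follows. This is a minimal PyTorch sketch under assumptions of our own (module names, channel counts, kernel sizes, and the learnable mixing weight are illustrative), not the authors' implementation:

```python
import torch
import torch.nn as nn


class MixedPooling(nn.Module):
    """Blend max and average pooling with a learnable weight (hypothetical form)."""
    def __init__(self, alpha: float = 0.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))

    def forward(self, x):
        a = torch.sigmoid(self.alpha)  # keep the mixing weight in [0, 1]
        return (a * nn.functional.max_pool2d(x, 2)
                + (1 - a) * nn.functional.avg_pool2d(x, 2))


class StripPooling(nn.Module):
    """Gate features with long-range context pooled along horizontal and vertical strips."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # (N, C, H, 1): vertical strip
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # (N, C, 1, W): horizontal strip
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0), bias=False)
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1), bias=False)
        self.fuse = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x):
        n, c, h, w = x.shape
        sh = self.conv_h(self.pool_h(x)).expand(n, c, h, w)  # broadcast strip back
        sw = self.conv_w(self.pool_w(x)).expand(n, c, h, w)
        gate = torch.sigmoid(self.fuse(torch.relu(sh + sw)))
        return x * gate  # reweight features by long-range context


class MultiScaleDilated(nn.Module):
    """Parallel dilated convolutions that preserve multi-scale context after the backbone."""
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False)
            for r in rates
        )

    def forward(self, x):
        return x + sum(b(x) for b in self.branches)  # residual fusion of all scales
```

All three modules preserve the channel dimension, so in this sketch they could be slotted into a ResNet-style backbone: `MixedPooling` in the shallow stage, `StripPooling` inside residual blocks, and `MultiScaleDilated` after the last stage, as the abstract describes.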

Key words: pedestrian re-identification, multi-scale feature, long-range context information, dilated convolution, feature fusion

