Journal of Computer Applications ›› 2019, Vol. 39 ›› Issue (7): 2081-2086.DOI: 10.11772/j.issn.1001-9081.2019010156

• Virtual reality and multimedia computing •

Two-stream CNN for action recognition based on video segmentation

WANG Ping, PANG Wenhao   

  1. School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an Shaanxi 710049, China
  • Received:2019-01-22 Revised:2019-04-03 Online:2019-04-15 Published:2019-07-10
  • Supported by:

    This work is partially supported by the National Natural Science Foundation of China (61671365).

  • Corresponding author: WANG Ping
  • About the authors: WANG Ping, born in 1976 in Xi'an, Shaanxi, Ph. D., associate professor. Her research interests include video coding and video analysis. PANG Wenhao, born in 1994 in Linyi, Shandong, M. S. candidate. His research interests include video classification and video summarization.

Abstract:

Aiming at the low accuracy of the original spatial-temporal two-stream Convolutional Neural Network (CNN) model for action recognition in long and complex videos, a two-stream CNN for action recognition based on video segmentation was proposed. Firstly, a video was split into multiple non-overlapping segments of the same length. For each segment, one frame was sampled randomly to represent its static features, and stacked optical flow images were computed to represent its motion features. Secondly, these two types of images were fed into the spatial CNN and the temporal CNN respectively for feature extraction, and the classification prediction features of the spatial and temporal streams were obtained by merging the features of all segments within each stream. Finally, the predictive features of the two streams were integrated to obtain the action recognition result for the video. In a series of experiments, several data augmentation techniques and transfer learning methods were discussed to alleviate the over-fitting caused by the lack of training samples, and the effects of the number of segments, the network architecture, the segment-level feature fusion scheme and the two-stream integration strategy on recognition performance were analyzed. The experimental results show that the accuracy of the proposed model reaches 91.80% on the UCF101 dataset, 3.8 percentage points higher than that of the original two-stream CNN model, and its accuracy on the HMDB51 dataset is also improved over the original model, reaching 61.39%. These results indicate that the proposed model can better learn and represent human action features in long and complex videos.

Key words: two-stream Convolutional Neural Network (CNN), action recognition, video segmentation, transfer learning, feature fusion
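The segment-level sampling and the two-stage fusion described in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function names are hypothetical, segment features are fused here by simple averaging, and the stream weights are illustrative placeholders — the paper itself compares several fusion schemes and integration strategies, which this sketch does not reproduce.

```python
import numpy as np


def sample_segment_indices(num_frames, num_segments, rng=None):
    """Split a video of num_frames frames into equal-length non-overlapping
    segments and randomly sample one frame index from each segment."""
    if rng is None:
        rng = np.random.default_rng()
    seg_len = num_frames // num_segments
    return [int(rng.integers(i * seg_len, (i + 1) * seg_len))
            for i in range(num_segments)]


def fuse_predictions(spatial_scores, temporal_scores,
                     w_spatial=1.0, w_temporal=1.5):
    """Average per-segment class scores within each stream, then combine the
    two streams with a weighted sum and return the predicted class index.
    The weights are illustrative; a temporal weight > 1 reflects the common
    practice of trusting the motion stream slightly more."""
    spatial = np.mean(spatial_scores, axis=0)    # shape: (num_classes,)
    temporal = np.mean(temporal_scores, axis=0)  # shape: (num_classes,)
    fused = w_spatial * spatial + w_temporal * temporal
    return int(np.argmax(fused))
```

For example, a 30-frame video with 3 segments yields one sampled frame index in each of the ranges [0, 10), [10, 20) and [20, 30); the corresponding spatial CNN scores and temporal CNN scores for those segments are then averaged per stream and weighted to produce the final class prediction.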

