[1] GU Y,YE X,SHENG W,et al. Multiple stream deep learning model for human action recognition[J]. Image and Vision Computing,2020,93:No. 103818.
[2] 郭明祥,宋全军,徐湛楠,等. 基于三维残差稠密网络的人体行为识别算法[J]. 计算机应用,2019,39(12):3482-3489. (GUO M X,SONG Q J,XU Z N,et al. Human behavior recognition algorithm based on three-dimensional residual dense network[J]. Journal of Computer Applications,2019,39(12):3482-3489.)
[3] 杨锋,许玉,尹梦晓,等. 基于深度学习的行人重识别综述[J]. 计算机应用,2020,40(5):1243-1252. (YANG F,XU Y,YIN M X,et al. Review on deep learning-based pedestrian re-identification[J]. Journal of Computer Applications,2020,40(5):1243-1252.)
[4] HAN H,LI X J. Human action recognition with sparse geometric features[J]. The Imaging Science Journal,2015,63(1):45-53.
[5] HOSHINO S,NIIMURA K. Optical flow for real-time human detection and action recognition based on CNN classifiers[J]. Journal of Advanced Computational Intelligence and Intelligent Informatics,2019,23(4):735-742.
[6] TRIPATHI V,GANGODKAR D,MITTAL A,et al. Robust action recognition framework using segmented block and distance mean histogram of gradients approach[J]. Procedia Computer Science,2017,115:493-500.
[7] WANG H,KLÄSER A,SCHMID C,et al. Dense trajectories and motion boundary descriptors for action recognition[J]. International Journal of Computer Vision,2013,103(1):60-79.
[8] SIMONYAN K,ZISSERMAN A. Two-stream convolutional networks for action recognition in videos[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge:MIT Press,2014:568-576.
[9] WANG L,XIONG Y,WANG Z,et al. Temporal segment networks:towards good practices for deep action recognition[C]//Proceedings of the 2016 European Conference on Computer Vision,LNCS 9912. Cham:Springer,2016:20-36.
[10] DU W,WANG Y,QIAO Y. RPAN:an end-to-end recurrent pose-attention network for action recognition in videos[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway:IEEE,2017:3745-3754.
[11] TRAN D,BOURDEV L,FERGUS R,et al. Learning spatiotemporal features with 3D convolutional networks[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway:IEEE,2015:4489-4497.
[12] KIM Y J,LEE D G,LEE S W. Three-stream fusion network for first-person interaction recognition[J]. Pattern Recognition,2020,103:No. 107279.
[13] 黄仕建. 视频序列中人体行为的低秩表达与识别方法研究[D]. 重庆:重庆大学,2015:39-53. (HUANG S J. Research on low-rank presentation and recognition of human actions in video sequences[D]. Chongqing:Chongqing University,2015:39-53.)
[14] SCHULDT C,LAPTEV I,CAPUTO B. Recognizing human actions:a local SVM approach[C]//Proceedings of the 17th International Conference on Pattern Recognition. Piscataway:IEEE,2004:32-36.
[15] KUEHNE H,JHUANG H,GARROTE E,et al. HMDB:a large video database for human motion recognition[C]//Proceedings of the 2011 IEEE International Conference on Computer Vision. Piscataway:IEEE,2011:2556-2563.
[16] GUO Z,WANG X,WANG B,et al. A novel 3D gradient LBP descriptor for action recognition[J]. IEICE Transactions on Information and Systems,2017,E100-D(6):1388-1392.
[17] NAZIR S,YOUSAF M H,VELASTIN S A. Evaluating a bag-of-visual features approach using spatio-temporal features for action recognition[J]. Computers and Electrical Engineering,2018,72:660-669.
[18] HUAN R,XIE C,GUO F,et al. Human action recognition based on HOIRM feature fusion and AP clustering BOW[J]. PLoS One,2019,14(7):No. 0219910.
[19] KAPOOR R,MISHRA O,TRIPATHI M M,et al. Human action recognition using descriptor based on selective finite element analysis[J]. Journal of Electrical Engineering,2019,70(6):443-453.
[20] JAOUEDI N,BOUJNAH N,BOUHLEL M S. A new hybrid deep learning model for human action recognition[J]. Journal of King Saud University-Computer and Information Sciences,2020,32(4):447-453.
[21] VISHWAKARMA D K. A two-fold transformation model for human action recognition using decisive pose[J]. Cognitive Systems Research,2020,61:1-13.
[22] DUTA I C,UIJLINGS J R R,IONESCU B,et al. Efficient human action recognition using histograms of motion gradients and VLAD with descriptor shape information[J]. Multimedia Tools and Applications,2017,76(21):22445-22472.
[23] ZHANG H,XIN M,WANG S,et al. End-to-end temporal attention extraction and human action recognition[J]. Machine Vision and Applications,2018,29(7):1127-1142.
[24] LIU Z,ZHANG X,SONG L,et al. More efficient and effective tricks for deep action recognition[J]. Cluster Computing,2019,22(S1):819-826.
[25] WANG D,XIAO H,OU F,et al. Moving human focus inference model for action recognition[C]//Proceedings of the 45th Annual Conference of the IEEE Industrial Electronics Society. Piscataway:IEEE,2019:2554-2559.
[26] MAJD M,SAFABAKHSH R. Correlational convolutional LSTM for human action recognition[J]. Neurocomputing,2020,396:224-229.
[27] KOOHZADI M,CHARKARI N M. A context based deep temporal embedding network in action recognition[J]. Neural Processing Letters,2020,52(1):187-220.