[1] XIA L, CHEN C C, AGGARWAL J K. View invariant human action recognition using histograms of 3D joints[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ:IEEE, 2012:20-27.
[2] YANG X, TIAN Y. EigenJoints-based action recognition using Naive-Bayes-Nearest-Neighbor[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ:IEEE, 2012:14-19.
[3] LI W, ZHANG Z, LIU Z. Action recognition based on a bag of 3D points[C]//Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ:IEEE, 2010:9-14.
[4] SHOTTON J, FITZGIBBON A, COOK M, et al. Real-time human pose recognition in parts from single depth images[C]//Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE, 2011:1297-1304.
[5] SCHULDT C, LAPTEV I, CAPUTO B. Recognizing human actions:a local SVM approach[C]//Proceedings of the 17th International Conference on Pattern Recognition. Piscataway, NJ:IEEE, 2004, 3:32-36.
[6] SUN J, WU X, YAN S, et al. Hierarchical spatio-temporal context modeling for action recognition[C]//Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE, 2009:2004-2011.
[7] BOBICK A, DAVIS J. The recognition of human movement using temporal templates[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(3):257-267.
[8] 蔡加欣, 冯国灿. 基于局部轮廓和随机森林的人体行为识别[J]. 光学学报, 2014, 34(10):204-213. (CAI J X, FENG G C. Action recognition based on local contour and random forest[J]. Acta Optica Sinica, 2014, 34(10):204-213.)
[9] LI W, ZHANG Z, LIU Z. Action recognition based on a bag of 3D points[C]//Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. Piscataway, NJ:IEEE, 2010:9-14.
[10] YANG X, ZHANG C, TIAN Y. Recognizing actions using depth motion maps-based histograms of oriented gradients[C]//Proceedings of the 20th ACM International Conference on Multimedia. New York:ACM, 2012:1057-1060.
[11] WANG J, LIU Z, CHOROWSKI J, et al. Robust 3D action recognition with random occupancy patterns[C]//Proceedings of the 12th European Conference on Computer Vision. Berlin:Springer-Verlag, 2012:872-885.
[12] 郑胤, 陈权崎, 章毓晋. 深度学习及其在目标和行为识别中的新进展[J]. 中国图象图形学报, 2014, 19(2):175-184. (ZHENG Y, CHEN Q Q, ZHANG Y J. The new development of deep learning in action recognition[J]. Journal of Image and Graphics, 2014, 19(2):175-184.)
[13] CHAARAOUI A A, PADILLA-LÓPEZ J R, CLIMENT-PÉREZ P, et al. Evolutionary joint selection to improve human action recognition with RGB-D devices[J]. Expert Systems with Applications, 2014, 41(3):786-794.
[14] CHAUDHRY R, OFLI F, KURILLO G, et al. Bio-inspired dynamic 3D discriminative skeletal features for human action recognition[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops. Washington, DC:IEEE Computer Society, 2013:471-478.
[15] LIN Y Y. Depth and skeleton associated action recognition without online accessible RGB-D cameras[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE, 2014:2617-2624.
[16] VIEIRA A W, NASCIMENTO E, OLIVEIRA G, et al. STOP:space-time occupancy patterns for 3D action recognition from depth map sequences[C]//Proceedings of the 17th Iberoamerican Congress on Pattern Recognition. Berlin:Springer-Verlag, 2012:252-259.
[17] EVANGELIDIS G, SINGH G, HORAUD R. Skeletal quads:human action recognition using joint quadruples[C]//Proceedings of the 2014 22nd International Conference on Pattern Recognition. Piscataway, NJ:IEEE, 2014:4513-4518.
[18] XIA L, AGGARWAL J K. Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE, 2013:2834-2841.
[19] OREIFEJ O, LIU Z. HON4D:histogram of oriented 4D normals for activity recognition from depth sequences[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE, 2013:716-723.
[20] WANG J, LIU Z, WU Y. Mining actionlet ensemble for action recognition with depth cameras[C]//Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE, 2012:1290-1297.
[21] TRAN Q D, LY N Q. Sparse spatio-temporal representation of joint shape-motion cues for human action recognition in depth sequences[C]//Proceedings of the 2013 IEEE RIVF International Conference on Computing and Communication Technologies. Research, Innovation, and Vision for the Future. Piscataway, NJ:IEEE, 2013:253-258.
[22] RAHMANI H, MAHMOOD A, HUYNH D Q, et al. Real time action recognition using histograms of depth gradients and random decision forests[C]//Proceedings of the 2014 IEEE Winter Conference on Applications of Computer Vision. Piscataway, NJ:IEEE, 2014:626-633.
[23] VEMULAPALLI R, ARRATE F, CHELLAPPA R. Human action recognition by representing 3D skeletons as points in a Lie group[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ:IEEE, 2014:588-595.
[24] CHEN C, LIU K, KEHTARNAVAZ N. Real-time human action recognition based on depth motion maps[J]. Journal of Real-Time Image Processing, 2013, 12(1):155-163.