[1] EKMAN P, FRIESEN W. Facial Action Coding System:A Technique for the Measurement of Facial Movement[M]. Palo Alto:Consulting Psychologists Press, 1978:1-10.
[2] COHN J F, AMBADAR Z, EKMAN P. Observer-based measurement of facial expression with the facial action coding system[M]//COAN J A, ALLEN J B. The Handbook of Emotion Elicitation and Assessment. New York:Oxford, 2007:203-221.
[3] TIAN Y, KANADE T, COHN J F. Recognizing action units for facial expression analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, 23(2):97-115.
[4] 徐琳琳,张树美,赵俊莉.基于图像的面部表情识别方法综述[J].计算机应用,2017,37(12):3509-3516,3546.(XU L L, ZHANG S M, ZHAO J L. Summary of facial expression recognition methods based on image[J]. Journal of Computer Applications, 2017, 37(12):3509-3516, 3546.)
[5] MAVADATI S M, MAHOOR M H, BARTLETT K, et al. DISFA:a spontaneous facial action intensity database[J]. IEEE Transactions on Affective Computing, 2013, 4(2):151-160.
[6] ZHANG X, YIN L, COHN J F, et al. A high-resolution spontaneous 3D dynamic facial expression database[C]//Proceedings of the 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Piscataway:IEEE, 2013:1-6.
[7] BENITEZ-QUIROZ C F, SRINIVASAN R, MARTINEZ A M. EmotioNet:an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2016:5562-5570.
[8] DU S, MARTINEZ A M. Compound facial expressions of emotion:from basic research to clinical applications[J]. Dialogues in Clinical Neuroscience, 2015, 17(4):443-455.
[9] LUCEY P, COHN J F, PRKACHIN K M, et al. Painful data:the UNBC-McMaster shoulder pain expression archive database[C]//Proceedings of the 2011 International Conference and Workshops on Automatic Face and Gesture Recognition. Piscataway:IEEE, 2011:57-64.
[10] LUCEY P, COHN J F, KANADE T, et al. The extended Cohn-Kanade dataset (CK+):a complete dataset for action unit and emotion-specified expression[C]//Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2010:94-101.
[11] ZHANG Z, GIRARD J M, WU Y, et al. Multimodal spontaneous emotion corpus for human behavior analysis[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2016:3438-3446.
[12] VIOLA P, JONES M J. Robust real-time face detection[J]. International Journal of Computer Vision, 2004, 57(2):137-154.
[13] ZHANG K, ZHANG Z, LI Z, et al. Joint face detection and alignment using multitask cascaded convolutional networks[J]. IEEE Signal Processing Letters, 2016, 23(10):1499-1503.
[14] LI J, WANG Y, WANG C, et al. DSFD:Dual Shot Face Detector[EB/OL].[2019-04-06]. https://arxiv.org/pdf/1810.10220.pdf.
[15] MATTHEWS I, BAKER S. Active appearance models revisited[J]. International Journal of Computer Vision, 2004, 60(2):135-164.
[16] CUI Z, XIAO S, NIU Z, et al. Recurrent shape regression[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(5):1271-1278.
[17] CHU W S, DE LA TORRE F, COHN J F. Selective transfer machine for personalized facial action unit detection[C]//Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2013:3515-3522.
[18] LUCEY S, ASHRAF A B, COHN J F. Investigating spontaneous facial action recognition through AAM representations of the face[M]//DELAC K, GRGIC M. Face Recognition. Croatia:I-Tech Education and Publishing, 2007:275-286.
[19] DING X, CHU W S, DE LA TORRE F, et al. Facial action unit event detection by cascade of tasks[C]//Proceedings of the 2013 IEEE International Conference on Computer Vision. Piscataway:IEEE, 2013:2400-2407.
[20] 郭振铎,路向阳,徐庆伟,等.基于面部块运动历史直方图特征的视频表情自动识别[J].计算机应用与软件,2017,34(11):192-196.(GUO Z D, LU X Y, XU Q W, et al. Automatic facial expression recognition based on motion history histogram features of facial saliency blocks[J]. Computer Applications and Software, 2017, 34(11):192-196.)
[21] VALSTAR M, PANTIC M. Fully automatic facial action unit detection and temporal analysis[C]//Proceedings of the 2006 IEEE International Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2006:149-149.
[22] MAHOOR M H, ZHOU M, VEON K L, et al. Facial action unit recognition with sparse representation[C]//Proceedings of the 2011 International Conference and Workshops on Automatic Face and Gesture Recognition. Piscataway:IEEE, 2011:336-342.
[23] BENITEZ-QUIROZ C F, SRINIVASAN R, MARTINEZ A M. EmotioNet:an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2016:5562-5570.
[24] WHITEHILL J, OMLIN C W. Haar features for FACS AU recognition[C]//Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition. Piscataway:IEEE, 2006:5-9.
[25] LIU P, HAN S, MENG Z, et al. Facial expression recognition via a boosted deep belief network[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2014:1805-1812.
[26] ZHONG L, LIU Q, YANG P, et al. Learning multiscale active facial patches for expression analysis[J]. IEEE Transactions on Cybernetics, 2014, 45(8):1499-1510.
[27] TAHERI S, QIU Q, CHELLAPPA R. Structure-preserving sparse decomposition for facial expression analysis[J]. IEEE Transactions on Image Processing, 2014, 23(8):3590-3603.
[28] ZHAO K, CHU W S, DE LA TORRE F, et al. Joint patch and multi-label learning for facial action unit detection[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2015:2207-2216.
[29] GUDI A, TASLI H E, den UYL T M, et al. Deep learning based FACS action unit occurrence and intensity estimation[C]//Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Piscataway:IEEE, 2015:1-5.
[30] HAN S, MENG Z, LI Z, et al. Optimizing filter size in convolutional neural networks for facial action unit recognition[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2018:5070-5078.
[31] DONATO G, BARTLETT M S, HAGER J C, et al. Classifying facial actions[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(10):974-989.
[32] JAISWAL S, VALSTAR M. Deep learning the dynamic appearance and shape of facial action units[C]//Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision. Piscataway:IEEE, 2016:1-8.
[33] ZHAO K, CHU W S, ZHANG H. Deep region and multi-label learning for facial action unit detection[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2016:3391-3399.
[34] LI W, ABTAHI F, ZHU Z, et al. EAC-Net:a region-based deep enhancing and cropping approach for facial action unit detection[C]//Proceedings of the 12th IEEE International Conference on Automatic Face and Gesture Recognition. Piscataway:IEEE, 2017:103-110.
[35] LI W, ABTAHI F, ZHU Z. Action unit detection with region adaptation, multi-labeling learning and optimal temporal fusing[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2017:6766-6775.
[36] SHAO Z, LIU Z, CAI J, et al. Deep adaptive attention for joint facial action unit detection and face alignment[C]//Proceedings of the 2018 European Conference on Computer Vision, LNCS 11217. Cham:Springer, 2018:725-740.
[37] 赵凯莉.面部活动单元的结构化多标签学习[D].北京:北京邮电大学,2016:22-24.(ZHAO K L. Joint patch and multi-label learning of facial action unit detection[D]. Beijing:Beijing University of Posts and Telecommunications, 2016:22-24.)
[38] VALSTAR M F, PANTIC M. Combined support vector machines and hidden Markov models for modeling facial action temporal dynamics[C]//Proceedings of the 2007 International Workshop on Human-Computer Interaction, LNCS 4796. Berlin:Springer, 2007:118-127.
[39] TONG Y, LIAO W, JI Q. Inferring facial action units with causal relations[C]//Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2006:1623-1630.
[40] TONG Y, JI Q. Learning Bayesian networks with qualitative constraints[C]//Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2008:1-8.
[41] TONG Y, LIAO W, JI Q. Facial action unit recognition by exploiting their dynamic and semantic relationships[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(10):1683-1699.
[42] GAO Z, WANG S, WU C, et al. Facial action unit recognition by relation modeling from both qualitative knowledge and quantitative data[C]//Proceedings of the 2014 IEEE International Conference on Multimedia and Expo Workshops. Piscataway:IEEE, 2014:1-6.
[43] WANG Z, LI Y, WANG S, et al. Capturing global semantic relationships for facial action unit recognition[C]//Proceedings of the 2013 IEEE International Conference on Computer Vision. Piscataway:IEEE, 2013:3304-3311.
[44] WU Y, JI Q. Constrained joint cascade regression framework for simultaneous facial action unit recognition and facial landmark detection[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2016:3400-3408.
[45] ZHANG X, MAHOOR M H. Task-dependent multi-task multiple kernel learning for facial action unit detection[J]. Pattern Recognition, 2016, 51:187-196.
[46] ELEFTHERIADIS S, RUDOVIC O, PANTIC M. Multi-conditional latent variable model for joint facial action unit detection[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway:IEEE, 2015:3792-3800.
[47] WALECKI R, RUDOVIC O, PAVLOVIC V, et al. Deep structured learning for facial action unit intensity estimation[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2017:5709-5718.
[48] SHAO Z, LIU Z, CAI J, et al. Facial action unit detection using attention and relation learning[EB/OL].[2018-08-24]. https://arxiv.org/pdf/1808.03457.pdf.
[49] CORNEANU C, MADADI M, ESCALERA S. Deep structure inference network for facial action unit recognition[C]//Proceedings of the 2018 European Conference on Computer Vision, LNCS 11216. Cham:Springer, 2018:309-324.
[50] LI G, ZHU X, ZENG Y, et al. Semantic relationships guided representation learning for facial action unit recognition[EB/OL].[2019-04-22]. https://arxiv.org/pdf/1904.09939.pdf.
[51] WU S, WANG S, PAN B, et al. Deep facial action unit recognition from partially labeled data[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway:IEEE, 2017:3971-3979.
[52] BENITEZ-QUIROZ C F, WANG Y, MARTINEZ A M. Recognition of action units in the wild with deep nets and a new global-local loss[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision. Piscataway:IEEE, 2017:3990-3999.
[53] ZHAO K, CHU W S, MARTINEZ A M. Learning facial action units from Web images with scalable weakly supervised clustering[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2018:2090-2099.
[54] ZHANG Y, DONG W, HU B, et al. Weakly-supervised deep convolutional neural network learning for facial action unit intensity estimation[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2018:2314-2323.
[55] ZHANG Y, DONG W, HU B, et al. Classifier learning with prior probabilities for facial action unit recognition[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2018:5108-5116.
[56] 刘波宁,翟东海.基于双鉴别网络的生成对抗网络图像修复方法[J].计算机应用,2018,38(12):3557-3562,3595.(LIU B N, ZHAI D H. Image completion method of generative adversarial networks based on two discrimination networks[J]. Journal of Computer Applications, 2018, 38(12):3557-3562, 3595.)
[57] PENG G, WANG S. Weakly supervised facial action unit recognition through adversarial training[C]//Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Washington, DC:IEEE Computer Society, 2018:2188-2196.