[1] PORIA S, CAMBRIA E, HAZARIKA D, et al. Multi-level multiple attentions for contextual multimodal sentiment analysis[C]// Proceedings of the 2017 IEEE International Conference on Data Mining. Piscataway: IEEE, 2017: 1033-1038.
[2] HUDDAR M G, SANNAKKI S S, RAJPUROHIT V S. A survey of computational approaches and challenges in multimodal sentiment analysis[J]. International Journal of Computer Sciences and Engineering, 2019, 7(1): 876-883.
[3] PORIA S, CAMBRIA E, BAJPAI R, et al. A review of affective computing: from unimodal analysis to multimodal fusion[J]. Information Fusion, 2017, 37: 98-125.
[4] ROSAS V P, MIHALCEA R, MORENCY L P. Multimodal sentiment analysis of Spanish online videos[J]. IEEE Intelligent Systems, 2013, 28(3): 38-45.
[5] 何俊, 刘跃, 何忠文. 多模态情感识别研究进展[J]. 计算机应用研究, 2018, 35(11): 3201-3205. (HE J, LIU Y, HE Z W. Research progress of multimodal emotion recognition[J]. Application Research of Computers, 2018, 35(11): 3201-3205.)
[6] ZHANG L, WANG S, LIU B. Deep learning for sentiment analysis: a survey[J]. WIREs Data Mining and Knowledge Discovery, 2018, 8(4): No. e1253.
[7] KIM Y. Convolutional neural networks for sentence classification[C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2014: 1746-1751.
[8] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780.
[9] TANG D, QIN B, LIU T. Document modeling with gated recurrent neural network for sentiment classification[C]// Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2015: 1422-1432.
[10] 刘启元, 张栋, 吴良庆, 等. 基于上下文增强LSTM的多模态情感分析[J]. 计算机科学, 2019, 46(11): 181-185. (LIU Q Y, ZHANG D, WU L Q, et al. Multi-modal sentiment analysis with context-augmented LSTM[J]. Computer Science, 2019, 46(11): 181-185.)
[11] ZADEH A, ZELLERS R, PINCUS E, et al. Multimodal sentiment intensity analysis in videos: facial gestures and verbal messages[J]. IEEE Intelligent Systems, 2016, 31(6): 82-88.
[12] ZADEH A B, LIANG P P, PORIA S, et al. Multimodal language analysis in the wild: CMU-MOSEI dataset and interpretable dynamic fusion graph[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics, 2018: 2236-2246.
[13] ELLIS J G, JOU B, CHANG S F. Why we watch the news: a dataset for exploring sentiment in broadcast video news[C]// Proceedings of the 16th International Conference on Multimodal Interaction. New York: ACM, 2014: 104-111.
[14] PORIA S, CHATURVEDI I, CAMBRIA E, et al. Convolutional MKL based multimodal emotion recognition and sentiment analysis[C]// Proceedings of the 2016 IEEE International Conference on Data Mining. Piscataway: IEEE, 2016: 439-448.
[15] PORIA S, CAMBRIA E, HAZARIKA D, et al. Context-dependent sentiment analysis in user-generated videos[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics, 2017: 873-883.
[16] CHEN M, WANG S, LIANG P P, et al. Multimodal sentiment analysis with word-level fusion and reinforcement learning[C]// Proceedings of the 19th ACM International Conference on Multimodal Interaction. New York: ACM, 2017: 163-171.
[17] ZADEH A, CHEN M, PORIA S, et al. Tensor fusion network for multimodal sentiment analysis[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2017: 1103-1114.
[18] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[C/OL]// Proceedings of the 2015 International Conference on Learning Representations. [2020-02-12]. https://arxiv.org/pdf/1409.0473v7.pdf.
[19] ZADEH A, LIANG P P, PORIA S, et al. Multi-attention recurrent network for human communication comprehension[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2018: 5642-5649.
[20] ZADEH A, LIANG P P, MAZUMDER N, et al. Memory fusion network for multi-view sequential learning[C]// Proceedings of the 32nd AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2018: 5634-5641.
[21] XI C, LU G, YAN J. Multimodal sentiment analysis based on multi-head attention mechanism[C]// Proceedings of the 4th International Conference on Machine Learning and Soft Computing. New York: ACM, 2020: 34-39.
[22] KIM T, LEE B. Multi-attention multimodal sentiment analysis[C]// Proceedings of the 2020 International Conference on Multimedia Retrieval. New York: ACM, 2020: 436-441.
[23] DHINGRA B, LIU H, YANG Z, et al. Gated-attention readers for text comprehension[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics, 2017: 1832-1846.
[24] KINGMA D P, BA J L. Adam: a method for stochastic optimization[C/OL]// Proceedings of the 2015 International Conference on Learning Representations. [2020-03-22]. https://arxiv.org/pdf/1412.6980v9.pdf.
[25] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. Journal of Machine Learning Research, 2014, 15(1): 1929-1958.