[1] 周立柱,贺宇凯,王建勇.情感分析研究综述[J].计算机应用,2008,28(11):2725-2728.(ZHOU L Z, HE Y K, WANG J Y. Survey on research of sentiment analysis[J]. Journal of Computer Applications, 2008,28(11):2725-2728.)
[2] 赵妍妍,秦兵,刘挺.文本情感分析[J].软件学报,2010,21(8):1834-1848.(ZHAO Y Y, QIN B, LIU T. Sentiment analysis[J]. Journal of Software, 2010, 21(8):1834-1848.)
[3] ZHANG Y, WALLACE B. A sensitivity analysis of (and practitioners' guide to) convolutional neural networks for sentence classification[EB/OL]. (2016-04-06)[2018-06-07]. https://arxiv.org/abs/1510.03820.
[4] KIM Y. Convolutional neural networks for sentence classification[EB/OL]. (2014-09-03)[2018-06-01]. https://arxiv.org/abs/1408.5882.
[5] ZHANG L, WANG S, LIU B. Deep learning for sentiment analysis:a survey[J]. Wiley Interdisciplinary Reviews:Data Mining and Knowledge Discovery, 2018, 8(4):e1253.
[6] KIM S-M, HOVY E. Extracting opinions, opinion holders, and topics expressed in online news media text[C]//Proceedings of the 2006 Workshop on Sentiment and Subjectivity in Text. Stroudsburg, PA:Association for Computational Linguistics, 2006:1-8.
[7] TURNEY P D. Thumbs up or thumbs down?:semantic orientation applied to unsupervised classification of reviews[C]//Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Stroudsburg, PA:Association for Computational Linguistics, 2002:417-424.
[8] HU M, LIU B. Mining and summarizing customer reviews[C]//Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York:ACM, 2004:168-177.
[9] PANG B, LEE L, VAITHYANATHAN S. Thumbs up?:sentiment classification using machine learning techniques[C]//Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing-Volume 10. Stroudsburg, PA:Association for Computational Linguistics, 2002:79-86.
[10] MOHAMMAD S M, KIRITCHENKO S, ZHU X. NRC-Canada:building the state-of-the-art in sentiment analysis of tweets[EB/OL]. (2013-08-28)[2018-07-02]. https://arxiv.org/abs/1308.6242.
[11] KIM S-M, HOVY E. Automatic identification of pro and con reasons in online reviews[C]//Proceedings of the 2006 COLING/ACL on Main Conference Poster Sessions. Stroudsburg, PA:Association for Computational Linguistics, 2006:483-490.
[12] MEDHAT W, HASSAN A, KORASHY H. Sentiment analysis algorithms and applications:a survey[J]. Ain Shams Engineering Journal, 2014, 5(4):1093-1113.
[13] BENGIO Y, DUCHARME R, VINCENT P, et al. A neural probabilistic language model[J]. Journal of Machine Learning Research, 2003, 3:1137-1155.
[14] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]//NIPS'13:Proceedings of the 26th International Conference on Neural Information Processing Systems. Red Hook, NY:Curran Associates Inc., 2013:3111-3119.
[15] PENNINGTON J, SOCHER R, MANNING C. GloVe:global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:Association for Computational Linguistics, 2014:1532-1543.
[16] SOCHER R, PERELYGIN A, WU J, et al. Recursive deep models for semantic compositionality over a sentiment treebank[C]//Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:Association for Computational Linguistics, 2013:1631-1642.
[17] SOCHER R, PENNINGTON J, HUANG E H, et al. Semi-supervised recursive autoencoders for predicting sentiment distributions[C]//Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:Association for Computational Linguistics, 2011:151-161.
[18] QIAN Q, TIAN B, HUANG M, et al. Learning tag embeddings and tag-specific composition functions in recursive neural network[C]//Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Stroudsburg, PA:Association for Computational Linguistics, 2015:1365-1374.
[19] TAI K S, SOCHER R, MANNING C D. Improved semantic representations from tree-structured long short-term memory networks[EB/OL]. (2015-05-30)[2018-08-10]. https://arxiv.org/abs/1503.00075.
[20] IRSOY O, CARDIE C. Opinion mining with deep recurrent neural networks[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:Association for Computational Linguistics, 2014:720-728.
[21] LIU P, QIU X, HUANG X. Recurrent neural network for text classification with multi-task learning[EB/OL]. (2016-05-17)[2018-08-01]. https://arxiv.org/abs/1605.05101.
[22] QIAN Q, HUANG M, LEI J, et al. Linguistically regularized LSTMs for sentiment classification[EB/OL]. (2017-04-25)[2018-08-15]. https://arxiv.org/abs/1611.03949.
[23] KALCHBRENNER N, GREFENSTETTE E, BLUNSOM P. A convolutional neural network for modelling sentences[EB/OL]. (2014-04-08)[2018-07-16]. https://arxiv.org/abs/1404.2188.
[24] ZHOU C, SUN C, LIU Z, et al. A C-LSTM neural network for text classification[EB/OL]. (2015-11-30)[2018-08-22]. https://arxiv.org/abs/1511.08630.
[25] COLLOBERT R, WESTON J, BOTTOU L, et al. Natural language processing (almost) from scratch[J]. Journal of Machine Learning Research, 2011, 12:2493-2537.
[26] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8):1735-1780.
[27] GRAVES A, JAITLY N, MOHAMED A. Hybrid speech recognition with deep bidirectional LSTM[C]//Proceedings of the 2013 IEEE Workshop on Automatic Speech Recognition and Understanding. Piscataway, NJ:IEEE, 2013:273-278.
[28] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[EB/OL]. (2013-09-07)[2018-09-02]. https://arxiv.org/abs/1301.3781.
[29] LIU B. Sentiment Analysis and Opinion Mining[M]. San Rafael, CA:Morgan and Claypool Publishers, 2012:1-167.
[30] McCANN B, BRADBURY J, XIONG C, et al. Learned in translation:contextualized word vectors[C]//NIPS 2017:Proceedings of the 31st Annual Conference on Neural Information Processing Systems. Red Hook, NY:Curran Associates Inc., 2017:6297-6308.
[31] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[EB/OL]. (2018-03-22)[2018-10-21]. https://arxiv.org/abs/1802.05365.
[32] HOWARD J, RUDER S. Universal language model fine-tuning for text classification[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA:Association for Computational Linguistics, 2018:328-339.
[33] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. (2018-06-11)[2018-10-22]. https://blog.openai.com/language-unsupervised/.
[34] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//NIPS 2017:Proceedings of the 31st Annual Conference on Neural Information Processing Systems. Red Hook, NY:Curran Associates Inc., 2017:5998-6008.
[35] DEVLIN J, CHANG M-W, LEE K, et al. BERT:pre-training of deep bidirectional transformers for language understanding[EB/OL]. (2018-10-11)[2018-11-13]. https://arxiv.org/abs/1810.04805.