[1] LI B H, XIANG Y X, FENG D, et al. Short text classification model combining knowledge awareness and dual attention [J]. Journal of Software, 2022, 33(10): 3565-3581.
[2] WAWRE S V, DESHMUKH S N. Sentiment classification using machine learning techniques [J]. International Journal of Science and Research, 2016, 5(4): 819-821. DOI: 10.21275/v5i4.nov162724.
[3] MULLEN T, COLLIER N. Sentiment analysis using support vector machines with diverse information sources [C]// Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2004: 412-418. DOI: 10.3115/1219044.1219069.
[4] TAN S, CHENG X, WANG Y, et al. Adapting naive Bayes to domain adaptation for sentiment analysis [C]// Proceedings of the 2009 European Conference on Information Retrieval. Berlin: Springer, 2009: 337-349. DOI: 10.1007/978-3-642-00958-7_31.
[5] KIM Y. Convolutional neural networks for sentence classification [C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2014: 1746-1751. DOI: 10.3115/v1/d14-1181.
[6] LIU G, GUO J. Bidirectional LSTM with attention mechanism and convolutional layer for text classification [J]. Neurocomputing, 2019, 337: 325-338. DOI: 10.1016/j.neucom.2019.01.078.
[7] DEVLIN J, CHANG M-W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186.
[8] ZHANG H F, ZENG C, PAN L, et al. News topic text classification method based on BERT and feature projection network [J]. Journal of Computer Applications, 2022, 42(4): 1116-1124. DOI: 10.11772/j.issn.1001-9081.2021071257.
[9] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach [EB/OL]. (2019-07-26) [2022-04-24].
[10] PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation [C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2014: 1532-1543. DOI: 10.3115/v1/d14-1162.
[11] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space [EB/OL]. (2013-09-07) [2020-08-06].
[12] HUANG Y W, WEI G Q, HU Y F. DistillBIGRU: text classification model based on knowledge distillation [J]. Journal of Chinese Information Processing, 2022, 36(4): 81-89. DOI: 10.3969/j.issn.1003-0077.2022.04.010.
[13] CUI Y, CHE W, LIU T, et al. Pre-training with whole word masking for Chinese BERT [J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514. DOI: 10.1109/taslp.2021.3124365.
[14] CUI Y, CHE W, LIU T, et al. Revisiting pre-trained models for Chinese natural language processing [C]// Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg: ACL, 2020: 657-668. DOI: 10.18653/v1/2020.findings-emnlp.58.
[15] XU Y M, FAN Z W, CAO H. A multi-task text classification model based on label embedding attention mechanism [J]. Data Analysis and Knowledge Discovery, 2022, 6(2/3): 105-116.
[16] YUAN G R. Text classification based on label embedding and attention mechanism [D]. Hefei: University of Science and Technology of China, 2021: 34-38.
[17] ZHANG S M, YU Z, LI T R. Transferable emotion analysis method for cross-domain text [J]. Computer Science, 2022, 49(3): 218-224. DOI: 10.11896/jsjkx.210400034.
[18] WANG Z, HUANG H, HAN S. IDEA: interactive double attentions from label embedding for text classification [C]// Proceedings of the 2022 IEEE 34th International Conference on Tools with Artificial Intelligence. Piscataway: IEEE, 2022: 233-238. DOI: 10.1109/ictai56018.2022.00041.
[19] ZHANG X, QIU X, PANG J, et al. Dual-axial self-attention network for text classification [J]. Science China Information Sciences, 2021, 64: 222102. DOI: 10.1007/s11432-019-2744-2.