Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (9): 2753-2759.DOI: 10.11772/j.issn.1001-9081.2022091347
• Artificial intelligence •
					
Xinyue ZHANG, Rong LIU, Chiyu WEI, Ke FANG
Received: 2022-09-09
Revised: 2022-11-11
Accepted: 2022-11-15
Online: 2023-02-14
Published: 2023-09-10
Contact: Rong LIU
About author: ZHANG Xinyue, born in 1997 in Zhoukou, Henan, M.S. candidate. Her research interests include pattern recognition and aspect-based sentiment analysis.
Xinyue ZHANG, Rong LIU, Chiyu WEI, Ke FANG. Aspect-based sentiment analysis method with integrating prompt knowledge[J]. Journal of Computer Applications, 2023, 43(9): 2753-2759.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022091347
| Dataset | Prompt text |
|---|---|
| SemEval2014 Task4 | Aspects, it was [MASK] |
| ChnSentiCorp | 是好评吗?[MASK] |

Tab. 1 Examples of prompt texts
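The templates in Tab. 1 turn each review into a cloze question by appending a pattern that contains a [MASK] slot for the language model to fill. A minimal sketch of this construction (the helper name `build_prompt` and the exact concatenation are assumptions for illustration, not the authors' implementation):

```python
# Minimal sketch of cloze-style prompt construction from the templates in
# Tab. 1. The helper name `build_prompt` is hypothetical, not from the paper.

def build_prompt(review: str, aspect: str = "", lang: str = "en") -> str:
    """Append a cloze pattern with a [MASK] slot to a review."""
    if lang == "en":
        # SemEval2014 Task4-style template: "<aspect>, it was [MASK]"
        return f"{review} {aspect}, it was [MASK]"
    # ChnSentiCorp-style template: "是好评吗?[MASK]"
    # ("Is this a positive review? [MASK]")
    return f"{review} 是好评吗?[MASK]"

print(build_prompt("The staff were friendly.", "service"))
# → The staff were friendly. service, it was [MASK]
```

The masked-language-model head of BERT then predicts a word at the [MASK] position, which a verbalizer maps back to a sentiment label.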
| Dataset | Label | Label words |
|---|---|---|
| SemEval2014 Task4 | Positive | good, wonderful, great, … |
| | Negative | bad, upset, worse, … |
| | Neutral | indifferent, just ok, … |
| ChnSentiCorp | Positive | 是, 对, … |
| | Negative | 否, 错, 不, … |

Tab. 2 Examples of expanded label words
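The expanded label words in Tab. 2 act as a verbalizer: the masked-LM probability mass at the [MASK] position is aggregated over each label's word set, and the highest-scoring label is predicted. A sketch under the assumption of average-pooled word scores (the paper's exact aggregation may differ; word lists are truncated to those shown in Tab. 2):

```python
# Sketch of a verbalizer over expanded label words (Tab. 2). Averaging the
# MLM probabilities per label is an assumption for illustration.

LABEL_WORDS = {
    "Positive": ["good", "wonderful", "great"],
    "Negative": ["bad", "upset", "worse"],
    "Neutral": ["indifferent", "just ok"],
}

def classify(mask_probs: dict) -> str:
    """Pick the label whose word set has the highest average probability at [MASK]."""
    def score(label: str) -> float:
        words = LABEL_WORDS[label]
        return sum(mask_probs.get(w, 0.0) for w in words) / len(words)
    return max(LABEL_WORDS, key=score)

# Toy MLM distribution at the [MASK] position:
probs = {"good": 0.4, "great": 0.2, "bad": 0.05}
print(classify(probs))  # → Positive
```

Expanding each label to several semantically related words makes the verbalizer less sensitive to which single word the pre-trained model happens to prefer.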
| Dataset | Split | Positive | Neutral | Negative |
|---|---|---|---|---|
| Laptop | train | 994 | 464 | 870 |
| | test | 341 | 169 | 128 |
| Restaurant | train | 2 164 | 637 | 807 |
| | test | 728 | 196 | 196 |

Tab. 3 SemEval2014 Task4 datasets (sample counts per sentiment polarity)
| Review text | Label |
|---|---|
| 很旧的设施,服务也不好,感觉一般,不能和大城市比。 | 0 |
| 第一感觉就是门童服务很到位,前台服务也面带微笑。房间宽敞明亮,上网速度也很快。很满意的一家酒店! | 1 |
| 服务没有最坏只有更坏,先是早上没热水然后电梯也坏了。 | 0 |

Tab. 4 Examples of ChnSentiCorp dataset (1 = positive, 0 = negative)
| Hyperparameter | SemEval2014 | ChnSentiCorp |
|---|---|---|
| Pre-trained model | BERT-base-uncased | BERT-base-chinese |
| Max text length | 32 | 300 |
| Learning rate | | |
| dropout | 0.1 | 0.1 |
| batch_size | 8 | 8 |
| epoch | 10 | 10 |
| Number of classes | 3 | 2 |

Tab. 5 Experimental configuration
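The settings in Tab. 5 can be written down as plain configuration dictionaries, which makes the two experimental setups easy to compare and reuse. A convenience sketch (the key names are assumptions; the learning-rate values were not recoverable from the source and are intentionally left out):

```python
# Tab. 5 hyperparameters as config dicts. Key names are illustrative;
# learning rates are omitted because they were lost in extraction.

CONFIGS = {
    "SemEval2014": {
        "pretrained_model": "BERT-base-uncased",
        "max_text_length": 32,
        "dropout": 0.1,
        "batch_size": 8,
        "epochs": 10,
        "num_classes": 3,  # positive / neutral / negative
    },
    "ChnSentiCorp": {
        "pretrained_model": "BERT-base-chinese",
        "max_text_length": 300,
        "dropout": 0.1,
        "batch_size": 8,
        "epochs": 10,
        "num_classes": 2,  # positive / negative
    },
}
```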
| Method | Laptop ACC/% | Laptop F1/% | Restaurant ACC/% | Restaurant F1/% |
|---|---|---|---|---|
| Glove-TextCNN | 71.03 | 65.62 | 79.24 | 66.71 |
| ELMo-Transformer | 73.12 | 66.37 | 80.46 | 68.05 |
| BERT-TextCNN* | 75.01 | 68.93 | 81.99 | 72.15 |
| BERT-pair* | 74.66 | 68.64 | 81.92 | 71.97 |
| BERT-BiLSTM | 75.31 | 69.37 | 82.21 | 72.52 |
| BMLA* | 76.73 | 71.50 | 83.54 | 74.91 |
| P-tuning | 76.95 | 74.18 | 83.98 | 76.77 |
| Proposed method | 77.74 | 75.20 | 84.82 | 77.42 |

Tab. 6 Experimental results on SemEval2014 datasets
| Method | ACC/% | F1/% |
|---|---|---|
| Glove-TextCNN | 87.38 | 88.49 |
| ELMo-Transformer | 93.66 | 92.06 |
| BERT-TextCNN | 93.72 | 92.53 |
| BERT-BiLSTM | 94.05 | 94.06 |
| P-tuning | 87.12 | 89.82 |
| Proposed method | 94.91 | 94.89 |

Tab. 7 Experimental results on ChnSentiCorp dataset
| Group | PT | SV | Laptop F1/% | Restaurant F1/% | ChnSentiCorp F1/% |
|---|---|---|---|---|---|
| 1 | × | × | 68.89 | 71.58 | 91.56 |
| 2 | × | √ | 74.05 | 75.03 | 94.08 |
| 3 | √ | × | 74.97 | 77.26 | 94.15 |
| 4 | √ | √ | 75.20 | 77.42 | 94.89 |

Tab. 8 Results of ablation experiments
| Method | Laptop | Restaurant | ChnSentiCorp |
|---|---|---|---|
| P-tuning | 1 700 | 2 140 | 840 |
| BERT-BiLSTM | 460 | 580 | 227 |
| Proposed method | 220 | 282 | 110 |

Tab. 9 Average running time of ten iterations of different methods
| 1 | ZHANG L, WANG S, LIU B. Deep learning for sentiment analysis: a survey[J]. WIREs Data Mining and Knowledge Discovery, 2018, 8(4): No.e1253. 10.1002/widm.1253 | 
| 2 | LIN B, ZAMPETTI F, BAVOTA G, et al. Sentiment analysis for software engineering: how far can we go?[C]// Proceedings of the ACM/IEEE 40th International Conference on Software Engineering. New York: ACM, 2018: 94-104. 10.1145/3180155.3180195 | 
| 3 | QIU X P, SUN T X, XU Y G, et al. Pre-trained models for natural language processing: a survey[J]. Science China Technological Sciences, 2020, 63(10): 1872-1897. 10.1007/s11431-020-1647-3 | 
| 4 | TANG D Y, QIN B, FENG X C, et al. Effective LSTMs for target-dependent sentiment classification[C]// Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers. [S.l.]: The COLING 2016 Organizing Committee, 2016: 3298-3307. | 
| 5 | LIU M Z, ZHOU F Y, CHEN K, et al. Co-attention networks based on aspect and context for aspect-level sentiment analysis[J]. Knowledge-Based Systems, 2021, 217: No.106810. 10.1016/j.knosys.2021.106810 | 
| 6 | CHEN P, SUN Z Q, BING L D, et al. Recurrent attention network on memory for aspect sentiment analysis[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2017: 452-461. 10.18653/v1/d17-1047 | 
| 7 | CHEN Y Z, ZHUANG T H, GUO K. Memory network with hierarchical multi-head attention for aspect-based sentiment analysis[J]. Applied Intelligence, 2021, 51(7): 4287-4304. 10.1007/s10489-020-02069-5 | 
| 8 | PENNINGTON J, SOCHER R, MANNING C D. GloVe: global vectors for word representation[C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2014: 1532-1543. 10.3115/v1/d14-1162 | 
| 9 | MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]// Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2. Red Hook, NY: Curran Associates Inc., 2013: 3111-3119. | 
| 10 | DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 4171-4186. 10.18653/v1/N19-1423 | 
| 11 | PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA: ACL, 2018: 2227-2237. 10.18653/v1/n18-1202 | 
| 12 | SUN C, HUANG L Y, QIU X P. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 380-385. | 
| 13 | XIA C Y, ZHANG C W, NGUYEN H, et al. CG-BERT: conditional text generation with BERT for generalized few-shot intent detection[EB/OL]. (2020-04-04) [2022-07-12]. | 
| 14 | ZHANG K, ZHANG K, ZHANG M D, et al. Incorporating dynamic semantics into pre-trained language model for aspect-based sentiment analysis[EB/OL]. [2022-05-25]. 10.18653/v1/2022.findings-acl.285 | 
| 15 | BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2020: 1877-1901. 10.18653/v1/2021.emnlp-main.734 | 
| 16 | LI C X, GAO F Y, BU J J, et al. SentiPrompt: sentiment knowledge enhanced prompt-tuning for aspect-based sentiment analysis[EB/OL]. (2021-09-17) [2022-07-12]. | 
| 17 | JIANG Z B, XU F F, ARAKI J, et al. How can we know what language models know?[J]. Transactions of the Association for Computational Linguistics, 2020, 8:423-438. 10.1162/tacl_a_00324 | 
| 18 | GAO T Y, FISCH A, CHEN D Q. Making pre-trained language models better few-shot learners[C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA: ACL, 2021: 3816-3830. 10.18653/v1/2021.acl-long.295 | 
| 19 | LIU X, ZHENG Y N, DU Z X, et al. GPT understands, too[EB/OL]. (2021-03-18) [2022-07-12]. 10.1016/j.aiopen.2023.08.012 | 
| 20 | SHIN T, RAZEGHI Y, LOGAN R L IV, et al. AutoPrompt: eliciting knowledge from language models with automatically generated prompts[C]// Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2020: 4222-4235. 10.18653/v1/2020.emnlp-main.346 | 
| 21 | SCHICK T, SCHÜTZE H. Exploiting cloze-questions for few-shot text classification and natural language inference[C]// Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Stroudsburg, PA: ACL, 2021: 255-269. 10.18653/v1/2021.eacl-main.20 | 
| 22 | SCHICK T, SCHMID H, SCHÜTZE H. Automatically identifying words that can serve as labels for few-shot text classification[C]// Proceedings of the 28th International Conference on Computational Linguistics. [S.l.]: International Committee on Computational Linguistics, 2020: 5569-5578. 10.18653/v1/2020.coling-main.488 | 
| 23 | HU S D, DING N, WANG H D, et al. Knowledgeable prompt-tuning: incorporating knowledge into prompt verbalizer for text classification[C]// Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: ACL, 2022: 2225-2240. 10.18653/v1/2022.acl-long.158 | 
| 24 | ZHAO Y O, ZHANG J C, LI Y B, et al. Sentiment analysis based on hybrid model of ELMo and Transformer[J]. Journal of Chinese Information Processing, 2021, 35(3): 115-124. 10.3969/j.issn.1003-0077.2021.03.012 | 
| 25 | NGUYEN Q T, NGUYEN T L, LUONG N H, et al. Fine-tuning BERT for sentiment analysis of Vietnamese reviews[C]// Proceedings of the 7th NAFOSTED Conference on Information and Computer Science. Piscataway: IEEE, 2020: 302-307. 10.1109/nics51282.2020.9335899 | 
| 26 | SHAHEEN M, NIGAM S. Plumeria at SemEval-2022 Task 6: sarcasm detection for English and Arabic using transformers and data augmentation[C]// Proceedings of the 16th International Workshop on Semantic Evaluation. Stroudsburg, PA: ACL, 2022: 923-937. 10.18653/v1/2022.semeval-1.130 | 
| 27 | YUAN X, LIU R, LIU M. Aspect-level sentiment analysis model incorporating multi-layer attention[J]. Computer Engineering and Applications, 2021, 57(22): 147-152. | 
| 28 | SUN T X, LIU X Y, QIU X P, et al. Paradigm shift in natural language processing[J]. Machine Intelligence Research, 2022, 19(3):169-183. 10.1007/s11633-022-1331-6 | 