Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (8): 2412-2419. DOI: 10.11772/j.issn.1001-9081.2022071041
Special topic: Artificial Intelligence
Hongjun HENG, Dingcheng YANG
Received: 2022-07-19
Revised: 2022-10-28
Accepted: 2022-11-11
Online: 2023-01-15
Published: 2023-08-10
Contact: Dingcheng YANG
About author: HENG Hongjun, born in 1968, Ph.D., associate professor. His research interests include natural language processing and intelligent information processing.
Abstract: Existing aspect-based sentiment analysis methods make insufficient use of the information contained in syntactic dependency trees, ignore the interaction among multiple aspect words, and rarely exploit external knowledge. To address these problems, a Knowledge Enhanced Aspect word Interactive Graph neural network (KEAIG) model was proposed. First, the text was encoded with BERT-PT (Bidirectional Encoder Representation from Transformers with Post-Train), which incorporates domain knowledge, and a knowledge graph was used to add sentiment information to the syntax tree. The model extracts the information contained in the syntactic dependency tree in two parts: the first part uses the dependency relations and the part-of-speech tag of each word to extract sentence features, and the second part extracts features from the syntactic dependency tree augmented with the knowledge graph. A fusion gating unit was then used to fuse the interaction features of multiple aspect words into the extracted features. Finally, the two sentence representations were concatenated as the basis for classification. Experimental results on four datasets show that, compared with the baseline model Relational Graph Attention neTwork (RGAT), the proposed model improves accuracy by 2.17%, 5.54%, 2.60% and 2.83%, and Macro-F1 by 2.69%, 6.87%, 8.77% and 14.70%, respectively, fully demonstrating the effectiveness of exploiting syntax trees, introducing external knowledge, and extracting multi-aspect-word interactions.
Hongjun HENG, Dingcheng YANG. Knowledge enhanced aspect word interactive graph neural network[J]. Journal of Computer Applications, 2023, 43(8): 2412-2419.
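As a reading aid, the following is a minimal PyTorch sketch of the two-branch design the abstract describes. The class names (`GCNLayer`, `KEAIGSketch`), dimensions, degree-normalized propagation rule, and the exact sigmoid-gate fusion formula are illustrative assumptions, not the paper's implementation; only the overall structure follows the abstract: BERT-PT token encodings feed two GCN branches (one over the POS-aware dependency tree, one over the knowledge-augmented tree), aspect-interaction features are fused in through a gating unit, and the two sentence representations are concatenated for classification.

```python
# Illustrative sketch of the KEAIG forward pass described in the abstract.
# All module names, dimensions and the fusion formula are assumptions; the
# paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution step over a batched adjacency matrix."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Normalize by node degree, then propagate and transform.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.linear(adj @ h / deg))


class KEAIGSketch(nn.Module):
    def __init__(self, dim: int = 768, num_layers: int = 2, num_classes: int = 3):
        super().__init__()
        self.syn_gcn = nn.ModuleList(GCNLayer(dim) for _ in range(num_layers))
        self.kno_gcn = nn.ModuleList(GCNLayer(dim) for _ in range(num_layers))
        self.gate = nn.Linear(2 * dim, dim)        # fusion gating unit
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, h, syn_adj, kno_adj, aspect_mask):
        # h:           (B, N, dim)  BERT-PT token encodings
        # syn_adj:     (B, N, N)    dependency-tree adjacency (POS-aware branch)
        # kno_adj:     (B, N, N)    dependency tree with knowledge-graph edges
        # aspect_mask: (B, N)       1 at aspect-word positions, else 0
        h_syn, h_kno = h, h
        for syn_layer, kno_layer in zip(self.syn_gcn, self.kno_gcn):
            h_syn = syn_layer(h_syn, syn_adj)
            h_kno = kno_layer(h_kno, kno_adj)

        # Aspect-interaction feature: mean over all aspect-word positions,
        # injected into each branch through a sigmoid gate.
        mask = aspect_mask.unsqueeze(-1)
        aspect = (h_syn * mask).sum(1) / mask.sum(1).clamp(min=1.0)  # (B, dim)

        def fuse(branch):
            pooled = branch.mean(dim=1)                              # (B, dim)
            g = torch.sigmoid(self.gate(torch.cat([pooled, aspect], -1)))
            return g * pooled + (1 - g) * aspect

        # Concatenate the two fused sentence representations and classify.
        return self.classifier(torch.cat([fuse(h_syn), fuse(h_kno)], dim=-1))


if __name__ == "__main__":
    B, N, D = 2, 10, 768
    model = KEAIGSketch(dim=D)
    h = torch.randn(B, N, D)
    adj = torch.eye(N).expand(B, N, N)
    mask = torch.zeros(B, N)
    mask[:, 3] = 1  # pretend token 3 is the aspect word
    print(model(h, adj, adj, mask).shape)  # torch.Size([2, 3])
```

Running the script prints `torch.Size([2, 3])`: one logit per sentiment class (positive/neutral/negative) for each of the two dummy sentences.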
| Dataset | Positive train | Positive test | Neutral train | Neutral test | Negative train | Negative test |
| --- | --- | --- | --- | --- | --- | --- |
| LAPTOP14 | 994 | 341 | 870 | 128 | 464 | 169 |
| REST14 | 2 164 | 728 | 807 | 196 | 637 | 196 |
| REST15 | 912 | 326 | 36 | 34 | 256 | 182 |
| REST16 | 1 240 | 469 | 69 | 30 | 439 | 117 |

Tab. 1 Dataset distribution
| Model | REST14 Acc. | REST14 Macro-F1 | LAPTOP14 Acc. | LAPTOP14 Macro-F1 | REST15 Acc. | REST15 Macro-F1 | REST16 Acc. | REST16 Macro-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TD-LSTM | 0.780 0 | 0.667 3 | 0.718 3 | 0.684 3 | 0.763 9 | 0.587 0 | 0.821 6 | 0.542 1 |
| ATAE-LSTM | 0.786 0 | 0.670 2 | 0.688 8 | 0.639 3 | 0.784 8 | 0.605 3 | 0.837 7 | 0.617 1 |
| IAN | 0.786 0 | 0.669 0 | 0.719 6 | 0.684 8 | 0.769 4 | 0.587 9 | 0.855 5 | 0.557 7 |
| AOA | 0.799 7 | 0.704 2 | 0.726 2 | 0.675 2 | 0.781 7 | 0.570 2 | 0.875 0 | 0.662 1 |
| RAM | 0.802 3 | 0.708 0 | 0.744 9 | 0.713 5 | 0.799 8 | 0.605 7 | 0.838 8 | 0.621 4 |
| ASGCN-DT | 0.808 6 | 0.721 9 | 0.741 4 | 0.692 4 | 0.793 4 | 0.607 8 | 0.886 9 | 0.666 4 |
| ASGCN-DG | 0.807 7 | 0.720 2 | 0.755 5 | 0.710 5 | 0.798 9 | 0.618 9 | 0.889 9 | 0.674 8 |
| BERT | 0.841 1 | 0.766 8 | 0.775 9 | 0.732 9 | 0.834 8 | 0.661 8 | 0.901 0 | 0.741 6 |
| BERT-PT | 0.859 8* | 0.793 0* | 0.780 6* | 0.735 3* | 0.849 6* | 0.710 3* | 0.917 6* | 0.740 8* |
| AEN+BERT | 0.831 2 | 0.737 6 | 0.799 3 | 0.763 1 | 0.840 3* | 0.648 2* | 0.897 1* | 0.720 3* |
| SD-GCN+BERT | 0.835 7 | 0.764 7 | 0.813 5 | 0.783 4 | — | — | — | — |
| R-GAT+BERT | 0.866 0 | 0.813 5 | 0.782 1 | 0.740 7 | 0.850 5* | 0.725 1* | 0.918 8* | 0.711 7* |
| SA-GCN+BERT | 0.861 6 | 0.805 4 | 0.803 1 | 0.771 2 | 0.841 8 | 0.694 2 | 0.914 1 | 0.803 9 |
| DGEDT+BERT | 0.863 0 | 0.800 0 | 0.798 0 | 0.756 0 | 0.840 0 | 0.710 0 | 0.919 0 | 0.790 0 |
| DualGCN+BERT | 0.871 3 | 0.811 6 | 0.818 0 | 0.781 0 | — | — | — | — |
| InterGCN+BERT | 0.871 2 | 0.810 2 | 0.828 7 | 0.793 2 | 0.854 2 | 0.710 5 | 0.912 7 | 0.783 2 |
| BiSyn-GAT+ | 0.879 4 | 0.824 3 | 0.829 1 | 0.793 8 | — | — | — | — |
| KEAIG (ours) | 0.884 8 | 0.835 4 | 0.825 4 | 0.791 6 | 0.872 6 | 0.788 7 | 0.944 8 | 0.816 3 |

Tab. 2 Comparison of experimental results of different models
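Accuracy and Macro-F1 in Tab. 2 (and in the tables below) are the standard three-class metrics: accuracy is the fraction of correctly classified aspect instances, and Macro-F1 is the unweighted mean of the per-class F1 scores over the positive, neutral and negative classes, which makes it more sensitive to the scarce neutral class in REST15/REST16. The sketch below shows how these could be computed, assuming scikit-learn as the tooling (the paper does not name its evaluation code):

```python
# Minimal metric computation for three-class ABSA; scikit-learn is an
# assumed tool here, not something the paper specifies.
from sklearn.metrics import accuracy_score, f1_score

y_true = [2, 2, 1, 0, 2, 1]  # 0 = negative, 1 = neutral, 2 = positive
y_pred = [2, 0, 1, 0, 2, 2]

acc = accuracy_score(y_true, y_pred)
# Macro-F1: compute F1 per class, then average with equal class weights.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"Accuracy = {acc:.4f}, Macro-F1 = {macro_f1:.4f}")
```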
| Model | REST14 Acc. | REST14 Macro-F1 | LAPTOP14 Acc. | LAPTOP14 Macro-F1 | REST15 Acc. | REST15 Macro-F1 | REST16 Acc. | REST16 Macro-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Full model | 0.884 8 | 0.835 4 | 0.825 4 | 0.791 6 | 0.887 5 | 0.781 9 | 0.944 8 | 0.816 3 |
| w/o tag part | 0.878 3 | 0.825 8 | 0.811 9 | 0.782 1 | 0.876 3 | 0.772 4 | 0.932 7 | 0.807 2 |
| w/o kno part | 0.879 4 | 0.822 7 | 0.810 5 | 0.765 8 | 0.878 5 | 0.768 9 | 0.929 7 | 0.802 3 |
| w/o senticnet | 0.881 3 | 0.829 8 | 0.820 7 | 0.787 3 | 0.883 1 | 0.775 1 | 0.938 5 | 0.809 4 |
| w/o asp mask | 0.880 7 | 0.827 6 | 0.813 2 | 0.786 9 | 0.878 1 | 0.775 2 | 0.938 6 | 0.810 7 |
| w/o asp inter | 0.879 8 | 0.826 1 | 0.821 2 | 0.782 3 | 0.879 4 | 0.779 1 | 0.935 1 | 0.808 3 |
| w/o domain bert | 0.872 3 | 0.818 5 | 0.811 7 | 0.769 3 | 0.874 8 | 0.762 1 | 0.925 1 | 0.794 6 |

Tab. 3 Experimental results of module ablation
| Layers | LAPTOP14 Acc. | LAPTOP14 Macro-F1 | REST14 Acc. | REST14 Macro-F1 | REST15 Acc. | REST15 Macro-F1 | REST16 Acc. | REST16 Macro-F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.819 3 | 0.788 7 | 0.881 2 | 0.824 2 | 0.869 8 | 0.782 3 | 0.935 0 | 0.806 0 |
| 2 | 0.825 4 | 0.791 6 | 0.884 8 | 0.835 4 | 0.872 6 | 0.788 7 | 0.944 8 | 0.816 4 |
| 3 | 0.822 3 | 0.789 8 | 0.884 8 | 0.828 8 | 0.869 0 | 0.778 9 | 0.941 6 | 0.814 2 |
| 4 | 0.817 8 | 0.784 2 | 0.875 0 | 0.815 1 | 0.868 2 | 0.773 4 | 0.936 7 | 0.809 4 |
| 5 | 0.813 6 | 0.779 7 | 0.878 5 | 0.813 6 | 0.865 3 | 0.769 5 | 0.935 0 | 0.804 6 |
| 6 | 0.815 7 | 0.776 9 | 0.879 4 | 0.814 2 | 0.867 1 | 0.759 0 | 0.933 4 | 0.805 2 |
| 7 | 0.808 7 | 0.768 9 | 0.870 5 | 0.808 7 | 0.869 0 | 0.752 2 | 0.933 9 | 0.800 9 |
| 8 | 0.802 5 | 0.763 8 | 0.871 6 | 0.803 1 | 0.863 4 | 0.754 2 | 0.931 8 | 0.797 9 |

Tab. 4 Experimental results of the model with different numbers of GCN layers
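Tab. 4 shows performance peaking at two GCN layers and degrading as depth grows, a pattern commonly attributed to over-smoothing in deep GCN stacks. A depth sweep over the illustrative `KEAIGSketch` model from the sketch above might look like the following hypothetical harness; a real run would train and validate on each dataset rather than feed random inputs:

```python
# Hypothetical GCN-depth sweep; assumes the KEAIGSketch class from the
# earlier sketch is in scope. Random tensors stand in for real data.
import torch

B, N, D = 2, 10, 768
h, adj = torch.randn(B, N, D), torch.eye(N).expand(B, N, N)
mask = torch.zeros(B, N)
mask[:, 3] = 1  # pretend token 3 is the aspect word

for num_layers in range(1, 9):  # Tab. 4 sweeps depths 1 through 8
    model = KEAIGSketch(dim=D, num_layers=num_layers)
    logits = model(h, adj, adj, mask)  # real code: train, then evaluate
    print(num_layers, logits.shape)
```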
| No. | Sentence | Aspect (gold polarity) | BiSyn-GAT+ | KEAIG |
| --- | --- | --- | --- | --- |
| Case 1 | the food was great - sushi was good, but the cooked food amazed us. | food (positive); sushi (positive); cooked food (positive) | positive (√); positive (√); positive (√) | positive (√); positive (√); negative (×) |
| Case 2 | try the rose roll (not on menu). | rose roll (positive); menu (neutral) | positive (√); negative (×) | positive (√); neutral (√) |
| Case 3 | even when the chef is not in the house, the food and service are right on target. | chef (neutral); food (positive); service (positive) | positive (×); positive (√); positive (√) | neutral (√); positive (√); positive (√) |

Tab. 5 Difference analysis of models under different cases
References:

[1] CHEN L, GUAN Z Y, HE J H, et al. A survey on sentiment classification[J]. Journal of Computer Research and Development, 2017, 54(6): 1150-1170. DOI: 10.7544/issn1000-1239.2017.20160807.
[2] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780. DOI: 10.1162/neco.1997.9.8.1735.
[3] LI D, WEI F R, TAN C Q, et al. Adaptive recursive neural network for target-dependent Twitter sentiment classification[C]// Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA: ACL, 2014: 49-54. DOI: 10.3115/v1/p14-2009.
[4] BACCIU D, ERRICA F, MICHELI A, et al. A gentle introduction to deep learning for graphs[J]. Neural Networks, 2020, 129: 203-221. DOI: 10.1016/j.neunet.2020.06.006.
[5] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 4171-4186. DOI: 10.18653/v1/N19-1423.
[6] TANG D Y, QIN B, FENG X C, et al. Effective LSTMs for target-dependent sentiment classification[C]// Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers. [S.l.]: The COLING 2016 Organizing Committee, 2016: 3298-3307.
[7] WANG Y Q, HUANG M L, ZHU X Y, et al. Attention-based LSTM for aspect-level sentiment classification[C]// Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2016: 606-615. DOI: 10.18653/v1/d16-1058.
[8] TANG D Y, QIN B, LIU T, et al. Aspect level sentiment classification with deep memory network[C]// Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2016: 214-224. DOI: 10.18653/v1/d16-1021.
[9] MA D H, LI S J, ZHANG X D, et al. Interactive attention networks for aspect-level sentiment classification[C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2017: 4068-4074. DOI: 10.24963/ijcai.2017/568.
[10] HUANG B X, OU Y L, CARLEY K M. Aspect level sentiment classification with attention-over-attention neural networks[C]// Proceedings of the 2018 International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, LNCS 10899. Cham: Springer, 2018: 197-206.
[11] CHEN P, SUN Z Q, BING L D, et al. Recurrent attention network on memory for aspect sentiment analysis[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2017: 452-461. DOI: 10.18653/v1/d17-1047.
[12] SUN K, ZHANG R C, MENSAH S, et al. Aspect-level sentiment analysis via convolution over dependency tree[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: ACL, 2019: 5679-5688. DOI: 10.18653/v1/d19-1569.
[13] ZHANG C, LI Q C, SONG D W. Aspect-based sentiment classification with aspect-specific graph convolutional networks[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: ACL, 2019: 4568-4578. DOI: 10.18653/v1/d19-1464.
[14] ZHAO P L, HOU L L, WU O. Modeling sentiment dependencies with graph convolutional networks for aspect-level sentiment classification[J]. Knowledge-Based Systems, 2020, 193: No.105443. DOI: 10.1016/j.knosys.2019.105443.
[15] WANG K, SHEN W Z, YANG Y Y, et al. Relational graph attention network for aspect-based sentiment analysis[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 3229-3238. DOI: 10.18653/v1/2020.acl-main.295.
[16] XU H, LIU B, SHU L, et al. BERT post-training for review reading comprehension and aspect-based sentiment analysis[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 2324-2335.
[17] CAMBRIA E, LI Y, XING F Z, et al. SenticNet 6: ensemble application of symbolic and subsymbolic AI for sentiment analysis[C]// Proceedings of the 29th ACM International Conference on Information and Knowledge Management. New York: ACM, 2020: 105-114. DOI: 10.1145/3340531.3412003.
[18] SONG Y W, WANG J H, JIANG T, et al. Attentional encoder network for targeted sentiment classification[C]// Proceedings of the 2019 International Conference on Artificial Neural Networks, LNCS 11730. Cham: Springer, 2019: 93-103.
[19] HOU X C, HUANG J, WANG G T, et al. Selective attention based graph convolutional networks for aspect-level sentiment classification[C]// Proceedings of the 15th Workshop on Graph-Based Methods for Natural Language Processing. Stroudsburg, PA: ACL, 2021: 83-93. DOI: 10.18653/v1/2021.textgraphs-1.8.
[20] TANG H, JI D H, LI C L, et al. Dependency graph enhanced dual-transformer structure for aspect-based sentiment classification[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 6578-6588. DOI: 10.18653/v1/2020.acl-main.588.
[21] LI R F, CHEN H, FENG F X, et al. Dual graph convolutional networks for aspect-based sentiment analysis[C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA: ACL, 2021: 6319-6329. DOI: 10.18653/v1/2021.acl-long.494.
[22] LIANG B, YIN R D, GUI L, et al. Jointly learning aspect-focused and inter-aspect relations with graph convolutional networks for aspect sentiment analysis[C]// Proceedings of the 28th International Conference on Computational Linguistics. [S.l.]: International Committee on Computational Linguistics, 2020: 150-161. DOI: 10.18653/v1/2020.coling-main.13.
[23] LIANG S, WEI W, MAO X L, et al. BiSyn-GAT+: bi-syntax aware graph attention network for aspect-based sentiment analysis[EB/OL]. (2022-04-06) [2022-07-15]. DOI: 10.18653/v1/2022.findings-acl.144.