Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (8): 2412-2419. DOI: 10.11772/j.issn.1001-9081.2022071041
Special Issue: Artificial Intelligence
• Artificial intelligence •
Received: 2022-07-19
Revised: 2022-10-28
Accepted: 2022-11-11
Online: 2023-01-15
Published: 2023-08-10
Contact: Dingcheng YANG
About author: HENG Hongjun, born in 1968 in Zhoukou, Henan, Ph. D., associate professor. His research interests include natural language processing and intelligent information processing.
CLC Number:
Hongjun HENG, Dingcheng YANG. Knowledge enhanced aspect word interactive graph neural network[J]. Journal of Computer Applications, 2023, 43(8): 2412-2419.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022071041
| Dataset | Positive (train) | Positive (test) | Neutral (train) | Neutral (test) | Negative (train) | Negative (test) |
|---|---|---|---|---|---|---|
| LAPTOP14 | 994 | 341 | 870 | 128 | 464 | 169 |
| REST14 | 2 164 | 728 | 807 | 196 | 637 | 196 |
| REST15 | 912 | 326 | 36 | 34 | 256 | 182 |
| REST16 | 1 240 | 469 | 69 | 30 | 439 | 117 |

Tab. 1 Dataset distribution
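The four splits above are the standard SemEval ABSA benchmarks. As an illustration only, the sketch below shows how such a polarity distribution could be tallied from a SemEval-2014-style XML file; the `aspectTerm`/`polarity` layout and the file names are assumptions rather than the authors' preprocessing code, and the 2015/2016 restaurant sets use a slightly different opinion-level markup.

```python
# Minimal sketch: counting the aspect-level polarity distribution of a
# SemEval-2014-style split, assuming each <sentence> contains
# <aspectTerm ... polarity="..."/> entries. File names are placeholders.
from collections import Counter
import xml.etree.ElementTree as ET

def polarity_counts(xml_path: str) -> Counter:
    """Count positive/neutral/negative aspect labels in one XML file."""
    counts = Counter()
    root = ET.parse(xml_path).getroot()
    for term in root.iter("aspectTerm"):
        polarity = term.get("polarity")
        if polarity in {"positive", "neutral", "negative"}:  # skip "conflict"
            counts[polarity] += 1
    return counts

if __name__ == "__main__":
    for split in ("Restaurants_Train.xml", "Restaurants_Test.xml"):  # placeholders
        print(split, polarity_counts(split))
```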
| Model | REST14 Accuracy | REST14 Macro-F1 | LAPTOP14 Accuracy | LAPTOP14 Macro-F1 | REST15 Accuracy | REST15 Macro-F1 | REST16 Accuracy | REST16 Macro-F1 |
|---|---|---|---|---|---|---|---|---|
| TD-LSTM | 0.780 0 | 0.667 3 | 0.718 3 | 0.684 3 | 0.763 9 | 0.587 0 | 0.821 6 | 0.542 1 |
| ATAE-LSTM | 0.786 0 | 0.670 2 | 0.688 8 | 0.639 3 | 0.784 8 | 0.605 3 | 0.837 7 | 0.617 1 |
| IAN | 0.786 0 | 0.669 0 | 0.719 6 | 0.684 8 | 0.769 4 | 0.587 9 | 0.855 5 | 0.557 7 |
| AOA | 0.799 7 | 0.704 2 | 0.726 2 | 0.675 2 | 0.781 7 | 0.570 2 | 0.875 0 | 0.662 1 |
| RAM | 0.802 3 | 0.708 0 | 0.744 9 | 0.713 5 | 0.799 8 | 0.605 7 | 0.838 8 | 0.621 4 |
| ASGCN-DT | 0.808 6 | 0.721 9 | 0.741 4 | 0.692 4 | 0.793 4 | 0.607 8 | 0.886 9 | 0.666 4 |
| ASGCN-DG | 0.807 7 | 0.720 2 | 0.755 5 | 0.710 5 | 0.798 9 | 0.618 9 | 0.889 9 | 0.674 8 |
| BERT | 0.841 1 | 0.766 8 | 0.775 9 | 0.732 9 | 0.834 8 | 0.661 8 | 0.901 0 | 0.741 6 |
| BERT-PT | 0.859 8* | 0.793 0* | 0.780 6* | 0.735 3* | 0.849 6* | 0.710 3* | 0.917 6* | 0.740 8* |
| AEN+BERT | 0.831 2 | 0.737 6 | 0.799 3 | 0.763 1 | 0.840 3* | 0.648 2* | 0.897 1* | 0.720 3* |
| SD-GCN+BERT | 0.835 7 | 0.764 7 | 0.813 5 | 0.783 4 | — | — | — | — |
| R-GAT+BERT | 0.866 0 | 0.813 5 | 0.782 1 | 0.740 7 | 0.850 5* | 0.725 1* | 0.918 8* | 0.711 7* |
| SA-GCN+BERT | 0.861 6 | 0.805 4 | 0.803 1 | 0.771 2 | 0.841 8 | 0.694 2 | 0.914 1 | 0.803 9 |
| DGEDT+BERT | 0.863 0 | 0.800 0 | 0.798 0 | 0.756 0 | 0.840 0 | 0.710 0 | 0.919 0 | 0.790 0 |
| DualGCN+BERT | 0.871 3 | 0.811 6 | 0.818 0 | 0.781 0 | — | — | — | — |
| InterGCN+BERT | 0.871 2 | 0.810 2 | 0.828 7 | 0.793 2 | 0.854 2 | 0.710 5 | 0.912 7 | 0.783 2 |
| BiSyn-GAT+ | 0.879 4 | 0.824 3 | 0.829 1 | 0.793 8 | — | — | — | — |
| KEAIG (proposed) | 0.884 8 | 0.835 4 | 0.825 4 | 0.791 6 | 0.872 6 | 0.788 7 | 0.944 8 | 0.816 3 |
Tab. 2 Comparison of experimental results of different models
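All comparisons in Tables 2-4 are reported as Accuracy and Macro-F1 over the three polarity classes. A minimal sketch of how these two metrics are typically computed with scikit-learn follows; the integer label encoding is an assumption made only for the example, not part of the paper.

```python
# Minimal sketch of the evaluation metrics used in Tables 2-4:
# overall Accuracy and Macro-F1 (the unweighted mean of per-class F1).
# The label-to-integer mapping below is assumed for illustration.
from sklearn.metrics import accuracy_score, f1_score

LABELS = {"negative": 0, "neutral": 1, "positive": 2}

def evaluate(y_true, y_pred):
    """Return (accuracy, macro_f1) for aspect-level polarity predictions."""
    acc = accuracy_score(y_true, y_pred)
    macro_f1 = f1_score(y_true, y_pred, average="macro")
    return acc, macro_f1

if __name__ == "__main__":
    gold = [LABELS[l] for l in ["positive", "neutral", "negative", "positive"]]
    pred = [LABELS[l] for l in ["positive", "negative", "negative", "positive"]]
    print(evaluate(gold, pred))  # (0.75, ~0.556) on this toy example
```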
| Model | REST14 Accuracy | REST14 Macro-F1 | LAPTOP14 Accuracy | LAPTOP14 Macro-F1 | REST15 Accuracy | REST15 Macro-F1 | REST16 Accuracy | REST16 Macro-F1 |
|---|---|---|---|---|---|---|---|---|
| Full model | 0.884 8 | 0.835 4 | 0.825 4 | 0.791 6 | 0.887 5 | 0.781 9 | 0.944 8 | 0.816 3 |
| w/o tag part | 0.878 3 | 0.825 8 | 0.811 9 | 0.782 1 | 0.876 3 | 0.772 4 | 0.932 7 | 0.807 2 |
| w/o kno part | 0.879 4 | 0.822 7 | 0.810 5 | 0.765 8 | 0.878 5 | 0.768 9 | 0.929 7 | 0.802 3 |
| w/o senticnet | 0.881 3 | 0.829 8 | 0.820 7 | 0.787 3 | 0.883 1 | 0.775 1 | 0.938 5 | 0.809 4 |
| w/o asp mask | 0.880 7 | 0.827 6 | 0.813 2 | 0.786 9 | 0.878 1 | 0.775 2 | 0.938 6 | 0.810 7 |
| w/o asp inter | 0.879 8 | 0.826 1 | 0.821 2 | 0.782 3 | 0.879 4 | 0.779 1 | 0.935 1 | 0.808 3 |
| w/o domin bert | 0.872 3 | 0.818 5 | 0.811 7 | 0.769 3 | 0.874 8 | 0.762 1 | 0.925 1 | 0.794 6 |
Tab. 3 Experimental results of module ablation
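Each ablation row above removes exactly one component. Purely to illustrate how such a study can be organized, the hypothetical configuration object below mirrors those rows with one boolean flag per component; the flag names and any wiring inside KEAIG are assumptions, not the authors' code.

```python
# Hypothetical ablation configuration mirroring the rows of Table 3.
# How each flag would gate a branch inside KEAIG is an assumption made
# only to show how one-component-at-a-time ablations can be generated.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AblationConfig:
    use_tag_graph: bool = True            # "tag part"
    use_knowledge_graph: bool = True      # "kno part"
    use_senticnet: bool = True            # "senticnet"
    use_aspect_mask: bool = True          # "asp mask"
    use_aspect_interaction: bool = True   # "asp inter"
    use_domain_bert: bool = True          # "domin bert" (domain-adapted BERT)

FULL = AblationConfig()
VARIANTS = {
    "w/o tag part": replace(FULL, use_tag_graph=False),
    "w/o kno part": replace(FULL, use_knowledge_graph=False),
    "w/o senticnet": replace(FULL, use_senticnet=False),
    "w/o asp mask": replace(FULL, use_aspect_mask=False),
    "w/o asp inter": replace(FULL, use_aspect_interaction=False),
    "w/o domin bert": replace(FULL, use_domain_bert=False),
}

if __name__ == "__main__":
    for name, cfg in VARIANTS.items():
        print(name, cfg)  # each variant disables exactly one component
```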
| GCN layers | LAPTOP14 Accuracy | LAPTOP14 Macro-F1 | REST14 Accuracy | REST14 Macro-F1 | REST15 Accuracy | REST15 Macro-F1 | REST16 Accuracy | REST16 Macro-F1 |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.819 3 | 0.788 7 | 0.881 2 | 0.824 2 | 0.869 8 | 0.782 3 | 0.935 0 | 0.806 0 |
| 2 | 0.825 4 | 0.791 6 | 0.884 8 | 0.835 4 | 0.872 6 | 0.788 7 | 0.944 8 | 0.816 4 |
| 3 | 0.822 3 | 0.789 8 | 0.884 8 | 0.828 8 | 0.869 0 | 0.778 9 | 0.941 6 | 0.814 2 |
| 4 | 0.817 8 | 0.784 2 | 0.875 0 | 0.815 1 | 0.868 2 | 0.773 4 | 0.936 7 | 0.809 4 |
| 5 | 0.813 6 | 0.779 7 | 0.878 5 | 0.813 6 | 0.865 3 | 0.769 5 | 0.935 0 | 0.804 6 |
| 6 | 0.815 7 | 0.776 9 | 0.879 4 | 0.814 2 | 0.867 1 | 0.759 0 | 0.933 4 | 0.805 2 |
| 7 | 0.808 7 | 0.768 9 | 0.870 5 | 0.808 7 | 0.869 0 | 0.752 2 | 0.933 9 | 0.800 9 |
| 8 | 0.802 5 | 0.763 8 | 0.871 6 | 0.803 1 | 0.863 4 | 0.754 2 | 0.931 8 | 0.797 9 |

Tab. 4 Experimental results of the model with different numbers of GCN layers
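Table 4 varies only the depth of the stacked graph convolution: two layers perform best and accuracy decays as more layers are added, consistent with the over-smoothing commonly seen in deep GCNs. The generic sketch below (a plain PyTorch GCN stack over a dependency adjacency matrix, not the authors' KEAIG implementation) shows where this depth hyperparameter enters.

```python
# Generic sketch of a GCN stack whose depth corresponds to the "GCN layers"
# column of Table 4, using the common mean-aggregation update
# H' = ReLU(D^-1 A H W). This is not the authors' KEAIG code.
import torch
import torch.nn as nn

class GCNStack(nn.Module):
    def __init__(self, hidden_dim: int, num_layers: int = 2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)
        )

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (batch, seq_len, hidden_dim) token representations
        # adj: (batch, seq_len, seq_len) dependency adjacency (with self-loops)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)   # row degrees
        for layer in self.layers:
            h = torch.relu(adj.matmul(layer(h)) / deg)     # mean aggregation
        return h

if __name__ == "__main__":
    x = torch.randn(2, 10, 768)
    a = torch.eye(10).expand(2, -1, -1)              # toy adjacency: self-loops only
    print(GCNStack(768, num_layers=2)(x, a).shape)   # torch.Size([2, 10, 768])
```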
| No. | Sentence | Aspect (gold polarity) | BiSyn-GAT+ | KEAIG |
|---|---|---|---|---|
| Case 1 | the food was great - sushi was good, but the cooked food amazed us. | food (positive), sushi (positive), cooked food (positive) | positive (√), positive (√), positive (√) | positive (√), positive (√), negative (×) |
| Case 2 | try the rose roll (not on menu). | rose roll (positive), menu (neutral) | positive (√), negative (×) | positive (√), neutral (√) |
| Case 3 | even when the chef is not in the house, the food and service are right on target. | chef (neutral), food (positive), service (positive) | positive (×), positive (√), positive (√) | neutral (√), positive (√), positive (√) |
Tab. 5 Difference analysis of models under different cases
1 | CHEN L, GUAN Z Y, HE J H, et al. A survey on sentiment classification[J]. Journal of Computer Research and Development, 2017, 54(6): 1150-1170. 10.7544/issn1000-1239.2017.20160807 |
2 | HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780. 10.1162/neco.1997.9.8.1735 |
3 | LI D, WEI F R, TAN C Q, et al. Adaptive recursive neural network for target-dependent twitter sentiment classification[C]// Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA: ACL, 2014:49-54. 10.3115/v1/p14-2009 |
4 | BACCIU D, ERRICA F, MICHELI A, et al. A gentle introduction to deep learning for graphs[J]. Neural Networks, 2020, 129:203-221. 10.1016/j.neunet.2020.06.006 |
5 | DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 4171-4186. 10.18653/v1/N19-1423 |
6 | TANG D Y, QIN B, FENG X C, et al. Effective LSTMs for target-dependent sentiment classification[C]// Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers. [S.l.]: The COLING 2016 Organizing Committee, 2016:3298-3307. |
7 | WANG Y Q, HUANG M L, ZHU X Y, et al. Attention-based LSTM for aspect-level sentiment classification[C]// Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2016:606-615. 10.18653/v1/d16-1058 |
8 | TANG D Y, QIN B, LIU T, et al. Aspect level sentiment classification with deep memory network[C]// Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2016:214-224. 10.18653/v1/d16-1021 |
9 | MA D H, LI S J, ZHANG X D, et al. Interactive attention networks for aspect-level sentiment classification[C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2017:4068-4074. 10.24963/ijcai.2017/568 |
10 | HUANG B X, OU Y L, CARLEY K M. Aspect level sentiment classification with attention-over-attention neural networks[C]// Proceedings of the 2018 International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction and Behavior Representation in Modeling and Simulation, LNCS 10899. Cham: Springer, 2018: 197-206. |
11 | CHEN P, SUN Z Q, BING L D, et al. Recurrent attention network on memory for aspect sentiment analysis[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2017: 452-461. 10.18653/v1/d17-1047 |
12 | SUN K, ZHANG R C, MENSAH S, et al. Aspect-level sentiment analysis via convolution over dependency tree[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: ACL, 2019:5679-5688. 10.18653/v1/d19-1569 |
13 | ZHANG C, LI Q C, SONG D W. Aspect-based sentiment classification with aspect-specific graph convolutional networks[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: ACL, 2019: 4568-4578. 10.18653/v1/d19-1464 |
14 | ZHAO P L, HOU L L, WU O. Modeling sentiment dependencies with graph convolutional networks for aspect-level sentiment classification[J]. Knowledge-Based Systems, 2020, 193: No.105443. 10.1016/j.knosys.2019.105443 |
15 | WANG K, SHEN W Z, YANG Y Y, et al. Relational graph attention network for aspect-based sentiment analysis[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 3229-3238. 10.18653/v1/2020.acl-main.295 |
16 | XU H, LIU B, SHU L, et al. BERT post-training for review reading comprehension and aspect-based sentiment analysis[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019:2324-2335. |
17 | CAMBRIA E, LI Y, XING F Z, et al. SenticNet 6: ensemble application of symbolic and subsymbolic AI for sentiment analysis[C]// Proceedings of the 29th ACM International Conference on Information and Knowledge Management. New York: ACM, 2020: 105-114. 10.1145/3340531.3412003 |
18 | SONG Y W, WANG J H, JIANG T, et al. Attentional encoder network for targeted sentiment classification[C]// Proceedings of the 2019 Artificial Neural Networks, LNCS 11730. Cham: Springer, 2019:93-103. |
19 | HOU X C, HUANG J, WANG G T, et al. Selective attention based graph convolutional networks for aspect-level sentiment classification[C]// Proceedings of the 15th Workshop on Graph-Based Methods for Natural Language Processing. Stroudsburg, PA: ACL, 2021: 83-93. 10.18653/v1/2021.textgraphs-1.8 |
20 | TANG H, JI D H, LI C L, et al. Dependency graph enhanced dual-transformer structure for aspect-based sentiment classification[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 6578-6588. 10.18653/v1/2020.acl-main.588 |
21 | LI R F, CHEN H, FENG F X, et al. Dual graph convolutional networks for aspect-based sentiment analysis[C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA: ACL, 2021: 6319-6329. 10.18653/v1/2021.acl-long.494 |
22 | LIANG B, YIN R D, GUI L, et al. Jointly learning aspect-focused and inter-aspect relations with graph convolutional networks for aspect sentiment analysis[C]// Proceedings of the 28th International Conference on Computational Linguistics. [S.l.]: International Committee on Computational Linguistics, 2020:150-161. 10.18653/v1/2020.coling-main.13 |
23 | LIANG S, WEI W, MAO X L, et al. BiSyn-GAT+: bi-syntax aware graph attention network for aspect-based sentiment analysis[EB/OL]. (2022-04-06) [2022-07-15]. 10.18653/v1/2022.findings-acl.144 |