Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (12): 3700-3707.DOI: 10.11772/j.issn.1001-9081.2021101779
Special Issue: Artificial Intelligence
Jiana MENG1, Pin LYU1, Yuhai YU1, Shichang SUN1, Hongfei LIN2
Received: 2021-10-18
Revised: 2021-12-29
Accepted: 2022-01-14
Online: 2022-01-24
Published: 2022-12-10
Contact: Yuhai YU
About author: MENG Jiana, born in 1972 in Siping, Jilin, Ph.D., professor, CCF member. Her research interests include machine learning and text mining.
Jiana MENG, Pin LYU, Yuhai YU, Shichang SUN, Hongfei LIN. Aspect-level cross-domain sentiment analysis based on capsule network[J]. Journal of Computer Applications, 2022, 42(12): 3700-3707.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2021101779
| Hyperparameter | Value |
|---|---|
| Bert_size | 768 |
| capsule_size | 300 |
| Sentence length | 80 |
| Word embedding dimension | 300 |
| Batch_size | 32 |
| Dropout rate | 0.2 |
| epoch | 5 |
| Fine-tuning data ratio (m) | 0, 0.05, 0.1, 0.15, 0.2 |

Tab. 1 Experimental parameter settings
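The hyperparameters of Tab. 1 can be collected into a single configuration object. This is a minimal sketch only; the class and field names (`ExperimentConfig`, `max_seq_len`, etc.) are illustrative assumptions, not identifiers from the paper's released code.

```python
# Illustrative configuration object for the Tab. 1 hyperparameters.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperimentConfig:
    bert_size: int = 768        # BERT hidden-state dimension
    capsule_size: int = 300     # capsule vector dimension
    max_seq_len: int = 80       # sentence length in tokens
    embedding_dim: int = 300    # word-embedding dimension
    batch_size: int = 32
    dropout: float = 0.2        # dropout rate
    epochs: int = 5
    # fractions m of target-domain data used for fine-tuning
    finetune_ratios: List[float] = field(
        default_factory=lambda: [0.0, 0.05, 0.1, 0.15, 0.2]
    )

cfg = ExperimentConfig()
```

Grouping the settings this way makes the table's values directly reusable when re-running the m = 0.1 and m = 0.2 experiments below.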
| Source→Target | Sentence-level Acc. | Aspect-level Acc. | Sentence-level F1 | Aspect-level F1 |
|---|---|---|---|---|
| Average | 0.781 | 0.883 | 0.707 | 0.886 |
| C→B | 0.725 | 0.869 | 0.870 | 0.875 |
| C→H | 0.683 | 0.863 | 0.641 | 0.858 |
| H→B | 0.780 | 0.840 | 0.785 | 0.864 |
| H→C | 0.835 | 0.934 | 0.882 | 0.913 |
| B→C | 0.848 | 0.932 | 0.854 | 0.945 |
| B→H | 0.812 | 0.862 | 0.813 | 0.860 |

Tab. 2 Comparison of different granularity results on Chinese corpus (m=0.1)
| Source→Target | Sentence-level Acc. | Aspect-level Acc. | Sentence-level F1 | Aspect-level F1 |
|---|---|---|---|---|
| Average | 0.792 | 0.858 | 0.802 | 0.861 |
| B→D | 0.804 | 0.867 | 0.841 | 0.876 |
| B→E | 0.743 | 0.854 | 0.820 | 0.852 |
| B→K | 0.785 | 0.848 | 0.793 | 0.868 |
| D→B | 0.824 | 0.877 | 0.813 | 0.866 |
| D→E | 0.780 | 0.826 | 0.762 | 0.837 |
| D→K | 0.813 | 0.890 | 0.825 | 0.870 |
| E→D | 0.760 | 0.857 | 0.796 | 0.848 |
| E→K | 0.831 | 0.890 | 0.836 | 0.865 |
| E→B | 0.772 | 0.854 | 0.766 | 0.855 |
| K→D | 0.812 | 0.858 | 0.785 | 0.846 |
| K→E | 0.816 | 0.864 | 0.804 | 0.878 |
| K→B | 0.759 | 0.805 | 0.781 | 0.876 |

Tab. 3 Comparison of different granularity results on English corpus (m=0.1)
| Source→Target | Sentence-level Acc. | Aspect-level Acc. | Sentence-level F1 | Aspect-level F1 |
|---|---|---|---|---|
| Average | 0.833 | 0.915 | 0.845 | 0.909 |
| C→B | 0.794 | 0.906 | 0.874 | 0.893 |
| C→H | 0.773 | 0.890 | 0.752 | 0.883 |
| H→B | 0.852 | 0.934 | 0.846 | 0.910 |
| H→C | 0.881 | 0.945 | 0.903 | 0.947 |
| B→C | 0.860 | 0.947 | 0.865 | 0.957 |
| B→H | 0.839 | 0.868 | 0.827 | 0.862 |

Tab. 4 Comparison of different granularity results on Chinese corpus (m=0.2)
| Source→Target | Sentence-level Acc. | Aspect-level Acc. | Sentence-level F1 | Aspect-level F1 |
|---|---|---|---|---|
| Average | 0.817 | 0.889 | 0.824 | 0.893 |
| B→D | 0.827 | 0.904 | 0.850 | 0.904 |
| B→E | 0.812 | 0.891 | 0.764 | 0.889 |
| B→K | 0.794 | 0.898 | 0.814 | 0.890 |
| D→B | 0.831 | 0.894 | 0.861 | 0.902 |
| D→E | 0.805 | 0.879 | 0.809 | 0.886 |
| D→K | 0.826 | 0.911 | 0.837 | 0.902 |
| E→D | 0.819 | 0.877 | 0.794 | 0.886 |
| E→K | 0.883 | 0.903 | 0.892 | 0.918 |
| E→B | 0.765 | 0.880 | 0.822 | 0.900 |
| K→D | 0.822 | 0.874 | 0.801 | 0.883 |
| K→E | 0.834 | 0.899 | 0.843 | 0.890 |
| K→B | 0.790 | 0.858 | 0.796 | 0.865 |

Tab. 5 Comparison of different granularity results on English corpus (m=0.2)
| Source→Target | SCL-MI | ITIAD | DANN | GCAE-TL | Proposed method |
|---|---|---|---|---|---|
| Average | 0.743 | 0.742 | 0.748 | 0.751 | 0.795 |
| B→D | 0.788 | 0.805 | 0.737 | 0.774 | 0.756 |
| B→E | 0.719 | 0.730 | 0.680 | 0.763 | 0.812 |
| B→K | 0.772 | 0.720 | 0.788 | 0.764 | 0.748 |
| D→B | 0.732 | 0.670 | 0.750 | 0.752 | 0.796 |
| D→E | 0.715 | 0.740 | 0.745 | 0.746 | 0.783 |
| D→K | 0.740 | 0.710 | 0.776 | 0.747 | 0.786 |
| E→B | 0.685 | 0.683 | 0.700 | 0.734 | 0.797 |
| E→D | 0.704 | 0.775 | 0.710 | 0.794 | 0.856 |
| E→K | 0.829 | 0.857 | 0.845 | 0.713 | 0.848 |
| K→B | 0.693 | 0.679 | 0.712 | 0.723 | 0.806 |
| K→D | 0.720 | 0.740 | 0.714 | 0.788 | 0.759 |
| K→E | 0.822 | 0.800 | 0.821 | 0.718 | 0.788 |

Tab. 6 Accuracy comparison of the proposed method and transfer learning methods on English corpus
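As a sanity check on the "Average" rows, the reported mean is the plain arithmetic mean over the twelve English transfer pairs. The sketch below recomputes it for the proposed method's column of Tab. 6 (the B/D/E/K labels are the table's source/target domain abbreviations):

```python
# Recompute the "Average" entry for the proposed method in Tab. 6
# from its twelve per-pair accuracies.
proposed = {
    "B→D": 0.756, "B→E": 0.812, "B→K": 0.748,
    "D→B": 0.796, "D→E": 0.783, "D→K": 0.786,
    "E→B": 0.797, "E→D": 0.856, "E→K": 0.848,
    "K→B": 0.806, "K→D": 0.759, "K→E": 0.788,
}
average = round(sum(proposed.values()) / len(proposed), 3)
print(average)  # 0.795 — matches the table's Average row
```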
| Source→Target | CDT | ASGCN | RAM | Proposed method |
|---|---|---|---|---|
| Average | 0.778 | 0.769 | 0.736 | 0.795 |
| B→D | 0.745 | 0.742 | 0.726 | 0.756 |
| B→E | 0.782 | 0.805 | 0.647 | 0.812 |
| B→K | 0.764 | 0.774 | 0.751 | 0.748 |
| D→B | 0.738 | 0.727 | 0.739 | 0.796 |
| D→E | 0.781 | 0.741 | 0.775 | 0.783 |
| D→K | 0.752 | 0.735 | 0.706 | 0.786 |
| E→B | 0.728 | 0.764 | 0.713 | 0.797 |
| E→D | 0.836 | 0.782 | 0.750 | 0.856 |
| E→K | 0.832 | 0.830 | 0.784 | 0.848 |
| K→B | 0.773 | 0.781 | 0.732 | 0.806 |
| K→D | 0.751 | 0.784 | 0.730 | 0.759 |
| K→E | 0.820 | 0.763 | 0.781 | 0.788 |

Tab. 7 Accuracy comparison of the proposed method and aspect-level sentiment analysis methods on English corpus
| Source→Target | SCL-MI | ITIAD | DANN | GCAE-TL | Proposed method |
|---|---|---|---|---|---|
| Average | 0.694 | 0.714 | 0.732 | 0.707 | 0.779 |
| C→B | 0.640 | 0.725 | 0.702 | 0.667 | 0.742 |
| C→H | 0.703 | 0.740 | 0.735 | 0.704 | 0.751 |
| H→B | 0.682 | 0.694 | 0.750 | 0.529 | 0.736 |
| H→C | 0.754 | 0.746 | 0.723 | 0.857 | 0.884 |
| B→C | 0.793 | 0.769 | 0.775 | 0.758 | 0.805 |
| B→H | 0.591 | 0.607 | 0.706 | 0.724 | 0.760 |

Tab. 8 Accuracy comparison of the proposed method and transfer learning methods on Chinese corpus
| Source→Target | CDT | ASGCN | RAM | Proposed method |
|---|---|---|---|---|
| Average | 0.741 | 0.738 | 0.709 | 0.779 |
| C→B | 0.721 | 0.715 | 0.705 | 0.742 |
| C→H | 0.740 | 0.726 | 0.734 | 0.751 |
| H→B | 0.705 | 0.724 | 0.694 | 0.736 |
| H→C | 0.751 | 0.787 | 0.686 | 0.884 |
| B→C | 0.772 | 0.732 | 0.714 | 0.805 |
| B→H | 0.756 | 0.744 | 0.721 | 0.760 |

Tab. 9 Accuracy comparison of the proposed method and aspect-level sentiment analysis methods on Chinese corpus
References

[1] PANG B, LEE L. Opinion mining and sentiment analysis[J]. Foundations and Trends in Information Retrieval, 2008, 2(1/2): 1-135. DOI: 10.1561/1500000011
[2] ISHAQ A, ASGHAR S, GILLANI S A. Aspect-based sentiment analysis using a hybridized approach based on CNN and GA[J]. IEEE Access, 2020, 8: 135499-135512. DOI: 10.1109/access.2020.3011802
[3] LI X, BING L D, ZHANG W X, et al. Exploiting BERT for end-to-end aspect-based sentiment analysis[C]// Proceedings of the 5th Workshop on Noisy User-generated Text. Stroudsburg, PA: Association for Computational Linguistics, 2019: 34-41. DOI: 10.18653/v1/d19-5505
[4] HE M. Research on domain adaption based cross-domain sentiment classification method[D]. Beijing: Beijing University of Technology, 2020: 45-64.
[5] ZHAO G Y, LYU C G, FU G H, et al. Domain specific sentiment words based attention model for cross-domain attribute-oriented sentiment analysis[J]. Journal of Chinese Information Processing, 2021, 35(6): 93-102. DOI: 10.3969/j.issn.1003-0077.2021.06.010
[6] MENG J N, LYU P, YU Y H, et al. Aspect-level cross-domain sentiment analysis based on CNN[J]. Computer Engineering and Applications, 2022, 58(16): 175-183. DOI: 10.3778/j.issn.1002-8331.2101-0316
[7] MA D H, LI S J, ZHANG X D, et al. Interactive attention networks for aspect-level sentiment classification[C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2017: 4068-4074. DOI: 10.24963/ijcai.2017/568
[8] WANG K, ZHENG Y, FANG S Y, et al. Long text aspect-level sentiment analysis based on text filtering and improved BERT[J]. Journal of Computer Applications, 2020, 40(10): 2838-2844. DOI: 10.11772/j.issn.1001-9081.2020020164
[9] DU C N, SUN H F, WANG J Y, et al. Capsule network with interactive attention for aspect-level sentiment classification[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2019: 5489-5498. DOI: 10.18653/v1/d19-1551
[10] ZHI S T, LI X G, WANG J B, et al. Sentiment analysis of entity aspects based on multi-attention long short-term memory[J]. Journal of Computer Applications, 2019, 39(1): 160-167. DOI: 10.11772/j.issn.1001-9081.2018061232
[11] ZHU P S, QIAN T Y. Enhanced aspect level sentiment classification with auxiliary memory[C]// Proceedings of the 27th International Conference on Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics, 2018: 1077-1087.
[12] WU T, CAO C P. Aspect level sentiment classification model with location weight and long-short term memory based on attention-over-attention[J]. Journal of Computer Applications, 2019, 39(8): 2198-2203. DOI: 10.11772/j.issn.1001-9081.2018122565
[13] YOSINSKI J, CLUNE J, BENGIO Y, et al. How transferable are features in deep neural networks?[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2014: 3320-3328.
[14] LONG M S, CAO Y, WANG J M, et al. Learning transferable features with deep adaptation networks[C]// Proceedings of the 32nd International Conference on Machine Learning. New York: JMLR.org, 2015: 97-105.
[15] GANIN Y, USTINOVA E, AJAKAN H, et al. Domain-adversarial training of neural networks[J]. Journal of Machine Learning Research, 2016, 17: 1-35.
[16] XU R F, XU J, WANG X L. Instance level transfer learning for cross lingual opinion analysis[C]// Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis. Stroudsburg, PA: Association for Computational Linguistics, 2011: 182-188.
[17] LI Z, ZHANG Y, WEI Y, et al. End-to-end adversarial memory network for cross-domain sentiment classification[C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2017: 2237-2243. DOI: 10.24963/ijcai.2017/311
[18] HINTON G, SABOUR S, FROSST N. Matrix capsules with EM routing[EB/OL]. (2018-02-16) [2021-10-18].
[19] ZHAO W, YE J B, YANG M, et al. Investigating capsule networks with dynamic routing for text classification[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2018: 3110-3119. DOI: 10.18653/v1/d18-1350
[20] WU Q, TAN S B. A two-stage framework for cross-domain sentiment classification[J]. Expert Systems with Applications, 2011, 38(11): 14269-14275.
[21] BLITZER J, DREDZE M, PEREIRA F. Biographies, Bollywood, boom-boxes and blenders: domain adaptation for sentiment classification[C]// Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics, 2007: 440-447. DOI: 10.3115/1557690.1557765
[22] SHARMA R, BHATTACHARYYA P, DANDAPAT S, et al. Identifying transferable information across domains for cross-domain sentiment classification[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: Association for Computational Linguistics, 2018: 968-978. DOI: 10.18653/v1/p18-1089
[23] SUN K, ZHANG R C, MENSAH S, et al. Aspect-level sentiment analysis via convolution over dependency tree[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2019: 5679-5688. DOI: 10.18653/v1/d19-1569
[24] ZHANG C, LI Q C, SONG D W. Aspect-based sentiment classification with aspect-specific graph convolutional networks[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2019: 4568-4578. DOI: 10.18653/v1/d19-1464
[25] CHEN P, SUN Z Q, BING L D, et al. Recurrent attention network on memory for aspect sentiment analysis[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2017: 452-461. DOI: 10.18653/v1/d17-1047