Journal of Computer Applications official website ›› 2023, Vol. 43 ›› Issue (10): 3062-3069. DOI: 10.11772/j.issn.1001-9081.2022091449
Special topic: Artificial Intelligence
Received: 2022-09-30
Revised: 2023-01-24
Accepted: 2023-02-01
Online: 2023-02-28
Published: 2023-10-10
Corresponding author: Dong ZHOU
About the author: WU Mingyue, born in 1999 in Loudi, Hunan, is an M.S. candidate and CCF member. His research interests include natural language processing and deep learning.
Supported by:
Mingyue WU 1,2, Dong ZHOU 1, Wenyu ZHAO 1,2, Wei QU 1,2
Abstract:
Sentence embeddings are one of the core technologies of natural language processing and affect the quality and performance of NLP systems. However, existing methods cannot efficiently infer the global semantic relations among sentences, so the semantic similarity of sentences measured in Euclidean space remains problematic. To address this, a sentence embedding optimization method based on manifold learning is proposed, starting from the local geometric structure of sentences. The method uses Locally Linear Embedding (LLE) to perform two weighted locally linear combinations over each sentence and its semantically similar sentences, which not only preserves the local geometric information among sentences but also helps infer global geometric information, making semantic similarity in Euclidean space closer to true human judgments. Experimental results on seven semantic textual similarity tasks show that the average Spearman's Rank Correlation Coefficient (SRCC) of the proposed method is 1.21 percentage points higher than that of the contrastive-learning-based method SimCSE (Simple Contrastive learning of Sentence Embeddings). Moreover, when applied to mainstream pre-trained models, the proposed method improves the average SRCC by 3.32 to 7.70 percentage points over the original pre-trained models.
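The LLE-style refinement described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the function name `refine_embeddings`, the neighbourhood size `k`, and the regulariser `reg` are all hypothetical, and applying the function twice mirrors the paper's "two weighted locally linear combinations" only loosely.

```python
import numpy as np

def refine_embeddings(X, k=8, reg=1e-3):
    """Refine sentence embeddings with one LLE-style weighted
    locally linear combination (a sketch, not the paper's method).

    X:   (n, d) array of sentence embeddings.
    k:   number of nearest neighbours per sentence (assumed value).
    reg: Tikhonov regulariser for the local Gram matrix.
    """
    n = X.shape[0]
    refined = np.empty_like(X)
    # Pairwise squared Euclidean distances to pick each sentence's neighbours.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        idx = np.argsort(d2[i])
        idx = idx[idx != i][:k]               # k nearest neighbours, excluding self
        Z = X[idx] - X[i]                     # centre the neighbours on x_i
        G = Z @ Z.T                           # local Gram matrix, shape (k, k)
        G = G + reg * np.trace(G) * np.eye(k) # regularise for numerical stability
        w = np.linalg.solve(G, np.ones(k))    # LLE reconstruction weights
        w = w / w.sum()                       # weights constrained to sum to 1
        refined[i] = w @ X[idx]               # weighted linear combination of neighbours
    return refined
```

A second pass (`refine_embeddings(refine_embeddings(X))`) would correspond to repeating the weighted combination, which is the step the abstract says is done twice.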
CLC number:
Mingyue WU, Dong ZHOU, Wenyu ZHAO, Wei QU. Sentence embedding optimization based on manifold learning[J]. Journal of Computer Applications, 2023, 43(10): 3062-3069.
| Category | Model | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Average |
|---|---|---|---|---|---|---|---|---|---|
| Generation models | BERT | 54.71 | 54.52 | 58.81 | 67.36 | 68.18 | 53.88 | 62.06 | 59.93 |
| | Skip_Thoughts | 44.32 | 34.56 | 41.65 | 46.52 | 55.45 | 74.28 | 79.21 | 53.71 |
| | InferSent_FastText | 58.16 | 55.29 | 58.53 | 67.34 | 68.29 | 72.45 | 78.34 | 65.48 |
| | USE_TF | 64.21 | 68.54 | 67.96 | 77.12 | 76.86 | 77.94 | 80.73 | 73.33 |
| | ConSERT | 64.64 | 78.49 | 69.42 | 79.72 | 75.95 | 73.97 | 67.31 | 72.78 |
| | SBERT | 66.21 | 74.21 | 74.43 | 77.27 | 73.86 | 74.16 | 78.29 | 74.06 |
| | SimCSE | 70.14 | 79.56 | 75.91 | 81.46 | 79.07 | 76.85 | 72.55 | 76.50 |
| Optimization models | Glove_WR | 57.13 | 68.24 | 65.31 | 72.25 | 70.16 | 64.26 | 70.43 | 66.82 |
| | BERT_flow | 58.40 | 67.10 | 60.85 | 75.16 | 71.22 | 68.66 | 64.47 | 66.55 |
| | BERT_whitening | 57.83 | 66.90 | 60.90 | 75.08 | 71.31 | 68.24 | 63.73 | 66.28 |
| | SimMSE | 72.00 | 80.10 | 76.03 | 83.03 | 81.04 | 78.18 | 73.52 | 77.71 |

Tab. 1 Comparison of experimental results of sentence embedding optimization models (unit: %)
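The SRCC scores reported in these tables compare the ranking induced by a model's similarity scores with the human gold ranking. A minimal tie-free implementation is shown below as an illustration; the function name `spearman_rcc` is assumed, and STS evaluation toolkits typically use a library routine such as `scipy.stats.spearmanr` instead.

```python
def spearman_rcc(gold, pred):
    """Spearman rank correlation coefficient, assuming no tied values."""
    def ranks(v):
        # Rank 1 = smallest value; assumes all values are distinct.
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    ra, rb = ranks(gold), ranks(pred)
    n = len(gold)
    ma, mb = sum(ra) / n, sum(rb) / n
    # Pearson correlation computed on the ranks.
    cov = sum((a - ma) * (b - mb) for a, b in zip(ra, rb))
    var_a = sum((a - ma) ** 2 for a in ra)
    var_b = sum((b - mb) ** 2 for b in rb)
    return cov / (var_a * var_b) ** 0.5

# Identical ranking of gold scores and model scores gives SRCC = 1.0.
print(spearman_rcc([4.8, 3.2, 1.0, 2.5, 4.0],
                   [0.92, 0.61, 0.05, 0.40, 0.88]))  # → 1.0
```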
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Average |
|---|---|---|---|---|---|---|---|---|
| BERT(1) | 21.02 | 20.12 | 16.77 | 20.14 | 27.43 | 6.43 | 30.11 | 20.28 |
| BERT(2) | 54.71 | 54.50 | 58.81 | 67.36 | 68.17 | 53.87 | 62.06 | 59.92 |
| BERT(3) | 50.07 | 52.91 | 54.91 | 63.37 | 64.94 | 47.29 | 58.22 | 55.95 |
| BERT_MFL | 60.65 | 64.53 | 63.40 | 74.11 | 72.69 | 62.21 | 64.27 | 65.98 |
| Roberta(1) | 45.27 | 36.88 | 47.71 | 53.84 | 59.50 | 39.13 | 61.76 | 49.15 |
| Roberta(2) | 57.59 | 48.98 | 59.36 | 66.87 | 64.20 | 58.56 | 61.63 | 59.59 |
| Roberta(3) | 53.82 | 46.58 | 56.64 | 64.96 | 63.62 | 55.40 | 62.02 | 57.57 |
| Roberta_MFL | 59.86 | 61.06 | 64.98 | 70.59 | 69.91 | 61.43 | 66.08 | 64.84 |
| XLNET(1) | 47.91 | 33.92 | 44.54 | 57.67 | 49.94 | 41.21 | 49.23 | 46.34 |
| XLNET(2) | 37.15 | 20.83 | 27.27 | 35.05 | 35.65 | 31.62 | 37.01 | 32.08 |
| XLNET(3) | 37.40 | 20.73 | 27.14 | 34.91 | 35.55 | 31.49 | 36.19 | 31.91 |
| XLNET_MFL | 53.45 | 49.14 | 52.42 | 62.07 | 54.65 | 49.62 | 52.11 | 53.35 |
| GPT-2(1) | 44.37 | 16.51 | 24.23 | 35.27 | 44.40 | 22.74 | 42.33 | 32.83 |
| GPT-2(2) | 36.80 | 24.76 | 31.24 | 33.75 | 37.41 | 27.06 | 43.26 | 33.46 |
| GPT-2(3) | 36.18 | 23.80 | 30.46 | 32.94 | 36.84 | 26.27 | 42.72 | 32.74 |
| GPT-2_MFL | 48.87 | 32.87 | 40.24 | 43.91 | 42.26 | 38.76 | 43.97 | 41.16 |
| BART(1) | 59.78 | 53.70 | 61.54 | 71.01 | 69.64 | 60.92 | 61.77 | 62.62 |
| BART(2) | 53.08 | 45.14 | 53.86 | 65.75 | 63.94 | 52.46 | 53.16 | 55.34 |
| BART(3) | 51.34 | 50.50 | 55.57 | 67.07 | 64.59 | 51.16 | 54.91 | 56.44 |
| BART_MFL | 60.80 | 63.58 | 64.74 | 73.03 | 72.26 | 63.87 | 63.12 | 65.94 |
| T5(1) | 37.52 | 32.88 | 39.91 | 44.02 | 47.96 | 36.42 | 37.13 | 39.40 |
| T5(2) | 66.01 | 71.04 | 73.45 | 73.74 | 64.83 | 62.77 | 60.18 | 67.43 |
| T5(3) | 59.01 | 64.19 | 69.87 | 68.87 | 83.17 | 60.35 | 60.84 | 66.61 |
| T5_MFL | 68.25 | 74.56 | 80.26 | 75.94 | 84.37 | 68.35 | 63.78 | 73.64 |

Tab. 2 Comparison of experimental results of mainstream pre-trained models (unit: %)
| Model | None | Random sampling | Rejection sampling | Sentence-frequency sampling |
|---|---|---|---|---|
| BERT | 53.87 | 46.63 | 49.61 | 62.21 |
| Roberta | 58.56 | 58.67 | 59.75 | 61.43 |
| XLNET | 31.62 | 28.68 | 32.65 | 49.62 |
| GPT-2 | 27.06 | 29.84 | 30.26 | 38.76 |
| BART | 52.46 | 51.56 | 50.68 | 63.87 |
| T5 | 62.77 | 58.67 | 60.84 | 68.35 |

Tab. 3 Performance comparison of different sampling methods on the STS-B test task (unit: %)
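The table above compares strategies for selecting the semantically similar sentences used in the local combinations. The paper's exact procedures are not reproduced on this page, so the following is only a hypothetical sketch of the two non-uniform variants: rejection sampling (redraw candidates until they pass a similarity threshold) and frequency-weighted sampling. The names `rejection_sample` and `frequency_sample`, and the threshold value, are assumptions.

```python
import random

def rejection_sample(candidates, sims, m, threshold=0.7, rng=None):
    """Hypothetical rejection sampling: draw candidates uniformly and
    reject any whose similarity score falls below the threshold."""
    rng = rng or random.Random(0)
    if all(s < threshold for s in sims):
        return []                         # nothing can ever be accepted
    sim = dict(zip(candidates, sims))
    accepted = []
    while len(accepted) < m:
        c = rng.choice(candidates)
        if sim[c] >= threshold:           # reject low-similarity draws
            accepted.append(c)
    return accepted

def frequency_sample(candidates, freqs, m, rng=None):
    """Hypothetical sentence-frequency sampling: draw candidates with
    probability proportional to their corpus frequency."""
    rng = rng or random.Random(0)
    return rng.choices(candidates, weights=freqs, k=m)
```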