Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (8): 2491-2496. DOI: 10.11772/j.issn.1001-9081.2024071037
• Artificial intelligence •
Received: 2024-07-23
Revised: 2024-09-23
Accepted: 2024-09-26
Online: 2024-11-19
Published: 2025-08-10
Contact: Yan ZHU (corresponding author)
About author: YANG Qing, born in 1999 in Zhuzhou, Hunan, M. S. candidate. Her research interests include figurative language analysis.
Supported by:
CLC Number:
Qing YANG, Yan ZHU. Metaphor detection for improving representation in linguistic rules[J]. Journal of Computer Applications, 2025, 45(8): 2491-2496.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024071037
| Dataset | Target words | Metaphor ratio/% | Sentences | Average sentence length |
| --- | --- | --- | --- | --- |
| VUA-18 train | 116 622 | 11.2 | 6 323 | 18.4 |
| VUA-18 val | 38 628 | 11.6 | 1 550 | 24.9 |
| VUA-18 test | 50 175 | 12.4 | 2 694 | 18.6 |
| VUA Verb train | 15 516 | 27.9 | 7 479 | 20.2 |
| VUA Verb val | 1 724 | 26.9 | 1 541 | 25.0 |
| VUA Verb test | 2 694 | 30.0 | 2 694 | 18.6 |
| MOH-X | 647 | 48.7 | 647 | 8.0 |
| TroFi | 3 737 | 43.5 | 3 737 | 28.3 |

Tab. 1 Detailed statistical data of benchmark datasets
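The per-split figures in Tab. 1 (metaphor ratio and average sentence length) follow directly from the token-level annotations. Below is a minimal sketch of how such statistics could be recomputed from any token-labelled metaphor corpus; the data layout (`sentences` as token lists, `labels` as parallel 0/1 lists) is an assumption for illustration, not the paper's actual preprocessing code.

```python
from typing import Dict, List


def split_statistics(sentences: List[List[str]],
                     labels: List[List[int]]) -> Dict[str, float]:
    """Summarise one dataset split.

    sentences: tokenised sentences, e.g. [["He", "attacked", "the", "plan"], ...]
    labels:    parallel lists with 1 for a metaphorical target word, 0 otherwise
               (hypothetical format, assumed here for illustration).
    """
    n_targets = sum(len(sent_labels) for sent_labels in labels)
    n_metaphor = sum(sum(sent_labels) for sent_labels in labels)
    n_sentences = len(sentences)
    avg_len = sum(len(sent) for sent in sentences) / max(n_sentences, 1)
    return {
        "target_words": n_targets,
        "metaphor_ratio_pct": round(100.0 * n_metaphor / max(n_targets, 1), 1),
        "sentences": n_sentences,
        "avg_sentence_length": round(avg_len, 1),
    }
```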
| Model | P (VUA-18) | R (VUA-18) | F1 (VUA-18) | P (VUA Verb) | R (VUA Verb) | F1 (VUA Verb) | P (MOH-X, 10-fold) | R (MOH-X, 10-fold) | F1 (MOH-X, 10-fold) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RNN_HG | 74.5 | 73.1 | 73.8 | 67.9 | 71.3 | 69.5 | 78.3 | 80.6 | 79.2 |
| RNN_MHCA | 76.9 | 71.8 | 74.3 | 70.8 | 71.3 | 71.0 | 77.5 | 79.6 | 78.3 |
| RoBERTa_SEQ | 78.4 | 75.1 | 76.7 | 72.1 | 74.6 | 73.3 | — | — | — |
| DeepMet | 82.0 | 71.3 | 76.3 | 79.5 | 70.8 | 74.9 | — | — | — |
| MelBERT | 79.6 | 76.7 | 78.1 | 74.0 | 76.1 | 75.0 | — | — | — |
| MrBERT | 81.1 | 72.2 | 76.4 | 79.0 | 71.5 | 75.1 | 83.3 | 80.3 | 81.6 |
| MisNet | 82.0 | 73.5 | 77.5 | 72.6 | 76.8 | 74.6 | 88.5 | 76.7 | 82.1 |
| MeRL | 82.2 | 75.5 | 78.7 | 77.2 | 74.8 | 76.0 | 84.3 | 82.9 | 83.3 |

Tab. 2 Comparison of metaphor detection performance on three benchmark datasets
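All results in Tab. 2 through Tab. 6 are reported as precision (P), recall (R), and F1 for the metaphor class. As a reference for how the three columns relate, here is a small, self-contained sketch (not the authors' evaluation script) that computes them from parallel lists of gold and predicted token labels.

```python
from typing import Sequence, Tuple


def precision_recall_f1(y_true: Sequence[int],
                        y_pred: Sequence[int],
                        positive: int = 1) -> Tuple[float, float, float]:
    """Precision, recall and F1 for the positive (metaphor) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1


# Example: 2 true positives, 1 false positive, 1 false negative
# gives P = R = F1 = 2/3 ≈ 66.7%.
print(precision_recall_f1([1, 0, 1, 1], [1, 1, 0, 1]))
```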
| Model | P | R | F1 |
| --- | --- | --- | --- |
| RoBERTa_SEQ | 53.0 | 69.7 | 60.2 |
| DeepMet | 53.7 | 72.9 | 61.7 |
| MelBERT | 52.6 | 72.7 | 61.0 |
| MrBERT | 54.0 | 72.4 | 61.9 |
| MisNet | 53.9 | 73.2 | 62.1 |
| MeRL | 54.0 | 74.9 | 62.8 |

Tab. 3 Zero-shot transfer results on TroFi dataset
| Genre | Model | P | R | F1 | POS | Model | P | R | F1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Academic | RNN_HG | 80.0 | 78.5 | 79.2 | Verb | RNN_HG | 67.9 | 72.0 | 69.9 |
| | RNN_MHCA | 84.2 | 77.4 | 80.6 | | RNN_MHCA | 70.5 | 71.6 | 71.1 |
| | RoBERTa_SEQ | 84.3 | 80.6 | 82.4 | | RoBERTa_SEQ | 72.3 | 74.9 | 73.6 |
| | DeepMet | 88.4 | 74.7 | 81.0 | | DeepMet | 78.8 | 68.5 | 73.3 |
| | MelBERT | 86.3 | 80.3 | 83.2 | | MelBERT | 74.3 | 76.5 | 75.4 |
| | MisNet | 87.6 | 78.2 | 82.6 | | MisNet | 78.3 | 72.4 | 75.2 |
| | MeRL | 87.3 | 81.0 | 84.0 | | MeRL | 78.2 | 74.0 | 76.1 |
| Conversation | RNN_HG | 65.1 | 67.5 | 66.3 | Adjective | RNN_HG | 62.9 | 61.1 | 62.0 |
| | RNN_MHCA | 67.3 | 66.5 | 66.9 | | RNN_MHCA | 69.9 | 53.2 | 60.4 |
| | RoBERTa_SEQ | 66.8 | 71.7 | 69.1 | | RoBERTa_SEQ | 67.7 | 62.1 | 64.8 |
| | DeepMet | 71.6 | 71.1 | 71.4 | | DeepMet | 79.0 | 52.9 | 63.3 |
| | MelBERT | 69.5 | 71.4 | 70.4 | | MelBERT | 69.7 | 59.3 | 64.1 |
| | MisNet | 71.1 | 68.7 | 69.9 | | MisNet | 73.2 | 57.5 | 64.4 |
| | MeRL | 71.6 | 69.6 | 70.6 | | MeRL | 73.1 | 58.4 | 65.0 |
| Fiction | RNN_HG | 67.6 | 71.3 | 69.4 | Adverb | RNN_HG | 68.7 | 59.4 | 63.7 |
| | RNN_MHCA | 68.1 | 70.3 | 69.2 | | RNN_MHCA | 79.8 | 56.9 | 66.5 |
| | RoBERTa_SEQ | 73.0 | 72.2 | 72.6 | | RoBERTa_SEQ | 77.5 | 62.2 | 69.0 |
| | DeepMet | 76.1 | 70.1 | 73.0 | | DeepMet | 79.4 | 66.4 | 72.3 |
| | MelBERT | 73.1 | 75.2 | 74.1 | | MelBERT | 78.5 | 68.8 | 73.3 |
| | MisNet | 76.9 | 72.9 | 74.8 | | MisNet | 80.8 | 63.9 | 71.4 |
| | MeRL | 75.3 | 71.5 | 73.3 | | MeRL | 81.3 | 64.3 | 71.9 |
| News | RNN_HG | 77.2 | 70.3 | 73.6 | Noun | RNN_HG | 69.6 | 58.0 | 63.3 |
| | RNN_MHCA | 78.9 | 68.5 | 73.3 | | RNN_MHCA | 71.7 | 53.8 | 61.5 |
| | RoBERTa_SEQ | 81.4 | 71.6 | 76.2 | | RoBERTa_SEQ | 74.0 | 62.6 | 67.8 |
| | DeepMet | 84.1 | 67.6 | 75.0 | | DeepMet | 76.5 | 57.1 | 65.4 |
| | MelBERT | 81.0 | 75.7 | 78.2 | | MelBERT | 75.8 | 63.8 | 69.3 |
| | MisNet | 84.3 | 70.5 | 76.8 | | MisNet | 76.2 | 62.0 | 68.4 |
| | MeRL | 84.8 | 73.4 | 78.7 | | MeRL | 77.1 | 63.1 | 69.4 |

Tab. 4 Model performance for different genres and parts of speech on VUA-18 dataset
| Model | P | R | F1 |
| --- | --- | --- | --- |
| MeRL | 82.2 | 75.5 | 78.7 |
| -MIP | 80.9 | 73.6 | 77.0 |
| -SPV | 82.4 | 73.5 | 77.7 |
| -Def | 82.3 | 73.1 | 77.4 |
| -Syn | 74.5 | 80.2 | 77.3 |

Tab. 5 Ablation experimental results
| Fusion method | P | R | F1 |
| --- | --- | --- | --- |
| | 82.2 | 75.5 | 78.7 |
| | 81.4 | 75.4 | 78.3 |
| | 79.3 | 76.3 | 77.8 |

Tab. 6 Results of different feature representation fusion methods
| Ground truth | Prediction | Sentence |
| --- | --- | --- |
| Non-metaphor | Metaphor | 1. Design : Crossed lines over the toytown tram: City transport could soon be back on the right track, says Jonathan Glancey |
| Non-metaphor | Metaphor | 2. they’re treating alcohol as food . |
| Non-metaphor | Metaphor | 3. I had nothing particular planned, merely an idea that it might be interesting to thrash our way out into the open ocean, though, mindful of the danger of tropical storms, I had no intention of going too far from the safe shelter of a Bahamian hurricane hole. |
| Metaphor | Non-metaphor | 4. Right here, in this gaping mouth, lies the end of the chain . |

Tab. 7 Error case analysis on VUA-18 dataset
[1] WILKS Y. A preferential, pattern-seeking, semantics for natural language inference[J]. Artificial Intelligence, 1975, 6(1): 53-74.
[2] WILKS Y. Making preferences more active[J]. Artificial Intelligence, 1978, 11(3): 197-223.
[3] PRAGGLEJAZ GROUP. MIP: a method for identifying metaphorically used words in discourse[J]. Metaphor and Symbol, 2007, 22(1): 1-39.
[4] STEEN G J, DORST A G, HERRMANN J B, et al. A method for linguistic metaphor identification[M]. Amsterdam: John Benjamins Publishing Company, 2010.
[5] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186.
[6] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2024-09-20].
[7] GE M, MAO R, CAMBRIA E. A survey on computational metaphor processing techniques: from identification, interpretation, generation to application[J]. Artificial Intelligence Review, 2023, 56(S2): 1829-1895.
[8] MAO R, LIN C, GUERIN F. End-to-end sequential metaphor identification inspired by linguistic theories[C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2019: 3888-3898.
[9] GAO G, CHOI E, CHOI Y, et al. Neural metaphor detection in context[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 607-613.
[10] SU C, FUKUMOTO F, HUANG X, et al. DeepMet: a reading comprehension paradigm for token-level metaphor detection[C]// Proceedings of the 2nd Workshop on Figurative Language Processing. Stroudsburg: ACL, 2020: 30-39.
[11] SONG W, ZHOU S, FU R, et al. Verb metaphor detection via contextual relation learning[C]// Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg: ACL, 2021: 4240-4251.
[12] LI Y, WANG S, LIN C, et al. Metaphor detection via explicit basic meanings modelling[C]// Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg: ACL, 2023: 91-100.
[13] CHOI M, LEE S, CHOI E, et al. MelBERT: metaphor detection via contextualized late interaction using metaphorical identification theories[C]// Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 1763-1773.
[14] ZHANG S, LIU Y. Metaphor detection via linguistics enhanced Siamese network[C]// Proceedings of the 29th International Conference on Computational Linguistics. [S.l.]: International Committee on Computational Linguistics, 2022: 4149-4159.
[15] WANG S, LI Y, LIN C, et al. Metaphor detection with effective context denoising[C]// Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 1404-1409.
[16] SU C, WU K, CHEN Y. Enhanced metaphor detection via incorporation of external knowledge based on linguistic theories[C]// Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Stroudsburg: ACL, 2021: 1280-1287.
[17] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[EB/OL]. [2024-09-20].
[18] HONNIBAL M. spaCy 2: natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing[EB/OL]. [2024-09-20].
[19] HU Z L. Linguistics course (revised Chinese translation)[M]. Beijing: Peking University Press, 2002: 69. (in Chinese)
[20] LEONG C W, KLEBANOV B B, SHUTOVA E. A report on the 2018 VUA metaphor detection shared task[C]// Proceedings of the 2018 Workshop on Figurative Language Processing. Stroudsburg: ACL, 2018: 56-66.
[21] BIRKE J, SARKAR A. A clustering approach for nearly unsupervised recognition of nonliteral language[C]// Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2006: 329-336.
[22] BIRKE J, SARKAR A. Active learning for the identification of nonliteral language[C]// Proceedings of the 2007 Workshop on Computational Approaches to Figurative Language. Stroudsburg: ACL, 2007: 21-28.
[23] MOHAMMAD S, SHUTOVA E, TURNEY P. Metaphor as a medium for emotion: an empirical study[C]// Proceedings of the 5th Joint Conference on Lexical and Computational Semantics. Stroudsburg: ACL, 2016: 23-33.
[24] LEONG C W, KLEBANOV B B, HAMILL C, et al. A report on the 2020 VUA and TOEFL metaphor detection shared task[C]// Proceedings of the 2nd Workshop on Figurative Language Processing. Stroudsburg: ACL, 2020: 18-29.
[25] REIMERS N, GUREVYCH I. Sentence-BERT: sentence embeddings using Siamese BERT-networks[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: ACL, 2019: 3982-3992.