Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (9): 2773-2782. DOI: 10.11772/j.issn.1001-9081.2024081193
• Artificial Intelligence •
Received: 2024-08-26
Revised: 2024-12-03
Accepted: 2024-12-10
Online: 2024-12-17
Published: 2025-09-10
Contact: Jing FAN
About author: LIANG Yiming, born in 1997 in Shangqiu, Henan, is an M. S. candidate and CCF member. His research interests include natural language processing and sentiment analysis.
Yiming LIANG1,2,3, Jing FAN1,2,3, Wenze CHAI1,2,3
Abstract:
To address the limitations of existing sentiment classification models in deep sentiment understanding, the unidirectional constraint of conventional attention mechanisms, and class imbalance in Natural Language Processing (NLP), a sentiment classification model named M-BCA (Multi-scale BERT features with Bidirectional Cross Attention) was proposed, which fuses multi-scale BERT (Bidirectional Encoder Representations from Transformers) features with a bidirectional cross-attention mechanism. First, multi-scale features were extracted from the low, middle, and high layers of BERT to capture the surface information, syntactic information, and deep semantic information of sentence text. Second, a three-channel Gated Recurrent Unit (GRU) was used to further extract deep semantic features, strengthening the model's understanding of the text. Finally, a bidirectional cross-attention mechanism was introduced to promote interaction and learning among features at different scales, enhancing the interplay of the multi-scale features. In addition, to address the imbalanced data problem, a data augmentation strategy was designed, and a hybrid loss function was adopted to optimize learning on minority-class samples. Experimental results show that M-BCA performs well on fine-grained sentiment classification and significantly outperforms most baseline models on imbalanced multi-class sentiment datasets. M-BCA is especially strong on minority-class samples: on the NLPCC 2014 and Online_Shopping_10_Cats datasets, its minority-class Macro-Recall exceeds that of all compared models. The model therefore achieves a notable performance gain on fine-grained sentiment classification and is well suited to imbalanced datasets.
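A minimal sketch of the bidirectional cross-attention idea described above, with random arrays and assumed dimensions (seq_len = 6, dim = 8) standing in for two feature scales; this illustrates the mechanism only, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    # Scaled dot-product attention: queries from one scale,
    # keys/values from the other scale.
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    return softmax(scores) @ kv_feats

rng = np.random.default_rng(0)
low = rng.standard_normal((6, 8))    # hypothetical low-scale features
high = rng.standard_normal((6, 8))   # hypothetical high-scale features

# "Bidirectional": each scale attends to the other.
low_enriched = cross_attention(low, high)
high_enriched = cross_attention(high, low)
print(low_enriched.shape, high_enriched.shape)  # (6, 8) (6, 8)
```

Stacking such a layer on top of the three BERT scales would let each scale query the others, which is the kind of interaction the abstract describes.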
Yiming LIANG, Jing FAN, Wenze CHAI. Multi-scale feature fusion sentiment classification based on bidirectional cross attention[J]. Journal of Computer Applications, 2025, 45(9): 2773-2782.
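As background for the multi-scale feature extraction described in the abstract: BERT exposes one hidden-state tensor per transformer layer, and low, middle, and high scales can be taken from groups of those layers. The grouping below (four layers per scale, mean-pooled) and the random stand-in arrays are assumptions for illustration, not the authors' configuration:

```python
import numpy as np

# Random arrays stand in for BERT's per-layer hidden states
# (12 transformer layers, each of shape (seq_len, hidden_dim)).
rng = np.random.default_rng(1)
hidden_states = [rng.standard_normal((6, 8)) for _ in range(12)]

def pool_layers(states, layer_ids):
    # Mean-pool the selected layers into one feature map per scale.
    return np.mean([states[i] for i in layer_ids], axis=0)

low = pool_layers(hidden_states, [0, 1, 2, 3])     # surface information
mid = pool_layers(hidden_states, [4, 5, 6, 7])     # syntactic information
high = pool_layers(hidden_states, [8, 9, 10, 11])  # deep semantic information
print(low.shape, mid.shape, high.shape)  # (6, 8) (6, 8) (6, 8)
```

In practice the three pooled feature maps would feed the model's three GRU channels, one per scale.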
| Dataset | Training samples | Validation samples | Test samples | Classes | Avg. length |
| --- | --- | --- | --- | --- | --- |
| SMP2020-EWECT | 31 905 | 4 513 | 9 161 | 6 | 39 |
| NLPCC 2014 | 26 794 | 4 543 | 9 084 | 8 | 28 |
| OCEMOTION | 24 985 | 3 569 | 7 140 | 7 | 47 |
| Online_Shopping_10_Cats | 23 940 | 6 277 | 12 556 | 2 | 58 |

Tab. 1 Dataset details
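The class-imbalance handling mentioned in the abstract relies on a hybrid loss function whose exact form is not given on this page; the sketch below shows one common choice, class-weighted cross-entropy with a focal modulation, purely as an assumption about what such a hybrid loss could look like:

```python
import numpy as np

def focal_weighted_ce(probs, labels, class_weights, gamma=2.0):
    """Class-weighted cross-entropy with a focal modulation term.
    probs: (N, C) predicted class probabilities; labels: (N,) int class ids."""
    p_true = probs[np.arange(len(labels)), labels]  # probability of true class
    w = class_weights[labels]                       # per-sample class weight
    return float(np.mean(-w * (1.0 - p_true) ** gamma * np.log(p_true + 1e-12)))

# Hypothetical 3-class batch where class 2 is the minority class.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
weights = np.array([1.0, 1.0, 3.0])  # inverse-frequency-style up-weighting
print(round(focal_weighted_ce(probs, labels, weights), 4))  # → 0.3435
```

The focal term down-weights easy, well-classified samples, while the class weights push gradient mass toward the minority class, the combination of pressures a hybrid loss for imbalanced data typically seeks.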
| Model | SMP2020-EWECT | | | NLPCC 2014 | | | OCEMOTION | | | On-Shopping | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Pre | M-R | M-F1 | Pre | M-R | M-F1 | Pre | M-R | M-F1 | Pre | M-R | M-F1 |
| DPCNN | 77.41 | 75.89 | 76.01 | 72.70 | 35.86 | 39.50 | 59.00 | 43.67 | 46.89 | 94.74 | 90.39 | 90.43 |
| TextRCNN | 77.49 | 75.14 | 75.74 | 72.73 | 39.99 | 43.39 | 58.10 | 42.00 | 45.33 | 94.64 | 89.04 | 90.01 |
| SCA-HDNN | 76.48 | 74.89 | 73.87 | 72.15 | 37.63 | 35.00 | 59.05 | 44.28 | 47.15 | 94.26 | 90.16 | 90.10 |
| ABCDM | 77.51 | 75.66 | 76.05 | 72.30 | 35.00 | 36.80 | 55.11 | 32.91 | 31.56 | 94.56 | 88.51 | 89.78 |
| T-E-GRU | 76.97 | 74.72 | 75.30 | 73.01 | 40.32 | 45.41 | 58.56 | 42.70 | 45.78 | 94.60 | 87.76 | 89.67 |
| ACR-SA | 77.09 | 74.74 | 75.46 | 72.24 | 35.68 | 40.26 | 58.28 | 44.52 | 46.54 | 94.81 | 89.50 | 90.36 |
| BERT-CNN | 77.39 | 75.01 | 75.95 | 73.01 | 37.56 | 42.55 | 59.09 | 43.91 | 47.03 | 94.88 | 90.70 | 90.69 |
| BiGRU-Att-HCNN | 76.98 | 74.49 | 75.53 | 72.92 | 37.99 | 41.68 | 58.42 | 43.92 | 47.31 | 94.83 | 89.12 | 90.30 |
| HSAN-capsule | 76.84 | 75.50 | 75.05 | 72.81 | 38.61 | 43.71 | 59.36 | 44.68 | 47.83 | 94.96 | 90.35 | 91.02 |
| GGC | 78.81 | 76.34 | 77.22 | 74.23 | 39.38 | 44.39 | 59.92 | 44.85 | 48.30 | 95.11 | 90.24 | 90.46 |
| M-BCA | 77.85 | 76.24 | 76.45 | 73.76 | 45.08 | 48.55 | 59.73 | 45.03 | 48.63 | 94.73 | 90.92 | 90.50 |

Tab. 2 Comparison results of different models on different datasets (unit: %)
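In the tables above, Pre, M-R, and M-F1 denote precision, macro-averaged recall, and macro-averaged F1. A minimal sketch of computing the macro-averaged metrics from a confusion matrix (the matrix below is hypothetical, not the paper's data):

```python
import numpy as np

def macro_recall_f1(conf):
    """conf[i, j]: count of samples with true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    recall = tp / conf.sum(axis=1)                    # per-class recall
    precision = tp / np.maximum(conf.sum(axis=0), 1)  # per-class precision
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return recall.mean(), f1.mean()

# Hypothetical 3-class confusion matrix with a poorly recalled minority class.
conf = np.array([[50,  5,  5],
                 [ 4, 40,  6],
                 [ 8,  7,  5]])
m_r, m_f1 = macro_recall_f1(conf)
print(round(m_r, 4), round(m_f1, 4))
```

Because macro averaging weights every class equally, a weak minority class drags Macro-Recall down sharply, which is why the paper tracks M-R when evaluating imbalanced datasets.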
Tab. 3 Ablation experimental results (unit: %)
(The table body was lost in extraction; the recoverable row labels are Baseline and the multi-scale feature ablations w/o L, w/o Mid, and w/o High.)
| Method | Pre | M-R | M-F1 |
| --- | --- | --- | --- |
| BERT-CNN | 73.01 | 37.56 | 42.55 |
| BERT-CNN+(Our*) | 73.12 | 40.15 | 44.61 |
| SCA-HDNN | 72.15 | 37.63 | 35.00 |
| SCA-HDNN+(Our*) | 72.31 | 40.02 | 44.00 |
| ACR-SA | 72.24 | 35.68 | 40.26 |
| ACR-SA+(Our*) | 72.03 | 38.79 | 42.34 |
| ABCDM | 72.30 | 35.00 | 36.80 |
| ABCDM+(Our*) | 72.55 | 35.46 | 37.82 |
| HSAN-capsule | 72.49 | 36.19 | 40.75 |
| HSAN-capsule+(Our*) | 72.61 | 40.97 | 45.21 |
| TextRCNN | 72.73 | 39.99 | 43.39 |
| TextRCNN+(Our*) | 72.91 | 42.27 | 45.44 |
| DPCNN | 72.70 | 35.86 | 39.50 |
| DPCNN+(Our*) | 73.07 | 37.90 | 42.15 |
| BiGRU-Att-HCNN | 72.92 | 37.99 | 41.68 |
| BiGRU-Att-HCNN+(Our*) | 73.02 | 38.50 | 42.86 |
| T-E-GRU | 73.01 | 40.32 | 45.41 |
| T-E-GRU+(Our*) | 72.61 | 41.07 | 45.44 |
| GGC | 74.23 | 39.38 | 44.39 |
| GGC+(Our*) | 73.96 | 41.06 | 45.75 |

Tab. 4 Test results of baseline models before and after adding data augmentation and joint training (unit: %)
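Table 4 compares baselines before and after the paper's data augmentation and joint training, but the specific augmentation operations are not described on this page. As a generic illustration only, one widely used text-augmentation operation (random token deletion) that could in principle be applied to minority-class samples:

```python
import random

def random_deletion(tokens, p=0.2, seed=0):
    """Drop each token with probability p, keeping at least one token.
    A generic augmentation op, not necessarily the paper's strategy."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [tokens[rng.randrange(len(tokens))]]

# Hypothetical review text, tokenized by whitespace for simplicity.
sample = "这 件 商品 质量 很 好".split()
augmented = random_deletion(sample)
print(augmented)
```

Generating several such perturbed copies of each minority-class sample is one standard way to rebalance the class distribution before or during training.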
[1] | JIANG X, SONG C, XU Y, et al. Research on sentiment classification for netizens based on the BERT-BiLSTM-TextCNN model [J]. PeerJ Computer Science, 2022, 8: No.e1005. |
[2] | SINGH M, JAKHAR A K, PANDEY S. Sentiment analysis on the impact of coronavirus in social life using the BERT model [J]. Social Network Analysis and Mining, 2021, 11: No.33. |
[3] | TAN X, ZHUANG M, LU X, et al. An analysis of the emotional evolution of large-scale internet public opinion events based on the BERT-LDA hybrid model [J]. IEEE Access, 2021, 9: 15860-15871. |
[4] | CAHYA L D, LUTHFIARTA A, KRISNA J I T, et al. Improving multi-label classification performance on imbalanced datasets through SMOTE technique and data augmentation using IndoBERT model [J]. Jurnal Teknologi dan Sistem Informasi, 2024, 9(3): 290-298. |
[5] | CAI T, ZHANG X. Imbalanced text sentiment classification based on multi-channel BLTCN-BLSTM self-attention [J]. Sensors, 2023, 23(4): No.2257. |
[6] | LI W, QI F, TANG M, et al. Bidirectional LSTM with self-attention mechanism and multi-channel features for sentiment classification [J]. Neurocomputing, 2020, 387: 63-77. |
[7] | WANG W, SUN Y X, QI Q J, et al. Text sentiment classification model based on BiGRU-attention neural network [J]. Application Research of Computers, 2019, 36(12): 3558-3564. |
[8] | SHI S, ZHAO M, GUAN J, et al. A hierarchical LSTM model with multiple features for sentiment analysis of Sina Weibo texts [C]// Proceedings of the 2017 International Conference on Asian Language Processing. Piscataway: IEEE, 2017: 379-382. |
[9] | ZHOU J, LU Y, DAI H N, et al. Sentiment analysis of Chinese microblog based on stacked bidirectional LSTM [J]. IEEE Access, 2019, 7: 38856-38866. |
[10] | HUANG F, LI X, YUAN C, et al. Attention-emotion-enhanced convolutional LSTM for sentiment analysis [J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(9): 4332-4345. |
[11] | PARVEEN N, CHAKRABARTI P, HUNG B T, et al. Twitter sentiment analysis using hybrid gated attention recurrent network[J]. Journal of Big Data, 2023, 10: No.50. |
[12] | DENG D, JING L, YU J, et al. Sparse self-attention LSTM for sentiment lexicon construction [J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2019, 27(11): 1777-1790. |
[13] | LI W J, QI F, YU Z T. Sentiment classification method based on multi-channel features and self-attention [J]. Journal of Software, 2021, 32(9): 2783-2800. |
[14] | DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional Transformers for language understanding [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186. |
[15] | CAI R, QIN B, CHEN Y, et al. Sentiment analysis about investors and consumers in energy market based on BERT-BiLSTM[J]. IEEE Access, 2020, 8: 171408-171415. |
[16] | LIU Y, LU J, YANG J, et al. Sentiment analysis for e-commerce product reviews by deep learning model of BERT-BiGRU-Softmax[J]. Mathematical Biosciences and Engineering, 2020, 17(6): 7819-7837. |
[17] | LIN S Y, KUNG Y C, LEU F Y. Predictive intelligence in harmful news identification by BERT-based ensemble learning model with text sentiment analysis [J]. Information Processing and Management, 2022, 59(2): No.102872. |
[18] | XIAO J, LUO X. Aspect-level sentiment analysis based on BERT fusion multi-attention [C]// Proceedings of the 14th International Conference on Intelligent Human-Machine Systems and Cybernetics. Piscataway: IEEE, 2022: 32-35. |
[19] | JAWAHAR G, SAGOT B, SEDDAH D. What does BERT learn about the structure of language? [C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2019: 3651-3657. |
[20] | MADABUSHI H T, KOCHKINA E, CASTELLE M. Cost-sensitive BERT for generalisable sentence classification with imbalanced data [C]// Proceedings of the 2nd Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda. Stroudsburg: ACL, 2019: 125-134. |
[21] | WEI Z, WANG C, YANG X, et al. Imbalanced sentiment classification of online reviews based on SimBERT [J]. Journal of Intelligent and Fuzzy Systems, 2023, 45(5): 8015-8025. |
[22] | BANSAL A, CHOUDHRY A, SHARMA A, et al. Adaptation of domain-specific Transformer models with text oversampling for sentiment analysis of social media posts on Covid-19 vaccines [J]. Computer Science, 2023, 24(2): 163-182. |
[23] | JABIR B, DE LA DIEZ L, THOMPSON E F B, et al. Ensemble Partition Sampling (EPS) for improved multi-class classification[J]. IEEE Access, 2023, 11: 48221-48235. |
[24] | AWAL M R, CAO R, LEE R K W, et al. AngryBERT: joint learning target and emotion for hate speech detection [C]// Proceedings of the 2021 Pacific-Asia Conference on Knowledge Discovery and Data Mining, LNCS 12712. Cham: Springer, 2021: 701-713. |
[25] | WEN H, ZHAO J. Sentiment analysis model of imbalanced comment texts based on BiLSTM [EB/OL]. [2024-06-21]. |
[26] | TAN K L, LEE C P, LIM K M. RoBERTa-GRU: a hybrid deep learning model for enhanced sentiment analysis [J]. Applied Sciences, 2023, 13(6): No.3915. |
[27] | LIN X, CHEN Z Z, WANG Z Q. Aspect-level sentiment classification based on imbalanced data and ensemble learning [J]. Computer Science, 2022, 49(6A): 144-149. |
[28] | YAN X M, HUANG H, JIN Y C, et al. A short text augmentation approach based on three-order semantic graphs for imbalanced sentiment multiclassification [J]. Chinese Journal of Computers, 2024, 47(12): 2742-2759. |
[29] | LI M, LONG Y, QIN L, et al. Emotion corpus construction based on selection from hashtags [C]// Proceedings of the 10th International Conference on Language Resources and Evaluation. Paris: European Language Resources Association, 2016: 1845-1849. |
[30] | LI X, NING H. Deep pyramid convolutional neural network integrated with self-attention mechanism and highway network for text classification [J]. Journal of Physics: Conference Series, 2020, 1642: No.012008. |
[31] | GUO Z, ZHU L, HAN L. Research on short text classification based on RoBERTa-TextRCNN [C]// Proceedings of the 2021 International Conference on Computer Information Science and Artificial Intelligence. Piscataway: IEEE, 2021: 845-849. |
[32] | KHAN J, AHMAD N, KHALID S, et al. Sentiment and context-aware hybrid DNN with attention for text sentiment classification[J]. IEEE Access, 2023, 11: 28162-28179. |
[33] | BASIRI M E, NEMATI S, ABDAR M, et al. ABCDM: an attention-based bidirectional CNN-RNN deep model for sentiment analysis [J]. Future Generation Computer Systems, 2021, 115: 279-294. |
[34] | ZHANG B, ZHOU W. Transformer-Encoder-GRU (T-E-GRU) for Chinese sentiment analysis on Chinese comment text [J]. Neural Processing Letters, 2023, 55(2): 1847-1867. |
[35] | KAMYAB M, LIU G, RASOOL A, et al. ACR-SA: attention-based deep model through two-channel CNN and Bi-RNN for sentiment analysis [J]. PeerJ Computer Science, 2022, 8: No.e877. |
[36] | DONG J, HE F, GUO Y, et al. A commodity review sentiment analysis based on BERT-CNN model [C]// Proceedings of the 5th International Conference on Computer and Communication Systems. Piscataway: IEEE, 2020: 143-147. |
[37] | ZHU Q, JIANG X, YE R. Sentiment analysis of review text based on BiGRU-attention and hybrid CNN [J]. IEEE Access, 2021, 9: 149077-149088. |
[38] | CHENG Y, ZOU H, SUN H, et al. HSAN-capsule: a novel text classification model [J]. Neurocomputing, 2022, 489: 521-533. |
[39] | LIANG Y M, FAN J. Research on sentiment analysis model integrating multi-channel GRU and CNN [J/OL]. Journal of Yunnan Minzu University (Natural Sciences Edition) [2024-12-01]. |