
Supervised Contrastive Generative Sentiment Analysis with Uncertainty-Aware Unlikelihood Learning

张棣锐1,林佳瑜2,梁祖红2   

  1. School of Computer Science, Guangdong University of Technology
    2. Guangdong University of Technology
  • Received: 2025-05-28  Revised: 2025-07-25  Accepted: 2025-08-06  Online: 2025-08-28  Published: 2025-08-28
  • Corresponding author: 林佳瑜
  • Supported by:
    Guangzhou Basic Research Program of 2024 (Joint Funding Special Project of Municipality, Universities, Research Institutes, and Enterprises)

Abstract: Existing models still face multiple challenges in the Aspect Sentiment Quad Prediction (ASQP) task. First, they struggle with implicit sentiment expressions (such as implicit aspects or opinions): because these expressions lack explicit lexical cues, models find it difficult to capture the sentiment tendency accurately. Second, a predicted quadruple is counted as correct only when every predicted element exactly matches the gold element, yet models tend to generate easily confused near-synonyms or synonyms, which renders the whole quadruple prediction wrong. Existing models concentrate on raising the probability of the correct words while neglecting to suppress the probabilities of easily confused words. In addition, the cross-entropy loss used by existing models makes them overconfident in wrong predictions; without uncertainty modeling, they can hardly suppress high-risk errors actively. These problems jointly limit model performance on aspect-based sentiment analysis. To address them, a Supervised Contrastive generative sentiment analysis method with Uncertainty-Aware Unlikelihood Learning (SCUAUL) is proposed. First, supervised contrastive learning is adopted: the contrastive loss pulls samples of the same class (e.g., the same sentiment polarity) closer in the semantic space, strengthening the model's ability to distinguish key features of the input (such as sentiment polarity and implicit aspects). Second, Monte Carlo Dropout (MC Dropout) is used to capture the model's intrinsic uncertainty and identify easily confused words; marginalized unlikelihood learning then dynamically suppresses the generation probabilities of these words while preserving the probabilities of the correct words, and a minimum-entropy constraint balances generation diversity and accuracy. Averaged over five runs on the Rest15 and Rest16 datasets, and compared with baseline models such as AugABSA and PARAPHRASE, SCUAUL improves precision by 0.4 to 3.98 and 0.38 to 3.83 percentage points, recall by 0.3 to 2.87 and 0.43 to 2.88 percentage points, and F1 score by 0.35 to 3.43 and 0.83 to 3.37 percentage points, respectively, verifying the effectiveness of SCUAUL on aspect-based sentiment analysis tasks.
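The first component described above is a supervised contrastive loss that pulls same-class samples (e.g., samples sharing a sentiment polarity) together in the representation space. The sketch below is a minimal PyTorch illustration of that general idea, not the paper's exact formulation; the pooling into sentence embeddings, the temperature value, and the batch layout are assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """SupCon-style loss: pull representations with the same label (e.g. the
    same sentiment polarity) together and push the rest apart.

    features: (batch, dim) sentence embeddings, e.g. mean-pooled encoder states.
    labels:   (batch,) integer class ids (e.g. sentiment polarity).
    """
    features = F.normalize(features, dim=-1)               # cosine-similarity geometry
    sim = features @ features.T / temperature              # (B, B) pairwise similarities

    self_mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))         # never contrast with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability assigned to positives, for anchors that have any
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob[has_pos] / pos_counts[has_pos]).mean()
```

In training, this term would be added to the generation loss; how the two are weighted is not specified here.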
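The second component combines MC Dropout, marginalized unlikelihood learning, and a minimum-entropy constraint. The sketch below shows one plausible way to wire these pieces together, assuming a HuggingFace-style encoder-decoder (e.g. a T5-like model that accepts `labels=` and exposes `.logits`); the number of stochastic passes, the top-k candidate set, and the loss weights `alpha` and `beta` are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_confusable_tokens(model, input_ids, attention_mask, labels,
                                 n_passes: int = 8, top_k: int = 3):
    """Average the predictive distribution over several dropout-enabled passes
    and return, per target position, the top-k candidate tokens plus a mask
    marking candidates that differ from the gold token ("confusable")."""
    model.train()                                    # keep dropout active (MC Dropout)
    mean_probs = None
    for _ in range(n_passes):
        logits = model(input_ids=input_ids,
                       attention_mask=attention_mask,
                       labels=labels).logits         # (B, T, V), assumed HF-style output
        probs = F.softmax(logits, dim=-1)
        mean_probs = probs if mean_probs is None else mean_probs + probs
    mean_probs = mean_probs / n_passes
    topk_ids = mean_probs.topk(top_k, dim=-1).indices        # (B, T, k)
    confusable = topk_ids != labels.unsqueeze(-1)             # exclude the gold token
    return topk_ids, confusable


def unlikelihood_with_min_entropy(logits, labels, topk_ids, confusable,
                                  alpha: float = 0.5, beta: float = 0.1,
                                  ignore_index: int = -100):
    """Cross entropy on gold tokens + an unlikelihood term that lowers the
    probability of confusable candidates + a minimum-entropy term that keeps
    the output distribution from becoming too flat."""
    log_probs = F.log_softmax(logits, dim=-1)                 # (B, T, V)
    ce = F.cross_entropy(logits.transpose(1, 2), labels, ignore_index=ignore_index)

    # unlikelihood: -log(1 - p(candidate)) on confusable, non-padding positions
    cand_p = log_probs.gather(-1, topk_ids).exp()             # (B, T, k)
    ul = -torch.log1p(-cand_p.clamp(max=1 - 1e-6))
    valid = confusable & (labels != ignore_index).unsqueeze(-1)
    ul_loss = (ul * valid).sum() / valid.sum().clamp(min=1)

    # minimum-entropy regularizer over non-padding target positions
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)      # (B, T)
    ent_loss = entropy[labels != ignore_index].mean()

    return ce + alpha * ul_loss + beta * ent_loss
```

Here the unlikelihood term only penalizes high-probability candidates that are not the gold token, so the probability of the correct word itself is left untouched, matching the behavior the abstract describes.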

Key words: aspect-based sentiment analysis, uncertainty awareness, unlikelihood learning, minimum entropy, contrastive learning
