Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (8): 2491-2496. DOI: 10.11772/j.issn.1001-9081.2024071037

• Artificial intelligence •

Metaphor detection by improving representation in linguistic rules

Qing YANG, Yan ZHU

  1. School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu Sichuan 611756, China
  • Received: 2024-07-23 Revised: 2024-09-23 Accepted: 2024-09-26 Online: 2024-11-19 Published: 2025-08-10
  • Contact: Yan ZHU
  • About author: YANG Qing, born in 1999 in Zhuzhou, Hunan, M. S. candidate. Her research interests include figurative language analysis.
  • Supported by:
    Sichuan Provincial Science and Technology Program (2019YFSY0032)

Abstract:

Most existing research on the metaphor detection task adopts deep learning techniques without exploiting linguistic rules in depth. This shortcoming mainly manifests as defective representations of the semantics and basic meanings of the target words involved in the rules, so that the related models cannot focus on the differences between target words and the most relevant contextual words, and the boundary between basic meanings and contextual meanings remains blurred. To address these problems, a Metaphor Detection model that improves Representation in Linguistic rules (MeRL) was proposed. Firstly, the semantic representation of the target words involved in both the Selectional Preference Violation (SPV) and Metaphor Identification Procedure (MIP) rules was enhanced. Secondly, the basic meanings of the target words in the MIP rule were represented explicitly. Finally, the rule-based SPV and MIP modules were fused to identify metaphors jointly. Experimental results show that, compared with MelBERT (Metaphor-aware late interaction over BERT) and other baseline models, the proposed model improves the F1-score by at least 0.6, 0.9, and 1.2 percentage points on the benchmark datasets VUA-18, VUA Verb, and MOH-X, respectively, indicating more accurate metaphor detection. In zero-shot transfer learning on the TroFi dataset, the proposed model improves the F1-score by at least 0.7 percentage points, indicating stronger generalization ability.
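
To make the rule-based design concrete, the sketch below shows how an SPV module (contrasting a target word's contextual embedding with the sentence embedding) and a MIP module (contrasting the target word's contextual meaning with its basic, context-independent meaning) might be fused for binary metaphor classification. This is a minimal PyTorch illustration assuming a MelBERT-style late-interaction layout; the class names, hidden size, and random toy embeddings are assumptions for exposition, not the authors' released implementation.

# Minimal sketch of SPV + MIP fusion for metaphor detection (illustrative only).
import torch
import torch.nn as nn

class SPVModule(nn.Module):
    """Selectional Preference Violation: contrast the target word's contextual
    embedding with the sentence-level context embedding."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(hidden * 2, hidden)

    def forward(self, sent_vec, target_ctx_vec):
        return torch.tanh(self.proj(torch.cat([sent_vec, target_ctx_vec], dim=-1)))

class MIPModule(nn.Module):
    """Metaphor Identification Procedure: contrast the target word's contextual
    meaning with its basic (context-independent) meaning."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(hidden * 2, hidden)

    def forward(self, target_ctx_vec, target_basic_vec):
        return torch.tanh(self.proj(torch.cat([target_ctx_vec, target_basic_vec], dim=-1)))

class RuleFusionClassifier(nn.Module):
    """Fuse the SPV and MIP representations and predict metaphorical vs. literal."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.spv = SPVModule(hidden)
        self.mip = MIPModule(hidden)
        self.classifier = nn.Linear(hidden * 2, 2)

    def forward(self, sent_vec, target_ctx_vec, target_basic_vec):
        h_spv = self.spv(sent_vec, target_ctx_vec)
        h_mip = self.mip(target_ctx_vec, target_basic_vec)
        return self.classifier(torch.cat([h_spv, h_mip], dim=-1))  # logits

if __name__ == "__main__":
    # Random tensors stand in for encoder outputs from a pre-trained model.
    model = RuleFusionClassifier(hidden=768)
    sent = torch.randn(4, 768)          # pooled sentence embeddings
    target_ctx = torch.randn(4, 768)    # target word encoded in context
    target_basic = torch.randn(4, 768)  # target word encoded in isolation (basic meaning)
    print(model(sent, target_ctx, target_basic).shape)  # torch.Size([4, 2])

In practice the three input vectors would come from a pre-trained encoder such as RoBERTa, with the basic-meaning vector obtained by encoding the target word outside its sentence context; the fusion and classification head above only illustrate the joint use of the two rules described in the abstract.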

Key words: metaphor, metaphor detection, pre-trained model, linguistic rule, external resource

CLC Number: