Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (1): 39-46. DOI: 10.11772/j.issn.1001-9081.2023010055

• Cross-Media Representation Learning and Cognitive Reasoning •

Multi-channel multi-step integration model for generative visual dialogue

Sihang CHEN, Aiwen JIANG, Zhaoyang CUI, Mingwen WANG

  1. School of Computer and Information Engineering, Jiangxi Normal University, Nanchang, Jiangxi 330022, China
  • Received: 2023-01-30  Revised: 2023-05-05  Accepted: 2023-05-09  Online: 2023-06-06  Published: 2024-01-10
  • Contact: Aiwen JIANG
  • About author: CHEN Sihang, born in 1997 in Pingxiang, Jiangxi, M. S. candidate, CCF student member. His research interests include visual dialogue.
    CUI Zhaoyang, born in 1998 in Shijiazhuang, Hebei, M. S. candidate. His research interests include visual dialogue.
    WANG Mingwen, born in 1965 in Nankang, Jiangxi, Ph. D., professor, CCF senior member. His research interests include natural language processing.
    JIANG Aiwen (corresponding author), born in 1984 in Fuliang, Jiangxi, Ph. D., professor, CCF senior member. His research interests include multimodal information processing.
  • Supported by:
    National Natural Science Foundation of China (61966018)

Abstract:

The visual dialogue task has made significant progress in multimodal information fusion and reasoning. However, mainstream models remain limited when answering questions that involve relatively explicit semantic attributes and spatial relationships. Few mainstream models can explicitly provide semantically sufficient, fine-grained descriptions of image content before producing a formal response, and a necessary bridge over the semantic gap between visual feature representations and textual semantics, such as the dialogue history and the current question, is missing. Therefore, a visual dialogue model based on Multi-Channel and Multi-step Integration (MCMI) was proposed. The model explicitly provides a set of fine-grained semantic descriptions of the visual content; through the interaction and multi-step integration of vision, semantics and dialogue history, it enriches the semantic representation of the question and achieves more accurate answer decoding. On the VisDial v0.9/VisDial v1.0 datasets, compared with the baseline Dual-channel Multi-hop Reasoning Model (DMRM), the proposed MCMI model improved Mean Reciprocal Rank (MRR) by 1.95 and 2.12 percentage points respectively, recall rate (R@1) by 2.62 and 3.09 percentage points respectively, and the mean rank of the correct answer (Mean) by 0.88 and 0.99 respectively. On the VisDial v1.0 dataset, compared with the recent Unified Transformer Contrastive learning model (UTC), the MCMI model improved MRR, R@1 and Mean by 0.06 percentage points, 0.68 percentage points and 1.47 respectively. To further evaluate the quality of the generated dialogue, two human evaluation metrics were proposed: the Turing-test-like response passing proportion M1 and the dialogue quality score (on a five-point scale) M2. Compared with the baseline model DMRM on the VisDial v0.9 dataset, the MCMI model improved M1 by 9.00 percentage points and M2 by 0.70.
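
As an illustration of the mechanism described above, the following is a minimal sketch, assuming a transformer-style implementation: three parallel cross-attention channels (visual features, fine-grained semantic descriptions, dialogue history) refine the question representation over several fusion steps before it is passed to an answer decoder. The class name MultiChannelMultiStepFusion, the gating layer and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multi-channel multi-step fusion module (illustrative assumptions,
# not the authors' implementation). Three cross-attention channels attend from the
# question to visual features, fine-grained semantic descriptions and dialogue history;
# their outputs are merged back into the question state over several fusion steps.
import torch
import torch.nn as nn


class MultiChannelMultiStepFusion(nn.Module):
    def __init__(self, dim=512, num_heads=8, num_steps=2):
        super().__init__()
        self.num_steps = num_steps
        self.vision_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.semantic_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.history_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Merge the question state with the three channel contexts back to width dim.
        self.merge = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, question, vision, semantics, history):
        # question: (B, Lq, D); vision: (B, Nv, D); semantics: (B, Ns, D); history: (B, Lh, D)
        q = question
        for _ in range(self.num_steps):
            v_ctx, _ = self.vision_attn(q, vision, vision)          # vision channel
            s_ctx, _ = self.semantic_attn(q, semantics, semantics)  # semantic-description channel
            h_ctx, _ = self.history_attn(q, history, history)       # dialogue-history channel
            fused = self.merge(torch.cat([q, v_ctx, s_ctx, h_ctx], dim=-1))
            q = self.norm(q + fused)  # residual update of the question representation
        return q  # enriched question representation, to be fed to an answer decoder


if __name__ == "__main__":
    B, D = 2, 512
    fusion = MultiChannelMultiStepFusion(dim=D)
    enriched = fusion(torch.randn(B, 10, D), torch.randn(B, 36, D),
                      torch.randn(B, 5, D), torch.randn(B, 40, D))
    print(enriched.shape)  # torch.Size([2, 10, 512])
```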
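
The MRR, R@1 and Mean figures above follow the standard VisDial retrieval protocol, in which each question is paired with 100 candidate answers and the rank the model assigns to the ground-truth answer is recorded. A minimal sketch of these metrics, assuming 1-based ground-truth ranks as input (not the paper's evaluation code), is shown below.

```python
# Minimal sketch of the standard VisDial retrieval metrics (illustrative, not the
# paper's evaluation code). gt_ranks[i] is the 1-based rank that the model assigned
# to the ground-truth answer of question i among its 100 candidate answers.
from statistics import mean


def visdial_retrieval_metrics(gt_ranks):
    return {
        "MRR": mean(1.0 / r for r in gt_ranks),                 # mean reciprocal rank, higher is better
        "R@1": mean(1.0 if r == 1 else 0.0 for r in gt_ranks),  # fraction ranked first, higher is better
        "Mean": mean(float(r) for r in gt_ranks),               # mean rank of the correct answer, lower is better
    }


if __name__ == "__main__":
    # Example: five questions whose ground-truth answers were ranked 1, 3, 7, 2 and 15.
    print(visdial_retrieval_metrics([1, 3, 7, 2, 15]))
```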

Key words: visual dialogue, generative task, visual semantic description, multi-step integration, multi-channel fusion
