《计算机应用》 (Journal of Computer Applications) ›› 2022, Vol. 42 ›› Issue (3): 854-859. DOI: 10.11772/j.issn.1001-9081.2021030470

• Artificial Intelligence •

Cross-modal chiastopic-fusion attention network for visual question answering

Mao WANG, Yaxiong PENG, Anjiang LU

  1. College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou 550025, China
  • Received: 2021-03-29  Revised: 2021-05-23  Accepted: 2021-07-07  Online: 2022-04-09  Published: 2022-03-10
  • Contact: Mao WANG
  • About author: PENG Yaxiong, born in 1963, associate professor. His research interests include digital communication, audio and video processing.
    LU Anjiang, born in 1978, Ph.D., associate professor. His research interests include embedded systems and integration, Internet of Things security, micro-sensing.
  • Supported by:
    Guizhou Province Science and Technology Achievement Transformation Project ([2017]4856)

Abstract:

In order to improve the accuracy of Visual Question Answering (VQA) models on complex image questions, a Cross-modal Chiastopic-fusion Attention Network (CCAN) for VQA was proposed. Firstly, an improved residual channel self-attention method was proposed to attend to the image and locate its important regions from the overall image information, thereby introducing a new joint attention mechanism that combines word attention with image-region attention. Secondly, a “cross-modal chiastopic-fusion” network was proposed to generate multiple features and integrate the two dynamic information flows, producing an effective attention flow within each modality; the joint features were fused by element-wise multiplication. In addition, parameters were shared between the networks to avoid an increase in computational cost. Experimental results on the VQA v1.0 dataset show that the accuracy of the proposed model reaches 67.57%, which is 2.97 percentage points higher than that of the MLAN (Multi-level Attention Network) model and 1.20 percentage points higher than that of the CAQT (Co-Attention network with Question Type) model. The proposed method effectively improves the accuracy of visual question answering, and its effectiveness and robustness are verified.
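
The abstract describes the architecture only at a high level. As a rough illustration of the pieces it names, the Python sketch below implements a residual channel self-attention block over image-region features, a simple attention step over words and regions, and fusion of the two attended vectors by element-wise multiplication, with one scoring head shared by both modalities (echoing the shared-parameter remark). The layer sizes, module names, and overall wiring are illustrative assumptions, not the authors' CCAN implementation.

# Minimal PyTorch sketch of the components named in the abstract. All sizes,
# names and wiring are assumptions for illustration, not the paper's CCAN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualChannelSelfAttention(nn.Module):
    """Re-weights feature channels from a global image summary, with a residual."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.Sigmoid(),
        )

    def forward(self, regions):                    # regions: (B, R, D)
        summary = regions.mean(dim=1)              # overall image information (B, D)
        weights = self.gate(summary).unsqueeze(1)  # per-channel weights (B, 1, D)
        return regions + regions * weights         # residual connection

class JointAttentionFusion(nn.Module):
    """Attends over words and image regions, then fuses them by element-wise product."""
    def __init__(self, dim):
        super().__init__()
        self.channel_att = ResidualChannelSelfAttention(dim)
        self.score = nn.Linear(dim, 1)             # scoring head shared by both modalities

    def attend(self, feats):                       # feats: (B, N, D)
        alpha = F.softmax(self.score(feats), dim=1)
        return (alpha * feats).sum(dim=1)          # attended vector (B, D)

    def forward(self, word_feats, region_feats):
        region_feats = self.channel_att(region_feats)
        q = self.attend(word_feats)                # question representation
        v = self.attend(region_feats)              # image representation
        return q * v                               # joint feature via element-wise multiplication

# Toy usage: batch of 2, 14 question words, 36 image regions, 512-d features.
model = JointAttentionFusion(dim=512)
words = torch.randn(2, 14, 512)
regions = torch.randn(2, 36, 512)
joint = model(words, regions)                      # (2, 512), fed to an answer classifier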

Key words: visual question answering, joint attention, chiastopic-fusion, residual channel, joint feature
