Official website of Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (S2): 298-305. DOI: 10.11772/j.issn.1001-9081.2023070985

• Frontier and Comprehensive Applications •

Alzheimer's disease classification method based on multimodal data

Yunxiao ZHANG1, Xiaohong WU1, Lili TANG2, Qinghua XU2, Bin WANG1, Xiaohai HE1

  1. College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan 610065, China
    2. Chengdu Yixinyuan Health Management Company Limited, Chengdu, Sichuan 610051, China
  • Received: 2023-07-21  Revised: 2023-10-08  Accepted: 2023-10-08  Online: 2023-10-26  Published: 2023-12-31
  • Corresponding author: Xiaohong WU
  • About the authors: Yunxiao ZHANG (1998—), male, born in Zigong, Sichuan, M. S. candidate; his main research interests include computer vision and medical image processing.
    Xiaohong WU (1970—), female, born in Chengdu, Sichuan, Ph. D., associate professor; her main research interests include image processing and recognition, and computer vision.
    Lili TANG (1981—), female, born in Chengdu, Sichuan; her main research interests include Alzheimer's disease care and occupational training for Alzheimer's disease.
    Qinghua XU (1962—), male, born in Jiajiang, Sichuan, M. S., associate chief physician; his main research interests include geriatric medicine and comprehensive geriatric assessment.
    Bin WANG (1998—), male, born in Jingzhou, Hubei, M. S.; his main research interests include computer vision and medical image processing.
    Xiaohai HE (1964—), male, born in Chengdu, Sichuan, Ph. D., professor; his main research interests include image processing and network communication, artificial intelligence, and big data analysis.
  • Funding: Chengdu Major Science and Technology Application Demonstration Project (2019-YF09-00120-SN)

Abstract:

In response to the problems of auxiliary classification methods for Alzheimer's Disease (AD) based on single-modal imaging data, such as the limited pathological information extractable from a single modality, unstable image feature extraction, and low classification accuracy, a multimodal data-based AD classification method was proposed. Since the clinical diagnosis of AD requires the comprehensive analysis of several kinds of examinations, four types of multimodal data, namely Magnetic Resonance Imaging (MRI), scales, biomarkers and genes, were used for AD auxiliary diagnosis, and a multimodal classification network was designed according to the characteristics of these data. The network contains two feature extraction branches, one for image data and one for non-image data. In the image branch, the preprocessed MRI data were fed into an improved network for feature extraction; this network takes the Residual Network (ResNet) as its backbone and embeds Coordinate Attention (CA) modules into the residual structure, so that the model focuses on the AD lesion regions in MRI images. In the non-image branch, feature information was extracted from the scale, biomarker and gene data by a multilayer perceptron. Finally, the extracted MRI image features and non-image features were fused for classification. Experimental results on a leakage-free multimodal dataset show that, compared with the basic ResNet, the improved MRI feature extraction network increases the accuracy of AD/Mild Cognitive Impairment (MCI)/Cognitively Normal (CN) three-class classification by 5.42 percentage points and that of AD/CN binary classification by 8.87 percentage points, demonstrating the effectiveness of the network improvement. After multimodal fusion, the AD/CN classification accuracy reaches 92.89%, which is 8.40 percentage points higher than that obtained with single-modal MRI data, and the AD/MCI/CN classification accuracy is improved by 13.51 percentage points, verifying that the proposed method can fuse pathological information from multiple modalities and effectively improve AD classification accuracy. In summary, the proposed method can effectively enhance the performance of AD auxiliary diagnosis.
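To make the two-branch design described in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it only illustrates how a Coordinate Attention (CA) module can be embedded in a basic residual block and how MRI image features can be concatenated with multilayer-perceptron features from scale, biomarker and gene data for AD/MCI/CN classification. The 2D-slice input, the layer widths, the reduction ratio and names such as MultimodalClassifier are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a Coordinate Attention (CA) module
# embedded in a residual block, plus a simple two-branch fusion classifier.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate Attention: factorizes spatial attention into two 1D encodings
    along height and width, keeping positional information."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        _, _, h, w = x.size()
        x_h = self.pool_h(x)                          # (B, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)      # (B, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * a_h * a_w                          # reweight features by position

class CAResidualBlock(nn.Module):
    """A basic residual block with a CA module inserted before the skip addition."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.ca = CoordinateAttention(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = None
        if stride != 1 or in_ch != out_ch:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.ca(self.bn2(self.conv2(out)))
        return self.relu(out + identity)

class MultimodalClassifier(nn.Module):
    """Two-branch network: CA-ResNet-style image encoder plus an MLP for
    non-image data (scales, biomarkers, genes), fused by concatenation.
    All layer sizes here are illustrative only."""
    def __init__(self, non_image_dim, num_classes=3):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 7, 2, 3, bias=False), nn.BatchNorm2d(32), nn.ReLU(inplace=True),
            CAResidualBlock(32, 64, stride=2),
            CAResidualBlock(64, 128, stride=2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())           # -> (B, 128)
        self.non_image_encoder = nn.Sequential(
            nn.Linear(non_image_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 32), nn.ReLU(inplace=True))        # -> (B, 32)
        self.classifier = nn.Linear(128 + 32, num_classes)   # AD / MCI / CN

    def forward(self, mri_slice, tabular):
        fused = torch.cat([self.image_encoder(mri_slice),
                           self.non_image_encoder(tabular)], dim=1)
        return self.classifier(fused)

# Example forward pass with dummy data: 1x128x128 MRI slices and 16 tabular features.
model = MultimodalClassifier(non_image_dim=16, num_classes=3)
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 3])
```

Fusion by simple feature concatenation, as sketched here, is one common choice; the paper's actual preprocessing, network depth and fusion strategy may differ.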

Key words: multimodal data, deep learning, Residual Network (ResNet), Coordinate Attention (CA), Magnetic Resonance Imaging (MRI), Alzheimer's Disease (AD)

CLC number: