Journal of Computer Applications ›› 2017, Vol. 37 ›› Issue (6): 1722-1727. DOI: 10.11772/j.issn.1001-9081.2017.06.1722

• Artificial Intelligence •



  • Corresponding author: WANG Lifang
  • About the authors: DONG Xia (1992-), female, born in Linfen, Shanxi, M.S. candidate; research interests: medical image fusion, machine learning. WANG Lifang (1977-), female, born in Changzhi, Shanxi, associate professor, Ph.D.; research interests: machine vision, big data processing, medical image processing. QIN Pinle (1978-), male, born in Changzhi, Shanxi, associate professor, Ph.D.; research interests: machine vision, big data processing, 3D reconstruction. GAO Yuan (1972-), female, born in Taiyuan, Shanxi, associate professor, M.S.; research interests: big data processing, medical image processing, 3D reconstruction.

CT/MR brain image fusion method via improved coupled dictionary learning

DONG Xia, WANG Lifang, QIN Pinle, GAO Yuan   

  1. School of Computer Science and Control Engineering, North University of China, Taiyuan Shanxi 030051, China
  • Received:2016-11-18 Revised:2017-02-03 Online:2017-06-10 Published:2017-06-14
  • Supported by:
    This work is partially supported by the Natural Science Foundation of Shanxi Province (2015011045).



Abstract: Currently, it is difficult to obtain an accurate sparse representation of brain medical images with a single dictionary, which degrades the fusion result, and the dictionary training process is time-consuming. To solve these problems, a Computed Tomography (CT)/Magnetic Resonance (MR) brain image fusion method via improved coupled dictionary learning was proposed. Firstly, the CT and MR image pairs were taken as the training set, and the coupled CT and MR dictionaries were obtained by joint training with an improved K-means-based Singular Value Decomposition (K-SVD) algorithm. The atoms in the CT and MR dictionaries were regarded as features of the training images, and a feature indicator of each dictionary atom was calculated by information entropy. Then, the atoms whose feature indicators differed only slightly were regarded as common features, while the remaining atoms were considered innovative features; a fused dictionary was obtained by merging the common and innovative features of the CT and MR dictionaries with the "mean" and "choose-max" rules, respectively. Furthermore, the registered source images were arranged into column vectors with their mean values removed, accurate sparse representation coefficients were computed over the fused dictionary by the Coefficient Reuse Orthogonal Matching Pursuit (CoefROMP) algorithm, and the sparse representation coefficients and mean vectors were fused with the "2-norm max" and "weighted average" rules, respectively. Finally, the fused image was obtained by reconstruction.
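The dictionary-fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `atom_entropy`, `fuse_dictionaries`, and the threshold `tau` are hypothetical names, and the histogram-based entropy estimator is just one plausible way to realize the "information entropy" feature indicator.

```python
import numpy as np

def atom_entropy(atom, bins=16):
    """Shannon entropy of an atom's value distribution (the feature indicator)."""
    hist, _ = np.histogram(atom, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def fuse_dictionaries(D_ct, D_mr, tau=0.1):
    """Fuse coupled dictionaries atom by atom.

    Atoms whose entropy indicators differ by less than `tau` are treated as
    common features and fused by the "mean" rule; the rest are innovative
    features fused by "choose-max" (keep the atom with the larger indicator).
    `tau` is an illustrative threshold, not a value from the paper.
    """
    D_f = np.empty_like(D_ct)
    for j in range(D_ct.shape[1]):
        e_ct = atom_entropy(D_ct[:, j])
        e_mr = atom_entropy(D_mr[:, j])
        if abs(e_ct - e_mr) < tau:                     # common feature -> "mean"
            D_f[:, j] = 0.5 * (D_ct[:, j] + D_mr[:, j])
        else:                                          # innovative -> "choose-max"
            D_f[:, j] = D_ct[:, j] if e_ct > e_mr else D_mr[:, j]
    # re-normalize atoms to unit l2 norm, as dictionary atoms conventionally are
    D_f /= np.linalg.norm(D_f, axis=0, keepdims=True)
    return D_f
```

Fusing at the atom level keeps the fused dictionary the same size as each input dictionary, so sparse coding over it costs no more than over a single learned dictionary.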
The experimental results show that, compared with three multi-scale-transform-based methods and three sparse-representation-based methods, the images fused by the proposed method have better visual quality in brightness, sharpness and contrast. Under three groups of experimental conditions, the mean values of the objective metrics, namely mutual information and the gradient-based, phase-congruency-based and universal-image-quality indexes, are 4.1133, 0.7131, 0.4636 and 0.7625, respectively, and the average time consumed by dictionary learning over 10 runs is 5.96 min. The proposed method can be applied to clinical diagnosis and assistant treatment.
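The coefficient-fusion and reconstruction rules at the end of the pipeline admit an equally small sketch. All names here are illustrative assumptions: the "2-norm max" rule selects per patch the coefficient vector with the larger l2 norm, and one plausible reading of the "weighted average" rule blends the removed patch means with weights proportional to those norms.

```python
import numpy as np

def fuse_patch_codes(a_ct, a_mr, m_ct, m_mr):
    """Fuse one patch pair's sparse codes and mean values.

    "2-norm max": keep the sparse coefficient vector with the larger l2 norm.
    "weighted average": blend the patch means with norm-proportional weights
    (one plausible choice of weights, assumed for illustration).
    """
    n_ct = np.linalg.norm(a_ct)
    n_mr = np.linalg.norm(a_mr)
    a_f = a_ct if n_ct >= n_mr else a_mr
    w = n_ct / (n_ct + n_mr + 1e-12)       # guard against an all-zero pair
    m_f = w * m_ct + (1.0 - w) * m_mr
    return a_f, m_f

def reconstruct_patch(D_f, a_f, m_f):
    """Fused patch = fused dictionary times fused code, plus the fused mean."""
    return D_f @ a_f + m_f
```

In the full method these per-patch reconstructions would be placed back at their image positions and overlapping pixels averaged, which is the standard way patch-based sparse-representation fusion assembles the final image.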

Key words: medical image fusion, K-means-based Singular Value Decomposition (K-SVD), Coefficient Reuse Orthogonal Matching Pursuit (CoefROMP), sparse representation, dictionary training
