Journal of Computer Applications ›› 2018, Vol. 38 ›› Issue (4): 1134-1140. DOI: 10.11772/j.issn.1001-9081.2017092291

• Virtual Reality and Multimedia Computing •

Multi-modal brain image fusion method based on adaptive joint dictionary learning

WANG Lifang, DONG Xia, QIN Pinle, GAO Yuan

  1. School of Data Science and Technology, North University of China, Taiyuan Shanxi 030051, China
  • Received: 2017-09-22  Revised: 2017-10-18  Online: 2018-04-10  Published: 2018-04-09
  • Corresponding author: WANG Lifang
  • About the authors: WANG Lifang, born in 1977 in Changzhi, Shanxi, is an associate professor, Ph.D. and CCF member; her research interests include machine vision, big data processing and medical image processing. DONG Xia, born in 1992 in Linfen, Shanxi, is a master's student; her research interests include medical image fusion and machine learning. QIN Pinle, born in 1978 in Changzhi, Shanxi, is an associate professor and Ph.D.; his research interests include machine vision, big data processing and 3D reconstruction. GAO Yuan, born in 1972 in Taiyuan, Shanxi, is an associate professor with a master's degree; her research interests include big data processing, medical image processing and 3D reconstruction.
  • Supported by: This work is partially supported by the Natural Science Foundation of Shanxi Province (2015011045).


Abstract: A globally trained dictionary adapts poorly to brain medical images, and fusing with the "max-L1" rule on sparse representation coefficients may cause gray-level inconsistency in the fused image, so the fusion results are often unsatisfactory. To address these problems, a multi-modal brain image fusion method based on adaptive joint dictionary learning was proposed. Firstly, sub-dictionaries were adaptively learned from the registered source images with an improved K-Singular Value Decomposition (K-SVD) algorithm and combined into an adaptive joint dictionary, over which the sparse representation coefficients were computed by the Coefficient Reuse Orthogonal Matching Pursuit (CoefROMP) algorithm. Then, the "multi-norm" of the sparse representation coefficients was taken as the activity level measurement of the source image patches, and an unbiased rule combining "adaptive weighted average" and "choose-max" was proposed: the fusion rule was selected according to the similarity of the "multi-norm" of the sparse representation coefficients, with the coefficients fused by the "adaptive weighted average" rule when the similarity was greater than a threshold and by the "choose-max" rule otherwise. Finally, the fused image was reconstructed from the fused coefficients and the adaptive joint dictionary. The experimental results show that, compared with three methods based on multi-scale transform and five methods based on sparse representation, the fused images of the proposed method retain more detail information, have better contrast and sharpness, and show clearer lesion edges; the mean values of the objective metrics, namely standard deviation, spatial frequency, mutual information, the gradient-based index, the universal-image-quality-based index and the mean structural similarity index, over three groups of experimental conditions are 71.0783, 21.9708, 3.6790, 0.6603, 0.7352 and 0.7339, respectively. The proposed method can be applied to clinical diagnosis and auxiliary treatment.
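The coefficient-fusion step described above can be illustrated with a minimal Python sketch. This is not the authors' implementation: the abstract does not specify the exact "multi-norm" activity measure, the similarity definition, the threshold value or the adaptive weights, so simple stand-ins are assumed here (L1 norm as the activity level, a min/max ratio as the similarity, and activity-proportional weights); the function names `fuse_sparse_coefficients` and `reconstruct_patch` are likewise hypothetical.

```python
# Illustrative sketch (not the paper's code) of the unbiased fusion rule:
# "adaptive weighted average" when the activity levels of two corresponding
# patches are similar, "choose-max" otherwise.
import numpy as np

def fuse_sparse_coefficients(alpha_a, alpha_b, threshold=0.8):
    """Fuse the sparse coefficient vectors of two corresponding source patches
    coded over the same (adaptive joint) dictionary."""
    act_a = np.sum(np.abs(alpha_a))   # assumed activity measure: L1 norm
    act_b = np.sum(np.abs(alpha_b))
    # Assumed similarity: ratio of the smaller to the larger activity (in [0, 1]).
    sim = min(act_a, act_b) / (max(act_a, act_b) + 1e-12)

    if sim > threshold:
        # "Adaptive weighted average": weights proportional to patch activity.
        w_a = act_a / (act_a + act_b + 1e-12)
        return w_a * alpha_a + (1.0 - w_a) * alpha_b
    # "Choose-max": keep the coefficients of the more active patch.
    return alpha_a if act_a >= act_b else alpha_b

def reconstruct_patch(dictionary, fused_alpha):
    """Reconstruct the fused patch from the joint dictionary and fused coefficients."""
    return dictionary @ fused_alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))                          # hypothetical dictionary for 8x8 patches
    a = rng.standard_normal(256) * (rng.random(256) < 0.05)     # sparse coefficients, modality A
    b = rng.standard_normal(256) * (rng.random(256) < 0.05)     # sparse coefficients, modality B
    patch = reconstruct_patch(D, fuse_sparse_coefficients(a, b))
    print(patch.shape)                                          # (64,) -> reshape to the 8x8 fused patch
```

In a full pipeline this rule would be applied patch by patch after sparse coding (the paper uses CoefROMP over the adaptive joint dictionary), and the reconstructed patches would be averaged back into the fused image at their original positions.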

Key words: multi-modal brain image fusion, K-Singular Value Decomposition (K-SVD), adaptive joint dictionary, Coefficient Reuse Orthogonal Matching Pursuit (CoefROMP), sparse representation, multi-norm, unbiased rule
