Journal of Computer Applications ›› 2019, Vol. 39 ›› Issue (12): 3650-3658. DOI: 10.11772/j.issn.1001-9081.2019061063

• Virtual Reality and Multimedia Computing •

  • About the authors: WANG Wenqing (1986-), male, born in Laiwu, Shandong, lecturer, Ph. D., CCF member; research interests: remote sensing image processing and interpretation, intelligent information processing, machine learning. LIU Han (1972-), male, born in Xi'an, Shaanxi, professor, Ph. D.; research interests: modeling and control of complex industrial processes, machine learning, artificial intelligence, intelligent information processing. XIE Guo (1982-), male, born in Dangyang, Hubei, professor, Ph. D.; research interests: intelligent information processing, intelligent rail transit, data analysis, fault diagnosis. LIU Wei (1982-), male, born in Hanzhong, Shaanxi, lecturer, Ph. D.; research interest: computer vision.

Component substitution-based fusion method for remote sensing images via improving spatial detail extraction scheme

WANG Wenqing1,2, LIU Han1,2, XIE Guo1,2, LIU Wei1,2   

  1. School of Automation and Information Engineering, Xi'an University of Technology, Xi'an Shaanxi 710048, China;
    2. Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing(Xi'an University of Technology), Xi'an Shaanxi 710048, China
  • Received:2019-06-24 Revised:2019-09-11 Online:2019-12-10 Published:2019-10-10
  • Contact: WANG Wenqing
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (61703334), the China Postdoctoral Science Foundation (2016M602942XB), the Natural Science Basic Research Plan of Shaanxi Province (2017JQ6050), and the Key Research and Development Program of Shaanxi Province (2018ZDXM-GY-089).



Abstract: Concerning the spatial and spectral distortions caused by the local spatial dissimilarity between multispectral and panchromatic images, a component substitution-based remote sensing image fusion method with an improved spatial detail extraction scheme was proposed. Different from the classical spatial detail extraction methods, a high-resolution intensity image was synthesized by the proposed method to replace the panchromatic image in spatial detail extraction, with the aim of acquiring spatial detail information that matches the multispectral image. Firstly, according to the manifold consistency between the low-resolution and high-resolution intensity images, a locally linear embedding-based reconstruction method was used to reconstruct the first high-resolution intensity image. Secondly, after decomposing the low-resolution intensity image and the panchromatic image with the wavelet transform, the low-frequency information of the low-resolution intensity image and the high-frequency information of the panchromatic image were retained, and the inverse wavelet transform was performed to reconstruct the second high-resolution intensity image. Thirdly, sparse fusion was performed on the two high-resolution intensity images to obtain a high-quality intensity image. Finally, the synthesized high-resolution intensity image was fed into the component substitution-based fusion framework to obtain the fused image. The experimental results show that, compared with eleven other fusion methods, the proposed method produces fused images with higher spatial resolution and lower spectral distortion. For the proposed method, the mean values of the objective evaluation indexes, namely the correlation coefficient, root mean squared error, erreur relative globale adimensionnelle de synthèse (ERGAS), spectral angle mapper and quaternion theory-based quality index, over three groups of GeoEye-1 fused images are 0.9439, 24.3479, 2.7643, 3.9376 and 0.9082 respectively, which are better than those of the eleven compared methods. The proposed method can effectively reduce the effect of local spatial dissimilarity on the performance of the component substitution-based fusion framework.
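Two of the steps described above can be sketched in code: the wavelet-based intensity synthesis (keep the low-frequency band of the low-resolution intensity image, take the high-frequency bands from the panchromatic image, and invert the transform) and the final component-substitution injection. This is a minimal illustrative sketch, not the paper's implementation: a single-level Haar wavelet, an equal-weight intensity definition, and unit injection gains (as in generalized IHS) are assumptions here, and the LLE reconstruction and sparse fusion steps are not reproduced.

```python
import numpy as np

def haar2_forward(x):
    """Single-level 2-D orthonormal Haar transform of an even-sized image."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2  # low-frequency (approximation) band
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

def haar2_inverse(ll, lh, hl, hh):
    """Exact inverse of haar2_forward."""
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

def synthesize_intensity(i_low_up, pan):
    """Second intensity image of the abstract: low-frequency band of the
    (upsampled) low-resolution intensity + high-frequency bands of PAN."""
    ll_i, _, _, _ = haar2_forward(i_low_up)
    _, lh_p, hl_p, hh_p = haar2_forward(pan)
    return haar2_inverse(ll_i, lh_p, hl_p, hh_p)

def cs_fuse(ms_up, i_high):
    """Component-substitution framework: inject the extracted detail
    (i_high minus the mean-band intensity) into every MS band."""
    i_low = ms_up.mean(axis=2)          # equal-weight intensity (assumption)
    return ms_up + (i_high - i_low)[..., None]  # unit gains, broadcast per band
```

Because the Haar pair above is a perfect-reconstruction transform, feeding the same image in as both `i_low_up` and `pan` returns that image unchanged, which is a quick sanity check on the synthesis step.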

Key words: multispectral image, panchromatic image, component substitution-based fusion framework, spatial detail extraction, sparse fusion
