Journal of Computer Applications ›› 2021, Vol. 41 ›› Issue (8): 2440-2444.DOI: 10.11772/j.issn.1001-9081.2020101563

Special Issue: The 8th CCF Big Data Conference (CCF Bigdata 2020)

• CCF Bigdata 2020 •

Remote sensing image dehazing method based on cascaded generative adversarial network

SUN Xiao, XU Jindong   

  1. School of Computer and Control Engineering, Yantai University, Yantai Shandong 264005, China
  • Received: 2020-10-10 Revised: 2020-12-30 Online: 2021-08-10 Published: 2021-01-27
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (62072391, 62066013), the Natural Science Foundation of Shandong (ZR2019MF060), the Shandong Higher Educational Science and Technology Key Program (J18KZ016), the Yantai Key Research and Development Program (2018YT06000271).


  • Corresponding author: XU Jindong
  • About the authors: SUN Xiao, born in 1994, M. S. candidate, CCF member. Her research interests include blind source separation and deep learning. XU Jindong, born in 1980, Ph. D., associate professor, CCF member. His research interests include image processing, blind source separation and pattern recognition.

Abstract: Dehazing algorithms trained on paired images struggle with remote sensing imagery, where paired training samples are scarce, and the resulting models generalize poorly. To address these problems, a remote sensing image dehazing method based on a cascaded Generative Adversarial Network (GAN) was proposed. To compensate for the lack of paired remote sensing datasets, a U-Net GAN (UGAN) that learns haze generation and a Pixel Attention GAN (PAGAN) that learns dehazing were introduced. Using unpaired sets of clear and hazy images, UGAN learned to add haze to haze-free remote sensing images while retaining image details, and then guided PAGAN to learn how to correctly dehaze such images. To reduce the discrepancy between the synthetic hazy remote sensing images and the dehazed remote sensing images, a self-attention mechanism was added to PAGAN: the generator synthesized high-resolution detail features using cues from all feature locations in the low-resolution image, and the discriminator checked whether detail features in distant parts of an image were consistent with one another. Unlike dehazing methods such as the Feature Fusion Attention Network (FFANet), the Gated Context Aggregation Network (GCANet), and Dark Channel Prior (DCP), the cascaded GAN method does not require a large amount of paired data to train the network repeatedly. Experimental results show that the proposed method removes haze and thin cloud effectively, and outperforms the comparison methods in both visual quality and quantitative indices.
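The self-attention step described in the abstract, in which each spatial location draws on cues from all feature locations, can be sketched as follows. This is a minimal illustration only, not the authors' PAGAN implementation: the function name, array shapes, and the 1×1-convolution projections (written here as plain matrix multiplications) are assumptions.

```python
import numpy as np

def spatial_self_attention(feat, wq, wk, wv):
    """Toy non-local self-attention over a feature map of shape (C, H, W).

    Every spatial location attends to every other location, so the
    output at one position aggregates value features from the whole
    image, weighted by query-key affinity.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                    # flatten spatial dims: (C, N)
    q, k, v = wq @ x, wk @ x, wv @ x              # 1x1-conv projections as matmuls
    logits = q.T @ k / np.sqrt(q.shape[0])        # pairwise affinities: (N, N)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over source locations
    out = v @ attn.T                              # aggregate values: (C', N)
    return out.reshape(-1, h, w)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))             # small feature map: 8 channels, 4x4
wq = rng.standard_normal((4, 8))                  # query projection (reduced dim)
wk = rng.standard_normal((4, 8))                  # key projection
wv = rng.standard_normal((8, 8))                  # value projection
out = spatial_self_attention(feat, wq, wk, wv)
print(out.shape)  # (8, 4, 4)
```

In practice such a module is inserted into the generator (and a similar consistency check serves the discriminator), typically with a learned residual weight so attention is blended gradually into the convolutional features.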

Key words: Generative Adversarial Network (GAN), visual attention, remote sensing image, dehazing

