Journal of Computer Applications ›› 2020, Vol. 40 ›› Issue (3): 859-864.DOI: 10.11772/j.issn.1001-9081.2019071205

• Virtual reality and multimedia computing •

Joint super-resolution and deblurring method based on generative adversarial network for text images

CHEN Saijian, ZHU Yuanping   

  1. College of Computer and Information Engineering, Tianjin Normal University, Tianjin 300387, China
  • Received: 2019-07-15 Revised: 2019-09-03 Online: 2020-03-10 Published: 2020-03-23
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (61602345, 61703306), the Natural Science Foundation of Tianjin (18JCYBJC85000, 16JCQNJC00600).

  • Corresponding author: ZHU Yuanping
  • About the authors: CHEN Saijian (born 1994), male, from Qidong, Jiangsu, is an M.S. candidate; his research interests include image processing and pattern recognition. ZHU Yuanping (born 1978), male, from Linchuan, Jiangxi, is a professor with a Ph.D.; his research interests include image processing and pattern recognition.

Abstract: To address the difficulty that existing super-resolution methods have in reconstructing clear high-resolution images from blurred low-resolution images, a joint super-resolution and deblurring method for text images based on Generative Adversarial Network (GAN) was proposed. Firstly, focusing on severely blurred low-resolution text images, the generator network was composed of an up-sampling module and a deblurring module. Secondly, the input images were up-sampled by the up-sampling module to generate blurred super-resolution images. Thirdly, the deblurring module was used to reconstruct clear super-resolution images. Finally, in order to recover the text images better, a joint training loss was introduced, consisting of the super-resolution pixel loss, the deblurring pixel loss, the semantic-layer feature matching loss and the adversarial loss. Extensive experiments on synthetic and real-world images demonstrate that, compared with the existing advanced method SCGAN (Single-Class GAN), the proposed method improves the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and Optical Character Recognition (OCR) accuracy by 1.52 dB, 0.011 5 and 13.2 percentage points respectively. The proposed method can better handle degraded text images in real scenes at a low computational cost.
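The joint training loss described above can be sketched as a weighted sum of four terms. The sketch below is illustrative only, not the authors' implementation: the weight values, function names, and the use of an L2 pixel loss and a non-saturating adversarial term are assumptions.

```python
import numpy as np

def l2_pixel_loss(pred, target):
    # Mean squared error between two images (pixel-level loss).
    return float(np.mean((pred - target) ** 2))

def joint_loss(sr_blur, hr_blur, sr_clear, hr_clear,
               feat_pred, feat_real, d_fake_score,
               w_sr=1.0, w_deblur=1.0, w_feat=0.1, w_adv=0.01):
    """Weighted sum of the four loss terms named in the abstract:
    super-resolution pixel loss, deblurring pixel loss, semantic-layer
    feature matching loss, and adversarial loss. Weights are
    illustrative assumptions, not values from the paper."""
    # Up-sampling module output vs. blurred high-resolution target.
    loss_sr = l2_pixel_loss(sr_blur, hr_blur)
    # Deblurring module output vs. clear high-resolution target.
    loss_deblur = l2_pixel_loss(sr_clear, hr_clear)
    # Feature matching at a semantic layer (e.g. features from a
    # recognition network); L2 distance is an assumption here.
    loss_feat = l2_pixel_loss(feat_pred, feat_real)
    # Non-saturating generator adversarial loss on the discriminator's
    # scores for generated images.
    loss_adv = float(-np.mean(np.log(d_fake_score + 1e-8)))
    return (w_sr * loss_sr + w_deblur * loss_deblur
            + w_feat * loss_feat + w_adv * loss_adv)
```

In training, the generator would minimize this combined objective while the discriminator is trained adversarially on clear high-resolution text images versus the generator's reconstructions.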

Key words: super-resolution, deblurring, Generative Adversarial Network (GAN), residual learning, text image


CLC Number: