Journal of Computer Applications ›› 2020, Vol. 40 ›› Issue (12): 3471-3477. DOI: 10.11772/j.issn.1001-9081.2020060966

• 2020 China Conference on Granular Computing and Knowledge Discovery (CGCKD 2020) •

Super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning

SUN Zhongfan, ZHOU Zhenghua, ZHAO Jianwei

  1. Department of Information and Mathematics, China Jiliang University, Hangzhou Zhejiang 310018, China
  • Received: 2020-06-12; Revised: 2020-08-31; Online: 2020-12-10; Published: 2020-10-20
  • Corresponding author: ZHAO Jianwei (1977-), female, born in Jinhua, Zhejiang, professor, Ph. D., CCF member; research interests: deep learning, image processing. E-mail: zhaojw@amss.ac.cn
  • About the authors: SUN Zhongfan (1995-), male, born in Bengbu, Anhui, M. S. candidate; research interests: deep learning, image processing. ZHOU Zhenghua (1977-), male, born in Shangluo, Shaanxi, associate professor, Ph. D., CCF member; research interests: deep learning, image processing.
  • Supported by:
    The National Natural Science Foundation of China (61571410); the Natural Science Foundation of Zhejiang Province (LY18F020018, LSY19F020001).

Super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning

SUN Zhongfan, ZHOU Zhenghua, ZHAO Jianwei   

  1. Department of Information and Mathematics, China Jiliang University, Hangzhou Zhejiang 310018, China
  • Received: 2020-06-12; Revised: 2020-08-31; Online: 2020-12-10; Published: 2020-10-20
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (61571410) and the Natural Science Foundation of Zhejiang Province (LY18F020018, LSY19F020001).

Abstract: Considering that existing deep-learning-based super-resolution reconstruction methods mainly study reconstruction with integer magnification factors and seldom discuss reconstruction with arbitrary (e.g. non-integer) factors, a super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning was proposed. Firstly, coordinate projection was used to find the correspondence between the coordinates of the high-resolution image and the low-resolution image. Secondly, on the basis of the meta-learning network and taking the spatial information of the feature map into account, the extracted spatial features were combined with the coordinate positions as the input of the weight prediction network. Finally, the convolution kernels predicted by the weight prediction network were combined with the feature map, so that the size of the feature map was enlarged effectively and the high-resolution image with arbitrary magnification was obtained. The proposed spatial meta-learning module can be combined with other deep networks to obtain super-resolution reconstruction methods with arbitrary magnification. The proposed arbitrary-magnification (non-integer) super-resolution reconstruction method solves the practical reconstruction problem in which the target size is fixed but the scale factor is non-integer. Experimental results show that, with comparable space complexity (network parameters), the time complexity (computational cost) of the proposed method is 25%-50% of that of other reconstruction methods, while its Peak Signal-to-Noise Ratio (PSNR) is 0.01-5 dB higher and its Structural Similarity (SSIM) is 0.03-0.11 higher than those of some other methods.
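As a brief worked illustration of the coordinate projection step (the floor-based mapping below is a common choice and our assumption, not a formula quoted from the paper): for a scale factor \( r \), the high-resolution pixel \( (i, j) \) is projected back to the low-resolution pixel

\[ (i', j') = \Big( \big\lfloor \tfrac{i}{r} \big\rfloor,\ \big\lfloor \tfrac{j}{r} \big\rfloor \Big), \qquad (u, v) = \Big( \tfrac{i}{r} - \big\lfloor \tfrac{i}{r} \big\rfloor,\ \tfrac{j}{r} - \big\lfloor \tfrac{j}{r} \big\rfloor \Big), \]

where the fractional offsets \( (u, v) \), together with \( 1/r \), can serve as the coordinate part of the weight prediction input. For example, with \( r = 2.5 \) the high-resolution pixel \( (7, 3) \) projects to the low-resolution pixel \( (2, 1) \) with offsets \( (0.8, 0.2) \).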

Key words: super-resolution, deep learning, spatial meta-learning, residual dense module, weight prediction

Abstract: Since existing deep-learning-based super-resolution reconstruction methods mainly study reconstruction with integer magnification factors and seldom address the case of arbitrary (e.g. non-integer) magnification, a super-resolution reconstruction method with arbitrary magnification based on spatial meta-learning was proposed. Firstly, coordinate projection was used to find the correspondence between the coordinates of the high-resolution image and the low-resolution image. Secondly, based on the meta-learning network and considering the spatial information of the feature map, the extracted spatial features and the coordinate positions were combined as the input of the weight prediction network. Finally, the convolution kernels predicted by the weight prediction network were combined with the feature map in order to amplify the size of the feature map effectively and obtain the high-resolution image with arbitrary magnification. The proposed spatial meta-learning module can be combined with other deep networks to obtain super-resolution reconstruction methods with arbitrary magnification. The proposed arbitrary-magnification (non-integer) method solves the practical reconstruction problem in which the target size is fixed but the scale factor is non-integer. Experimental results show that, when the space complexity (network parameters) is comparable, the time complexity (computational cost) of the proposed method is 25%-50% of that of the other reconstruction methods, the Peak Signal-to-Noise Ratio (PSNR) of the proposed method is 0.01-5 dB higher than that of the other methods, and the Structural Similarity (SSIM) of the proposed method is 0.03-0.11 higher than that of the other methods.
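For concreteness, a minimal sketch of the spatial meta-upscale step described above is given below. It is not the authors' implementation: the names WeightPredictor and meta_upscale, the kernel size k and the hidden width are our own assumptions, and the raw k x k low-resolution patch stands in for the spatial features that the paper extracts with a residual dense backbone.

# Minimal sketch (not the authors' released code) of meta-upscaling by an
# arbitrary scale factor r: project each HR pixel to its LR source pixel,
# predict a per-pixel kernel from (position code, local spatial features),
# then apply that kernel to the local LR neighbourhood.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightPredictor(nn.Module):
    """Predicts a (c_out x c_in x k x k) convolution kernel for every HR pixel."""
    def __init__(self, in_dim, c_in, c_out, k=3, hidden=256):
        super().__init__()
        self.k, self.c_in, self.c_out = k, c_in, c_out
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, k * k * c_in * c_out))

    def forward(self, x):                        # x: (N, in_dim)
        return self.mlp(x).view(-1, self.c_out, self.c_in * self.k * self.k)


def meta_upscale(feat, r, predictor):
    """feat: (1, C, h, w) LR feature map; r: arbitrary (possibly non-integer) scale."""
    _, c, h, w = feat.shape
    H, W = int(h * r), int(w * r)
    # Coordinate projection: HR pixel (i, j) originates from LR position (i/r, j/r).
    gy, gx = torch.meshgrid(torch.arange(H) / r, torch.arange(W) / r, indexing="ij")
    # Position code: fractional offsets plus the inverse scale.
    pos = torch.stack([gy - gy.floor(), gx - gx.floor(),
                       torch.full_like(gy, 1.0 / r)], dim=-1).reshape(H * W, 3)
    # k x k neighbourhood of every LR pixel, gathered for each HR pixel.
    k = predictor.k
    patches = F.unfold(feat, k, padding=k // 2).view(c * k * k, h, w).permute(1, 2, 0)
    src = patches[gy.long(), gx.long()].reshape(H * W, c * k * k)
    # Spatial meta-learning: one predicted kernel per HR pixel, applied locally.
    kernels = predictor(torch.cat([pos, src], dim=1))        # (H*W, c_out, c*k*k)
    out = torch.bmm(kernels, src.unsqueeze(-1))              # per-pixel convolution
    return out.view(H, W, -1).permute(2, 0, 1).unsqueeze(0)  # (1, c_out, H, W)

For instance, with a 64-channel feature map and a 3-channel output, predictor = WeightPredictor(in_dim=3 + 64 * 3 * 3, c_in=64, c_out=3) and meta_upscale(torch.randn(1, 64, 48, 48), 2.7, predictor) produce a 129 x 129 result, and the same module handles any other scale factor without changing its parameters, which is what allows a single network to cover arbitrary (including non-integer) magnifications.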

Key words: super-resolution, deep learning, spatial meta-learning, residual dense module, weight prediction

CLC number: