Journal of Computer Applications ›› 2019, Vol. 39 ›› Issue (10): 2802-2808. DOI: 10.11772/j.issn.1001-9081.2019030516

• Artificial intelligence •

Compression method of super-resolution convolutional neural network based on knowledge distillation

GAO Qinquan1,2,3, ZHAO Yan1,2, LI Gen3, TONG Tong3   

  1. College of Physics and Information Engineering, Fuzhou University, Fuzhou Fujian 350108, China;
    2. Key Laboratory of Medical Instrumentation & Pharmaceutical Technology of Fujian Province(Fuzhou University), Fuzhou Fujian 350108, China;
    3. Imperial Vision Technology Company Limited, Fuzhou Fujian 350001, China
  • Received: 2019-03-29 Revised: 2019-06-02 Online: 2019-10-10 Published: 2019-06-18
  • Supported by:
    This work is partially supported by the National Natural Science Foundation of China (61802065).

  • Corresponding author: TONG Tong
  • About the authors: GAO Qinquan (1986-), male, born in Fuqing, Fujian, associate research fellow, Ph. D., research interests: artificial intelligence, computer vision, medical image processing and analysis, computer-assisted surgical navigation; ZHAO Yan (1994-), male, born in Wutai, Shanxi, M. S. candidate, research interests: artificial intelligence, computer vision; LI Gen (1984-), male, born in Yanbian, Jilin, senior engineer, Ph. D., research interests: artificial intelligence, computer vision; TONG Tong (1986-), male, born in Anqing, Anhui, research fellow, Ph. D., research interests: artificial intelligence, computer vision, medical image processing and analysis.

Abstract: Current deep-learning models for super-resolution image reconstruction have deep structures and high computational complexity, and the large storage space they require prevents them from running effectively on resource-constrained devices. To address these problems, a compression method for super-resolution convolutional neural networks based on knowledge distillation was proposed. The method uses a teacher network with a large number of parameters and good reconstruction quality together with a student network with few parameters and poorer reconstruction quality. Firstly, the teacher network was trained; then, knowledge distillation was used to transfer knowledge from the teacher network to the student network; finally, the reconstruction quality of the student network was improved without changing its structure or its number of parameters. Peak Signal-to-Noise Ratio (PSNR) was used to evaluate reconstruction quality in the experiments. Compared with the same student network trained without knowledge distillation, the student network trained with knowledge distillation achieved PSNR gains of 0.53 dB, 0.37 dB, 0.24 dB and 0.45 dB on four public test sets at a magnification factor of 3. Without changing the structure of the student network, the proposed method significantly improves its super-resolution reconstruction quality.
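
To make the training procedure concrete, below is a minimal PyTorch sketch of output-level knowledge distillation for a super-resolution network. The abstract does not specify the architectures or the exact loss formulation, so the tiny SRNet models, the L1 losses and the weight alpha are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of knowledge distillation for super-resolution (PyTorch).
# The SRNet architecture, L1 losses and the weight `alpha` are assumptions
# for illustration; the paper's actual networks and loss are not given here.
import torch
import torch.nn as nn

class SRNet(nn.Module):
    """Small SR network: conv body followed by x3 pixel-shuffle upsampling."""
    def __init__(self, channels, scale=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a x3 upscale
        )

    def forward(self, x):
        return self.body(x)

teacher = SRNet(channels=64)  # large teacher; pretrained weights would be loaded here
student = SRNet(channels=8)   # small student; structure and parameter count stay fixed
teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)   # the teacher is frozen during distillation

l1 = nn.L1Loss()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
alpha = 0.5  # assumed weight balancing the ground-truth and teacher terms

def train_step(lr_patch, hr_patch):
    with torch.no_grad():
        teacher_sr = teacher(lr_patch)  # soft target from the teacher
    student_sr = student(lr_patch)
    # Reconstruction loss to the ground truth plus a distillation loss
    # pulling the student's output toward the teacher's reconstruction.
    loss = l1(student_sr, hr_patch) + alpha * l1(student_sr, teacher_sr)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: one step on a random batch of 1-channel (luminance) LR/HR patches.
lr_batch = torch.rand(4, 1, 32, 32)
hr_batch = torch.rand(4, 1, 96, 96)  # x3 magnification
print(train_step(lr_batch, hr_batch))
```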

Key words: super-resolution, knowledge distillation, convolutional neural network compression, teacher network, student network

