Deep Convolutional Neural Networks (CNNs) have achieved impressive performance in image super-resolution reconstruction. However, many existing methods contain large numbers of model parameters, making them unsuitable for devices with limited computational resources. To address this problem, a lightweight Asymmetric Information Distillation Network (AIDN) was proposed. First, effective feature information was extracted from the original input images and their edge images. Second, an asymmetric information distillation module was designed to learn the non-linear mapping of these features. Third, multiple residual images were reconstructed by an upsampling module and fused into a single residual image through an attention mechanism. Finally, the fused residual image was added to the interpolated input image to generate the super-resolution result. Experimental results on the Set14, Urban100, and Manga109 datasets show that AIDN improves the 4× super-resolution Peak Signal-to-Noise Ratio (PSNR) by 0.03 dB, 0.14 dB, and 0.06 dB, respectively, over the Spatial Adaptive Feature Modulation Network (SAFMN). These results demonstrate that AIDN achieves a better balance between model parameters and performance.
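Since the abstract describes only the overall data flow, the following minimal PyTorch sketch illustrates one plausible arrangement of the described pipeline. All internals here are assumptions rather than the published implementation: the distillation split ratio, the block count, the single-channel edge map, and the softmax-based attention fusion are placeholders. Only the four-stage structure (dual feature extraction, distillation blocks, multi-branch upsampling, and attention fusion over a global interpolation skip) follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AsymmetricDistillationBlock(nn.Module):
    """Assumed block design: split channels unevenly ("asymmetrically"),
    refine the larger split more deeply, then re-fuse with a residual."""

    def __init__(self, channels: int):
        super().__init__()
        self.keep = channels // 4  # shallow "distilled" path keeps a quarter
        deep = channels - self.keep
        self.refine = nn.Sequential(
            nn.Conv2d(deep, deep, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(deep, deep, 3, padding=1),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        shallow, deep = x[:, :self.keep], x[:, self.keep:]
        return self.fuse(torch.cat([shallow, self.refine(deep)], dim=1)) + x


class AIDNSketch(nn.Module):
    """Overall data flow from the abstract: dual feature extraction,
    distillation blocks, per-block upsampling, attention fusion,
    and a global skip over the interpolated input."""

    def __init__(self, channels: int = 32, num_blocks: int = 4, scale: int = 4):
        super().__init__()
        self.scale = scale
        # Shallow features from the RGB input and an assumed 1-channel edge map.
        self.head_img = nn.Conv2d(3, channels, 3, padding=1)
        self.head_edge = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            [AsymmetricDistillationBlock(channels) for _ in range(num_blocks)]
        )
        # One upsampler per block output -> multiple residual images.
        self.upsamplers = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
                    nn.PixelShuffle(scale),
                )
                for _ in range(num_blocks)
            ]
        )
        # 1x1 conv producing one spatial attention map per residual image.
        self.fuse_attn = nn.Conv2d(3 * num_blocks, num_blocks, 1)

    def forward(self, x, edge):
        feat = self.head_img(x) + self.head_edge(edge)
        residuals = []
        for block, up in zip(self.blocks, self.upsamplers):
            feat = block(feat)
            residuals.append(up(feat))  # one residual image per block
        # Per-pixel softmax weights fuse the residual images into one.
        weights = torch.softmax(self.fuse_attn(torch.cat(residuals, dim=1)), dim=1)
        fused = sum(
            w.unsqueeze(1) * r for w, r in zip(weights.unbind(dim=1), residuals)
        )
        # Global skip: add the fused residual to a bicubic interpolation.
        base = F.interpolate(
            x, scale_factor=self.scale, mode="bicubic", align_corners=False
        )
        return base + fused


model = AIDNSketch()
lr = torch.randn(1, 3, 48, 48)    # low-resolution input
edge = torch.randn(1, 1, 48, 48)  # hypothetical edge map (e.g., from a Sobel filter)
sr = model(lr, edge)              # -> (1, 3, 192, 192)
```

The attention fusion is sketched as a 1×1 convolution followed by a softmax over the branch dimension, so each pixel of the output residual is a convex combination of the per-block residual images; the paper's actual attention mechanism may differ.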