Panoramic images suffer from severe geometric distortions due to their unique projection format. Existing 2D image super-resolution networks do not account for these distortion characteristics and are therefore ill-suited to the super-resolution reconstruction of panoramic images. Unlike 2D super-resolution networks, a panoramic image super-resolution model must attend to the feature differences across latitude regions and address both insufficient feature capture at different scales and insufficient learning of contextual information. To address these issues, an Information Compensation-based Panoramic image Super-resolution reconstruction network (ICPSnet) was proposed. Firstly, based on the geometric characteristics of panoramic images, a position awareness mechanism was introduced to compute a position weight for each pixel along the latitude direction, thereby enhancing the model's attention to different latitude regions. Secondly, to address insufficient feature extraction at diverse scales, a Cross-Scale Collaborative Attention (CSCA) module was designed, which used a multi-kernel convolutional attention mechanism with different receptive fields to obtain rich cross-scale features. Additionally, to improve the quality of the reconstructed image, an Information Compensation (IC) block was designed, which enhanced the network's ability to learn contextual information by improving Atrous Spatial Pyramid Pooling (ASPP). Experimental results on two benchmark datasets, ODI-SR and SUN360, show that ICPSnet improves the Weighted-to-Spherically-uniform Peak Signal-to-Noise Ratio (WS-PSNR) over the current state-of-the-art OSRT (Omnidirectional image Super-Resolution Transformer) by 0.14 dB and 0.64 dB at an upscaling factor of 4, and by 0.25 dB and 0.26 dB at an upscaling factor of 8, on the two datasets respectively.
Visual comparisons further show that, compared with other networks, ICPSnet achieves superior perceptual quality, with reconstructed images that better reproduce the texture details of high-latitude regions.