Traditional Video Super-Resolution (VSR) methods cannot effectively handle the geometric distortion introduced by equirectangular projection (ERP) when processing panoramic videos, and their deficiencies in inter-frame alignment and feature fusion lead to poor reconstruction quality. To improve the super-resolution reconstruction quality of panoramic videos, a panoramic video super-resolution network combining spherical alignment and adaptive geometric correction, named 360GeoVSR, was proposed. In the network, accurate alignment and efficient fusion of inter-frame features were achieved through a Spherical Alignment Module (SAM) and a Geometric Fusion Block (GFB). In SAM, spatial transformation and deformable convolution were combined to address global and local geometric distortions, respectively. In GFB, feature alignment was dynamically corrected by an embedded Adaptive Geometric Correction (AGC) submodule, and multi-frame information was fused to capture complex inter-frame relationships. Subjective and objective comparison experiments on the ODV360Extended panoramic video dataset show that 360GeoVSR outperforms five representative super-resolution methods, including BasicVSR++ and VRT (Video Restoration Transformer), in both objective metrics and subjective visual quality, verifying its effectiveness.
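The abstract does not name the objective metrics used; for equirectangular panoramic video, quality is commonly measured with latitude-aware weighting because ERP oversamples pixels near the poles. As an illustration of that idea (not the paper's stated protocol), the sketch below computes WS-PSNR-style weights, where each row is weighted by the cosine of its latitude; the function names and shapes are my own.

```python
import numpy as np

def erp_weights(h, w):
    """Per-pixel weights for an equirectangular (ERP) frame.

    ERP oversamples regions near the poles, so each row is weighted
    by the cosine of its latitude (the WS-PSNR weighting scheme).
    """
    lat = (np.arange(h) + 0.5 - h / 2) * np.pi / h  # latitude of each row
    return np.tile(np.cos(lat)[:, None], (1, w))

def ws_psnr(ref, dist, peak=255.0):
    """Weighted-to-spherically-uniform PSNR between two ERP frames."""
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    w = erp_weights(*ref.shape[:2])
    if ref.ndim == 3:
        w = w[:, :, None]  # broadcast the weight map over color channels
    wmse = np.average((ref - dist) ** 2, weights=np.broadcast_to(w, ref.shape))
    if wmse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / wmse)
```

For identical frames the weighted MSE is zero and the score is infinite; a uniform error of 1 gray level yields 20·log10(255) ≈ 48.13 dB regardless of the weight map, since the weighted average of a constant is that constant.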