Aiming at the lack of traditional image segmentation algorithms to guide Convolutional Neural Networks (CNNs) in the current field of medical image segmentation, a medical image segmentation network with Content-Guided Multi-Angle Feature Fusion (CGMAFF-Net) was proposed. First, grayscale images and Otsu threshold segmentation images were passed through a Transformer-based micro U-shaped feature extraction module to generate lesion-region guidance maps, which were then applied as weights to the original medical images through Adaptive Combination Weighting (ACW) for initial guidance. Next, a Residual Network (ResNet) was employed to extract downsampled features from the weighted medical images, and a Multi-Angle Feature Fusion (MAFF) module was used to fuse the feature maps at the 1/16 and 1/8 scales. Finally, Reverse Attention (RA) was applied to progressively upsample and restore the feature map size, so as to predict key lesion regions. Experimental results on the CVC-ClinicDB, Kvasir-SEG, and ISIC 2018 datasets demonstrate that, compared with MSRAformer, the best-performing existing multiscale spatial reverse attention segmentation network, CGMAFF-Net improves the mean Intersection over Union (mIoU) by 0.97, 0.78, and 0.11 percentage points, respectively; compared with the classic U-Net, it improves the mIoU by 2.66, 8.94, and 1.69 percentage points, respectively, fully verifying the effectiveness and superiority of CGMAFF-Net.
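The initial guidance stage described above (Otsu thresholding followed by weighted combination with the original image) can be sketched as a minimal NumPy illustration. This is an assumption-laden sketch, not the paper's implementation: `otsu_threshold` is a standard textbook Otsu routine, and the fixed `alpha`/`beta` weights in `weight_with_guidance` are hypothetical placeholders for the learned Adaptive Combination Weighting, whose exact form is not specified here.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Otsu's method: choose the threshold maximizing between-class variance.

    `gray` is a uint8 grayscale image; returns a threshold t so that
    pixels with value > t are treated as foreground.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    probs = hist / hist.sum()
    cum_p = np.cumsum(probs)                       # class-0 probability up to t
    cum_mean = np.cumsum(probs * np.arange(256))   # cumulative intensity mean
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0.0 or w1 == 0.0:
            continue                               # all pixels in one class
        mu0 = cum_mean[t] / w0
        mu1 = (mean_total - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def weight_with_guidance(image: np.ndarray, guidance: np.ndarray,
                         alpha: float = 0.7, beta: float = 0.3) -> np.ndarray:
    """Combine the original image with a binary guidance map.

    alpha/beta are fixed illustrative weights; in CGMAFF-Net the
    combination weights are produced adaptively (ACW), not hard-coded.
    """
    return alpha * image + beta * guidance * image
```

In this sketch the guidance map simply amplifies intensities inside the thresholded lesion region, which is one plausible reading of "weighting the guidance map onto the original image"; the actual ACW module would learn how strongly to mix the two sources per input.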