Deep learning-based methods for polyp image segmentation face the following problems: images captured by different medical devices differ in feature distribution, introducing domain bias between polyp segmentation datasets; most existing models operate on features at a single scale and are therefore limited in their ability to capture polyps of varying sizes; and the visual and color differences between a polyp and the surrounding tissue are usually small, making it difficult for a model to accurately distinguish the polyp from the background. To address these problems, a Context-Aware Network (CANet) built on a Pyramid Vision Transformer (PVT) backbone is proposed, which mainly contains the following modules: 1) a Domain Adaptive Denoising Module (DADM), which applies channel attention and spatial attention to low-level feature maps to mitigate domain bias and noise across images from different domains; 2) a Scale Recalibration Module (SRM), which recalibrates the multi-scale features extracted by the encoder to handle pronounced variations in polyp size and shape; 3) an Iterative Semantic Embedding Module (ISEM), which suppresses background interference, sharpens perception of target boundaries, and improves the accuracy of polyp segmentation. Experimental results on five publicly available colon polyp datasets show that CANet outperforms currently widely used colon polyp segmentation methods, achieving mDice scores of 92.6% and 94.0% on the Kvasir-SEG and CVC-ClinicDB datasets, respectively.
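
To illustrate the channel-plus-spatial attention that the abstract attributes to the DADM, here is a minimal PyTorch sketch assuming a CBAM-style design; the class names (`ChannelAttention`, `SpatialAttention`, `DADMSketch`), the reduction ratio, and the feature-map shape are hypothetical illustrations, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (hypothetical DADM component)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                  # global context per channel
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mlp(x)  # reweight each channel

class SpatialAttention(nn.Module):
    """CBAM-style spatial attention: reweight H x W locations."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)        # per-location channel mean
        max_map = x.max(dim=1, keepdim=True).values  # per-location channel max
        attn = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn  # suppress noisy spatial regions

class DADMSketch(nn.Module):
    """Hypothetical denoising block: channel attention followed by spatial
    attention applied to low-level backbone features, per the abstract."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, low_level_feat: torch.Tensor) -> torch.Tensor:
        return self.sa(self.ca(low_level_feat))

# Example: an assumed low-level feature map of shape (B, 64, 88, 88)
feat = torch.randn(2, 64, 88, 88)
out = DADMSketch(64)(feat)
print(out.shape)  # torch.Size([2, 64, 88, 88])
```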