As a current leading Chinese Spelling Correction (CSC) model, ReLM (Rephrasing Language Model) suffers from insufficient feature representation in complex semantic scenarios. To address this issue, FeReLM (Feature-enhanced Rephrasing Language Model), a ReLM enhanced with deep semantic features, was proposed. In this model, the Depthwise Separable Convolution (DSC) technique was used to integrate deep semantic features produced by the feature extraction model BGE (BAAI General Embedding) with the global features produced by ReLM, thereby strengthening the model's ability to parse complex contexts and effectively improving the accuracy of recognizing and correcting spelling errors. FeReLM was first trained on the Wang271K dataset, allowing the model to progressively learn the deep semantics and complex expressions within sentences. The trained weights were then transferred, so that the knowledge learned by the model could be applied when fine-tuning on new datasets. Experimental results show that FeReLM outperforms models such as ReLM, MCRSpell (Metric learning of Correct Representation for Chinese Spelling Correction), and RSpell (Retrieval-augmented Framework for Domain Adaptive Chinese Spelling Check) on the ECSpell and MCSC datasets in key metrics including precision, recall, and F1 score, with improvements ranging from 0.6 to 28.7 percentage points. The effectiveness of the proposed method is further confirmed through ablation experiments.
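
To make the fusion step concrete, the sketch below shows one plausible way a depthwise separable 1-D convolution could combine token-aligned BGE features with ReLM hidden states. This is only an illustration under assumed dimensions (both feature widths set to 768), an assumed token-level alignment of the BGE features, and an assumed residual connection; it is not the authors' published implementation.

```python
import torch
import torch.nn as nn


class DSCFeatureFusion(nn.Module):
    """Illustrative sketch: fuse BGE features with ReLM hidden states
    via a depthwise separable 1-D convolution (assumed design, not the
    paper's exact architecture)."""

    def __init__(self, relm_dim=768, bge_dim=768, kernel_size=3):
        super().__init__()
        in_ch = relm_dim + bge_dim
        # Depthwise convolution: one filter per channel (groups == in_channels)
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise 1x1 convolution mixes channels and projects back to ReLM's width
        self.pointwise = nn.Conv1d(in_ch, relm_dim, kernel_size=1)
        self.norm = nn.LayerNorm(relm_dim)

    def forward(self, relm_hidden, bge_feat):
        # relm_hidden: (batch, seq_len, relm_dim) global features from ReLM
        # bge_feat:    (batch, seq_len, bge_dim) deep semantic features from BGE
        x = torch.cat([relm_hidden, bge_feat], dim=-1)  # (B, L, in_ch)
        x = x.transpose(1, 2)                           # Conv1d expects (B, C, L)
        x = self.pointwise(self.depthwise(x))           # depthwise-separable conv
        x = x.transpose(1, 2)                           # back to (B, L, relm_dim)
        # Residual connection (an assumption) keeps ReLM's global features intact
        return self.norm(relm_hidden + x)


# Usage with random tensors standing in for real model outputs
fusion = DSCFeatureFusion()
relm_hidden = torch.randn(2, 128, 768)
bge_feat = torch.randn(2, 128, 768)
fused = fusion(relm_hidden, bge_feat)  # (2, 128, 768)
```

The depthwise stage captures local context along the sequence at low cost, while the pointwise stage mixes the ReLM and BGE channels; this channel-then-mix factorization is what distinguishes DSC from a standard convolution.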