In the field of Natural Language Processing (NLP), contrastive learning is an efficient method for sentence representation learning: it effectively mitigates the anisotropy of Transformer-based pre-trained language models and significantly enhances the quality of sentence representations. However, existing research focuses mainly on English, especially under supervised settings. Due to the lack of labeled data, it is difficult to apply contrastive learning effectively to obtain high-quality sentence representations in most non-English languages. To address this issue, a cross-lingual knowledge transfer method for contrastive learning models was proposed, which transfers knowledge across languages by aligning the structures of different languages' representation spaces. On this basis, a simple and effective cross-lingual knowledge transfer framework, TransCSE, was developed to transfer the knowledge of supervised English contrastive learning models to non-English models. Through knowledge transfer experiments from English in six directions, including French, Arabic, Spanish, Turkish, and Chinese, TransCSE successfully transferred knowledge from the supervised contrastive learning model SimCSE (Simple Contrastive learning of Sentence Embeddings) to the multilingual pre-trained language model mBERT (multilingual Bidirectional Encoder Representations from Transformers). Experimental results show that the model trained with the TransCSE framework achieves accuracy improvements of 17.95 and 43.27 percentage points over the original mBERT on the XNLI (Cross-lingual Natural Language Inference) and STS (Semantic Textual Similarity) 2017 benchmark datasets, respectively, proving the effectiveness of TransCSE. Moreover, compared with cross-lingual knowledge transfer methods based on shared parameters and representation alignment, TransCSE achieves the best performance on both the XNLI and STS 2017 benchmark datasets.
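The core idea of aligning representation spaces across languages can be illustrated with a minimal sketch. The following is a hypothetical toy example, not the paper's actual implementation: a mean-squared-error alignment loss between a teacher's embeddings of English sentences and a student's embeddings of their translations, with plain Python lists standing in for the embeddings that supervised SimCSE and mBERT would produce.

```python
# Hypothetical sketch of representation-space alignment for knowledge
# transfer. In practice the teacher would be a supervised English SimCSE
# model and the student a multilingual encoder such as mBERT; here plain
# lists of floats stand in for their sentence embeddings.

def alignment_loss(teacher_embs, student_embs):
    """Mean squared distance between paired teacher/student embeddings.

    Each pair couples the teacher's embedding of an English sentence with
    the student's embedding of that sentence's translation; minimizing
    this loss pulls the student's space toward the teacher's.
    """
    assert len(teacher_embs) == len(student_embs)
    total = 0.0
    count = 0
    for teacher_vec, student_vec in zip(teacher_embs, student_embs):
        for t_i, s_i in zip(teacher_vec, student_vec):
            total += (t_i - s_i) ** 2
            count += 1
    return total / count

# Toy pairs: teacher embeddings of English sentences vs. student
# embeddings of their (e.g. French) translations.
teacher = [[0.2, 0.9], [0.7, 0.1]]
student = [[0.1, 1.0], [0.6, 0.2]]
loss = alignment_loss(teacher, student)
```

In a real training loop this loss would be backpropagated through the student only, leaving the teacher's English representation space fixed as the alignment target.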