Chinese Story Ending Generation (SEG) is a downstream task in Natural Language Processing (NLP). CLSEG (Contrastive Learning of Story Ending Generation), which relies on completely wrong endings, performs well in terms of story consistency. However, because a wrong ending also shares content with the original ending, training with only wrong endings as contrastive targets may strip away the correct parts of the generated ending. Therefore, positive-ending enhancement training was added on top of CLSEG to preserve the correct content lost during contrastive training; introducing positive endings also makes the generated endings more diverse and relevant. The proposed Chinese story ending generation model based on bidirectional contrastive training consists of two main parts: 1) multi-ending sampling, in which positive enhancement endings and negative contrastive wrong endings are obtained with different model methods; 2) contrastive training, in which the loss function is modified during training so that the generated ending is pulled toward the positive endings and pushed away from the wrong endings, as sketched below. Experimental results on the publicly available story dataset OutGen show that, compared with models such as GPT2.ft and Della (Deeply fused layer-wise latent variable), the proposed model achieves better results on BERTScore, METEOR, and other metrics, generating more diverse and relevant endings.
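As a rough sketch (not the exact formulation given in the paper), such a bidirectional contrastive objective can be read as the standard language-modeling loss on the positive ending $y^{+}$ combined with a margin term that keeps the model's likelihood of $y^{+}$ above that of the wrong ending $y^{-}$ given the story context $x$:

\[
\mathcal{L} = \mathcal{L}_{\mathrm{LM}}\bigl(y^{+}\mid x\bigr) + \lambda\,\max\!\Bigl(0,\; \gamma - \log p_{\theta}\bigl(y^{+}\mid x\bigr) + \log p_{\theta}\bigl(y^{-}\mid x\bigr)\Bigr)
\]

where $p_{\theta}$ is the generator, and $\lambda$ (loss weight) and $\gamma$ (margin) are hypothetical hyperparameters introduced here only for illustration; the paper's actual loss is defined in its method section.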