Knowledge Graphs (KGs) extract and structurally represent prior knowledge from massive data, and play a key role in the construction and application of intelligent systems. Knowledge Graph Completion (KGC) aims to predict missing triples in KGs to improve their integrity and usability, and usually involves two stages: encoding and link prediction. However, traditional KGC methods struggle to exploit additional information and semantic information effectively during encoding, suffer from incomplete knowledge coverage and the closed-world assumption during prediction, and their encode-then-predict framework is constrained by the form of the embedded representations and by computational efficiency. Large Language Models (LLMs), with their rich knowledge and strong understanding abilities, can alleviate these problems. Therefore, LLM-based methods for KGC were reviewed. Firstly, the basic concepts and research status of KGs and LLMs were outlined, and the KGC process was explained. Secondly, existing LLM-based KGC methods were summarized and organized from three aspects: using an LLM as an encoder, using an LLM as a generator, and prompt-based guidance. Finally, the performance of these models on different datasets was summed up, and the problems and challenges facing LLM-based KGC research were discussed.