Traditional Knowledge Graphs (KGs) provide a unified, machine-interpretable representation of information on the web, but their limitations in handling multimodal applications are increasingly recognized. To address these limitations, the Multi-Modal Knowledge Graph (MMKG) was proposed as an effective solution. However, integrating multi-modal data into a KG often leads to problems such as inadequate modality fusion and reasoning difficulties, which constrain the application and development of MMKGs. Multi-Modal Knowledge Graph Completion (MMKGC) techniques were therefore introduced to fully integrate cross-modal information during the construction phase and to predict missing links after construction, thereby addressing the issues of modality fusion and reasoning. Accordingly, an overview of MMKGC methods was presented. Firstly, the basic concepts, widely used benchmark datasets, and evaluation metrics of MMKGC were elaborated in detail. Secondly, existing methods were classified into fusion tasks during the MMKG construction phase and reasoning tasks after construction: the former focus on key techniques such as entity alignment and entity linking, while the latter encompass three techniques, namely relation inference, missing information completion, and multi-modal expansion. Thirdly, the MMKGC methods in each category were introduced thoroughly and their characteristics were analyzed. Finally, the problems and challenges faced by MMKGC methods were examined, and a summary of the survey was provided.