Knowledge Tracing (KT) is a cognitive diagnosis technique that models a learner's mastery of learned knowledge by analyzing the learner's historical question-answering records, with the ultimate goal of predicting the learner's future answering performance. Knowledge tracing methods based on deep neural networks have become a hot research topic in the knowledge tracing field owing to their strong feature-extraction capabilities and superior prediction performance. However, deep learning-based knowledge tracing models often lack good interpretability. Clear interpretability enables learners and teachers to fully understand a model's reasoning process and prediction results, which helps them formulate learning plans tailored to the current knowledge state for future learning and, at the same time, enhances their trust in knowledge tracing models. Therefore, interpretable Deep Knowledge Tracing (DKT) methods were reviewed. Firstly, the development of knowledge tracing was outlined, and the definition of interpretability and the necessity of interpretable models were introduced. Secondly, the improvement methods proposed to address the lack of interpretability in DKT models were summarized from two perspectives: feature extraction and enhancement of the models' internal structure. Thirdly, the publicly available datasets relevant to researchers were introduced, the influence of dataset characteristics on interpretability was analyzed, the evaluation of knowledge tracing models from both performance and interpretability perspectives was discussed, and the performance of DKT models on different datasets was summarized. Finally, possible future research directions for addressing the current issues of DKT models were proposed.
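To make the KT task concrete, the sketch below shows a minimal DKT-style recurrent model that maps a learner's answer history to per-concept correctness predictions. It is an illustrative example only, not code from the surveyed work: the layer sizes, the PyTorch implementation, and the one-hot encoding convention (correct responses in the first half of the interaction vector) are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Minimal DKT-style model: an LSTM reads a learner's interaction history and,
    at each step, predicts the probability of answering each knowledge concept
    correctly at the next step."""
    def __init__(self, num_concepts: int, hidden_size: int = 64):
        super().__init__()
        # Each interaction (concept id, correct/incorrect) is one-hot encoded
        # into a vector of length 2 * num_concepts.
        self.lstm = nn.LSTM(input_size=2 * num_concepts,
                            hidden_size=hidden_size,
                            batch_first=True)
        self.out = nn.Linear(hidden_size, num_concepts)

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, seq_len, 2 * num_concepts)
        h, _ = self.lstm(interactions)      # latent knowledge state at each step
        return torch.sigmoid(self.out(h))   # per-concept correctness probability

# Toy usage: one learner, five interactions, ten knowledge concepts (hypothetical data).
num_concepts = 10
x = torch.zeros(1, 5, 2 * num_concepts)
x[0, 0, 3] = 1.0                  # concept 3 answered correctly
x[0, 1, num_concepts + 7] = 1.0   # concept 7 answered incorrectly
model = DKT(num_concepts)
pred = model(x)                   # shape (1, 5, 10): predicted mastery after each step
```

The LSTM hidden state plays the role of the learner's latent knowledge state; its opacity is precisely the interpretability problem that the methods surveyed in this review aim to address.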