Journal of Computer Applications

Review of interpretable deep knowledge tracing methods

  

  • Received: 2024-07-10  Revised: 2024-09-06  Online: 2024-11-19  Published: 2024-11-19

SUO Jinxian1, ZHANG Liping2, YAN Sheng1, WANG Dongqi2, ZHANG Yawen1

  1. Inner Mongolia Normal University
    2. College of Computer Science and Technology, Inner Mongolia Normal University
  • Corresponding author: SUO Jinxian
  • Supported by:
    Research on Clone Genealogy Extraction Based on Multi-Version Software Evolution; Research on Key Technologies of Intelligent Education Services for Personalized Learning in Programming Education; Personalized Recommendation of Programming Exercises Based on Knowledge Tracing and Learner Profiles; Construction of a Knowledge-Graph-Based Learning Guidance Model for Information Technology Courses; Research on Intelligent Tutoring Models in the "Internet Plus" Context

Abstract: Knowledge tracing is a cognitive diagnosis method that models a learner's mastery of knowledge from the learner's historical response records and ultimately predicts the learner's future performance. With the development of deep learning, knowledge tracing techniques based on deep neural network models have become a research hotspot in the field owing to their strong feature extraction capabilities and superior predictive performance. However, deep learning-based knowledge tracing models often lack good interpretability. Clear interpretability not only lets learners and teachers fully understand a model's reasoning process and prediction results, so that study plans matching the current knowledge state can be made for subsequent learning, but also strengthens learners' and teachers' trust in knowledge tracing models. Therefore, this review first introduces the development of knowledge tracing as well as the definition of and need for interpretability. Second, it summarizes and organizes the improvement methods proposed for the lack of interpretability in deep knowledge tracing models, from the two perspectives of feature extraction and in-model enhancement. Third, it introduces the publicly available datasets, analyzes how their data characteristics affect interpretability, discusses how to evaluate knowledge tracing models in terms of both performance and interpretability, and compiles the performance of models on different datasets. Finally, it proposes possible future research directions for the open problems in deep knowledge tracing.
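To make the task formulation concrete, the following sketch implements the canonical deep knowledge tracing setup in the spirit of the original DKT architecture: an LSTM reads a learner's one-hot encoded (question, correctness) history and predicts, at each step, the probability of answering each question correctly next. This is a minimal illustration in PyTorch; the class name, dimensions, and hyperparameters are assumptions for demonstration, not a model from any specific paper surveyed here.

    # Minimal deep knowledge tracing (DKT) sketch: an LSTM reads a learner's
    # past (question, correctness) interactions and predicts the probability
    # of answering each question correctly at the next step.
    # All dimensions and hyperparameters below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class DKT(nn.Module):
        def __init__(self, num_questions: int, hidden_size: int = 128):
            super().__init__()
            # Each interaction is one-hot over 2 * num_questions:
            # index q encodes a wrong answer to question q,
            # index num_questions + q encodes a correct one.
            self.num_questions = num_questions
            self.lstm = nn.LSTM(2 * num_questions, hidden_size, batch_first=True)
            self.out = nn.Linear(hidden_size, num_questions)

        def forward(self, questions: torch.Tensor, correct: torch.Tensor):
            # questions: (batch, seq_len) question ids; correct: (batch, seq_len) in {0, 1}
            idx = questions + correct * self.num_questions
            x = nn.functional.one_hot(idx, 2 * self.num_questions).float()
            h, _ = self.lstm(x)                # hidden state: the latent knowledge state
            return torch.sigmoid(self.out(h))  # (batch, seq_len, num_questions)

    # Toy usage: 2 learners, 5 interactions each, over a pool of 10 questions.
    model = DKT(num_questions=10)
    q = torch.randint(0, 10, (2, 5))
    a = torch.randint(0, 2, (2, 5))
    p_next = model(q, a)  # p_next[:, t, j] ~ P(question j answered correctly after step t)

The latent state h is precisely where the interpretability problem discussed in this review arises: it compresses the learner's entire history into a single opaque vector, which is why such models predict well yet are hard to explain to learners and teachers.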

Key words: smart education, deep knowledge tracing, interpretability, knowledge tracing model, deep learning

