《计算机应用》 ›› 2025, Vol. 45 ›› Issue (3): 765-772. DOI: 10.11772/j.issn.1001-9081.2024101550

• 大模型前沿研究与典型应用 •

个性化学情感知的智慧助教算法设计与实践

董艳民1, 林佳佳1, 张征1, 程程1, 吴金泽2, 王士进2, 黄振亚1,3(), 刘淇1,3, 陈恩红1   

  1.认知智能全国重点实验室(中国科学技术大学),合肥 230088
    2.科大讯飞人工智能研究院,合肥 230088
    3.合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室),合肥 230094
  • 收稿日期:2024-11-01 修回日期:2024-12-25 接受日期:2024-12-26 发布日期:2025-02-07 出版日期:2025-03-10
  • 通讯作者: 黄振亚
  • 作者简介:董艳民(2000—),男,内蒙古赤峰人,硕士研究生,主要研究方向:代码检索、自然语言处理、大语言模型
    林佳佳(2004—),女,福建三明人,主要研究方向:知识追踪、大语言模型
    张征(1999—),男,湖北仙桃人,博士研究生,主要研究方向:教育数据挖掘、用户建模、可信人工智能
    程程(2001—),男,安徽宿州人,硕士研究生,主要研究方向:大语言模型、智慧教育
    吴金泽(1997—),男,吉林长春人,硕士,主要研究方向:自然语言处理、智能教育、联邦学习
    王士进(1980—),男,安徽合肥人,博士,主要研究方向:语音处理、自然语言处理、智慧教育
    刘淇(1986—),男,山东临沂人,教授,博士,主要研究方向:数据挖掘、机器学习、推荐系统
    陈恩红(1968—),男,安徽宣城人,教授,博士,主要研究方向:数据挖掘、机器学习、网络分析、推荐系统。
  • 基金资助:
    新一代人工智能国家科技重大专项(2022ZD0117103);安徽省科技攻坚计划项目(202423k09020039);中国中文信息学会社会媒体处理专委会(SMP)-智谱AI大模型交叉学科基金资助项目

Design and practice of intelligent tutoring algorithm based on personalized student capability perception

Yanmin DONG1, Jiajia LIN1, Zheng ZHANG1, Cheng CHENG1, Jinze WU2, Shijin WANG2, Zhenya HUANG1,3(), Qi LIU1,3, Enhong CHEN1   

  1.State Key Laboratory of Cognitive Intelligence (University of Science and Technology of China), Hefei Anhui 230088, China
    2.iFLYTEK AI Research, Hefei Anhui 230088, China
    3.Artificial Intelligence Research Institute of Hefei Comprehensive National Science Centre (Anhui Artificial Intelligence Laboratory), Hefei Anhui 230094, China
  • Received:2024-11-01 Revised:2024-12-25 Accepted:2024-12-26 Online:2025-02-07 Published:2025-03-10
  • Contact: Zhenya HUANG
  • About author:DONG Yanmin, born in 2000, M. S. candidate. His research interests include code retrieval, natural language processing, and large language models.
    LIN Jiajia, born in 2004. Her research interests include knowledge tracing and large language models.
    ZHANG Zheng, born in 1999, Ph. D. candidate. His research interests include educational data mining, user modeling, and trusted artificial intelligence.
    CHENG Cheng, born in 2001, M. S. candidate. His research interests include large language models and intelligent education.
    WU Jinze, born in 1997, M. S. His research interests include natural language processing, intelligent education, and federated learning.
    WANG Shijin, born in 1980, Ph. D. His research interests include speech processing, natural language processing, and intelligent education.
    LIU Qi, born in 1986, Ph. D., professor. His research interests include data mining, machine learning, and recommender systems.
    CHEN Enhong, born in 1968, Ph. D., professor. His research interests include data mining, machine learning, network analysis, and recommender systems.
  • Supported by:
    National Science and Technology Major Project (2022ZD0117103); Key Technologies Research and Development Program of Anhui Province (202423k09020039); Project of CIPSC-SMP-Zhipu.AI Large Model Cross-Disciplinary Fund

摘要:

随着大语言模型(LLM)的快速发展,基于LLM的对话助手逐渐成为学生学习的新方式。通过学生的问答互动,对话助手能生成相应的解答,从而帮助学生解决问题,并提高学习效率。然而,现有的对话助手忽略了学生的个性化需求,无法为学生提供个性化的回答,实现“因材施教”。因此,提出一种基于学生能力感知的个性化对话助手框架。该框架包括2个主要模块:学生能力感知模块和个性化回答生成模块。能力感知模块通过分析学生的答题记录来挖掘学生的知识掌握程度,回答生成模块则根据学生的能力生成个性化回答。基于此框架,设计基于指令、基于小模型驱动和基于智能体Agent的3种实现范式,以深入探讨框架的实际效果。基于指令的对话助手利用LLM的推理能力,从学生的答题记录中挖掘知识掌握程度以帮助生成个性化回答;基于小模型驱动的对话助手利用深度知识追踪(DKT)模型生成学生的知识掌握程度;基于Agent的个性化对话助手采用LLM Agent的方式整合学生能力感知、个性化检测、答案修正等工具辅助答案的生成。基于ChatGLM(Chat General Language Model)、GPT4o_mini的对比实验结果表明,应用3种范式的LLM均能为学生提供个性化的回答,其中基于Agent的范式的准确度更高,表明该范式能更好地感知学生能力,并生成个性化回答。
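
To make the small model-driven paradigm above concrete, the following Python sketch shows a minimal DKT-style knowledge tracing model of the general kind the abstract refers to. It is an illustrative assumption rather than the paper's implementation: the class name DKT, the encoding of exercise records as one-hot (concept, correctness) pairs, the network size, and the example record are all hypothetical, and training is omitted. The last-step output is a per-concept mastery estimate that the answer generation module could inject into the LLM prompt.

import torch
import torch.nn as nn

class DKT(nn.Module):
    """Minimal DKT-style model: an LSTM over one-hot (concept, correctness) inputs."""
    def __init__(self, num_concepts: int, hidden_size: int = 64):
        super().__init__()
        # each interaction is a one-hot vector of length 2 * num_concepts
        self.lstm = nn.LSTM(2 * num_concepts, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_concepts)

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, seq_len, 2 * num_concepts)
        h, _ = self.lstm(interactions)
        # mastery[:, t, k]: probability of answering a concept-k exercise
        # correctly after observing the first t + 1 interactions
        return torch.sigmoid(self.out(h))

def encode_record(record, num_concepts):
    """Encode [(concept_id, is_correct), ...] as one-hot interaction vectors."""
    x = torch.zeros(1, len(record), 2 * num_concepts)
    for t, (concept, correct) in enumerate(record):
        x[0, t, concept + (num_concepts if correct else 0)] = 1.0
    return x

if __name__ == "__main__":
    num_concepts = 5
    model = DKT(num_concepts)                  # untrained, for illustration only
    record = [(0, 1), (2, 0), (2, 1), (4, 0)]  # hypothetical exercise record
    mastery = model(encode_record(record, num_concepts))[0, -1]
    print({f"concept_{k}": round(float(p), 2) for k, p in enumerate(mastery)})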

关键词: 智慧教育, 个性化对话助手, 大语言模型, 知识追踪, LLM智能体

Abstract:

With the rapid development of Large Language Models (LLMs), LLM-based conversational assistants have emerged as a new way for students to learn. Through question-and-answer interaction, these assistants generate corresponding answers, helping students solve problems and improve learning efficiency. However, existing conversational assistants ignore students’ personalized needs and cannot provide the personalized answers required for “tailored instruction”. To address this, a personalized conversational assistant framework based on student capability perception was proposed, which consists of two main modules: a student capability perception module that analyzes students’ exercise records to assess their knowledge proficiency, and a personalized answer generation module that generates personalized answers according to students’ capabilities. Based on this framework, three implementation paradigms, namely instruction-based, small model-driven, and agent-based paradigms, were designed to examine the framework’s practical effects in depth. In the instruction-based assistant, the reasoning capability of the LLM was used to infer students’ knowledge proficiency from their exercise records and thereby help generate personalized answers; in the small model-driven assistant, a Deep Knowledge Tracing (DKT) model was employed to estimate students’ knowledge proficiency; in the agent-based assistant, tools such as student capability perception, personalization detection, and answer correction were integrated in an LLM agent to assist answer generation. Comparative experiments based on Chat General Language Model (ChatGLM) and GPT4o_mini demonstrate that LLMs under all three paradigms can provide personalized answers for students, and that the agent-based paradigm achieves higher accuracy, indicating that it perceives student capability and generates personalized answers better.
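
To illustrate how the agent-based paradigm could chain the tools named above, the sketch below (a minimal assumption, not the paper's implementation) wires three hypothetical tool functions, perceive_capability, check_personalization, and revise_answer, around a placeholder llm() call; in a real system these would be dispatched by an LLM agent framework backed by ChatGLM or GPT4o_mini.

def llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., ChatGLM or GPT4o_mini); assumption only."""
    raise NotImplementedError

def perceive_capability(exercise_record: list) -> str:
    """Tool 1 (hypothetical): summarize per-concept mastery from the exercise record."""
    return llm("Summarize this student's mastery of each knowledge concept:\n"
               f"{exercise_record}")

def check_personalization(question: str, answer: str, proficiency: str) -> bool:
    """Tool 2 (hypothetical): judge whether the draft answer fits the student's level."""
    verdict = llm("Does the answer suit the student's proficiency? Reply yes or no.\n"
                  f"Proficiency: {proficiency}\nQuestion: {question}\nAnswer: {answer}")
    return verdict.strip().lower().startswith("yes")

def revise_answer(question: str, answer: str, proficiency: str) -> str:
    """Tool 3 (hypothetical): rewrite the answer to match the student's level."""
    return llm("Rewrite the answer so that it suits the student's proficiency.\n"
               f"Proficiency: {proficiency}\nQuestion: {question}\nAnswer: {answer}")

def personalized_answer(question: str, exercise_record: list, max_rounds: int = 2) -> str:
    """Agent loop: perceive capability, draft an answer, then check and revise it."""
    proficiency = perceive_capability(exercise_record)
    answer = llm(f"Student proficiency: {proficiency}\n"
                 f"Answer the student's question accordingly: {question}")
    for _ in range(max_rounds):
        if check_personalization(question, answer, proficiency):
            break
        answer = revise_answer(question, answer, proficiency)
    return answer

The answer is returned once the personalization check passes or the round budget is exhausted, which mirrors the answer-correction workflow described in the abstract.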

Key words: intelligent education, personalized conversational assistant, Large Language Model (LLM), knowledge tracing, LLM agent
