With the rapid development of Large Language Models (LLMs), LLM-based dialogue assistants have emerged as a new way for students to learn. These assistants generate answers through interactive Q&A, helping students solve problems and improve their learning efficiency. However, existing conversational assistants ignore students’ personalized needs and fail to provide the personalized answers required for “tailored instruction”. To address this, a personalized conversational assistant framework based on student capability perception was proposed, which consists of two main modules: a capability perception module that analyzes students’ exercise records to model their knowledge proficiency, and a personalized answer generation module that produces answers adapted to students’ capabilities. Three implementation paradigms, namely instruction-based, data-driven, and agent-based, were designed to explore the framework’s practical effects. In the instruction-based assistant, the reasoning capability of the LLM was used to infer students’ knowledge proficiency from their exercise records and thereby guide personalized answer generation; in the data-driven assistant, a small Deep Knowledge Tracing (DKT) model was employed to estimate students’ knowledge proficiency; in the agent-based assistant, tools for student capability perception, personalization detection, and answer correction were integrated through an LLM agent to assist answer generation. Comparative experiments on Chat General Language Model (ChatGLM) and GPT-4o mini demonstrate that all three paradigms enable LLMs to provide personalized answers for students, and that the agent-based paradigm achieves higher accuracy, indicating its superior student capability perception and personalized answer generation.
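
As a minimal sketch of the data-driven paradigm, the following PyTorch code shows how a small DKT model could map a student’s exercise record to per-skill proficiency estimates. The input encoding, hidden size, and example data are illustrative assumptions, not the paper’s exact configuration.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    """Minimal Deep Knowledge Tracing model: an LSTM over a student's
    exercise record that outputs per-skill mastery probabilities."""
    def __init__(self, num_skills: int, hidden_size: int = 64):
        super().__init__()
        self.num_skills = num_skills
        # Each interaction (skill_id, correct) is one-hot encoded into a
        # vector of length 2 * num_skills, the standard DKT input encoding.
        self.lstm = nn.LSTM(2 * num_skills, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_skills)

    def forward(self, skills: torch.Tensor, correct: torch.Tensor) -> torch.Tensor:
        # skills, correct: (batch, seq_len) integer tensors
        idx = skills + self.num_skills * correct  # fold correctness into the index
        x = nn.functional.one_hot(idx, 2 * self.num_skills).float()
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))  # (batch, seq_len, num_skills)

# Hypothetical exercise record: skill ids and 0/1 correctness per attempt.
model = DKT(num_skills=10)
skills = torch.tensor([[3, 3, 7, 2]])
correct = torch.tensor([[0, 1, 1, 0]])
proficiency = model(skills, correct)[0, -1]  # mastery estimate after the last attempt
```

The resulting per-skill proficiency vector is what the personalized answer generation module would condition on; in practice the model would first be trained on logged exercise records.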
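
The agent-based paradigm can likewise be sketched as an LLM orchestrated with the three tools named above. The tool interfaces, thresholds, and prompts below are hypothetical, meant only to make the loop of capability perception, personalization detection, and answer correction concrete.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical LLM call; in the paper's setting this would be ChatGLM or GPT-4o mini.
LLM = Callable[[str], str]

@dataclass
class AgentAssistant:
    """Sketch of the agent-based paradigm: an LLM agent that calls
    perception, detection, and correction tools around answer generation."""
    llm: LLM
    proficiency: Dict[str, float]  # per-skill mastery from the perception tool

    def perceive(self, skill: str) -> str:
        # Capability perception tool: bucket mastery into a coarse level.
        level = self.proficiency.get(skill, 0.5)
        return "beginner" if level < 0.4 else "intermediate" if level < 0.7 else "advanced"

    def answer(self, question: str, skill: str) -> str:
        level = self.perceive(skill)
        draft = self.llm(f"Student level on '{skill}': {level}.\n"
                         f"Answer at that level:\n{question}")
        # Personalization detection tool: judge whether the draft fits the level.
        verdict = self.llm(f"Is this answer suited to a {level} student? "
                           f"Reply YES or NO.\n{draft}")
        if verdict.strip().upper().startswith("NO"):
            # Answer correction tool: regenerate with explicit feedback.
            draft = self.llm(f"Rewrite this answer for a {level} student:\n{draft}")
        return draft

# Usage with a stub LLM, only to show the control flow.
assistant = AgentAssistant(llm=lambda prompt: "YES", proficiency={"fractions": 0.3})
print(assistant.answer("Why is 1/2 + 1/3 not 2/5?", "fractions"))
```

Here the proficiency dictionary stands in for the output of the capability perception module (for example, the DKT estimates above), so the same perception component can serve all three paradigms.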