Design and practice of intelligent tutoring algorithm based on personalized student capability perception
Yanmin DONG, Jiajia LIN, Zheng ZHANG, Cheng CHENG, Jinze WU, Shijin WANG, Zhenya HUANG, Qi LIU, Enhong CHEN
Journal of Computer Applications    2025, 45 (3): 765-772.   DOI: 10.11772/j.issn.1001-9081.2024101550

With the rapid development of Large Language Models (LLMs), dialogue assistants based on LLMs have emerged as a new learning tool for students. These assistants generate answers through interactive Q&A, helping students solve problems and improve learning efficiency. However, existing conversational assistants ignore students' personalized needs and fail to provide the personalized answers required for "tailored instruction". To address this, a personalized conversational assistant framework based on student capability perception was proposed, which consists of two main modules: a capability perception module that analyzes students' exercise records to estimate their knowledge proficiency, and a personalized answer generation module that creates personalized answers according to that proficiency. Three implementation paradigms, namely instruction-based, data-driven, and agent-based, were designed to explore the framework's practical effects. In the instruction-based assistant, the inference capabilities of LLMs were used to infer students' knowledge proficiency from their exercise records and thereby guide personalized answer generation; in the data-driven assistant, a small Deep Knowledge Tracing (DKT) model was employed to estimate students' knowledge proficiency; in the agent-based assistant, tools such as student capability perception, personalization detection, and answer correction were integrated through an LLM agent to assist answer generation. Comparison experiments using Chat General Language Model (ChatGLM) and GPT4o_mini demonstrate that all three paradigms enable LLMs to provide personalized answers for students, with the agent-based paradigm achieving the highest accuracy, indicating its superior student capability perception and personalized answer generation.
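A minimal sketch may help make the two-module flow concrete: a capability perception step estimates per-concept proficiency from exercise records, and a personalized generation step conditions the LLM prompt on that estimate. The function names and the simple correct-rate proficiency estimate below are illustrative assumptions that stand in for the paper's DKT model and prompt design, not the authors' actual interfaces.

```python
# Sketch of the two-module framework: capability perception + personalized generation.
# estimate_proficiency and build_personalized_prompt are hypothetical names.
from typing import Dict, List, Tuple

def estimate_proficiency(records: List[Tuple[str, bool]]) -> Dict[str, float]:
    """Capability perception: a naive per-concept correct-rate stand-in
    for a DKT-style knowledge tracing model."""
    stats: Dict[str, List[int]] = {}
    for concept, correct in records:
        stats.setdefault(concept, []).append(int(correct))
    return {c: sum(v) / len(v) for c, v in stats.items()}

def build_personalized_prompt(question: str, proficiency: Dict[str, float]) -> str:
    """Personalized answer generation: condition the LLM on the student's
    estimated mastery so weak concepts receive more detailed explanation."""
    weak = [c for c, p in proficiency.items() if p < 0.6]
    profile = ", ".join(f"{c}: {p:.2f}" for c, p in proficiency.items())
    return (
        f"Student mastery levels: {profile}.\n"
        f"Explain the following problem, giving extra detail on weak concepts "
        f"({', '.join(weak) or 'none'}):\n{question}"
    )

records = [("fractions", False), ("fractions", False), ("geometry", True)]
prompt = build_personalized_prompt("Simplify 3/6 + 1/4.", estimate_proficiency(records))
# The prompt would then be sent to an LLM such as ChatGLM or GPT4o_mini.
```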

Self-supervised image registration algorithm based on multi-feature fusion
Guijin HAN, Xinyuan ZHANG, Wentao ZHANG, Ya HUANG
Journal of Computer Applications    2024, 44 (5): 1597-1604.   DOI: 10.11772/j.issn.1001-9081.2023050692

To ensure that extracted features contain rich information, current deep learning-based image registration algorithms usually employ deep convolutional neural networks, which have high computational complexity and poorly discriminate similar feature points. To address these issues, a Self-supervised Image Registration Algorithm based on Multi-Feature Fusion (SIRA-MFF) was proposed. First, shallow convolutional neural networks were used to extract image features, reducing computational complexity, and feature point direction descriptors were added to the feature extraction layer to compensate for the limited feature information of shallow networks. Second, an embedding and interaction layer was added after the feature extraction layer to enlarge the receptive field of feature points, fusing their local and global information and improving the discrimination of similar feature points. Finally, the feature matching layer was optimized to obtain the best matching scheme, and a cross-entropy based loss function was designed for model training. SIRA-MFF achieved an Average Matching Accuracy (AMA) of 95.18% and 93.26% on two test sets generated from the ILSVRC2012 dataset, and an AMA of 89.69% on the IMC-PT-SparseGM-50 test set, outperforming the comparison algorithms in all cases; compared with the ResMtch algorithm, it also reduced the processing time per image by 49.45%. Experimental results show that SIRA-MFF achieves higher accuracy and stronger robustness.
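The multi-feature fusion idea, shallow convolutional features combined with an embedding-and-interaction layer that injects global context, can be sketched roughly as follows. The layer sizes, the attention-based interaction, and the module name MultiFeatureFusion are assumptions for illustration and do not reproduce the SIRA-MFF implementation.

```python
# Rough sketch: shallow local features fused with global context via self-attention.
import torch
import torch.nn as nn

class MultiFeatureFusion(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # Shallow feature extractor keeps computational cost low.
        self.conv = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        # Interaction layer: self-attention enlarges each feature point's
        # receptive field by letting it attend to all other positions.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        local = self.conv(img)                        # (B, C, H, W) local features
        b, c, h, w = local.shape
        tokens = local.flatten(2).transpose(1, 2)     # (B, H*W, C)
        fused, _ = self.attn(tokens, tokens, tokens)  # mix in global context
        return tokens + fused                         # local + global descriptors

desc = MultiFeatureFusion()(torch.randn(1, 3, 64, 64))  # (1, 4096, 64)
```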

Efficient similar exercise retrieval model based on unsupervised semantic hashing
Wei TONG, Liyang HE, Rui LI, Wei HUANG, Zhenya HUANG, Qi LIU
Journal of Computer Applications    2024, 44 (1): 206-216.   DOI: 10.11772/j.issn.1001-9081.2023091260

Finding similar exercises aims to retrieve, from an exercise database, exercises whose testing goals are similar to those of a given query exercise. As online education evolves, exercise databases grow in size, and the professional nature of exercises makes their relations hard to annotate, so online education systems require an efficient, unsupervised model for finding similar exercises. Unsupervised semantic hashing can map high-dimensional data to compact and efficient binary representations without supervised signals. However, simply applying a semantic hashing model to similar exercise retrieval is inadequate, because exercise data contains rich semantic information while the representation space of binary vectors is limited. To address this issue, a similar exercise retrieval model was proposed to acquire and retain crucial information. Firstly, a crucial information acquisition module was designed to extract critical information from exercise data, and a de-redundancy objective loss was proposed to eliminate redundant information. Secondly, a time-aware activation function was introduced to reduce the information loss of encoding. Thirdly, to make full use of the Hamming space, a bit balance loss and a bit independence loss were introduced to optimize the distribution of the binary representations during training. Experimental results on the MATH and HISTORY datasets demonstrate that the proposed model outperforms the state-of-the-art text semantic hashing model Deep Hash InfoMax (DHIM), with average improvements of about 54% and 23% respectively across three recall settings. Moreover, compared with the best-performing similar exercise retrieval model QuesCo, the proposed model shows a clear advantage in retrieval efficiency.
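The Hamming-space regularizers mentioned in the abstract, a bit balance loss and a bit independence loss, have standard forms that can be sketched as below. The tensor shapes and the relaxed tanh codes are illustrative assumptions; the paper's full model also includes the crucial information acquisition and de-redundancy terms.

```python
# Sketch of two common Hamming-space regularizers for semantic hashing.
import torch

def bit_balance_loss(codes: torch.Tensor) -> torch.Tensor:
    """codes: (N, K) relaxed binary codes in [-1, 1]; balanced bits
    have zero mean over the batch."""
    return codes.mean(dim=0).pow(2).sum()

def bit_independence_loss(codes: torch.Tensor) -> torch.Tensor:
    """Push the bit-correlation matrix toward the identity so each bit
    carries independent information."""
    n, k = codes.shape
    corr = codes.t() @ codes / n
    return (corr - torch.eye(k)).pow(2).sum() / k

codes = torch.tanh(torch.randn(128, 32))   # e.g. 32-bit codes for 128 exercises
loss = bit_balance_loss(codes) + bit_independence_loss(codes)
```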
