Personalized learning recommendation in collaboration of knowledge graph and large language model
Xuefei ZHANG, Liping ZHANG, Sheng YAN, Min HOU, Yubo ZHAO
Journal of Computer Applications    2025, 45 (3): 773-784.   DOI: 10.11772/j.issn.1001-9081.2024070971

Personalized learning recommendation is an important research topic in the field of smart education. Its core goal is to use recommendation algorithms and models to provide learners with effective learning resources that match their individual learning needs, interests, abilities, and histories, thereby improving learning outcomes. Current recommendation methods suffer from problems such as cold start, data sparsity, poor interpretability, and over-personalization, and the combination of knowledge graph and Large Language Model (LLM) provides strong support for solving these problems. Firstly, the concept and current research status of personalized learning recommendation were reviewed. Secondly, the concepts of knowledge graph and LLM and their specific applications in personalized learning recommendation were discussed respectively. Thirdly, the collaborative application methods of knowledge graph and LLM in personalized learning recommendation were summarized. Finally, the future development directions of knowledge graph and LLM in personalized learning recommendation were prospected, to provide reference and inspiration for continuous development and innovative practice in this field.

Relation extraction between discipline knowledge entities based on improved piecewise convolutional neural network and knowledge distillation
Yubo ZHAO, Liping ZHANG, Sheng YAN, Min HOU, Mao GAO
Journal of Computer Applications    2024, 44 (8): 2421-2429.   DOI: 10.11772/j.issn.1001-9081.2023081065

Relation extraction is an important means of organizing discipline knowledge and an important step in the construction of educational knowledge graphs. In current research, most pre-trained language models based on the Transformer architecture, such as Bidirectional Encoder Representations from Transformers (BERT), suffer from a large number of parameters and excessive complexity, which makes them difficult to deploy on end devices and limits their application in real educational scenarios. In addition, most traditional lightweight relation extraction models do not model the data through text structure, so they easily ignore the structural information between entities; the word embedding vectors they generate struggle to capture contextual features of the text, handle polysemy poorly, and fit poorly the unstructured nature of discipline knowledge texts with their high proportion of proper nouns, which is not conducive to high-quality relation extraction. To solve the above problems, a relation extraction method between discipline knowledge entities based on improved Piecewise Convolutional Neural Network (PCNN) and Knowledge Distillation (KD) was proposed. Firstly, BERT was used to generate high-quality domain text word vectors to improve the input layer of the PCNN model, so as to effectively capture text context features and alleviate the polysemy problem to a certain extent. Then, convolution and piecewise max pooling operations were used to deeply mine inter-entity structural information, constructing the BERT-PCNN model and achieving high-quality relation extraction. Lastly, considering the demand for efficient and lightweight models in educational scenarios, the knowledge of the output layer and middle layers of the BERT-PCNN model was distilled to guide the PCNN model, completing the construction of the KD-PCNN model.
The experimental results show that the weighted-average F1 score of the BERT-PCNN model reaches 94%, which is 1 and 2 percentage points higher than those of the R-BERT and EC_BERT models respectively; the weighted-average F1 score of the KD-PCNN model reaches 92%, equal to that of the EC_BERT model, while its parameter quantity is 3 orders of magnitude smaller than those of the BERT-PCNN and KD-RB-l models. The proposed method therefore achieves a better trade-off between performance and network parameter quantity, which is conducive to improving the automated construction of educational knowledge graphs and to the development and deployment of new educational applications.
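Two of the ingredients named in the abstract, piecewise max pooling (the structural core of PCNN) and the soft-target distillation loss used in KD, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the three-segment split convention at the two entity positions, and the temperature value are illustrative assumptions based on the standard PCNN and KD formulations.

```python
import numpy as np

def piecewise_max_pool(conv_features, e1_pos, e2_pos):
    """PCNN-style pooling: split the convolution output into three
    segments at the two entity positions and max-pool each segment
    separately, preserving structural information between entities.

    conv_features: array of shape (seq_len, num_filters),
                   the output of the convolution layer.
    Returns a vector of length 3 * num_filters.
    """
    seq_len, num_filters = conv_features.shape
    bounds = [(0, e1_pos + 1), (e1_pos + 1, e2_pos + 1), (e2_pos + 1, seq_len)]
    pooled = []
    for start, end in bounds:
        if start < end:
            pooled.append(conv_features[start:end].max(axis=0))
        else:  # empty segment (entities adjacent or at the sentence edge)
            pooled.append(np.zeros(num_filters))
    return np.concatenate(pooled)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target component of knowledge distillation: cross-entropy
    between the teacher's and student's temperature-softened
    output distributions (output-layer distillation)."""
    def soft(x):
        z = np.exp((x - x.max()) / T)  # max-shift for numerical stability
        return z / z.sum()
    p_teacher, p_student = soft(teacher_logits), soft(student_logits)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))
```

In a full model the pooled vector would pass through a nonlinearity and a softmax classifier, and the KD objective would combine this soft-target term with the ordinary hard-label cross-entropy; middle-layer distillation, also mentioned in the abstract, would add a regression term between intermediate representations.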
