Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (4): 1079-1086. DOI: 10.11772/j.issn.1001-9081.2021071242

• The 36th CCF National Conference of Computer Applications (CCF NCCA 2021) •


Long- and short-term recommendation model and updating method based on knowledge graph preference attention network

Junhua GU1,2, Shuai FAN1, Ningning LI1, Suqi ZHANG3

  1. School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
    2. Hebei Province Key Laboratory of Big Data Calculation (Hebei University of Technology), Tianjin 300401, China
    3. School of Information Engineering, Tianjin University of Commerce, Tianjin 300134, China
  • Received: 2021-07-16 Revised: 2021-09-06 Accepted: 2021-09-08 Online: 2021-09-27 Published: 2022-04-10
  • Contact: Suqi ZHANG
  • About author: GU Junhua, born in 1966, Ph.D., professor, CCF member. His research interests include intelligent information processing and data mining.
    FAN Shuai, born in 1996, M.S. candidate. His research interests include recommender systems and intelligent information processing.
    LI Ningning, born in 1994, M.S. candidate. Her research interests include recommender systems and intelligent information processing.
  • Supported by:
    National Natural Science Foundation of China (61802282); Special Project for Guiding Technological Innovation of Tianjin Science and Technology Program (20YDTPJC00670)


Abstract:

Current research on knowledge graph recommendation focuses mainly on model establishment and training. In practical applications, however, the model must be updated regularly with an incremental updating method to adapt to the changing preferences of new and existing users. Most such models use only the users' long-term interest representations for recommendation and ignore their short-term interests; the way they aggregate neighborhood entities into item vector representations lacks interpretability; and catastrophic forgetting occurs during model updating. To address these problems, a Knowledge Graph Preference ATtention network based Long- and Short-term recommendation (KGPATLS) model and its updating method were proposed. Firstly, the KGPATLS model introduced a preference attention network as the aggregation method and a user representation combining users' long-term and short-term interests. Then, to alleviate catastrophic forgetting during model updating, an incremental updating method Fusing Predict Sampling and Knowledge Distillation (FPSKD) was proposed. The proposed model and updating method were evaluated on the MovieLens-1M and Last.FM datasets. Compared with the best baseline model, Knowledge Graph Convolutional Network (KGCN), KGPATLS improved the Area Under Curve (AUC) by 2.2% and 1.4% and the Accuracy (Acc) by 2.5% and 2.9% on the two datasets respectively. Compared with three baseline incremental updating methods, Fine Tune, Random Sampling and Full Batch, on the two datasets, FPSKD outperformed Fine Tune and Random Sampling on both AUC and Acc, and reduced training time to roughly one eighth and one quarter of that of Full Batch respectively. Experimental results verify the performance of the KGPATLS model and show that FPSKD updates the model efficiently while maintaining its performance.
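The abstract does not give FPSKD's exact formulation, but the general idea of combining replayed predictions with knowledge distillation to counter catastrophic forgetting can be illustrated with a minimal sketch. All names and the loss form below are illustrative assumptions, not the authors' implementation: the previous model acts as a teacher whose tempered predictions on sampled interactions constrain the updated (student) model, while ordinary cross-entropy fits the new interaction labels.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def incremental_loss(student_logits, teacher_logits, labels,
                     temperature=2.0, alpha=0.5):
    """Blend hard-label BCE on new interactions with a distillation
    term that keeps the updated model close to the old model's
    tempered predictions on replayed (sampled) interactions."""
    # Hard-label loss: fit the observed clicks / non-clicks.
    p = sigmoid(student_logits)
    hard = -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    # Soft-label loss: match the old model's softened predictions,
    # which discourages forgetting of previously learned preferences.
    q_teacher = sigmoid(teacher_logits / temperature)
    q_student = sigmoid(student_logits / temperature)
    soft = -np.mean(q_teacher * np.log(q_student)
                    + (1 - q_teacher) * np.log(1 - q_student))
    return alpha * hard + (1 - alpha) * soft

# Toy scores for four user-item pairs (label 1 = clicked, 0 = not).
student = np.array([1.2, -0.3, 0.8, -1.5])
teacher = np.array([1.0, -0.5, 0.9, -1.2])
labels = np.array([1.0, 0.0, 1.0, 0.0])
loss = incremental_loss(student, teacher, labels)
```

Weighting the two terms with `alpha` trades plasticity (fitting new users' behavior) against stability (retaining old users' preferences), which is the tension incremental updating methods such as FPSKD are designed to balance.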

Key words: long- and short-term recommendation model, knowledge graph preference attention network, incremental update, predict sampling, knowledge distillation
