Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (7): 2001-2009. DOI: 10.11772/j.issn.1001-9081.2022071113

• 39th CCF National Database Conference (NDBC 2022) •


Explainable recommendation mechanism fusing collaborative knowledge graph and counterfactual inference

Zifang XIA1,2, Yaxin YU1,2, Ziteng WANG1,2, Jiaqi QIAO1,2

  1. School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning 110169, China
    2. Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education (Northeastern University), Shenyang, Liaoning 110169, China
  • Received: 2022-07-12 Revised: 2022-08-16 Accepted: 2022-08-29 Online: 2023-07-20 Published: 2023-07-10
  • Contact: Yaxin YU
  • About author: XIA Zifang, born in 1998, M. S. candidate. Her research interests include recommender systems and causal inference.
    YU Yaxin, born in 1971, Ph. D., associate professor, CCF member. Her research interests include social networks and data mining.
    WANG Ziteng, born in 1998, M. S. candidate. His research interests include reinforcement learning and transfer learning.
    QIAO Jiaqi, born in 1998, M. S. candidate. Her research interests include natural language processing and computer vision.
  • Supported by:
    National Natural Science Foundation of China(61871106)


Abstract:

In order to construct a transparent and trustworthy recommendation mechanism, relevant research works mainly provide reasonable explanations for personalized recommendations through explainable recommendation mechanisms. However, existing explainable recommendation mechanisms have three major limitations: 1) using correlations can only provide rationalized explanations rather than causal explanations, and using paths to provide explanations risks privacy leakage; 2) the problem of sparse user feedback is ignored, so the fidelity of explanations is hard to guarantee; 3) the granularity of explanations is relatively coarse, and users' personalized preferences are not considered. To address these problems, an explainable recommendation mechanism based on Collaborative Knowledge Graph (CKG) and counterfactual inference, ERCKCI, was proposed. Firstly, based on the user's own behavior sequence, counterfactual inference was applied to achieve high-sparsity causal decorrelation by using causal relations, and counterfactual explanations were derived iteratively. Secondly, to improve the fidelity of explanations, not only were the CKG and the neighborhood propagation mechanism of the Graph Neural Network (GNN) used to learn user and item representations on a single time slice, but users' long- and short-term preferences were also captured on multiple time slices through a self-attention mechanism to enhance user preference representation. Finally, the user's multi-granularity personalized preferences were captured via a higher-order connected subgraph of the counterfactual set to enhance counterfactual explanations. To verify the effectiveness of the ERCKCI mechanism, comparison experiments were performed on the public datasets MovieLens(100k), Book-crossing and MovieLens(1M).
The results show that, compared with the Explainable recommendation based on Counterfactual Inference (ECI) algorithm under the Relational Collaborative Filtering (RCF) recommendation model on the first two datasets, the proposed mechanism improves explanation fidelity by 4.89 and 3.38 percentage points, reduces the counterfactual (CF) set size by 63.26% and 66.24%, and improves the sparsity metric by 1.10 and 1.66 percentage points, respectively; thus the proposed mechanism effectively improves explainability.
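The counterfactual idea summarized above, removing items from a user's own behavior sequence until the original recommendation no longer holds, can be sketched as a greedy search. This is a minimal illustration under simplifying assumptions, not the paper's ERCKCI implementation: `recommend` stands in for the CKG/GNN-based preference model, and all identifiers are hypothetical.

```python
# Illustrative sketch of iterative counterfactual-explanation search.
# NOT the paper's ERCKCI algorithm: `recommend` is a stand-in for the
# CKG/GNN preference model, and item selection here is first-flip greedy
# rather than ranking candidates by estimated causal effect.

from typing import Callable, List, Set

def counterfactual_explanation(
    history: List[str],
    recommend: Callable[[List[str]], str],
) -> Set[str]:
    """Greedily remove history items until the top recommendation changes.

    Returns the removed set: "had the user not interacted with these items,
    the original item would not have been recommended."
    """
    original = recommend(history)
    removed: Set[str] = set()
    remaining = list(history)
    while remaining:
        # Try dropping each remaining item; take the first whose removal
        # flips the recommendation.
        flipped = None
        for item in remaining:
            trial = [x for x in remaining if x != item]
            if recommend(trial) != original:
                flipped = item
                break
        if flipped is None:
            # No single removal flips the result yet; drop one item and
            # iterate (a real system would rank items by causal effect).
            flipped = remaining[0]
        removed.add(flipped)
        remaining.remove(flipped)
        if recommend(remaining) != original:
            return removed
    return removed
```

The returned set plays the role of a counterfactual explanation; a smaller set corresponds to the lower "CF set size" that the experiments above measure.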

Key words: explainable, counterfactual inference, Collaborative Knowledge Graph (CKG), Graph Neural Network (GNN), recommendation mechanism
