夏子芳1,于亚新2,王子腾2,乔佳琪2
Abstract: To construct transparent and trustworthy recommender systems, existing research mainly provides reasonable explanations for personalized recommendation through explainable recommendation. However, existing explainable recommendation mechanisms have three major limitations: 1) exploiting correlations can only provide rationalized explanations rather than causal explanations, and exploiting paths to provide explanations risks privacy leakage; 2) the problem of sparse user feedback is ignored, so the fidelity of explanations is hard to guarantee; 3) the granularity of explanations is relatively coarse, and users' personalized preferences are not considered. To address these problems, an explainable recommendation mechanism named ERCKCF (Explainable Recommendation mechanism based on Collaborative Knowledge graph & CounterFactuals) was proposed. Firstly, based on the user's own behavior sequence, a counterfactual inference method was used to achieve highly sparse causal decorrelation, and counterfactual explanations were derived iteratively. Secondly, to improve the fidelity of explanations, the collaborative knowledge graph and the neighborhood propagation mechanism of a graph neural network were used to learn user and item representations on a single time slice, and users' long- and short-term preferences were captured through a self-attention mechanism over multiple time slices to enhance user preference representations. Finally, via a higher-order connected subgraph based on the counterfactual set, users' multi-granularity personalized preferences were captured to enhance the counterfactual explanations. To verify the effectiveness of the mechanism, comparative experiments were performed on the publicly available MovieLens and Book-crossing datasets.
The results show that, compared with the optimal baseline, the fidelity of this mechanism is improved by 4.89 and 3.38 percentage points on the two datasets, the size of the counterfactual (CF) set is reduced by 63.26% and 66.24%, and the sparsity is improved by 1.1 and 1.66 percentage points, respectively, so its explainability is effectively improved.
Key words: explainable, counterfactuals, collaborative knowledge graph, graph neural network, recommender system
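The counterfactual step described in the abstract can be illustrated with a toy sketch. This is a hypothetical greedy search, not the paper's ERCKCF algorithm: it finds a small subset of a user's interaction history whose removal flips the top-1 recommendation, which is the general shape of a counterfactual explanation ("had you not interacted with these items, this would not have been recommended"). All names (`score`, `recommend`, `counterfactual_set`) and the mean-embedding scoring model are illustrative assumptions.

```python
# Toy greedy counterfactual search (hypothetical sketch, not ERCKCF).

def score(history, item, embed):
    """Score an item as the dot product of its embedding with the
    mean embedding of the user's interaction history."""
    if not history:
        return 0.0
    dim = len(embed[item])
    mean = [sum(embed[h][d] for h in history) / len(history) for d in range(dim)]
    return sum(m * e for m, e in zip(mean, embed[item]))

def recommend(history, candidates, embed):
    """Return the top-1 recommended candidate for this history."""
    return max(candidates, key=lambda c: score(history, c, embed))

def counterfactual_set(history, candidates, embed):
    """Greedily remove the history item contributing most to the current
    recommendation until the top-1 recommendation changes; the removed
    items form the counterfactual explanation."""
    target = recommend(history, candidates, embed)
    kept, removed = list(history), []
    while kept and recommend(kept, candidates, embed) == target:
        worst = max(kept, key=lambda h: score([h], target, embed))
        kept.remove(worst)
        removed.append(worst)
    return removed
```

For example, a user whose history leans toward one genre gets that genre recommended, and removing the single most influential interaction can flip the recommendation, yielding a minimal counterfactual set.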
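The representation-learning step can likewise be sketched minimally. This hypothetical one-round neighborhood propagation over a collaborative knowledge graph (users, items, and knowledge-graph entities in one graph) averages each node's vector with its neighbors' vectors, which is the basic mechanism by which item representations absorb knowledge-graph attributes and user representations absorb interacted items; the actual model in the paper is a trained multi-layer GNN, not this unweighted average.

```python
# One round of mean-aggregation neighborhood propagation
# (hypothetical sketch of GNN message passing, not the paper's model).

def propagate(graph, embed):
    """graph: node -> list of neighbor nodes; embed: node -> vector.
    Returns new embeddings where each node is the mean of itself
    and its neighbors."""
    out = {}
    for node, vec in embed.items():
        vecs = [vec] + [embed[n] for n in graph.get(node, [])]
        out[node] = [sum(v[d] for v in vecs) / len(vecs) for d in range(len(vec))]
    return out
```

Stacking several such rounds lets a user node's representation reach knowledge-graph entities two or more hops away, which is what enables the higher-order connected subgraphs mentioned in the abstract.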
CLC Number: TP391.3
夏子芳, 于亚新, 王子腾, 乔佳琪. Explainable recommendation mechanism fusing collaborative knowledge graph and counterfactuals[J].
URL: https://www.joca.cn/EN/