Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (12): 3683-3688. DOI: 10.11772/j.issn.1001-9081.2022111654

• Artificial Intelligence •

  • About author: DANG Weichao, born in 1974 in Yuncheng, Shanxi, Ph. D., associate professor, CCF member. His research interests include intelligent computing and software reliability.
    CHENG Bingyang, born in 1996 in Shangqiu, Henan, M. S. candidate. His research interests include recommender systems. Email: s202120210809@stu.tyust.edu.cn
    GAO Gaimei, born in 1978 in Lüliang, Shanxi, Ph. D., associate professor, CCF member. Her research interests include network security and cryptography.
    LIU Chunxia, born in 1977 in Datong, Shanxi, M. S., associate professor, CCF member. Her research interests include software engineering and databases.

Contrastive hypergraph transformer for session-based recommendation

Weichao DANG, Bingyang CHENG, Gaimei GAO, Chunxia LIU

  1. College of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, Shanxi 030024, China
  • Received:2022-11-04 Revised:2023-05-26 Accepted:2023-05-29 Online:2023-06-16 Published:2023-12-10
  • Contact: Bingyang CHENG
  • Supported by:
    Doctoral Research Start-up Fund of Taiyuan University of Science and Technology(20202063);Graduate Education Innovation Project of Taiyuan University of Science and Technology(SY2022063)


Abstract:

A Contrastive Hypergraph Transformer (CHT) model was proposed for session-based recommendation to address the noise interference and sample sparsity problems inherent in session-based recommendation. Firstly, the session sequence was modeled as a hypergraph. Secondly, the global and local context information of items was constructed by the hypergraph transformer. Finally, for global relationship learning, an Item-Level (I-L) encoder and a Session-Level (S-L) encoder were used to capture item embeddings at different levels, an information fusion module was used to fuse the item embeddings with reverse position embeddings, and the global session representation was obtained by a soft attention module; for local relationship learning, the local session representation was generated by a weighted line graph convolutional network. In addition, a contrastive learning paradigm was introduced to maximize the mutual information between the global and local session representations, thereby improving recommendation performance. Experimental results on several real-world datasets show that the CHT model outperforms the current mainstream models. Compared with the suboptimal model S2-DHCN (Self-Supervised Hypergraph Convolutional Networks), the proposed model achieves a P@20 of 35.61% and an MRR@20 of 17.11% on the Tmall dataset, improvements of 13.34% and 13.69% respectively, and a P@20 of 54.07% and an MRR@20 of 18.59% on the Diginetica dataset, improvements of 0.76% and 0.43% respectively, verifying the effectiveness of the proposed model.
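As a rough illustration of the contrastive objective described above (a minimal sketch, not the authors' implementation), the following code computes an InfoNCE-style loss that treats the global and local representations of the same session as a positive pair and all other sessions in the batch as negatives; all array names, dimensions, and the temperature value are assumptions for illustration:

```python
import numpy as np

def info_nce_loss(global_repr, local_repr, temperature=0.2):
    """InfoNCE-style contrastive loss between two views of the same sessions.

    global_repr, local_repr: (num_sessions, dim) arrays. Row i of each matrix
    is a different view (global / local) of session i, so matching rows form
    positive pairs and all other rows act as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities
    g = global_repr / np.linalg.norm(global_repr, axis=1, keepdims=True)
    l = local_repr / np.linalg.norm(local_repr, axis=1, keepdims=True)
    # Pairwise similarity matrix scaled by temperature
    logits = g @ l.T / temperature            # (num_sessions, num_sessions)
    # Log-softmax over each row, with the diagonal (matching pair) as target
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
g = rng.normal(size=(8, 16))
# Mismatched views: local representations unrelated to the global ones
loss_mismatched = info_nce_loss(g, rng.normal(size=(8, 16)))
# Aligned views: local representations close to the global ones
loss_matched = info_nce_loss(g, g + 0.01 * rng.normal(size=(8, 16)))
print(loss_matched < loss_mismatched)
```

Minimizing this loss pulls the two views of the same session together while pushing apart representations of different sessions, which is one standard way to maximize a lower bound on the mutual information between the two views.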

Key words: session-based recommendation, hypergraph transformer, contrastive learning, attention mechanism
