Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (9): 2711-2718. DOI: 10.11772/j.issn.1001-9081.2023091257
Tingjie TANG 1,2, Jiajin HUANG 3, Jin QIN 1,2
Received:
2023-09-13
Revised:
2023-12-14
Accepted:
2023-12-15
Online:
2024-03-19
Published:
2024-09-10
Contact:
Jiajin HUANG
About author:
TANG Tingjie, born in 1999, M.S. candidate. His research interests include recommender systems.
Abstract:
Existing self-supervised contrastive tasks fail to fully exploit the rich semantics in raw data and lack generality. To address these problems, a Session-based Recommendation with Graph Auxiliary Learning (SR-GAL) model was proposed. First, an encoding channel with Representation Consistency (RC) was introduced on top of a Graph Neural Network (GNN) to mine more valuable self-supervised signals from the raw data. Second, to make full use of these signals, a predictive auxiliary task and a constraint auxiliary task closely related to the target task were designed. Finally, a simple, GNN-model-agnostic auxiliary learning framework was developed to unify the two auxiliary tasks with the recommendation task, thereby improving the recommendation performance of the GNN model. Compared with the suboptimal contrastive model CGSNet (Contrastive Graph Self-attention Network), the proposed model improves precision P@20 and mean reciprocal rank MRR@20 by 0.58% and 1.61% on the Diginetica dataset, and by 12.65% and 8.41% on the Tmall dataset, verifying its effectiveness. Experimental results on multiple real-world datasets show that SR-GAL outperforms state-of-the-art models and has good extensibility and generality.
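The abstract describes a training objective that unifies a recommendation loss with a predictive auxiliary loss and a constraint (consistency) auxiliary loss. The sketch below illustrates such a weighted multi-task combination; the loss forms, weights, and function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax_cross_entropy(logits, target):
    """Cross-entropy of a softmax distribution against a target item index."""
    z = logits - logits.max()                      # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

def joint_loss(rec_logits, rec_target,
               pred_logits, pred_target,
               emb_main, emb_aux,
               alpha=0.1, beta=0.1):
    """Illustrative total loss = recommendation + weighted auxiliary terms.

    - recommendation loss: next-item softmax cross-entropy
    - predictive auxiliary loss: same form on the auxiliary channel's logits
    - constraint auxiliary loss: mean squared difference between the two
      channels' session embeddings (a stand-in for the consistency constraint)
    """
    l_rec = softmax_cross_entropy(rec_logits, rec_target)
    l_pred = softmax_cross_entropy(pred_logits, pred_target)
    l_con = np.mean((emb_main - emb_aux) ** 2)
    return l_rec + alpha * l_pred + beta * l_con
```

When the two channels produce identical embeddings, the constraint term vanishes and only the two prediction losses remain, which makes the role of each weight easy to inspect.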
Tingjie TANG, Jiajin HUANG, Jin QIN. Session-based recommendation with graph auxiliary learning[J]. Journal of Computer Applications, 2024, 44(9): 2711-2718.
| Dataset | Clicks | Training sessions | Test sessions | Items | Avg. session length |
| --- | --- | --- | --- | --- | --- |
| Diginetica | 982 961 | 719 470 | 60 858 | 43 097 | 5.12 |
| Tmall | 818 479 | 351 268 | 25 898 | 40 728 | 6.69 |

Tab. 1 Statistics of datasets
| Model | Diginetica | | | | Tmall | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | P@10 | P@20 | M@10 | M@20 | P@10 | P@20 | M@10 | M@20 |
| GRU4Rec | 17.93 | 30.79 | 7.73 | 8.22 | 9.47 | 10.93 | 5.78 | 5.89 |
| NARM | 35.44 | 48.32 | 15.13 | 16.00 | 19.17 | 23.30 | 10.42 | 10.70 |
| STAMP | 33.69 | 46.62 | 14.26 | 15.13 | 22.63 | 26.47 | 13.12 | 13.36 |
| SR-GNN | 38.42 | 51.26 | 16.89 | 17.78 | 23.41 | 27.57 | 13.45 | 13.72 |
| GC-SAN | 38.91 | 51.90 | 17.31 | 18.21 | 19.21 | 23.30 | 10.67 | 10.96 |
| FGNN | 37.72 | 50.58 | 15.95 | 16.84 | 20.67 | 25.24 | 10.07 | 10.39 |
| GCE-GNN | 41.16 | 54.22 | 18.15 | 19.04 | 28.01 | 33.42 | 15.08 | 15.42 |
| S2-DHCN | 40.21 | 53.66 | 17.59 | 18.51 | 26.22 | 31.42 | 14.60 | 15.05 |
| Disen-GNN | 40.63 | 53.79 | 17.98 | 18.99 | 25.87 | 30.78 | 15.05 | 15.40 |
| HyperS2Rec | 40.52 | 54.13 | 18.09 | 18.91 | 27.26 | 32.91 | 14.98 | 15.39 |
| CGSNet | | | | | | | | |
| SR-GAL | 41.75 | 55.01 | 18.60 | 19.50 | 33.27 | 39.08 | 17.38 | 17.79 |

Tab. 2 Performance comparison of different models on two datasets (%)
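The P@K and MRR@K metrics reported in Tab. 2 are standard top-K measures: P@K counts whether the target item appears in the top-K ranked list, and MRR@K averages the reciprocal rank of the target (zero if it falls outside the top K). A minimal sketch, with illustrative function names:

```python
import numpy as np

def hit_and_rr_at_k(ranked_items, target, k=20):
    """Return (hit, reciprocal rank) for one session.

    hit = 1.0 if the target item is in the top-K of the ranked list;
    reciprocal rank = 1/position within top-K, else 0.0.
    """
    topk = list(ranked_items[:k])
    if target in topk:
        rank = topk.index(target) + 1   # ranks are 1-based
        return 1.0, 1.0 / rank
    return 0.0, 0.0

def evaluate(sessions, k=20):
    """Average P@K and MRR@K (as percentages) over (ranked_list, target) pairs."""
    hits, rrs = zip(*(hit_and_rr_at_k(r, t, k) for r, t in sessions))
    return 100 * np.mean(hits), 100 * np.mean(rrs)
```

For example, two sessions where one target sits at rank 2 of the top-2 list and the other misses entirely yield P@2 = 50.0 and MRR@2 = 25.0.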
| Model | Complexity | Diginetica | | Tmall | |
| --- | --- | --- | --- | --- | --- |
| | | Training time/s | Memory/MB | Training time/s | Memory/MB |
| SR-GNN | O(l(nd²+n³)+nd²) | 330 | 2 385 | 152 | 2 139 |
| GC-SAN | O(l(nd²+n³)+n²d) | 326 | 2 419 | 152 | 2 381 |
| Disen-GNN | O(lc(nd²+n³+nd²+n²d²)+nd²) | 1 121 | 21 549 | 691 | 18 891 |
| S2-DHCN | O(l\|E\|d+b²d²+nd²) | 1 098 | 2 753 | 725 | 2 631 |
| HyperS2Rec | O(l\|E\|d+ln³+nd²) | 844 | 2 355 | 312 | 2 324 |
| GCE-GNN | O(ln²d+nkd+nd²) | 600 | 6 917 | 110 | 2 863 |
| CGSNet | O(ln²d²+b²d²+ln²d+nkd+nd²) | 763 | 7 463 | 280 | 3 465 |
| SR-GAL | O(ln²d+lnd²+nd²) | 370 | 4 529 | 176 | 3 151 |

Tab. 3 Comparison of computational complexity of different models
| Model | Diginetica | | | | Tmall | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | P@10 | P@20 | M@10 | M@20 | P@10 | P@20 | M@10 | M@20 |
| SR-GAL-PC | 40.88 | 54.08 | 17.82 | 18.76 | 28.14 | 33.82 | 14.61 | 15.01 |
| SR-GAL-P | 41.44 | 54.59 | 18.33 | 19.24 | 28.72 | 34.34 | 14.74 | 15.14 |
| SR-GAL-C | 41.71 | 54.79 | 18.36 | 19.26 | 29.38 | 34.98 | 15.12 | 15.51 |
| SR-GAL | 41.75 | 55.01 | 18.60 | 19.50 | 33.27 | 39.08 | 17.38 | 17.79 |

Tab. 4 Performance comparison of different variants of SR-GAL (%)
| Model | Aux. tasks | Diginetica | | | | Tmall | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | P@10 | P@20 | M@10 | M@20 | P@10 | P@20 | M@10 | M@20 |
| SR-GNN | w/o | 38.42 | 51.26 | 16.89 | 17.78 | 23.41 | 27.57 | 13.45 | 13.72 |
| | w | 40.72 | 53.88 | 17.82 | 18.74 | 26.08 | 31.01 | 14.56 | 14.91 |
| GC-SAN | w/o | 38.91 | 51.90 | 17.31 | 18.20 | 19.21 | 23.30 | 10.67 | 10.96 |
| | w | 40.51 | 53.66 | 17.77 | 18.68 | 23.86 | 28.54 | 13.11 | 13.56 |
| Disen-GNN | w/o | 40.63 | 53.79 | 17.98 | 18.99 | 25.87 | 30.78 | 15.05 | 15.40 |
| | w | 41.27 | 54.50 | 18.38 | 19.30 | 26.72 | 32.15 | 15.46 | 15.80 |

Tab. 5 Performance comparison of different models before and after combining with auxiliary tasks (%)
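Tab. 5 reports the auxiliary tasks attached to three different GNN backbones, which relies on the framework being model-agnostic: the auxiliary channel only needs the backbone's session embedding, not its internals. A rough sketch of such a wrapper, where every class, attribute, and operation is an illustrative assumption rather than the paper's implementation:

```python
import numpy as np

class AuxiliaryLearner:
    """Model-agnostic sketch: wrap any session encoder with auxiliary heads.

    `base_encoder` is any function mapping a session (sequence of item ids)
    to a d-dimensional embedding; the wrapper adds a second channel and the
    auxiliary signals without modifying the base model.
    """

    def __init__(self, base_encoder, dim, n_items, seed=0):
        rng = np.random.default_rng(seed)
        self.base_encoder = base_encoder
        # auxiliary channel: a simple linear re-encoding of the base output
        self.W_aux = rng.normal(scale=0.1, size=(dim, dim))
        # shared scoring matrix over the item catalogue
        self.item_emb = rng.normal(scale=0.1, size=(n_items, dim))

    def forward(self, session):
        h = self.base_encoder(session)           # main channel embedding
        h_aux = np.tanh(self.W_aux @ h)          # auxiliary channel embedding
        scores = self.item_emb @ h               # recommendation scores
        aux_scores = self.item_emb @ h_aux       # predictive auxiliary scores
        consistency = np.mean((h - h_aux) ** 2)  # constraint (consistency) signal
        return scores, aux_scores, consistency
```

Because the backbone is passed in as a plain callable, swapping SR-GNN for GC-SAN or Disen-GNN under this design only changes the `base_encoder` argument.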
| Model | Aux. tasks | Diginetica | | | Tmall | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| | | Training time/s | Memory/MB | Params/10⁶ | Training time/s | Memory/MB | Params/10⁶ |
| SR-GNN | w/o | 330 | 2 385 | 4.49 | 152 | 2 319 | 4.25 |
| | w | 393 | 2 559 | 4.53 | 212 | 2 517 | 4.28 |
| GC-SAN | w/o | 326 | 2 419 | 5.42 | 152 | 2 381 | 5.14 |
| | w | 381 | 2 591 | 5.46 | 183 | 2 539 | 5.17 |
| Disen-GNN | w/o | 1 121 | 21 549 | 3.56 | 691 | 18 891 | 4.24 |
| | w | 1 290 | 21 843 | 3.60 | 729 | 19 221 | 4.27 |

Tab. 6 Influence of auxiliary tasks on training time, parameters and memory usage of different models
1. HIDASI B, KARATZOGLOU A, BALTRUNAS L, et al. Session-based recommendations with recurrent neural networks [EB/OL]. [2022-07-05].
2. LI J, REN P, CHEN Z, et al. Neural attentive session-based recommendation [C]// Proceedings of the 2017 ACM Conference on Information and Knowledge Management. New York: ACM, 2017: 1419-1428.
3. LIU Q, ZENG Y, MOKHOSI R, et al. STAMP: short-term attention/memory priority model for session-based recommendation [C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2018: 1831-1839.
4. WU S, TANG Y, ZHU Y, et al. Session-based recommendation with graph neural networks [C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2019: 346-353.
5. LI A, CHENG Z, LIU F, et al. Disentangled graph neural networks for session-based recommendation [J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(8): 7870-7882.
6. WANG Z, WEI W, CONG G, et al. Global context enhanced graph neural networks for session-based recommendation [C]// Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 169-178.
7. DING C, ZHAO Z, LI C, et al. Session-based recommendation with hypergraph convolutional networks and sequential information embeddings [J]. Expert Systems with Applications, 2023, 223: No.119875.
8. XIA X, YIN H, YU J, et al. Self-supervised hypergraph convolutional networks for session-based recommendation [C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 4503-4511.
9. WANG F, LU X, LYU L. CGSNet: contrastive graph self-attention network for session-based recommendation [J]. Knowledge-Based Systems, 2022, 251: No.109282.
10. DANG W C, CHENG B Y, GAO G M, et al. Contrastive hypergraph transformer for session-based recommendation [J]. Journal of Computer Applications, 2023, 43(12): 3683-3688.
11. HOU Y, HU B, ZHANG Z, et al. CORE: simple and effective session-based recommendation within consistent representation space [C]// Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2022: 1796-1801.
12. TAO Y, GAO M, YU J, et al. Predictive and contrastive: dual-auxiliary learning for recommendation [J]. IEEE Transactions on Computational Social Systems, 2023, 10(5): 2254-2265.
13. ZHOU K, YU H, ZHAO W X, et al. Filter-enhanced MLP is all you need for sequential recommendation [C]// Proceedings of the ACM Web Conference 2022. New York: ACM, 2022: 2388-2399.
14. XU C, ZHAO P, LIU Y, et al. Graph contextualized self-attention network for session-based recommendation [C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence. California: IJCAI.org, 2019: 3940-3946.
15. YU F, ZHU Y, LIU Q, et al. TAGNN: target attentive graph neural networks for session-based recommendation [C]// Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1921-1924.
16. QIU R, LI J, HUANG Z, et al. Rethinking the item order in session-based recommendation with graph neural networks [C]// Proceedings of the 28th ACM International Conference on Information and Knowledge Management. New York: ACM, 2019: 579-588.
17. SHENG Z, ZHANG T, ZHANG Y, et al. Enhanced graph neural network for session-based recommendation [J]. Expert Systems with Applications, 2023, 213(Pt A): No.118887.
18. SUN X Y, SHI Y C. Session-based recommendation model by graph neural network fused with item influence [J]. Journal of Computer Applications, 2023, 43(12): 3689-3696.
19. LI H, LUO X, YU Q, et al. Session-based recommendation via contrastive learning on heterogeneous graph [C]// Proceedings of the 2021 IEEE International Conference on Big Data. Piscataway: IEEE, 2021: 1077-1082.
20. XIA X, YIN H, YU J, et al. Self-supervised graph co-training for session-based recommendation [C]// Proceedings of the 30th ACM International Conference on Information and Knowledge Management. New York: ACM, 2021: 2180-2190.
21. HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.