Journal of Computer Applications ›› 2024, Vol. 44 ›› Issue (10): 3260-3266.DOI: 10.11772/j.issn.1001-9081.2023101557
• The 40th CCF National Database Conference (NDBC 2023) •
Nengqiang XIANG, Xiaofei ZHU, Zhaoze GAO
Received: 2023-11-13
Revised: 2023-12-28
Accepted: 2024-01-02
Online: 2024-10-15
Published: 2024-10-10
Contact: Xiaofei ZHU
About author: XIANG Nengqiang, born in 1998 in Dazhou, Sichuan, is an M.S. candidate and a CCF member. His research interests include natural language processing and social networks.
Nengqiang XIANG, Xiaofei ZHU, Zhaoze GAO. Information diffusion prediction model of prototype-aware dual-channel graph convolutional neural network[J]. Journal of Computer Applications, 2024, 44(10): 3260-3266.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2023101557
| Dataset | Users | Friendship edges | Edge density | Cascades | Avg. cascade length | Cascade density |
|---|---|---|---|---|---|---|
| | 12 627 | 309 631 | 24.52 | 3 442 | 32.60 | 8.89 |
| Douban | 12 232 | 396 580 | 30.21 | 3 475 | 21.76 | 6.18 |
| Android | 9 958 | 48 573 | 4.87 | 679 | 33.30 | 2.27 |
| Christianity | 2 897 | 35 624 | 12.30 | 589 | 22.90 | 4.66 |

Tab. 1 Dataset statistics
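The two density columns in Tab. 1 can be cross-checked against the other columns. A minimal sketch, assuming (the excerpt does not define them) that edge density is friendship edges per user and cascade density is average cascade participations per user — both assumptions reproduce the printed values for the rows checked below:

```python
# Cross-checking Tab. 1 (density definitions are assumptions, see lead-in).

def edge_density(users: int, edges: int) -> float:
    """Average number of friendship edges per user."""
    return edges / users

def cascade_density(users: int, cascades: int, avg_len: float) -> float:
    """Average number of cascade participations per user."""
    return cascades * avg_len / users

# Android row of Tab. 1
print(round(edge_density(9_958, 48_573), 2))         # -> 4.88 (table prints 4.87)
print(round(cascade_density(9_958, 679, 33.30), 2))  # -> 2.27
# Christianity row
print(round(edge_density(2_897, 35_624), 2))         # -> 12.3
print(round(cascade_density(2_897, 589, 22.90), 2))  # -> 4.66
```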
| Model | Hits@10 | Hits@50 | Hits@100 | Douban Hits@10 | Douban Hits@50 | Douban Hits@100 | Android Hits@10 | Android Hits@50 | Android Hits@100 | Christianity Hits@10 | Christianity Hits@50 | Christianity Hits@100 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DeepDiffuse | 5.79 | 10.80 | 18.39 | 9.02 | 14.93 | 19.13 | 4.13 | 10.58 | 17.21 | 10.27 | 21.83 | 30.74 |
| Topo-LSTM | 8.45 | 15.80 | 25.42 | 8.57 | 16.53 | 21.47 | 4.56 | 12.63 | 16.53 | 12.28 | 22.63 | 31.52 |
| NDM | 15.21 | 28.23 | 32.30 | 10.00 | 21.13 | 30.14 | 4.85 | 14.24 | 18.97 | 15.41 | 31.36 | 45.86 |
| SNIDSA | 25.37 | 36.64 | 42.89 | 16.23 | 27.24 | 35.59 | 5.63 | 15.22 | 20.93 | 17.74 | 34.58 | 48.76 |
| FOREST | 28.67 | 42.07 | 49.75 | 19.50 | 32.03 | 39.08 | 9.68 | 17.73 | 24.08 | 24.85 | 42.01 | 51.28 |
| Inf-VAE | 14.85 | 32.72 | 45.72 | 8.94 | 22.02 | 35.72 | 5.98 | 14.70 | 20.91 | 18.38 | 38.50 | 51.05 |
| DyHGCN | 31.88 | 45.05 | 52.19 | 18.71 | 32.33 | 39.71 | 9.10 | 16.38 | 23.09 | 26.62 | 42.80 | 52.47 |
| MS-HGAT | 33.50 | 49.59 | 58.91 | 21.33 | 35.25 | 42.75 | 10.41 | 20.31 | 27.55 | 28.80 | 47.14 | 55.62 |
| PDGCN | 35.30* | 50.98* | 59.60* | 22.80* | 36.23* | 44.06* | 10.71# | 20.48# | 28.62* | 29.19# | 47.58# | 56.01# |

Tab. 2 Hits@k scores (k = 10, 50, 100) of models on four datasets
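Hits@k in Tab. 2 measures whether the ground-truth next user appears among the model's top-k ranked candidates. A minimal sketch of the metric (not the authors' code; the scoring model is left abstract):

```python
def hits_at_k(scores, target, k):
    """1 if the ground-truth next user `target` is ranked within the top-k
    candidates by predicted score, else 0; averaging this indicator over
    all test prediction steps gives the Hits@k reported in Tab. 2."""
    topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return int(target in topk)

# toy example: 5 candidate users, user 3 is the true next participant
scores = [0.10, 0.40, 0.20, 0.35, 0.05]
print(hits_at_k(scores, target=3, k=2))  # -> 1 (top-2 is users 1 and 3)
print(hits_at_k(scores, target=4, k=2))  # -> 0
```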
| Model | MAP@10 | MAP@50 | MAP@100 | Douban MAP@10 | Douban MAP@50 | Douban MAP@100 | Android MAP@10 | Android MAP@50 | Android MAP@100 | Christianity MAP@10 | Christianity MAP@50 | Christianity MAP@100 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DeepDiffuse | 5.87 | 6.80 | 6.39 | 6.02 | 6.93 | 7.13 | 2.30 | 2.53 | 2.56 | 7.27 | 7.83 | 7.84 |
| Topo-LSTM | 8.51 | 12.68 | 13.68 | 6.57 | 7.53 | 7.78 | 3.60 | 4.05 | 4.06 | 7.93 | 8.67 | 9.86 |
| NDM | 12.41 | 13.23 | 14.30 | 8.24 | 8.73 | 9.14 | 2.01 | 2.22 | 2.93 | 7.41 | 7.68 | 7.86 |
| SNIDSA | 15.34 | 16.64 | 16.89 | 10.02 | 11.24 | 11.59 | 2.98 | 3.24 | 3.97 | 8.69 | 8.94 | 9.72 |
| FOREST | 19.60 | 20.21 | 21.75 | 11.26 | 11.84 | 11.94 | 5.83 | 6.17 | 6.26 | 14.64 | 15.45 | 15.58 |
| Inf-VAE | 19.80 | 20.66 | 21.32 | 11.02 | 11.28 | 12.28 | 4.82 | 4.86 | 5.27 | 9.25 | 11.96 | 12.45 |
| DyHGCN | 20.87 | 21.48 | 21.58 | 10.61 | 11.26 | 11.36 | 6.09 | 6.40 | 6.50 | 15.64 | 16.30 | 16.44 |
| MS-HGAT | 22.49 | 23.17 | 23.30 | 11.72 | 12.52 | 12.60 | 6.39 | 6.87 | 6.96 | 17.44 | 18.27 | 18.40 |
| PDGCN | 23.64* | 24.35* | 24.47* | 12.13# | 12.76# | 12.87# | 6.47# | 6.89# | 7.01# | 17.75# | 18.54# | 18.68# |

Tab. 3 MAP@k scores (k = 10, 50, 100) of models on four datasets
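MAP@k in Tab. 3 additionally rewards ranking the true next user higher. With a single relevant user per prediction step, average precision reduces to the reciprocal rank, clipped at k; a sketch under that single-target assumption (not the authors' evaluation code):

```python
def ap_at_k(scores, target, k):
    """Average precision at k for one relevant user: 1/rank if `target`
    appears within the top-k ranked candidates, else 0."""
    ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    for rank, user in enumerate(ranking, start=1):
        if user == target:
            return 1.0 / rank
    return 0.0

def map_at_k(all_scores, targets, k):
    """MAP@k: mean of AP@k over all prediction steps in the test set."""
    return sum(ap_at_k(s, t, k) for s, t in zip(all_scores, targets)) / len(targets)

scores = [0.10, 0.40, 0.20, 0.35, 0.05]
print(ap_at_k(scores, target=3, k=10))  # -> 0.5 (target ranked 2nd)
print(ap_at_k(scores, target=4, k=2))   # -> 0.0 (outside the top-2)
```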
| Model | Hits@100 | MAP@100 | Android Hits@100 | Android MAP@100 |
|---|---|---|---|---|
| w/o friendship-forwarding graph | 57.92 | 22.87 | 27.77 | 6.96 |
| w/o hypergraph user representations | 58.82 | 23.66 | 27.83 | 6.88 |
| w/o prototype fusion | 59.15 | 24.01 | 28.01 | 6.81 |
| PDGCN | 59.60 | 24.47 | 28.62 | 7.01 |

Tab. 4 Results of ablation experiments
| Model | Total parameters | Inference time/s | Android Total parameters | Android Inference time/s |
|---|---|---|---|---|
| MS-HGAT | 42 901 456 | 8.0567 | 39 951 756 | 0.3778 |
| PDGCN | 167 277 559 | 8.4194 | 164 086 259 | 0.4151 |

Tab. 5 Computational efficiency of different models
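The inference times in Tab. 5 are wall-clock measurements. A generic way to obtain such figures (a sketch, not the authors' benchmarking code) is to average over repeated forward passes after discarding warm-up runs:

```python
import time

def mean_inference_time(forward, batches, warmup=2, repeats=5):
    """Average wall-clock seconds per batch; warm-up passes are excluded
    so one-off setup costs do not inflate the figure."""
    for batch in batches[:warmup]:
        forward(batch)
    start = time.perf_counter()
    for _ in range(repeats):
        for batch in batches:
            forward(batch)
    return (time.perf_counter() - start) / (repeats * len(batches))

# toy stand-in for a model's forward pass
t = mean_inference_time(lambda x: sum(i * i for i in range(x)), [1_000] * 4)
print(t > 0.0)  # -> True
```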
[1] WU L, RAO Y, ZHAO Y, et al. DTCA: decision tree-based co-attention networks for explainable claim verification[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 1024-1035.
[2] YANG C, WU Q, GAO X, et al. EPOC: a survival perspective early pattern detection model for outbreak cascades[C]// Proceedings of the 2018 International Conference on Database and Expert Systems Applications, LNCS 11029. Cham: Springer, 2018: 336-351.
[3] WU Y B, GAO H, ZENG W S, et al. Integrating user relation representations and information diffusion topology features for information propagation prediction[J/OL]. Journal of Frontiers of Computer Science and Technology, 1-13 (2023-11-07) [2023-12-13].
[4] GAO S, MA J, CHEN Z. Effective and effortless features for popularity prediction in microblogging network[C]// Proceedings of the 23rd International Conference on World Wide Web Companion. New York: ACM, 2014: 269-270.
[5] CHENG J, ADAMIC L, DOW P A, et al. Can cascades be predicted?[C]// Proceedings of the 23rd International Conference on World Wide Web Companion. New York: ACM, 2014: 925-936.
[6] ZHAO Q, ERDOGDU M A, HE H Y, et al. SEISMIC: a self-exciting point process model for predicting tweet popularity[C]// Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2015: 1513-1522.
[7] BAO P. Modeling and predicting popularity dynamics via an influence-based self-excited Hawkes process[C]// Proceedings of the 25th ACM International Conference on Information and Knowledge Management. New York: ACM, 2016: 1897-1900.
[8] ISLAM M R, MUTHIAH S, ADHIKARI B, et al. DeepDiffuse: predicting the "who" and "when" in cascades[C]// Proceedings of the 2018 IEEE International Conference on Data Mining. Piscataway: IEEE, 2018: 1055-1060.
[9] ZHOU J, CUI G, HU S, et al. Graph neural networks: a review of methods and applications[J]. AI Open, 2020, 1: 57-81.
[10] WANG J, ZHENG V W, LIU Z, et al. Topological recurrent neural network for diffusion prediction[C]// Proceedings of the 2017 IEEE International Conference on Data Mining. Piscataway: IEEE, 2017: 475-484.
[11] YANG C, SUN M, LIU H, et al. Neural diffusion model for microscopic cascade study[J]. IEEE Transactions on Knowledge and Data Engineering, 2021, 33(3): 1128-1139.
[12] WANG Z, CHEN C, LI W. A sequential neural information diffusion model with structure attention[C]// Proceedings of the 27th ACM International Conference on Information and Knowledge Management. New York: ACM, 2018: 1795-1798.
[13] YANG C, TANG J, SUN M, et al. Multi-scale information diffusion prediction with reinforced recurrent networks[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2019: 4033-4039.
[14] SANKAR A, ZHANG X, KRISHNAN A, et al. Inf-VAE: a variational autoencoder framework to integrate homophily and influence in diffusion prediction[C]// Proceedings of the 13th International Conference on Web Search and Data Mining. New York: ACM, 2020: 510-518.
[15] YUAN C, LI J, ZHOU W, et al. DyHGCN: a dynamic heterogeneous graph convolutional network to learn users' dynamic preferences for information diffusion prediction[C]// Proceedings of the 2020 European Conference on Machine Learning and Knowledge Discovery in Databases, LNCS 12459. Cham: Springer, 2021: 347-363.
[16] SUN L, RAO Y, ZHANG X, et al. MS-HGAT: memory-enhanced sequential hypergraph attention network for information diffusion prediction[C]// Proceedings of the 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 4156-4164.
[17] YADATI N, NIMISHAKAVI M, YADAV P, et al. HyperGCN: a new method for training graph convolutional networks on hypergraphs[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 1511-1522.
[18] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[EB/OL]. (2017-02-22) [2023-05-13].
[19] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[EB/OL]. (2018-02-04) [2023-05-13].
[20] HAMILTON W L, YING R, LESKOVEC J. Inductive representation learning on large graphs[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 1025-1035.
[21] ZHU Y, XU Y, YU F, et al. Graph contrastive learning with adaptive augmentation[C]// Proceedings of the Web Conference 2021. New York: ACM, 2021: 2069-2080.
[22] LI J, ZHOU P, XIONG C, et al. Prototypical contrastive learning of unsupervised representations[EB/OL]. (2021-03-30) [2023-05-13].
[23] TAN Y, LONG G, LIU L, et al. FedProto: federated prototype learning across heterogeneous clients[C]// Proceedings of the 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 8432-8440.
[24] DU Y, LYU L F, JIAO Y C. Fuzzy prototype network based on fuzzy reasoning[J]. Journal of Computer Applications, 2021, 41(7): 1885-1890.
[25] HODAS N O, LERMAN K. The simple rules of social contagion[J]. Scientific Reports, 2014, 4: No.4343.
[26] ZHONG E, FAN W, WANG J, et al. ComSoc: adaptive transfer of user behaviors over composite social network[C]// Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2012: 696-704.