Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (8): 2409-2420. DOI: 10.11772/j.issn.1001-9081.2024081140
• National Open Distributed and Parallel Computing Conference 2024 (DPCS 2024) •
Yinchuan TU, Yong GUO, Heng MAO, Yi REN, Jianfeng ZHANG, Bao LI
Received: 2024-08-14
Revised: 2024-09-14
Accepted: 2024-09-23
Online: 2024-09-25
Published: 2025-08-10
Contact: Yi REN
About author: TU Yinchuan, born in 1996 in Wuhan, Hubei, M. S. candidate. His research interests include graph neural networks, cloud computing, and distributed machine learning.
Yinchuan TU, Yong GUO, Heng MAO, Yi REN, Jianfeng ZHANG, Bao LI. Evaluation of training efficiency and training performance of graph neural network models based on distributed environment[J]. Journal of Computer Applications, 2025, 45(8): 2409-2420.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2024081140
Dataset | Nodes | Edges | Average degree | Node feature dimension | Classes | Application domain
---|---|---|---|---|---|---
CoraFull | 19 793 | 126 842 | 6.41 | 8 710 | 70 | Citation network
ogbn-arxiv | 169 343 | 1 166 243 | 6.89 | 128 | 40 | Citation network
Reddit | 232 965 | 114 615 892 | 491.99 | 602 | 41 | Social network
ogbn-products | 2 449 029 | 61 859 140 | 25.26 | 100 | 47 | Recommender system
Tab. 1 Datasets selected in evaluation
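As a quick sanity check (illustrative Python, not part of the paper), the average-degree column of Tab. 1 can be reproduced from the node and edge counts, since the table uses degree = edges / nodes:

```python
# Recompute the average degrees of Tab. 1 from node and edge counts.
datasets = {
    "CoraFull": (19_793, 126_842),
    "ogbn-arxiv": (169_343, 1_166_243),
    "Reddit": (232_965, 114_615_892),
    "ogbn-products": (2_449_029, 61_859_140),
}
# degree = edges / nodes, rounded to two decimals as in the table
avg_degree = {name: round(e / n, 2) for name, (n, e) in datasets.items()}
```

The recomputed values match the table's column (e.g. 6.41 for CoraFull and 491.99 for Reddit), confirming the statistics are mutually consistent.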
Model | Core idea | Parameter sharing | Computational efficiency | Flexibility | Over-smoothing resistance
---|---|---|---|---|---
GCN | Graph convolution | Yes | High | Low | Low
GAT | Attention | No | Medium | High | High
GraphSAGE | Sampling & aggregation | Yes | High | High | Medium
Tab. 2 Models selected in evaluation
Model | Aggregation | Learning rate | Activation function | Dropout
---|---|---|---|---
GCN | Sum | 0.003 | ReLU | 0.5
GAT | Attention mechanism | 0.003 | ReLU | 0.5
GraphSAGE | Mean | 0.003 | ReLU | 0.5
Tab. 3 Model parameter setting
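To make the aggregation column of Tab. 3 concrete, a toy illustration (plain Python on scalar features, standing in for the DGL implementations used in the paper) of sum aggregation (GCN) versus mean aggregation (GraphSAGE) over a small adjacency list:

```python
# Toy graph: neighbor lists and scalar node features.
adj = {0: [1, 2], 1: [0], 2: [0, 1]}
feat = {0: 1.0, 1: 3.0, 2: 5.0}

# GCN-style sum aggregation: add up neighbor features.
sum_agg = {v: sum(feat[u] for u in nbrs) for v, nbrs in adj.items()}

# GraphSAGE-style mean aggregation: average neighbor features,
# so high-degree nodes do not dominate the magnitude.
mean_agg = {v: sum(feat[u] for u in nbrs) / len(nbrs)
            for v, nbrs in adj.items()}
```

For node 0 (neighbors 1 and 2), the sum aggregate is 8.0 while the mean aggregate is 4.0; the normalization by degree is what distinguishes the two rows of the table.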
Dataset | Hidden layer dimension | Dataset | Hidden layer dimension
---|---|---|---
CoraFull | 64 | Reddit | 512
ogbn-arxiv | 256 | ogbn-products | 1 024
Tab. 4 Hidden layer dimensions for datasets
Dataset | Partitions | Feature | Partition 1 | Partition 2 | Partition 3 | Partition 4 | Partition 5 | Partition 6 | Partition 7 | Partition 8
---|---|---|---|---|---|---|---|---|---|---
CoraFull | 4 | Nodes | 6 245 | 7 054 | 6 580 | 6 056 | — | — | — | —
 | | | 5 095 | 5 095 | 5 095 | 4 584 | — | — | — | —
 | | Edges | 33 962 | 35 316 | 33 994 | 35 482 | — | — | — | —
 | | | 31 797 | 31 656 | 30 933 | 32 456 | — | — | — | —
 | 8 | Nodes | 3 548 | 3 979 | 3 896 | 3 561 | 3 776 | 3 576 | 4 031 | 3 681
 | | | 2 545 | 2 546 | 2 396 | 2 161 | 2 530 | 2 539 | 2 543 | 2 533
 | | Edges | 17 482 | 18 838 | 19 334 | 17 852 | 17 304 | 18 008 | 18 454 | 18 182
 | | | 15 783 | 16 305 | 16 216 | 14 874 | 15 312 | 16 184 | 16 040 | 16 128
ogbn-arxiv | 4 | Nodes | 72 166 | 73 705 | 79 222 | 69 758 | — | — | — | —
 | | | 42 151 | 43 591 | 42 842 | 40 759 | — | — | — | —
 | | Edges | 349 845 | 386 028 | 408 835 | 410 273 | — | — | — | —
 | | | 304 982 | 343 654 | 343 477 | 343 473 | — | — | — | —
 | 8 | Nodes | 50 225 | 49 622 | 45 405 | 44 941 | 45 008 | 50 709 | 49 064 | 45 018
 | | | 21 777 | 21 801 | 19 599 | 21 449 | 21 776 | 20 912 | 20 229 | 21 800
 | | Edges | 215 378 | 212 008 | 223 712 | 213 756 | 186 270 | 208 424 | 201 564 | 192 607
 | | | 171 879 | 171 701 | 171 928 | 171 784 | 153 345 | 171 954 | 159 784 | 163 211
Reddit | 4 | Nodes | 203 689 | 167 999 | 175 006 | 168 965 | — | — | — | —
 | | | 53 928 | 59 455 | 59 570 | 60 012 | — | — | — | —
 | | Edges | 34 882 726 | 33 785 415 | 33 495 268 | 33 354 912 | — | — | — | —
 | | | 29 214 769 | 28 417 939 | 28 841 090 | 28 375 059 | — | — | — | —
 | 8 | Nodes | 140 091 | 124 022 | 178 234 | 141 059 | 121 485 | 166 820 | 117 723 | 150 886
 | | | 30 030 | 25 490 | 30 035 | 29 887 | 29 800 | 29 914 | 29 416 | 28 393
 | | Edges | 17 578 366 | 19 521 642 | 20 797 883 | 19 818 501 | 16 708 704 | 16 124 892 | 16 446 438 | 19 275 925
 | | | 13 865 509 | 14 724 005 | 14 803 007 | 14 736 263 | 14 762 642 | 12 743 840 | 14 412 564 | 14 801 027
ogbn-products | 4 | Nodes | 1 073 131 | 992 338 | 1 012 637 | 972 605 | — | — | — | —
 | | | 610 657 | 615 415 | 592 322 | 630 635 | — | — | — | —
 | | Edges | 35 653 651 | 34 895 467 | 31 336 778 | 34 031 337 | — | — | — | —
 | | | 32 481 110 | 32 374 997 | 29 612 148 | 31 698 798 | — | — | — | —
 | 8 | Nodes | 566 183 | 553 209 | 578 443 | 61 531 | 526 883 | 606 943 | 628 995 | 609 695
 | | | 315 327 | 309 400 | 310 846 | 261 228 | 315 322 | 314 885 | 307 820 | 314 201
 | | Edges | 16 941 573 | 16 402 542 | 17 432 828 | 17 087 742 | 17 007 390 | 17 469 477 | 17 848 086 | 17 167 751
 | | | 15 681 661 | 15 388 305 | 15 815 285 | 15 341 841 | 16 084 289 | 15 962 365 | 16 113 958 | 15 779 349
Tab. 5 Results of data partition
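One property visible in Tab. 5 (an illustrative check, not the paper's code): for CoraFull, the second row of edge counts under the 4-partition setting sums exactly to the graph's 126 842 edges, i.e. every edge is owned by exactly one partition, and the ownership is well balanced:

```python
# 4-way CoraFull partition, second row of edge counts from Tab. 5.
inner_edges = [31_797, 31_656, 30_933, 32_456]

# These partition-local counts cover the whole graph exactly once.
assert sum(inner_edges) == 126_842  # |E| of CoraFull, Tab. 1

# Load imbalance: largest partition relative to the ideal even split.
imbalance = max(inner_edges) / (sum(inner_edges) / len(inner_edges))
```

Here the imbalance is only about 2%, which is the kind of near-even split a METIS-style partitioner targets.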
Dataset | Model | Total time/s | Sampling & copy time/s | Sampling & copy /% | Forward time/s | Forward /% | Backward time/s | Backward /% | Update time/s | Update /%
---|---|---|---|---|---|---|---|---|---|---
CoraFull | GCN | 26.84 | 5.98 | 22.28 | 4.69 | 17.47 | 1.50 | 5.59 | 1.03 | 3.84
CoraFull | GAT | 125.47 | 19.12 | 15.24 | 18.81 | 14.99 | 10.35 | 8.25 | 2.93 | 2.34
CoraFull | GraphSAGE | 178.64 | 23.29 | 13.04 | 19.11 | 10.70 | 10.23 | 5.73 | 2.06 | 1.15
ogbn-arxiv | GCN | 44.98 | 12.18 | 27.08 | 8.68 | 19.30 | 2.62 | 5.82 | 1.41 | 3.13
ogbn-arxiv | GAT | 186.95 | 40.35 | 21.58 | 33.54 | 17.94 | 16.59 | 8.87 | 3.06 | 1.64
ogbn-arxiv | GraphSAGE | 256.18 | 63.97 | 24.97 | 38.65 | 15.09 | 15.83 | 6.18 | 2.48 | 0.97
Reddit | GCN | 372.26 | 34.47 | 9.26 | 20.20 | 5.43 | 5.98 | 1.61 | 1.43 | 0.38
Reddit | GAT | 569.16 | 35.93 | 6.31 | 31.45 | 5.53 | 13.02 | 2.29 | 2.51 | 0.44
Reddit | GraphSAGE | 811.44 | 45.84 | 5.65 | 27.05 | 3.33 | 11.56 | 1.42 | 2.34 | 0.29
ogbn-products | GCN | 295.20 | 58.39 | 19.78 | 22.82 | 7.73 | 7.34 | 2.49 | 1.69 | 0.57
ogbn-products | GAT | 438.68 | 65.00 | 14.82 | 52.34 | 11.93 | 23.63 | 5.39 | 3.68 | 0.84
ogbn-products | GraphSAGE | 599.61 | 76.82 | 12.81 | 54.42 | 9.08 | 24.84 | 4.14 | 3.27 | 0.55
Tab. 6 Time consumption and decomposition for 10 epochs in model training with 4 computing nodes
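A breakdown like Tab. 6 can be collected by wrapping each training phase in a timer and accumulating over epochs. The sketch below is illustrative only: dummy workloads stand in for the real phases, and on GPU one would synchronize the device before each clock reading so that asynchronous kernels are charged to the right phase:

```python
import time
from collections import defaultdict

phase_time = defaultdict(float)

def timed(phase, fn, *args):
    """Run fn, charging its wall time to the named phase."""
    t0 = time.perf_counter()
    out = fn(*args)
    phase_time[phase] += time.perf_counter() - t0
    return out

for _ in range(10):  # 10 epochs, as in Tab. 6
    timed("sampling+copy", lambda: sum(range(1000)))  # dummy workload
    timed("forward", lambda: sum(range(1000)))
    timed("backward", lambda: sum(range(1000)))
    timed("update", lambda: sum(range(1000)))

total = sum(phase_time.values())
# Per-phase share of total time, as percentages (cf. the /% columns).
shares = {p: round(100 * t / total, 2) for p, t in phase_time.items()}
```

Note that in Tab. 6 the four listed phases do not sum to the total time; the remainder covers untimed work such as communication and synchronization between computing nodes.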
Model | CoraFull | ogbn-arxiv | Reddit | ogbn-products
---|---|---|---|---
GCN | 38.83 | 8.19 | 15.69 | 14.26
GAT | 9.82 | 3.01 | 8.39 | 5.67
GraphSAGE | 5.92 | 1.50 | 5.46 | 4.87
Tab. 7 NATR when training with 4 computing nodes
[1] YING R, HE R, CHEN K, et al. Graph convolutional neural networks for web-scale recommender systems[C]// Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2018: 974-983.
[2] VATTER J, MAYER R, JACOBSEN H A. The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: a survey[J]. ACM Computing Surveys, 2024, 56(1): No.6.
[3] SHAO Y, LI H, GU X, et al. Distributed graph neural network training: a survey[J]. ACM Computing Surveys, 2024, 56(8): No.191.
[4] DWIVEDI V P, JOSHI C K, LUU A T, et al. Benchmarking graph neural networks[J]. Journal of Machine Learning Research, 2023, 23: 1-48.
[5] WEI J, ZHANG X J, WANG L X, et al. MC2 energy consumption model for massively distributed data parallel training of deep neural network[J]. Journal of Computer Research and Development, 2024, 61(12): 2985-3004.
[6] WU J, SUN J, SUN H, et al. Performance analysis of graph neural network frameworks[C]// Proceedings of the 2021 IEEE International Symposium on Performance Analysis of Systems and Software. Piscataway: IEEE, 2021: 118-127.
[7] ZHANG L, LU K, LAI Z, et al. Accelerating GNN training by adapting large graphs to distributed heterogeneous architectures[J]. IEEE Transactions on Computers, 2023, 72(12): 3473-3488.
[8] ZHANG L, LAI Z, TANG Y, et al. PCGraph: accelerating GNN inference on large graphs via partition caching[C]// Proceedings of the 19th IEEE International Symposium on Parallel and Distributed Processing with Applications/ 11th IEEE International Conference on Big Data and Cloud Computing/ 14th IEEE International Conference on Social Computing and Networking/ 11th IEEE International Conference on Sustainable Computing and Communications. Piscataway: IEEE, 2021: 279-287.
[9] LIN H, YAN M, YE X, et al. A comprehensive survey on distributed training of graph neural networks[J]. Proceedings of the IEEE, 2023, 111(12): 1572-1606.
[10] MA L, YANG Z, MIAO Y, et al. NeuGraph: parallel deep neural network computation on large graphs[C]// Proceedings of the 2019 USENIX Annual Technical Conference. Berkeley: USENIX Association, 2019: 443-457.
[11] CAI Z, ZHOU Q, YAN X, et al. DSP: efficient GNN training with multiple GPUs[C]// Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming. New York: ACM, 2023: 392-404.
[12] WU X, SHI L, HE L, et al. TurboGNN: improving the end-to-end performance for sampling-based GNN training on GPUs[J]. IEEE Transactions on Computers, 2023, 72(9): 2571-2584.
[13] WU W, SHI X, HE L, et al. TurboMGNN: improving concurrent GNN training tasks on GPU with fine-grained kernel fusion[J]. IEEE Transactions on Parallel and Distributed Systems, 2023, 34(6): 1968-1981.
[14] SUN J, SU L, SHI Z, et al. Legion: automatically pushing the envelope of multi-GPU system for billion-scale GNN training[C]// Proceedings of the 2023 USENIX Annual Technical Conference. Berkeley: USENIX Association, 2023: 165-179.
[15] WAN B, ZHAO J, WU C. Adaptive message quantization and parallelization for distributed full-graph GNN training[EB/OL]. [2024-07-15].
[16] LIU H, LU S, CHEN X, et al. G3: when Graph neural networks meet parallel Graph processing systems on GPUs[J]. Proceedings of the VLDB Endowment, 2020, 13(12): 2813-2816.
[17] KALER T, ILIOPOULOS A S, MURZYNOWSKI P, et al. Communication-efficient graph neural networks with probabilistic neighborhood expansion analysis and caching[EB/OL]. [2024-07-15].
[18] TEKDOĞAN T, GOKTAŞ S, YILMAZER-METIN A. gSuite: a flexible and framework independent benchmark suite for graph neural network inference on GPUs[C]// Proceedings of the 2022 IEEE International Symposium on Workload Characterization. Piscataway: IEEE, 2022: 146-159.
[19] CHEN T, ZHOU K, DUAN K, et al. Bag of tricks for training deeper graph neural networks: a comprehensive benchmark study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 2769-2781.
[20] ZHAO G, WANG Q G, YAO F, et al. Survey on large-scale graph neural network systems[J]. Journal of Software, 2022, 33(1): 150-170.
[21] BARUAH T, SHIVDIKAR K, DONG S, et al. GNNMark: a benchmark suite to characterize graph neural network training on GPUs[C]// Proceedings of the 2021 IEEE International Symposium on Performance Analysis of Systems and Software. Piscataway: IEEE, 2021: 13-23.
[22] LV Q, DING M, LIU Q, et al. Are we really making much progress? Revisiting, benchmarking and refining heterogeneous graph neural networks[C]// Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2021: 1150-1160.
[23] WU F, DE SOUZA A H, Jr, ZHANG T, et al. Simplifying graph convolutional networks[C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 6861-6871.
[24] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[EB/OL]. [2024-07-15].
[25] GILMER J, SCHOENHOLZ S S, RILEY P F, et al. Neural message passing for quantum chemistry[C]// Proceedings of the 34th International Conference on Machine Learning. New York: JMLR.org, 2017: 1263-1272.
[26] DEARING M T, WANG X. Analyzing the performance of graph neural networks with pipe parallelism[EB/OL]. [2024-07-15].
[27] LIN H, YAN M, YANG X, et al. Characterizing and understanding distributed GNN training on GPUs[J]. IEEE Computer Architecture Letters, 2022, 21(1): 21-24.
[28] WANG Z, WANG Y, YUAN C, et al. Empirical analysis of performance bottlenecks in graph neural network training and inference with GPUs[J]. Neurocomputing, 2021, 446: 165-191.
[29] SHCHUR O, MUMME M, BOJCHEVSKI A, et al. Pitfalls of graph neural network evaluation[EB/OL]. [2024-07-15].
[30] HU W, FEY M, ZITNIK M, et al. Open graph benchmark: datasets for machine learning on graphs[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 22118-22133.
[31] NVIDIA Developer. NVIDIA Visual Profiler[EB/OL]. [2024-07-15].
[32] VILLA O, STEPHENSON M, NELLANS D, et al. NVBit: a dynamic binary instrumentation framework for NVIDIA GPUs[C]// Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture. New York: ACM, 2019: 372-383.
[33] ZHENG D, MA C, WANG M, et al. DistDGL: distributed graph neural network training for billion-scale graphs[C]// Proceedings of the IEEE/ACM 10th Workshop on Irregular Applications: Architectures and Algorithms. Piscataway: IEEE, 2020: 36-44.
[34] KETKAR R, LIU Y, WANG H, et al. A benchmark study of graph models for molecular acute toxicity prediction[J]. International Journal of Molecular Sciences, 2023, 24(15): No.11966.
[35] FEY M, LENSSEN J E. Fast graph representation learning with PyTorch Geometric[EB/OL]. [2024-07-15].
[36] FUNG V, ZHANG J, JUAREZ E, et al. Benchmarking graph neural networks for materials chemistry[J]. npj Computational Materials, 2021, 7: No.84.
[37] LI F, FENG J, YAN H, et al. Dynamic graph convolutional recurrent network for traffic prediction: benchmark and solution[J]. ACM Transactions on Knowledge Discovery from Data, 2023, 17(1): No.9.
[38] CoraFullDataset — DGL 2.2.1 documentation[DS/OL]. [2024-07-15].
[39] RedditDataset — DGL 2.2.1 documentation[DS/OL]. [2024-07-15].
[40] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[EB/OL]. [2024-07-15].
[41] HAMILTON W L, YING R, LESKOVEC J. Inductive representation learning on large graphs[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 1025-1035.
[42] PASZKE A, GROSS S, MASSA F, et al. PyTorch: an imperative style, high-performance deep learning library[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 8026-8037.
[43] WANG M, ZHENG D, YE Z, et al. Deep graph library: a graph-centric, highly-performant package for graph neural networks[EB/OL]. [2024-07-15].
[44] ABADI M, AGARWAL A, BARHAM P, et al. TensorFlow: large-scale machine learning on heterogeneous distributed systems[EB/OL]. [2024-07-15].
[45] CHEN T, LI M, LI Y, et al. MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems[EB/OL]. [2024-07-15].