Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (11): 3529-3539. DOI: 10.11772/j.issn.1001-9081.2024111657
• Artificial Intelligence •
Jinghua ZHAO1, Zhu ZHANG1, Xiting LYU1, Huidan LIN2
Received: 2024-11-22
Revised: 2025-03-10
Accepted: 2025-03-18
Online: 2025-04-02
Published: 2025-11-10
Contact: Huidan LIN
About author: ZHAO Jinghua, born in 1984, Ph. D., associate professor. Her research interests include popularity prediction and interactive innovation.
Abstract: Existing multiscale information diffusion prediction models ignore the dynamics of cascade propagation, and their performance leaves room for improvement when microscopic prediction is performed in isolation. To address these problems, a Multiscale Information Diffusion prediction model based on HyperGraph Neural Network (MIDHGNN) was proposed. First, a Graph Convolutional Network (GCN) was used to extract user social-relation features from the social network graph, a HyperGraph Neural Network (HGNN) was used to extract users' global preference features from the diffusion cascade graph, and these two types of features were fused for microscopic diffusion prediction. Second, a Gated Recurrent Unit (GRU) was used to predict diffusion users successively until a virtual user was generated. Third, the total number of users obtained in each prediction was taken as the final cascade size, completing macroscopic diffusion prediction. Finally, a Reinforcement Learning (RL) framework was embedded in the model, and the parameters were optimized with a policy-gradient method to improve macroscopic prediction performance. In microscopic diffusion prediction, compared with the suboptimal model, MIDHGNN improves the Hits@k metric on the Twitter, Douban, and Android datasets by 12.01%, 11.64%, and 9.74% on average, respectively, and the mAP@k metric by 31.31%, 14.85%, and 13.24% on average; in macroscopic prediction, MIDHGNN reduces the Mean Squared Logarithmic Error (MSLE) on the three datasets by at least 8.10%, 12.61%, and 3.24%, respectively. All metrics are significantly better than those of the comparison models, verifying the effectiveness of MIDHGNN.
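To make the pipeline described in the abstract concrete, the following PyTorch-style sketch covers only the micro-scale path: a one-layer GCN over the social graph and a one-layer hypergraph convolution over the cascade hypergraph produce two user representations, which are fused and fed to a GRU that scores the next diffusion user, with an extra "virtual user" index serving as the stop symbol. This is a minimal illustration under assumed shapes and layer choices (single layers, tanh fusion, class names such as `MicroDiffusionSketch`), not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    """One GCN layer over the social graph: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_norm):                    # a_norm: (N, N) normalized adjacency
        return F.relu(a_norm @ self.linear(x))

class SimpleHGNNLayer(nn.Module):
    """Simplified hypergraph convolution: H' = ReLU(Dv^-1 B De^-1 B^T X W),
    where B is the node-hyperedge incidence matrix and each cascade is one
    hyperedge connecting the users it infected."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, inc):                       # inc: (N, E) incidence matrix
        dv = inc.sum(1).clamp(min=1.0)               # node degrees
        de = inc.sum(0).clamp(min=1.0)               # hyperedge degrees
        msg = inc @ ((inc.t() @ self.linear(x)) / de.unsqueeze(1))
        return F.relu(msg / dv.unsqueeze(1))

class MicroDiffusionSketch(nn.Module):
    """Fuse social-relation features (GCN) and global-preference features (HGNN),
    then score the next user of a cascade with a GRU decoder. Index n_users acts
    as the 'virtual user' that terminates a cascade."""
    def __init__(self, n_users, dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_users + 1, dim)  # +1 for the virtual user
        self.gcn = SimpleGCNLayer(dim, dim)
        self.hgnn = SimpleHGNNLayer(dim, dim)
        self.fuse = nn.Linear(2 * dim, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_users + 1)

    def forward(self, cascade, a_norm, inc):
        # cascade: (B, T) indices of already-infected real users, in diffusion order
        base = self.embed.weight[:-1]                # embeddings of real users only
        social = self.gcn(base, a_norm)              # (N, dim) social-relation features
        prefer = self.hgnn(base, inc)                # (N, dim) global-preference features
        fused = torch.tanh(self.fuse(torch.cat([social, prefer], dim=-1)))
        seq = fused[cascade]                         # (B, T, dim) cascade as a sequence
        hidden, _ = self.gru(seq)
        return self.out(hidden)                      # (B, T, n_users + 1) next-user logits
```

Macro-scale prediction then amounts to rolling the decoder forward until the virtual user is emitted and counting the users generated; per the abstract, that rollout is further treated as a policy and tuned with a policy-gradient method (see the sketch after Tab. 5).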
Jinghua ZHAO, Zhu ZHANG, Xiting LYU, Huidan LIN. Multiscale information diffusion prediction model based on hypergraph neural network[J]. Journal of Computer Applications, 2025, 45(11): 3529-3539.
Tab. 1 Statistical details of experimental datasets

| Dataset | #Users | #Links | #Cascades | Avg. Length |
|---|---|---|---|---|
| Twitter | 12 627 | 309 631 | 3 442 | 32.60 |
| Douban | 23 123 | 348 280 | 10 602 | 27.14 |
| Android | 9 958 | 48 573 | 679 | 33.30 |
Tab. 2 Impact of time window length on model performance and training efficiency

| Time window length | Hits@10 | mAP@10 | MSLE | Training time/h |
|---|---|---|---|---|
| 6 | 0.302 | 0.217 | 0.923 | 8.2 |
| 12 | 0.308 | 0.224 | 0.905 | 6.3 |
| 24 | 0.318 | 0.224 | 0.896 | 4.5 |
| 48 | 0.311 | 0.221 | 0.912 | 3.9 |
Tab. 3 Parameter settings

| Parameter | Value | Parameter | Value |
|---|---|---|---|
| Batch Size | 16 | d_model | 64 |
| d_pos | 8 | warmup epochs | 10 |
| Num Epoch | 50 | embed_dim | 64 |
| Dropout Rate | 0.1 | k | 5 |
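For orientation, the hyperparameters in Tab. 3 can be collected into a small configuration object such as the illustrative sketch below; any comment that goes beyond what the table states (for example, what `k` controls) is an assumption.

```python
from dataclasses import dataclass

@dataclass
class MIDHGNNConfig:
    # Values taken from Tab. 3; optimizer, learning-rate schedule, and other
    # training details are not specified on this page.
    batch_size: int = 16
    d_model: int = 64        # model/attention hidden size listed as d_model
    d_pos: int = 8           # positional embedding dimension
    warmup_epochs: int = 10
    num_epochs: int = 50
    embed_dim: int = 64      # user node embedding dimension
    dropout_rate: float = 0.1
    k: int = 5               # listed as k = 5; its exact role is not stated here

config = MIDHGNNConfig()
```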
Tab. 4 Experimental results of micro-scale prediction

| Dataset | Model | Hits@10 | Hits@50 | Hits@100 | mAP@10 | mAP@50 | mAP@100 |
|---|---|---|---|---|---|---|---|
| Twitter | NDM | 0.215 2 | 0.322 3 | 0.383 1 | 0.143 0 | 0.148 0 | 0.148 9 |
| | SNIDSA | 0.233 7 | 0.354 6 | 0.434 9 | 0.148 4 | 0.154 0 | 0.155 1 |
| | GraphSAGE | 0.257 7 | 0.337 8 | 0.453 7 | 0.156 3 | 0.166 5 | 0.170 2 |
| | FOREST | 0.261 8 | 0.409 5 | 0.503 9 | 0.172 1 | 0.178 8 | 0.180 2 |
| | TGAT | 0.248 0 | 0.360 7 | 0.478 7 | 0.164 0 | 0.170 4 | 0.178 0 |
| | MIDHGNN | 0.318 4 | 0.443 7 | 0.534 5 | 0.227 9 | 0.233 1 | 0.236 3 |
| Douban | NDM | 0.103 1 | 0.188 7 | 0.240 2 | 0.055 4 | 0.059 3 | 0.060 |
| | SNIDSA | 0.118 1 | 0.219 1 | 0.283 7 | 0.063 6 | 0.068 1 | 0.069 1 |
| | GraphSAGE | 0.123 3 | 0.190 3 | 0.223 9 | 0.053 3 | 0.057 2 | 0.060 3 |
| | FOREST | 0.141 6 | 0.247 9 | 0.312 5 | 0.078 9 | 0.083 8 | 0.084 7 |
| | TGAT | 0.137 8 | 0.184 8 | 0.265 3 | 0.067 2 | 0.071 8 | 0.076 3 |
| | MIDHGNN | 0.160 2 | 0.275 8 | 0.345 4 | 0.091 1 | 0.096 0 | 0.097 0 |
| Android | NDM | 0.017 0 | 0.042 3 | 0.055 5 | 0.005 9 | 0.007 0 | 0.007 2 |
| | GraphSAGE | 0.065 5 | 0.094 3 | 0.118 2 | 0.052 2 | 0.054 1 | 0.057 3 |
| | FOREST | 0.086 6 | 0.173 9 | 0.231 4 | 0.062 8 | 0.066 7 | 0.067 5 |
| | TGAT | 0.071 2 | 0.166 2 | 0.196 4 | 0.061 1 | 0.063 2 | 0.066 1 |
| | MIDHGNN | 0.096 9 | 0.189 6 | 0.250 6 | 0.072 3 | 0.075 3 | 0.075 4 |
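As a reading aid for Tab. 4, the snippet below computes Hits@k and mAP@k under the commonly used next-infected-user protocol (each prediction step ranks all candidate users against a single ground-truth next user, so average precision reduces to a truncated reciprocal rank). The paper's exact evaluation script may differ in details such as tie handling.

```python
import numpy as np

def hits_and_map_at_k(scores, targets, k=10):
    """scores: (num_steps, num_users) ranking scores for each prediction step;
    targets: (num_steps,) index of the true next user at each step.
    Returns (Hits@k, mAP@k). With a single relevant item per step,
    average precision reduces to the reciprocal rank, truncated at k."""
    scores = np.asarray(scores, dtype=float)
    targets = np.asarray(targets, dtype=int)
    # Rank of the true user = 1 + number of users scored strictly higher.
    true_scores = scores[np.arange(len(targets)), targets]
    ranks = 1 + (scores > true_scores[:, None]).sum(axis=1)
    hit = ranks <= k
    hits_at_k = hit.mean()
    map_at_k = np.where(hit, 1.0 / ranks, 0.0).mean()
    return hits_at_k, map_at_k
```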
Tab. 5 Experimental results of macro-scale prediction

| Dataset | Model | MSLE |
|---|---|---|
| Twitter | DeepCas | 2.261 |
| | DeepHawkes | 2.411 |
| | FOREST | 0.975 |
| | TGAT | 1.164 |
| | MIDHGNN | 0.896 |
| Douban | DeepCas | 2.122 |
| | DeepHawkes | 1.725 |
| | FOREST | 0.825 |
| | TGAT | 0.927 |
| | MIDHGNN | 0.721 |
| Android | DeepCas | 2.122 |
| | DeepHawkes | 1.971 |
| | FOREST | 0.556 |
| | TGAT | 0.741 |
| | MIDHGNN | 0.538 |
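For the macro-scale results in Tab. 5, MSLE is typically computed on log-transformed cascade sizes, and the abstract states that a policy-gradient method optimizes the macro objective. The sketch below pairs a standard MSLE with a REINFORCE-style surrogate loss whose reward is the negative squared log-error of the sampled cascade size; that particular reward shaping and the mean baseline are assumptions, not necessarily the paper's exact formulation.

```python
import torch

def msle(pred_sizes, true_sizes):
    """Mean squared logarithmic error between predicted and true cascade sizes."""
    pred = torch.as_tensor(pred_sizes, dtype=torch.float32)
    true = torch.as_tensor(true_sizes, dtype=torch.float32)
    return ((torch.log1p(pred) - torch.log1p(true)) ** 2).mean()

def policy_gradient_loss(log_probs, sampled_sizes, true_sizes):
    """REINFORCE-style surrogate loss. log_probs: (B,) sum of log-probabilities of
    the user choices made while rolling the decoder out to a full sampled cascade;
    the reward is the negative squared log-error of the resulting cascade size
    (an assumed shaping), with the batch mean as a simple variance-reduction baseline."""
    sampled = torch.as_tensor(sampled_sizes, dtype=torch.float32)
    true = torch.as_tensor(true_sizes, dtype=torch.float32)
    reward = -(torch.log1p(sampled) - torch.log1p(true)) ** 2
    baseline = reward.mean()
    return -((reward - baseline).detach() * log_probs).mean()
```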
Tab. 6 Ablation experiment results on Twitter dataset

| Model | Hits@100 | mAP@100 | MSLE |
|---|---|---|---|
| -GCN | 0.515 4 | 0.220 2 | 0.935 |
| -HGNN | 0.503 2 | 0.214 0 | 1.021 |
| -RL | 0.520 2 | 0.223 0 | — |
| -HGNN+RL | 0.501 1 | 0.207 4 | — |
| MIDHGNN | 0.534 5 | 0.236 3 | 0.896 |
Tab. 7 Impact of user node embedding dimension on macro-scale prediction metric

| User node embedding dimension | MSLE | User node embedding dimension | MSLE |
|---|---|---|---|
| 16 | 1.046 | 64 | 0.896 |
| 32 | 0.967 | 128 | 0.986 |
Tab. 8 Impact of position embedding dimension on macro-scale prediction metric

| Position embedding dimension | MSLE | Position embedding dimension | MSLE |
|---|---|---|---|
| 2 | 1.134 | 16 | 1.037 |
| 4 | 1.083 | 32 | 1.042 |
| 8 | 0.896 | | |
Tab. 9 Impact of number of GRU layers on macro-scale prediction metric

| Number of GRU layers | MSLE | Number of GRU layers | MSLE |
|---|---|---|---|
| 1 | 0.921 | 3 | 0.912 |
| 2 | 0.896 | 4 | 0.934 |
Tab. 10 Impact of number of attention heads on macro-scale prediction metric

| Number of attention heads | MSLE | Number of attention heads | MSLE |
|---|---|---|---|
| 2 | 0.945 | 10 | 0.901 |
| 4 | 0.091 | 12 | 0.899 |
| 6 | 0.903 | 14 | 0.904 |
| 8 | 0.896 | | |
[1] WANG Z Y, ZHU X F. Research on fake news detection based on multimodal Transformer[J]. Journal of the China Society for Scientific and Technical Information, 2023, 42(12): 1477-1486.
[2] LI Q, XIE Y, WU X, et al. User behavior prediction model based on implicit links and multi-type rumor messages[J]. Knowledge-Based Systems, 2023, 262: No.110276.
[3] WANG J, HU Y, JIANG T X, et al. Essential tensor learning for multimodal information-driven stock movement prediction[J]. Knowledge-Based Systems, 2023, 262: No.110262.
[4] YANG X, YANG Y, SU J, et al. Who's next: rising star prediction via diffusion of user interest in social networks[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(5): 5413-5425.
[5] LI Y, JIN H, YU X, et al. Intelligent prediction of private information diffusion in social networks[J]. Electronics, 2020, 9(5): No.719.
[6] PORTILLO-VAN DIEST A, BALLESTER COMA L, MORTIER P, et al. Experience sampling methods for the personalised prediction of mental health problems in Spanish university students: protocol for a survey-based observational study within the PROMES-U project[J]. BMJ Open, 2023, 13(7): No.e072641.
[7] ZHAO R Y, LI X L, LI D Y. Structure and evolution feature on information transmission network of university new media[J]. Information Science, 2022, 40(6): 3-11.
[8] WANG X W, ZHUANG H X, JIANG Y B, et al. Research on network structure of echo chamber in network public opinion propagation for major emergencies[J]. Information Studies: Theory and Application, 2024, 47(1): 101-109.
[9] CHEN X, ZHOU F, ZHANG K, et al. Information diffusion prediction via recurrent cascades convolution[C]// Proceedings of the IEEE 35th International Conference on Data Engineering. Piscataway: IEEE, 2019: 770-781.
[10] SUN X, ZHOU J, LIU L, et al. Explicit time embedding based cascade attention network for information popularity prediction[J]. Information Processing and Management, 2023, 60(3): No.103278.
[11] MIAO C X, LIU X Y. Information diffusion prediction based on hypergraph attention mechanism and graph convolution network[J]. Application Research of Computers, 2023, 40(6): 1715-1720.
[12] GONG Y C, WANG M, LIANG W, et al. UHIR: an effective information dissemination model of online social hypernetworks based on user and information attributes[J]. Information Sciences, 2023, 644: No.119284.
[13] DING S C, BAO Z, LIU X Y. Research on user behavior prediction of emergency public opinion communication[J]. Journal of Modern Information, 2023, 43(9): 111-123.
[14] JIA X, SHANG J, LIU D, et al. HeDAN: heterogeneous diffusion attention network for popularity prediction of online content[J]. Knowledge-Based Systems, 2022, 254: No.109659.
[15] MENG F, CHEN L, HERRING P, et al. Information and disseminator features influences online negative information recognition and dissemination[J]. International Journal of Pattern Recognition and Artificial Intelligence, 2023, 37(3): No.2350005.
[16] ZHAO X Y, ZENG Y Y, JIANG H. Information popularity prediction model based on C-DGCN[J]. Engineering Journal of Wuhan University, 2023, 56(4): 506-514.
[17] ZHAO J H, ZHAO J L, FENG J. Information diffusion prediction based on cascade sequences and social topology[J]. Computers and Electrical Engineering, 2023, 109(Pt B): No.108782.
[18] LIANG S B, CHEN Z H, WEI J J, et al. Information diffusion prediction based on cascade spatial-temporal feature[J]. Pattern Recognition and Artificial Intelligence, 2021, 34(11): 969-978.
[19] LYU X T, ZHAO J H, RONG H Y, et al. Information diffusion prediction model based on Transformer and relational graph convolutional network[J]. Journal of Computer Applications, 2024, 44(6): 1760-1766.
[20] WANG R, XU X, ZHANG Y. Multiscale information diffusion prediction with minimal substitution neural network[J]. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(1): 1069-1080.
[21] YANG C, WANG H, TANG J, et al. Full-scale information diffusion prediction with reinforced recurrent networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(5): 2271-2283.
[22] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[EB/OL]. [2024-09-26].
[23] CAO D L. Research on financial investment based on deep reinforcement learning[D]. Jinan: Shandong University of Finance and Economics, 2023.
[24] WILLIAMS R J. Simple statistical gradient-following algorithms for connectionist reinforcement learning[J]. Machine Learning, 1992, 8(3/4): 229-256.
[25] HODAS N O, LERMAN K. The simple rules of social contagion[J]. Scientific Reports, 2014, 4: No.4343.
[26] ZHONG E, FAN W, WANG J, et al. ComSoc: adaptive transfer of user behaviors over composite social network[C]// Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2012: 696-704.
[27] SANKAR A, ZHANG X, KRISHNAN A, et al. Inf-VAE: a variational autoencoder framework to integrate homophily and influence in diffusion prediction[C]// Proceedings of the 13th ACM International Conference on Web Search and Data Mining. New York: ACM, 2020: 510-518.
[28] YANG C, SUN M, LIU H, et al. Neural diffusion model for microscopic cascade study[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 14(8): 1128-1139.
[29] WANG Z, CHEN C, LI W. A sequential neural information diffusion model with structure attention[C]// Proceedings of the 27th ACM International Conference on Information and Knowledge Management. New York: ACM, 2018: 1795-1798.
[30] HAMILTON W L, YING R, LESKOVEC J. Inductive representation learning on large graphs[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 1025-1035.
[31] LI C, MA J, GUO X, et al. DeepCas: an end-to-end predictor of information cascades[C]// Proceedings of the 26th International Conference on World Wide Web. Republic and Canton of Geneva: International World Wide Web Conferences Steering Committee, 2017: 577-586.
[32] CAO Q, SHEN H, CEN K, et al. DeepHawkes: bridging the gap between prediction and understanding of information cascades[C]// Proceedings of the 2017 ACM Conference on Information and Knowledge Management. New York: ACM, 2017: 1149-1158.
[33] YANG C, TANG J, SUN M, et al. Multi-scale information diffusion prediction with reinforced recurrent networks[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence. California: IJCAI.org, 2019: 4033-4039.
[34] XU D, RUAN C, KORPEOGLU E, et al. Inductive representation learning on temporal graphs[EB/OL]. [2024-07-06].
[35] LIU B, YANG D, SHI Y, et al. Improving information cascade modeling by social topology and dual role user dependency[C]// Proceedings of the 2022 International Conference on Database Systems for Advanced Applications, LNCS 13245. Cham: Springer, 2022: 425-440.
[36] CHEN X, ZHANG K, ZHOU F, et al. Information cascades modeling via deep multi-task learning[C]// Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2019: 885-888.
[37] BAO P, XU H. Predicting popularity of online contents via graph attention based spatial-temporal neural network[J]. Pattern Recognition and Artificial Intelligence, 2019, 32(11): 1014-1021.
[38] JIAO P, CHEN H, BAO Q, et al. Enhancing multi-scale diffusion prediction via sequential hypergraphs and adversarial learning[C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 8571-8581.
[39] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.