Journal of Computer Applications ›› 2026, Vol. 46 ›› Issue (4): 1042-1049. DOI: 10.11772/j.issn.1001-9081.2025050540

Shengwei ZHANG, Hao WANG, Taisong JIN

Received: 2025-05-19
Revised: 2025-08-18
Accepted: 2025-08-27
Online: 2025-08-28
Published: 2026-04-10

Contact: Taisong JIN

About author: ZHANG Shengwei, born in 1982, M.S., research fellow. His research interests include computer vision and pattern recognition.
Abstract: As a mathematical tool that naturally represents high-order relationships among multiple data objects, the hypergraph offers significant advantages over traditional graph-based machine learning methods. The premise of the hypergraph-based machine learning paradigm is that a hypergraph reflecting the high-order relationships among the data can be constructed by a hypergraph learning method. However, the limited robustness of existing hypergraph learning methods to noise and data corruption restricts their practical effectiveness. To address this problem, a hypergraph learning method based on block diagonal representation was proposed. The method optimizes a data reconstruction objective function with a block diagonal constraint, and uses the resulting reconstruction coefficients to generate hyperedges and set hyperedge weights. Experimental results on image datasets with added noise show that, compared with the CR-HG (Correntropy-induced low-rank HyperGraph) method, the proposed method improves Normalized Mutual Information (NMI) on the Coil20 image set by 2.6 percentage points under 40% Gaussian noise and by 1.0 percentage point under 30% salt-and-pepper noise density, and improves classification accuracy (ACC) on the USPS image set by 2.1 and 1.1 percentage points under the same two noise settings. The proposed method thus outperforms mainstream hypergraph learning methods.
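The pipeline described in the abstract — solve a constrained self-representation problem, then turn the reconstruction coefficients into hyperedges and hyperedge weights — can be sketched in numpy. This is a simplified stand-in, not the paper's method: it uses a ridge (Frobenius-norm) regularizer where the paper imposes a block diagonal constraint on the coefficient matrix, and the function names and the neighbor count `k` are illustrative.

```python
import numpy as np

def self_representation(X, lam=0.1):
    """Reconstruct each sample from the others: min_Z ||X - XZ||_F^2 + lam*||Z||_F^2,
    with the diagonal zeroed afterwards. X has one sample per column.
    (The paper instead imposes a block diagonal constraint on Z; ridge
    regression is used here only as a stand-in solver.)"""
    n = X.shape[1]
    G = X.T @ X
    # Closed-form ridge solution: Z = (X^T X + lam*I)^{-1} X^T X
    Z = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(Z, 0.0)
    return Z

def build_hypergraph(Z, k=3):
    """One hyperedge per sample: the sample itself plus its k largest-coefficient
    reconstructors; the hyperedge weight is the sum of those coefficients."""
    n = Z.shape[0]
    A = np.abs(Z)
    H = np.zeros((n, n))            # incidence matrix: vertices x hyperedges
    w = np.zeros(n)                 # hyperedge weights
    for j in range(n):
        nbrs = np.argsort(A[:, j])[-k:]   # k strongest contributors to sample j
        H[j, j] = 1.0
        H[nbrs, j] = 1.0
        w[j] = A[nbrs, j].sum()
    return H, w
```

Column `j` of `Z` records how strongly every other sample participates in reconstructing sample `j`; under a block diagonal constraint these coefficients would concentrate within a sample's own class, so the generated hyperedges would mostly connect same-class samples.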
Shengwei ZHANG, Hao WANG, Taisong JIN. Hypergraph learning method via block diagonal representation[J]. Journal of Computer Applications, 2026, 46(4): 1042-1049.
Tab. 1 Statistics of datasets

| Dataset | Samples | Feature dimension | Classes |
|---|---|---|---|
| Coil20 | 1 440 | 1 024 | 20 |
| USPS | 9 298 | 256 | 10 |
Tab. 2 Clustering results on Coil20 image set with Gaussian noise (unit: %)

| Method | AC (α=0%) | NMI (α=0%) | AC (α=10%) | NMI (α=10%) | AC (α=20%) | NMI (α=20%) | AC (α=30%) | NMI (α=30%) | AC (α=40%) | NMI (α=40%) |
|---|---|---|---|---|---|---|---|---|---|---|
| KNN-HG | 75.6 | 85.2 | 74.9 | 85.7 | 72.5 | 81.4 | 68.4 | 75.7 | 59.7 | 69.4 |
| L1-HG | 77.2 | 87.6 | 76.7 | 86.2 | 75.4 | 82.4 | 73.7 | 82.1 | 72.5 | 81.7 |
| L2-HG | 77.9 | 87.9 | 77.3 | 86.3 | 74.1 | 84.5 | 72.9 | 83.7 | 71.9 | 81.9 |
| EN-HG | 77.9 | 88.1 | 75.2 | 86.4 | 74.0 | 85.3 | 74.2 | 84.1 | 72.9 | 82.5 |
| CR-HG | 78.3 | 89.6 | 76.9 | 88.6 | 75.6 | 87.7 | 74.1 | 82.4 | 73.5 | 81.1 |
| Proposed | 79.5 | 90.3 | 78.9 | 89.8 | 76.7 | 87.5 | 73.7 | 83.9 | 73.9 | 83.7 |
Tab. 3 Clustering results on USPS image set with Gaussian noise (unit: %)

| Method | AC (α=0%) | NMI (α=0%) | AC (α=10%) | NMI (α=10%) | AC (α=20%) | NMI (α=20%) | AC (α=30%) | NMI (α=30%) | AC (α=40%) | NMI (α=40%) |
|---|---|---|---|---|---|---|---|---|---|---|
| KNN-HG | 78.6 | 81.6 | 77.9 | 79.9 | 70.7 | 73.4 | 66.5 | 70.6 | 58.9 | 59.5 |
| L1-HG | 77.5 | 80.6 | 76.9 | 78.7 | 77.8 | 77.9 | 74.5 | 76.7 | 73.9 | 72.9 |
| L2-HG | 79.4 | 81.9 | 78.9 | 81.3 | 78.1 | 79.5 | 75.7 | 76.9 | 72.9 | 74.2 |
| EN-HG | 77.9 | 80.7 | 77.5 | 79.9 | 76.2 | 78.7 | 75.4 | 76.3 | 73.5 | 72.9 |
| CR-HG | 80.8 | 81.6 | 79.7 | 80.2 | 78.4 | 79.4 | 76.9 | 76.6 | 73.6 | 74.7 |
| Proposed | 81.2 | 81.8 | 80.9 | 81.9 | 80.6 | 81.5 | 76.7 | 79.9 | 74.5 | 75.7 |
Tab. 4 Clustering results on Coil20 image set with salt-and-pepper noise (unit: %)

| Method | AC (density 10%) | NMI (density 10%) | AC (density 20%) | NMI (density 20%) | AC (density 30%) | NMI (density 30%) |
|---|---|---|---|---|---|---|
| KNN-HG | 70.4 | 81.3 | 66.5 | 76.4 | 60.4 | 70.1 |
| L1-HG | 72.8 | 82.2 | 70.4 | 78.7 | 67.7 | 74.5 |
| L2-HG | 72.6 | 82.5 | 70.1 | 79.2 | 68.2 | 75.3 |
| EN-HG | 74.7 | 84.4 | 72.7 | 82.5 | 70.3 | 80.1 |
| CR-HG | 74.9 | 85.1 | 73.1 | 83.1 | 71.3 | 80.2 |
| Proposed | 75.4 | 86.2 | 73.8 | 84.1 | 71.8 | 81.2 |
Tab. 5 Clustering results on USPS image set with salt-and-pepper noise (unit: %)

| Method | AC (density 10%) | NMI (density 10%) | AC (density 20%) | NMI (density 20%) | AC (density 30%) | NMI (density 30%) |
|---|---|---|---|---|---|---|
| KNN-HG | 71.7 | 73.5 | 68.3 | 69.6 | 60.1 | 65.6 |
| L1-HG | 72.8 | 75.3 | 70.2 | 72.9 | 68.2 | 70.6 |
| L2-HG | 72.9 | 75.4 | 70.1 | 72.5 | 69.1 | 70.9 |
| EN-HG | 76.5 | 78.5 | 72.2 | 75.7 | 71.4 | 73.3 |
| CR-HG | 77.8 | 79.1 | 73.4 | 76.3 | 72.4 | 74.2 |
| Proposed | 78.4 | 80.3 | 74.2 | 77.1 | 72.9 | 75.3 |
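The AC and NMI scores in the clustering tables above are standard external clustering metrics. For reference, NMI between two hard labelings can be computed directly; below is a minimal numpy version using the arithmetic-mean normalization (library implementations such as scikit-learn's `normalized_mutual_info_score` follow the same definition, with selectable normalizations):

```python
import numpy as np

def nmi(labels_true, labels_pred):
    """Normalized mutual information between two hard labelings:
    NMI = 2 * I(U;V) / (H(U) + H(V))."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    n = labels_true.size
    us, ui = np.unique(labels_true, return_inverse=True)
    vs, vi = np.unique(labels_pred, return_inverse=True)
    # Contingency table of joint label counts
    C = np.zeros((us.size, vs.size))
    np.add.at(C, (ui, vi), 1)
    P = C / n                      # joint distribution
    pu = P.sum(axis=1)             # marginal of true labels
    pv = P.sum(axis=0)             # marginal of predicted labels
    nz = P > 0
    I = (P[nz] * np.log(P[nz] / (pu[:, None] * pv[None, :])[nz])).sum()
    Hu = -(pu[pu > 0] * np.log(pu[pu > 0])).sum()
    Hv = -(pv[pv > 0] * np.log(pv[pv > 0])).sum()
    return 2 * I / (Hu + Hv) if Hu + Hv > 0 else 1.0
```

NMI is invariant to label permutation (it compares partitions, not label values), which is why it is preferred over raw accuracy for clustering; AC additionally requires an optimal label matching, typically via the Hungarian algorithm.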
Tab. 6 Classification accuracies on Coil20 and USPS image sets with Gaussian noise (unit: %)

| Method | Coil20, α=0% | Coil20, α=10% | Coil20, α=20% | Coil20, α=30% | Coil20, α=40% | USPS, α=0% | USPS, α=10% | USPS, α=20% | USPS, α=30% | USPS, α=40% |
|---|---|---|---|---|---|---|---|---|---|---|
| KNN-HG | 93.5 | 92.7 | 86.2 | 83.1 | 77.8 | 95.3 | 95.9 | 90.5 | 83.9 | 75.3 |
| L1-HG | 95.3 | 95.3 | 93.6 | 92.3 | 90.3 | 97.3 | 97.9 | 95.3 | 89.3 | 81.2 |
| L2-HG | 96.4 | 95.1 | 94.3 | 93.1 | 92.5 | 98.7 | 96.9 | 95.9 | 94.3 | 93.7 |
| Ada-HG | 94.6 | 93.5 | 89.3 | 85.7 | 81.1 | 96.9 | 95.4 | 92.5 | 84.7 | 78.3 |
| EN-HG | 95.1 | 93.5 | 92.3 | 91.2 | 89.5 | 95.7 | 93.3 | 91.9 | 91.3 | 89.4 |
| CR-HG | 97.3 | 95.1 | 95.1 | 94.3 | 92.0 | 98.6 | 96.9 | 95.3 | 93.5 | 91.6 |
| Proposed | 97.8 | 96.7 | 95.8 | 94.9 | 93.5 | 98.9 | 97.9 | 96.1 | 94.9 | 93.7 |
Tab. 7 Classification accuracies on Coil20 and USPS image sets with salt-and-pepper noise (unit: %)

| Method | Coil20, density 10% | Coil20, density 20% | Coil20, density 30% | USPS, density 10% | USPS, density 20% | USPS, density 30% |
|---|---|---|---|---|---|---|
| KNN-HG | 90.3 | 83.5 | 78.1 | 91.7 | 85.8 | 75.4 |
| L1-HG | 91.7 | 88.6 | 84.3 | 92.5 | 90.1 | 81.7 |
| L2-HG | 91.1 | 88.5 | 85.1 | 92.9 | 90.7 | 82.3 |
| Ada-HG | 93.3 | 85.7 | 80.7 | 92.5 | 87.8 | 77.7 |
| EN-HG | 94.5 | 90.3 | 88.2 | 94.3 | 92.9 | 84.4 |
| CR-HG | 94.9 | 91.5 | 89.4 | 94.7 | 93.3 | 85.5 |
| Proposed | 95.1 | 92.7 | 90.7 | 95.3 | 94.1 | 86.6 |
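Transductive classification on a learned hypergraph, as in the accuracy tables above, commonly rests on the normalized hypergraph Laplacian of Zhou et al. (NIPS 2006), regardless of how the hypergraph itself was built. A minimal sketch of that Laplacian (variable names are illustrative; whether this exact formulation is used in the paper's experiments is an assumption):

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian (Zhou et al., NIPS 2006):
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    where Dv holds weighted vertex degrees and De hyperedge degrees.
    H: (n_vertices, n_edges) binary incidence matrix; w: hyperedge weights."""
    w = np.asarray(w, dtype=float)
    dv = H @ w                       # weighted vertex degrees
    de = H.sum(axis=0)               # hyperedge degrees (vertices per edge)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / de) @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - Theta
```

Given label matrix `Y` (one-hot rows for labeled vertices, zeros otherwise), the classic regularized objective min ||F - Y||² + μ·tr(FᵀLF) has the closed-form solution F = (I + μL)⁻¹Y, and each unlabeled vertex takes the arg-max of its row of F.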
[1] BATTISTELLA E, VAKALOPOULOU M, PARAGIOS N, et al. GHOST: graph-based higher-order similarity transformation for classification[J]. Pattern Recognition, 2024, 155: No.110623.
[2] ZHANG G, LIU T, YE Z. Dynamic screening strategy based on feature graphs for UAV object and group re-identification[J]. Remote Sensing, 2024, 16(5): No.775.
[3] FANG Y, PAN X, SHEN H B. De novo drug design by iterative multiobjective deep reinforcement learning with graph-based molecular quality assessment[J]. Bioinformatics, 2023, 39(4): No.btad157.
[4] SUNEERA C M, PRAKASH J, SINGH P K. Question answering over knowledge graphs using BERT based relation mapping[J]. Expert Systems, 2023, 40(10): No.e13456.
[5] LU J, WAN H, LI P, et al. Exploring high-order spatio-temporal correlations from skeleton for person re-identification[J]. IEEE Transactions on Image Processing, 2023, 32: 949-963.
[6] MA H F, LIU F, XIA Q, et al. Keywords extraction algorithm based on weighted hypergraph random walk[J]. Acta Electronica Sinica, 2018, 46(6): 1410-1414.
[7] GAO Y, ZHANG Z, LIN H, et al. Hypergraph learning: methods and practices[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(5): 2548-2566.
[8] CHEN Z R, WANG X, WANG C X, et al. Towards time-aware knowledge hypergraph link prediction[J]. Journal of Software, 2023, 34(10): 4533-4547.
[9] ZONG L L, ZHOU J H, XIE Q J, et al. Multi-modal emotion recognition based on hypergraph[J]. Chinese Journal of Computers, 2023, 46(12): 2520-2534.
[10] HUANG S, KANG Z, TSANG I W, et al. Auto-weighted multi-view clustering via kernelized graph learning[J]. Pattern Recognition, 2019, 88: 174-184.
[11] LIU T, LIYANAARACHCHI LEKAMALAGE C K, HUANG G B, et al. An adaptive graph learning method based on dual data representations for clustering[J]. Pattern Recognition, 2018, 77: 126-139.
[12] WANG W, YAN Y, NIE F, et al. Flexible manifold learning with optimal graph for image and video representation[J]. IEEE Transactions on Image Processing, 2018, 27(6): 2664-2675.
[13] GUO Z S, ZUO J, DUAN L, et al. A generative adversarial negative sampling method for knowledge hypergraph link prediction[J]. Journal of Computer Research and Development, 2022, 59(8): 1742-1756.
[14] YU Y X, ZHANG W C, LI Z G, et al. Hypergraph-based personalized recommendation & optimization algorithm in EBSN[J]. Journal of Computer Research and Development, 2020, 57(12): 2556-2570.
[15] LI F, WANG X, CHENG D, et al. Hypergraph self-supervised learning with sampling-efficient signals[C]// Proceedings of the 33rd International Joint Conference on Artificial Intelligence. California: ijcai.org, 2024: 4398-4406.
[16] ZHANG Z, XIAO Y, JIANG L, et al. Spatial-temporal interplay in human mobility: a hierarchical reinforcement learning approach with hypergraph representation[C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 9396-9404.
[17] WANG M, LIU X, WU X. Visual classification by l1-hypergraph modeling[J]. IEEE Transactions on Knowledge and Data Engineering, 2015, 27(9): 2564-2574.
[18] YU J, TAO D, WANG M. Adaptive hypergraph learning and its application in image classification[J]. IEEE Transactions on Image Processing, 2012, 21(7): 3262-3272.
[19] JIN T, JI R, GAO Y, et al. Correntropy-induced robust low-rank hypergraph[J]. IEEE Transactions on Image Processing, 2019, 28(6): 2755-2769.
[20] JIN T, YU J, YOU J, et al. Low-rank matrix factorization with multiple hypergraph regularizer[J]. Pattern Recognition, 2015, 48(3): 1011-1022.
[21] HUANG S, ELGAMMAL A, YANG D. On the effect of hyperedge weights on hypergraph learning[J]. Image and Vision Computing, 2017, 57: 89-101.
[22] DUCOURNAU A, BRETTO A. Random walks in directed hypergraphs and application to semi-supervised image segmentation[J]. Computer Vision and Image Understanding, 2014, 120: 91-102.
[23] PURKAIT P, CHIN T J, SADRI A, et al. Clustering with hypergraphs: the case for large hyperedges[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(9): 1697-1711.
[24] ZHANG C, HU S, TANG Z G, et al. Re-revisiting learning on hypergraphs: confidence interval, subgradient method, and extension to multiclass[J]. IEEE Transactions on Knowledge and Data Engineering, 2020, 32(3): 506-518.
[25] WANG Z, CHEN J, SHAO Z, et al. Dual-view desynchronization hypergraph learning for dynamic hyperedge prediction[J]. IEEE Transactions on Knowledge and Data Engineering, 2025, 37(2): 597-612.
[26] LIU Q, SUN Y, WANG C, et al. Elastic-net hypergraph learning for image clustering and semi-supervised classification[J]. IEEE Transactions on Image Processing, 2017, 26(1): 452-463.
[27] ZHAO Y, LUO X, JU W, et al. Dynamic hypergraph structure learning for traffic flow forecasting[C]// Proceedings of the IEEE 39th International Conference on Data Engineering. Piscataway: IEEE, 2023: 2303-2316.
[28] CAI D, SONG M, SUN C, et al. Hypergraph structure learning for hypergraph neural networks[C]// Proceedings of the 31st International Joint Conference on Artificial Intelligence. California: ijcai.org, 2022: 1923-1929.
[29] LI M, YANG Y, MENG L, et al. Self-supervised hypergraph structure learning[J]. Artificial Intelligence Review, 2025, 58: No.190.
[30] BAI J, GONG B, ZHAO Y, et al. Multi-scale representation learning on hypergraph for 3D shape retrieval and recognition[J]. IEEE Transactions on Image Processing, 2021, 30: 5327-5338.
[31] GAO Y, FENG Y, JI S, et al. HGNN+: general hypergraph neural networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(3): 3181-3199.
[32] VIJAIKUMAR M, HADA D, SHEVADE S. HyperTeNet: hypergraph and Transformer-based neural network for personalized list continuation[C]// Proceedings of the 2021 IEEE International Conference on Data Mining. Piscataway: IEEE, 2021: 1210-1215.
[33] LI Y, CHEN H, SUN X, et al. Hyperbolic hypergraphs for sequential recommendation[C]// Proceedings of the 30th ACM International Conference on Information and Knowledge Management. New York: ACM, 2021: 988-997.
[34] YU J, YIN H, LI J, et al. Self-supervised multi-channel hypergraph convolutional network for social recommendation[C]// Proceedings of the Web Conference 2021. New York: ACM, 2021: 413-424.
[35] PEDRONETTE D C G, VALEM L P, ALMEIDA J, et al. Multimedia retrieval through unsupervised hypergraph-based manifold ranking[J]. IEEE Transactions on Image Processing, 2019, 28(12): 5824-5838.
[36] JIN T, YU Z, GAO Y, et al. Robust l2-Hypergraph and its applications[J]. Information Sciences, 2019, 501: 708-723.
[37] JIN T, CAO L, ZHANG B, et al. Hypergraph induced convolutional manifold networks[C]// Proceedings of the 28th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2019: 2670-2676.
[38] FENG Y, YOU H, ZHANG Z, et al. Hypergraph neural networks[C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2019: 3558-3565.
[39] LU C, FENG J, LIN Z, et al. Subspace clustering by block diagonal representation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(2): 487-501.
[40] TURK M, PENTLAND A. Eigenfaces for recognition[J]. Journal of Cognitive Neuroscience, 1991, 3(1): 71-86.