Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (2): 392-402. DOI: 10.11772/j.issn.1001-9081.2024030266
• Artificial Intelligence •
Kun FU1, Shicong YING1, Tingting ZHENG2,3, Jiajie QU2,3, Jingyuan CUI1, Jianwei LI1
Received: 2024-03-13
Revised: 2024-06-04
Accepted: 2024-06-11
Online: 2024-08-02
Published: 2025-02-10
Contact: Kun FU
About author: YING Shicong, born in 1997 in Luohe, Henan, M. S. candidate. His research interests include network representation learning.
Abstract:
Graph-structured data is ubiquitous in the real world, yet in practical applications it often suffers from a shortage of labeled data. Few-Shot Learning (FSL) methods for graph data aim to classify data with only a small number of labeled samples. Although these methods achieve good performance on Few-Shot Node Classification (FSNC) tasks, several problems remain: high-quality labeled data is hard to obtain, the parameter-initialization process generalizes poorly, and the topological information in the graph is not fully exploited. To address these problems, a Graph Data Augmentation based Few-Shot Node Classification model (GDA-FSNC) was proposed. GDA-FSNC consists of four modules: a structural-similarity-based graph data pre-processing module, a parameter initialization module, a parameter fine-tuning module, and an adaptive pseudo-label generation module. In the graph data pre-processing module, an adjacency-matrix augmentation method based on structural similarity is used to obtain more graph structural information. In the parameter initialization module, a mutual-teaching data augmentation method lets each model learn different patterns and features from the other models, enhancing the diversity of information. In the adaptive pseudo-label generation module, an appropriate pseudo-label generation technique is selected automatically according to the characteristics of each dataset, so as to generate high-quality pseudo-label data. Experimental results on seven real-world datasets show that GDA-FSNC outperforms mainstream FSL models such as Meta-GNN, GPN (Graph Prototypical Network), and IA-FSNC (Information Augmentation for Few-Shot Node Classification) in classification accuracy. For example, compared with the baseline model IA-FSNC, the classification accuracy of the proposed model is improved by at least 0.27 percentage points in the 2-way 1-shot setting on small datasets and by at least 2.06 percentage points in the 5-way 1-shot setting on large datasets. These results indicate that GDA-FSNC has better classification performance and generalization ability in few-shot scenarios.
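The abstract does not give equations for the mutual-teaching step, so the following is only a rough sketch of the deep-mutual-learning idea it builds on (reference 37): each peer model is trained with cross-entropy on the labeled support nodes plus a KL-divergence term that pulls it toward the other peer's predictions. Function names are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mutual_teaching_losses(logits_a, logits_b, labels, eps=1e-12):
    """Sketch of a two-peer mutual-teaching objective: each peer
    minimizes its own cross-entropy plus the KL divergence from the
    other peer's predicted distribution (deep mutual learning)."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    idx = np.arange(len(labels))
    ce_a = -np.log(pa[idx, labels] + eps).mean()
    ce_b = -np.log(pb[idx, labels] + eps).mean()
    # KL(p_b || p_a) guides peer A; KL(p_a || p_b) guides peer B
    kl_b_to_a = (pb * np.log((pb + eps) / (pa + eps))).sum(axis=1).mean()
    kl_a_to_b = (pa * np.log((pa + eps) / (pb + eps))).sum(axis=1).mean()
    return ce_a + kl_b_to_a, ce_b + kl_a_to_b
```

With more than two peers, the KL term is typically averaged over all other peers, which matches the abstract's claim that "each model learns from the other models".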
Kun FU, Shicong YING, Tingting ZHENG, Jiajie QU, Jingyuan CUI, Jianwei LI. Graph data augmentation method for few-shot node classification[J]. Journal of Computer Applications, 2025, 45(2): 392-402.
| Symbol | Description |
| --- | --- |
| c | Number of classes in the dataset |
| — | Label vector of node v_i |
| D | Degree matrix |
| — | Feature matrix |
| — | Prediction probability matrix of graph nodes |
| — | Node label matrix and node pseudo-label matrix |
| S | Support set |
| Q | Query set |
| a_ij | Entry of A: 1 if an edge exists between node i and node j, 0 otherwise |
| x_i | Feature vector of node v_i |
Tab. 1 Main symbols and related descriptions
| Measure | Cora | Citeseer |
| --- | --- | --- |
| Origin | 72.08 | 70.83 |
| Jaccard | 73.61 | 72.62 |
| Dice | 74.44 | 73.82 |
| Salton | 75.13 | 74.17 |
| Salton&Dice | 75.50 | 75.23 |
Tab. 2 Classification accuracies under different similarity measures (%)
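Table 2 compares neighborhood-similarity measures used for the adjacency-matrix augmentation. As a minimal sketch (assuming, as is standard for these coefficients, that each is computed on the one-hop neighbor sets of a node pair), the three measures are:

```python
import math

def jaccard(na, nb):
    """Jaccard coefficient: |A∩B| / |A∪B| on two neighbor sets."""
    na, nb = set(na), set(nb)
    return len(na & nb) / len(na | nb) if na | nb else 0.0

def salton(na, nb):
    """Salton (cosine) coefficient: |A∩B| / sqrt(|A|·|B|)."""
    na, nb = set(na), set(nb)
    return len(na & nb) / math.sqrt(len(na) * len(nb)) if na and nb else 0.0

def dice(na, nb):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    na, nb = set(na), set(nb)
    return 2 * len(na & nb) / (len(na) + len(nb)) if na or nb else 0.0
```

An augmented adjacency matrix can then add an edge between any node pair whose score exceeds a threshold. The "Salton&Dice" row suggests combining two measures, though the exact combination rule is not given in this excerpt.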
| Dataset | Nodes | Edges | Features | Classes | Graph-level feature similarity |
| --- | --- | --- | --- | --- | --- |
| Cora | 2 708 | 10 556 | 1 433 | 7 | 0.650 |
| Citeseer | 3 327 | 9 104 | 3 703 | 6 | 0.596 |
| Computers | 13 381 | 491 722 | 767 | 10 | 0.727 |
| Coauthor-CS | 18 333 | 163 788 | 6 805 | 15 | 0.728 |
| Amazon Electronics | 42 318 | 43 556 | 8 669 | 167 | 0.814 |
| Cora-full | 19 793 | 65 311 | 8 710 | 70 | 0.607 |
| Amazon Clothing | 24 919 | 91 680 | 9 034 | 77 | 0.762 |
Tab. 3 Statistics of datasets
| Model | Cora 2-way 1-shot | Cora 2-way 3-shot | Cora 2-way 5-shot | Citeseer 2-way 1-shot | Citeseer 2-way 3-shot | Citeseer 2-way 5-shot |
| --- | --- | --- | --- | --- | --- | --- |
| GCN | 62.92±3.72 | 75.08±3.21 | 82.21±2.63 | 53.25±4.71 | 65.00±1.02 | 72.33±4.15 |
| Meta-GNN | 67.73±0.12 | 76.16±0.16 | 83.05±0.17 | 55.10±0.12 | 68.46±0.09 | 75.69±0.10 |
| G-Meta | 65.43±0.16 | 76.31±0.13 | 81.75±0.10 | 54.48±0.10 | 66.46±0.11 | 73.44±0.12 |
| GPN | 64.32±0.12 | 77.43±0.20 | 82.45±0.15 | 59.46±0.16 | 67.31±0.15 | 75.73±0.09 |
| IA-FSNC | 74.78±0.17 | 80.68±0.13 | 85.95±0.10 | 69.83±0.14 | 78.23±0.12 | 81.33±0.35 |
| TENT | 55.51±0.11 | 62.79±0.13 | 62.14±0.12 | 53.01±0.11 | 54.21±0.11 | 56.17±0.11 |
| GDA-FSNC | 75.05±0.15 | 83.71±0.43 | 87.95±0.06 | 75.53±0.15 | 80.35±0.12 | 82.20±0.11 |

| Model | Computers 2-way 1-shot | Computers 2-way 3-shot | Computers 2-way 5-shot | Coauthor-CS 2-way 1-shot | Coauthor-CS 2-way 3-shot | Coauthor-CS 2-way 5-shot |
| --- | --- | --- | --- | --- | --- | --- |
| GCN | 71.13±16.20 | 84.79±12.64 | 88.75±10.67 | 82.33±18.73 | 92.58±7.83 | 93.10±6.93 |
| Meta-GNN | 73.92±0.21 | 87.66±0.64 | 89.99±0.14 | 86.85±0.12 | 91.93±0.11 | 93.69±0.15 |
| G-Meta | 72.50±0.15 | 85.95±0.19 | 89.63±0.23 | 85.59±0.10 | 90.56±0.43 | 92.89±0.32 |
| GPN | 72.87±0.47 | 86.55±0.13 | 90.62±0.08 | 91.99±0.10 | 94.25±0.07 | 93.37±0.08 |
| IA-FSNC | 80.24±0.12 | 87.71±0.52 | 91.04±0.07 | 91.43±0.12 | 95.70±0.05 | 96.65±0.05 |
| TENT | 86.12±0.16 | 92.47±0.10 | 94.58±0.09 | 90.81±0.06 | 93.04±0.03 | 95.36±0.04 |
| GDA-FSNC | 90.16±0.14 | 96.58±0.06 | 97.64±0.04 | 92.10±0.14 | 96.47±0.07 | 96.70±0.06 |
Tab. 4 Classification accuracy (mean and standard deviation) for different models on small datasets (%)
| Model | Cora-full 5-way 1-shot | Cora-full 5-way 3-shot | Cora-full 5-way 5-shot | Coauthor-CS 5-way 1-shot | Coauthor-CS 5-way 3-shot | Coauthor-CS 5-way 5-shot |
| --- | --- | --- | --- | --- | --- | --- |
| GCN | 31.85±2.20 | 38.33±1.38 | 42.89±3.23 | 45.05±0.31 | 53.48±2.15 | 59.37±1.68 |
| Meta-GNN | 50.57±1.87 | 56.19±0.57 | 61.66±3.85 | 53.18±0.49 | 61.18±1.73 | 63.47±2.46 |
| G-Meta | 42.71±1.63 | 52.64±1.24 | 55.68±3.28 | 50.97±0.67 | 62.83±0.91 | 64.65±1.02 |
| GPN | 49.75±2.10 | 61.78±0.66 | 65.77±2.83 | 58.61±0.54 | 69.70±0.81 | 72.66±0.49 |
| IA-FSNC | 62.32±0.71 | 70.42±0.39 | 75.29±0.57 | 80.43±0.12 | 91.65±0.05 | 94.13±1.53 |
| TENT | 52.64±0.08 | 64.33±0.48 | 67.74±1.27 | 54.59±0.17 | 70.16±0.38 | 73.12±0.08 |
| GDA-FSNC | 65.44±0.10 | 70.56±0.20 | 77.28±2.18 | 88.16±0.38 | 93.06±0.45 | 95.28±1.04 |

| Model | Amazon Electronics 5-way 1-shot | Amazon Electronics 5-way 3-shot | Amazon Electronics 5-way 5-shot | Amazon Clothing 5-way 1-shot | Amazon Clothing 5-way 3-shot | Amazon Clothing 5-way 5-shot |
| --- | --- | --- | --- | --- | --- | --- |
| GCN | 41.47±0.97 | 51.87±1.84 | 61.92±2.81 | 48.60±3.15 | 59.82±2.52 | 66.88±0.39 |
| Meta-GNN | 54.23±1.29 | 62.19±1.48 | 68.08±3.16 | 67.42±1.66 | 74.62±2.35 | 75.38±1.78 |
| G-Meta | 44.14±1.24 | 55.75±0.52 | 60.06±2.98 | 57.71±0.67 | 64.44±1.68 | 71.28±1.34 |
| GPN | 46.79±1.40 | 61.41±0.79 | 66.48±2.35 | 59.39±1.87 | 72.32±2.27 | 74.40±1.25 |
| IA-FSNC | 68.80±0.86 | 79.78±0.45 | 83.77±0.23 | 75.53±0.32 | 83.39±0.85 | 85.26±0.48 |
| TENT | 66.51±0.13 | 77.33±0.56 | 80.42±0.05 | 69.17±0.10 | 80.07±0.48 | 82.29±0.61 |
| GDA-FSNC | 70.86±0.73 | 81.76±1.16 | 84.61±2.09 | 78.31±0.97 | 86.86±0.45 | 88.24±1.98 |
Tab. 5 Classification accuracy (mean and standard deviation) for different models on large datasets (%)
| Model | Cora | Citeseer | Computers | Amazon Electronics | Amazon Clothing | Cora-full |
| --- | --- | --- | --- | --- | --- | --- |
| GDA-FSNC\L | 82.73 | 78.74 | 84.72 | 50.00 | 54.17 | 80.00 |
| GDA-FSNC\C | 83.65 | 75.40 | 94.44 | 93.61 | 94.37 | 85.12 |
| GDA-FSNC\MT | 82.23 | 72.08 | 84.03 | 82.64 | 87.11 | 77.22 |
| GDA-FSNC\S | 82.89 | 78.47 | 94.75 | 93.33 | 92.96 | 85.87 |
| GDA-FSNC | 84.05 | 79.71 | 95.14 | 95.83 | 95.14 | 87.43 |
Tab. 6 Node classification results of GDA-FSNC and its variant models (%)
| δ | Cora | Citeseer | Computers | Amazon Electronics | Amazon Clothing | Cora-full |
| --- | --- | --- | --- | --- | --- | --- |
| 0.5 | 75.93 | 59.38 | 88.78 | 80.32 | 82.79 | 73.47 |
| 0.6 | 76.27 | 75.34 | 90.67 | 81.71 | 83.38 | 74.88 |
| 0.7 | 75.58 | 73.22 | 90.85 | 82.15 | 83.25 | 69.92 |
| 0.8 | 74.54 | 71.64 | 81.33 | 81.42 | 54.89 | 70.67 |
| 0.9 | 72.69 | 71.18 | 80.58 | 51.23 | 55.21 | 71.34 |
Tab. 7 Node classification results of GDA-FSNC at different thresholds δ (%)
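Table 7 sweeps a confidence threshold δ for pseudo-label selection. The following is a minimal sketch of how such a threshold filters pseudo-labels from a prediction probability matrix; the model's adaptive generation module chooses among techniques per dataset and may differ in detail, and `select_pseudo_labels` is an illustrative name, not from the paper.

```python
import numpy as np

def select_pseudo_labels(prob, delta=0.6):
    """Keep only the unlabeled nodes whose maximum class probability
    exceeds the confidence threshold delta; return their row indices
    and the corresponding pseudo-labels (argmax classes)."""
    conf = prob.max(axis=1)                 # per-node confidence
    keep = np.flatnonzero(conf > delta)     # nodes passing the threshold
    return keep, prob.argmax(axis=1)[keep]
```

A higher δ yields fewer but cleaner pseudo-labels; Table 7 suggests that δ around 0.6-0.7 works best on most of these datasets, while overly strict thresholds starve the larger datasets of pseudo-labeled nodes.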
1. DING K, LI J, AGARWAL N, et al. Inductive anomaly detection on attributed networks[C]// Proceedings of the 29th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2021: 1288-1294.
2. TANG J, ZHANG J, YAO L, et al. ArnetMiner: extraction and mining of academic social networks[C]// Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2008: 990-998.
3. QI G J, AGGARWAL C, TIAN Q, et al. Exploring context and content links in social media: a latent space method[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(5): 850-862.
4. YUAN Z, SANG J, LIU Y, et al. Latent feature learning in social media network[C]// Proceedings of the 21st ACM International Conference on Multimedia. New York: ACM, 2013: 253-262.
5. BOJCHEVSKI A, GASTEIGER J, PEROZZI B, et al. Scaling graph neural networks with approximate PageRank[C]// Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2020: 2464-2473.
6. GASTEIGER J, BOJCHEVSKI A, GÜNNEMANN S. Predict then propagate: graph neural networks meet personalized PageRank[EB/OL]. [2024-06-01].
7. GASTEIGER J, WEIßENBERGER S, GÜNNEMANN S. Diffusion improves graph learning[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2019: 13366-13378.
8. DING K, WANG J, LI J, et al. Graph prototypical networks for few-shot learning on attributed networks[C]// Proceedings of the 29th ACM International Conference on Information and Knowledge Management. New York: ACM, 2020: 295-304.
9. ZHOU F, CAO C, ZHANG K, et al. Meta-GNN: on few-shot node classification in graph meta-learning[C]// Proceedings of the 28th ACM International Conference on Information and Knowledge Management. New York: ACM, 2019: 2357-2360.
10. JOSHI V, PETERS M, HOPKINS M. Extending a parser to distant domains using a few dozen partially annotated examples[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2018: 1190-1199.
11. GARCIA V, BRUNA J. Few-shot learning with graph neural networks[EB/OL]. [2024-06-01].
12. HUANG K, ZITNIK M. Graph meta learning via local subgraphs[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2020: 5862-5874.
13. LIU Z, FANG Y, LIU C, et al. Relative and absolute location embedding for few-shot node classification on graph[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 4267-4275.
14. FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]// Proceedings of the 34th International Conference on Machine Learning. New York: JMLR.org, 2017: 1126-1135.
15. KONG K, LI G, DING M, et al. Robust optimization as data augmentation for large-scale graphs[C]// Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2022: 60-69.
16. RONG Y, HUANG W, XU T, et al. DropEdge: towards deep graph convolutional networks on node classification[EB/OL]. [2024-06-01].
17. KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[EB/OL]. [2024-06-01].
18. WU Z, PAN S, CHEN F, et al. A comprehensive survey on graph neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(1): 4-24.
19. YAO H, ZHANG C, WEI Y, et al. Graph few-shot learning via knowledge transfer[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2020: 6656-6663.
20. WU Z, ZHOU P, WEN G, et al. Information augmentation for few-shot node classification[C]// Proceedings of the 31st International Joint Conference on Artificial Intelligence. California: ijcai.org, 2022: 3601-3607.
21. WANG S, DING K, ZHANG C, et al. Task-adaptive few-shot node classification[C]// Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: ACM, 2022: 1910-1919.
22. XU Z, DING K, WANG Y X, et al. Generalized few-shot node classification[C]// Proceedings of the 2022 IEEE International Conference on Data Mining. Piscataway: IEEE, 2022: 608-617.
23. YANG Z, COHEN W W, SALAKHUDINOV R. Revisiting semi-supervised learning with graph embeddings[C]// Proceedings of the 33rd International Conference on Machine Learning. New York: JMLR.org, 2016: 40-48.
24. LIU Z M, MA H, LIU S X, et al. Network representation learning algorithm incorporated with node profile attribute information[J]. Journal of Computer Applications, 2019, 39(4): 1012-1020.
25. YANG H, CHENG J, YANG Z, et al. A node similarity and community link strength-based community discovery algorithm[J]. Complexity, 2021, 2021: No.8848566.
26. ZHAO J, SONG Y, LIU F, et al. The identification of influential nodes based on structure similarity[J]. Connection Science, 2021, 33(2): 201-218.
27. SALTON G. Recent trends in automatic information retrieval[C]// Proceedings of the 9th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 1986: 1-10.
28. DA SILVA MEYER A, GARCIA A A F, DE SOUZA A P, et al. Comparison of similarity coefficients used for cluster analysis with dominant markers in maize (Zea mays L.)[J]. Genetics and Molecular Biology, 2004, 27(1): 83-91.
29. JACCARD P. The distribution of the flora in the alpine zone[J]. New Phytologist, 1912, 11(2): 37-50.
30. SZYMAN P, BARBUCHA D. Link prediction in organizational social network based on e-mail communication[J]. Procedia Computer Science, 2022, 207: 4008-4016.
31. BERGSTRA J, BENGIO Y. Random search for hyper-parameter optimization[J]. Journal of Machine Learning Research, 2012, 13: 281-305.
32. LI F Z, LIU Y, WU P X, et al. A survey on recent advances in meta-learning[J]. Chinese Journal of Computers, 2021, 44(2): 422-446.
33. BANSAL T, ALZUBI S, WANG T, et al. Meta-Adapters: parameter efficient few-shot fine-tuning through meta-learning[C]// Proceedings of the 2022 International Conference on Automated Machine Learning. New York: JMLR.org, 2022: No.19.
34. ZHANG X K, REN J, SONG C, et al. Label propagation algorithm for community detection based on node importance and label influence[J]. Physics Letters A, 2017, 381(33): 2691-2698.
35. YANG Y, HAO X Y, YU D, et al. Graph data generation approach for graph neural network model extraction attacks[J]. Journal of Computer Applications, 2024, 44(8): 2483-2492.
36. HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. [2024-06-01].
37. ZHANG Y, XIANG T, HOSPEDALES T M, et al. Deep mutual learning[C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 4320-4328.
38. WU S, LI J, LIU C, et al. Mutual learning of complementary networks via residual correction for improving semi-supervised classification[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 6493-6502.
39. CHEN J Y, REN D D, LI W B, et al. Lightweight knowledge distillation for few-shot learning[J]. Journal of Software, 2024, 35(5): 2414-2429.
40. WANG F, HAN Z Y, YIN Y L. Source free robust domain adaptation based on pseudo label uncertainty estimation[J]. Journal of Software, 2022, 33(4): 1183-1199.
41. ISCEN A, TOLIAS G, AVRITHIS Y, et al. Label propagation for deep semi-supervised learning[C]// Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 5065-5074.
42. SHCHUR O, MUMME M, BOJCHEVSKI A, et al. Pitfalls of graph neural network evaluation[EB/OL]. [2024-06-01].
43. McAULEY J, PANDEY R, LESKOVEC J. Inferring networks of substitutable and complementary products[C]// Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2015: 785-794.