Journal of Computer Applications ›› 2025, Vol. 45 ›› Issue (11): 3555-3563. DOI: 10.11772/j.issn.1001-9081.2024111587
• Artificial Intelligence •
Bingjie QIU1, Chaoqun ZHANG1,2, Weidong TANG1, Bicheng LIANG1, Danyang CUI1, Haisheng LUO1, Qiming CHEN1
Received:2024-11-11
Revised:2025-02-26
Accepted:2025-02-28
Online:2025-03-04
Published:2025-11-10
Contact: Chaoqun ZHANG
About author: QIU Bingjie, born in 2002, M. S. candidate. Her research interests include natural language processing.
Abstract: To address the problems of overlapping relation representations and incorrect relation predictions caused by similar entities or relations in Zero-Shot Relation Extraction (ZSRE), a Dual Contrastive Learning based Zero-Shot Relation Extraction (DCL-ZSRE) model was proposed. First, instances and relation descriptions were encoded by a pre-trained encoder to obtain their vector representations. Second, dual contrastive learning was designed to make relation representations more distinguishable: Instance-level Contrastive Learning (ICL) was applied to learn mutual information among instances; then the representations of instances and relation descriptions were concatenated, and Matching-level Contrastive Learning (MCL) was applied to learn the connections between instances and relation descriptions, thereby resolving the overlap of relation representations. Finally, unseen relations were predicted by a classification module on the basis of the representations learned through contrastive learning. Experimental results on the FewRel and Wiki-ZSL datasets show that DCL-ZSRE outperforms eight state-of-the-art baseline models markedly in precision, recall, and F1-score, especially when there are many unseen relation classes: with 15 unseen relation classes, compared with the EMMA (Efficient Multi-grained Matching Approach) model, DCL-ZSRE improves the three metrics by 4.76, 4.63, and 4.69 percentage points respectively on FewRel, and by 1.32, 2.20, and 1.76 percentage points on Wiki-ZSL. DCL-ZSRE can effectively distinguish overlapping relation representations and serves as an effective and robust ZSRE method.
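To make the dual objective concrete, the sketch below shows one way the two contrastive losses described in the abstract could be written as InfoNCE-style objectives, using the temperatures from Tab. 1 (instance-level 1, matching-level 0.02). This is an illustrative reconstruction, not the authors' code: the names `info_nce` and `dual_contrastive_loss`, and the way the loss weight α enters the total loss, are all assumptions.

```python
# A minimal sketch of the dual contrastive objective described in the
# abstract, assuming both levels use an InfoNCE-style loss. All names
# are ours, and the way the loss weight alpha enters the total loss
# is an assumption.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor, tau: float) -> torch.Tensor:
    """InfoNCE: row i of `anchor` should match row i of `positive`;
    the other rows in the batch serve as in-batch negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.T / tau                    # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

def dual_contrastive_loss(inst_a, inst_b, pair_repr, desc_repr, alpha=0.1):
    """Instance-level CL between two instances of the same relation, plus
    matching-level CL between concatenated instance-description pairs and
    relation descriptions; temperatures follow Tab. 1."""
    l_icl = info_nce(inst_a, inst_b, tau=1.0)             # instance level, tau = 1
    l_mcl = info_nce(pair_repr, desc_repr, tau=0.02)      # matching level, tau = 0.02
    return l_mcl + alpha * l_icl                          # loss weight 0.1 from Tab. 1
```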
Bingjie QIU, Chaoqun ZHANG, Weidong TANG, Bicheng LIANG, Danyang CUI, Haisheng LUO, Qiming CHEN. Zero-shot relation extraction model based on dual contrastive learning[J]. Journal of Computer Applications, 2025, 45(11): 3555-3563.
Tab. 1 Experimental parameter settings

| Parameter | Value |
|---|---|
| Training batch size (batch_size_train) | 32 |
| Validation batch size (batch_size_valid) | 640 |
| Test batch size (batch_size_test) | 640 |
| Number of epochs (epoch) | 5 |
| Maximum text length (max_length) | 128 |
| Learning rate (lr) | 0.000 02 |
| Number of candidate relations (k) | 2 |
| Instance-level temperature | 1 |
| Matching-level temperature | 0.02 |
| Loss weight α | 0.1 |
| Optimizer | AdamW[41] |
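As a reproduction aid, the settings in Tab. 1 map directly onto a standard PyTorch/Transformers training setup, sketched below. The checkpoint name `bert-base-uncased` is an assumption (this page states only that a pre-trained encoder, e.g. BERT[8], is used), and the snippet omits the data pipeline.

```python
# Illustrative training setup mirroring Tab. 1; the checkpoint name and
# the one-sentence dummy batch are assumptions, not the authors' code.
from torch.optim import AdamW                      # optimizer, AdamW[41]
from transformers import AutoModel, AutoTokenizer

MAX_LENGTH = 128                                   # maximum text length
LR = 2e-5                                          # learning rate 0.000 02
EPOCHS = 5                                         # number of epochs
BATCH_TRAIN, BATCH_EVAL = 32, 640                  # train / valid & test batch sizes

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
optimizer = AdamW(encoder.parameters(), lr=LR)

# Encoding one instance: (batch, seq_len, hidden) token representations.
batch = tokenizer(["Paris is the capital of France."], padding="max_length",
                  truncation=True, max_length=MAX_LENGTH, return_tensors="pt")
hidden = encoder(**batch).last_hidden_state
```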
Tab. 2 Performance comparison of experimental models on two datasets

| m | Model | FewRel P/% | FewRel R/% | FewRel F1/% | Wiki-ZSL P/% | Wiki-ZSL R/% | Wiki-ZSL F1/% |
|---|---|---|---|---|---|---|---|
| 5 | SUMASK[42] | 78.27 | 72.55 | 75.30 | 75.64 | 70.96 | 73.23 |
| 5 | ZS-BERT[10] | 76.96 | 78.86 | 77.90 | 71.54 | 72.39 | 71.96 |
| 5 | RelationPrompt[43] | 90.15 | 88.50 | 89.30 | 70.66 | 83.75 | 76.63 |
| 5 | RE-Matching[5] | 92.82 | 92.34 | 92.58 | 78.19 | 78.41 | 78.30 |
| 5 | EMMA[7] | 94.87 | 94.48 | 94.67 |  |  |  |
| 5 | IPPR[19] | 88.03 | 87.14 | 87.58 | 86.56 | 83.31 | 84.91 |
| 5 | AlignRE[44] | 93.30 | 92.90 | 93.09 | 83.11 | 80.30 | 81.64 |
| 5 | PromptMatch | 91.14 | 90.86 | 91.00 | 77.39 | 75.90 | 76.63 |
| 5 | DCL-ZSRE | 95.29 | 94.20 | 94.74 |  |  |  |
| 10 | SUMASK[42] | 64.77 | 60.94 | 62.80 | 62.31 | 61.08 | 61.69 |
| 10 | ZS-BERT[10] | 56.92 | 57.59 | 57.25 | 60.51 | 60.98 | 60.74 |
| 10 | RelationPrompt[43] | 80.33 | 79.62 | 79.96 | 68.51 | 74.76 | 71.50 |
| 10 | RE-Matching[5] | 83.21 | 82.64 | 82.93 | 74.39 | 73.54 | 73.96 |
| 10 | EMMA[7] |  |  |  |  |  |  |
| 10 | IPPR[19] | 65.65 | 63.19 | 64.39 | 69.21 | 70.70 | 69.95 |
| 10 | AlignRE[44] | 86.41 | 85.14 | 85.75 | 75.00 | 73.26 | 74.10 |
| 10 | PromptMatch | 83.05 | 82.55 | 82.80 | 71.86 | 71.14 | 71.50 |
| 10 | DCL-ZSRE | 88.66 | 87.50 | 88.08 | 86.48 | 87.41 | 86.94 |
| 15 | SUMASK[42] | 44.76 | 41.13 | 42.87 | 43.55 | 40.27 | 41.85 |
| 15 | ZS-BERT[10] | 35.54 | 38.19 | 36.82 | 34.12 | 34.38 | 34.25 |
| 15 | RelationPrompt[43] | 74.33 | 72.51 | 73.40 | 63.69 | 67.93 | 65.74 |
| 15 | RE-Matching[5] | 73.80 | 73.52 | 73.66 | 67.31 | 67.33 | 67.32 |
| 15 | EMMA[7] | 80.47 | 79.73 | 80.10 | 78.51 | 77.63 | 78.07 |
| 15 | IPPR[19] | 49.72 | 49.44 | 49.58 | 43.38 | 44.95 | 44.15 |
| 15 | AlignRE[44] | 77.63 | 77.00 | 77.31 | 69.01 | 67.52 | 68.26 |
| 15 | PromptMatch | 72.83 | 72.10 | 72.46 | 62.13 | 61.76 | 61.95 |
| 15 | DCL-ZSRE | 85.23 | 84.36 | 84.79 | 79.83 | 79.83 | 79.83 |
Tab. 3 Ablation experimental results (unit: %)

| Model | FewRel P | FewRel R | FewRel F1 | Wiki-ZSL P | Wiki-ZSL R | Wiki-ZSL F1 |
|---|---|---|---|---|---|---|
| w/o ICL | 87.97 | 87.02 | 87.49 | 86.30 | 86.74 | 86.51 |
| w/o MCL | 31.60 | 19.71 | 23.90 | 28.22 | 20.78 | 22.85 |
| w/o IMCL | 54.22 | 54.25 | 53.22 | 48.81 | 50.99 | 49.88 |
| w/o Class | 84.87 | 85.91 | 86.97 | 83.36 | 84.09 | 83.72 |
| DCL-ZSRE | 88.66 | 87.50 | 88.08 | 86.48 | 87.41 | 86.94 |
Tab. 4 Impact of hyperparameter α on F1-score of DCL-ZSRE model

| α | FewRel F1/% | Wiki-ZSL F1/% | α | FewRel F1/% | Wiki-ZSL F1/% |
|---|---|---|---|---|---|
| 0.10 | 88.078 | 86.940 | 0.33 | 87.982 | 86.934 |
| 0.20 | 87.990 | 86.844 | 0.50 | 87.984 | 86.874 |
Tab. 5 Comparison of parameter counts and average running time over five runs of DCL-ZSRE and EMMA on two datasets

| Model | FewRel time/h | FewRel params/10⁷ | Wiki-ZSL time/h | Wiki-ZSL params/10⁷ |
|---|---|---|---|---|
| EMMA | 1.379 | 220.739 | 2.386 | 220.739 |
| DCL-ZSRE | 1.380 | 224.284 | 2.404 | 224.284 |
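The parameter counts in Tab. 5 are reported in units of 10⁷. For any PyTorch model, a count in the same unit can be obtained with a small utility such as the one below (a generic helper, not the authors' measurement script):

```python
import torch.nn as nn

def param_count_1e7(model: nn.Module) -> float:
    """Number of trainable parameters, in units of 10^7 (the unit used in Tab. 5)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e7
```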
| [1] | XU L. Research on zero-shot relation triple extraction method for Chinese[D]. Beijing: Beijing Jiaotong University, 2023: 2-6. |
| [2] | CAUFIELD J H, HEGDE H, EMONET V, et al. Structured Prompt Interrogation and Recursive Extraction of Semantics (SPIRES): a method for populating knowledge bases using zero-shot learning[J]. Bioinformatics, 2024, 40(3): No.btae104. |
| [3] | XU L, ZHANG C, ZHANG N, et al. Zero-shot relation extraction model via multi-template fusion in Prompt[J]. Journal of Computer Applications, 2023, 43(12): 3668-3675. |
| [4] | GAUTAM S, POP R. FactGenius: combining zero-shot prompting and fuzzy relation mining to improve fact verification with knowledge graphs[C]// Proceedings of the 7th Fact Extraction and VERification Workshop. Stroudsburg: ACL, 2024: 297-306. |
| [5] | ZHAO J, ZHAN W, ZHAO X, et al. RE-Matching: a fine-grained semantic matching method for zero-shot relation extraction[C]// Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2023: 6680-6691. |
| [6] | PAN L H, LIU Y, XIE B H, et al. Multi-feature fusion and few-shot relation extraction based on semantic enhancement[J]. Application Research of Computers, 2022, 39(6): 1663-1667. |
| [7] | LI S, BAI G, ZHANG Z, et al. Fusion makes perfection: an efficient multi-grained matching approach for zero-shot relation extraction[C]// Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers). Stroudsburg: ACL, 2024: 79-85. |
| [8] | DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional Transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg: ACL, 2019: 4171-4186. |
| [9] | HAN X, ZHU H, YU P, et al. FewRel: a large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2018: 4803-4809. |
| [10] | CHEN C Y, LI C T. ZS-BERT: towards zero-shot relation extraction with attribute representation learning[C]// Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2021: 3470-3479. |
| [11] | LAN Z, CHEN M, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[EB/OL]. [2024-10-15]. |
| [12] | LAMPERT C H, NICKISCH H, HARMELING S. Attribute-based classification for zero-shot visual object categorization[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(3): 453-465. |
| [13] | LV F, ZHANG J, YANG G, et al. Learning cross-domain semantic-visual relationships for transductive zero-shot learning[J]. Pattern Recognition, 2023, 141: No.109591. |
| [14] | PRATT S, COVERT I, LIU R, et al. What does a platypus look like? generating customized prompts for zero-shot image classification[C]// Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2023: 15645-15655. |
| [15] | ZHANG K, JIMÉNEZ GUTIÉRREZ B, SU Y. Aligning instruction tasks unlocks large language models as zero-shot relation extractors[C]// Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg: ACL, 2023: 794-812. |
| [16] | PÀMIES M, LLOP J, MULTARI F, et al. A weakly supervised textual entailment approach to zero-shot text classification[C]// Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 286-296. |
| [17] | NAJAFI S, FYSHE A. Weakly-supervised questions for zero-shot relation extraction[C]// Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg: ACL, 2023: 3075-3087. |
| [18] | XU L, BU X, TIAN X. Dynamic prompt-driven zero-shot relation extraction[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024, 32: 2900-2912. |
| [19] | DUAN B, LIU X, WANG S, et al. Relational representation learning for zero-shot relation extraction with instance prompting and prototype rectification[C]// Proceedings of the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2023: 1-5. |
| [20] | GONG J, ELDARDIRY H. Zero-shot relation classification from side information[C]// Proceedings of the 30th ACM International Conference on Information and Knowledge Management. New York: ACM, 2021: 576-585. |
| [21] | ZHANG B, XU Y, LI J, et al. SMDM: tackling zero-shot relation extraction with semantic max-divergence metric learning[J]. Applied Intelligence, 2023, 53(6): 6569-6584. |
| [22] | JU S G, HUANG F Y, SUN J P. Idiom cloze algorithm integrating with pre-trained language model[J]. Journal of Software, 2022, 33(10): 3793-3805. |
| [23] | LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB/OL]. [2024-10-15]. |
| [24] | JOSHI M, CHEN D, LIU Y, et al. SpanBERT: improving pre-training by representing and predicting spans[J]. Transactions of the Association for Computational Linguistics, 2020, 8: 64-77. |
| [25] | MÜLLER M, SALATHÉ M, KUMMERVOLD P E. COVID-Twitter-BERT: a natural language processing model to analyse COVID-19 content on Twitter[J]. Frontiers in Artificial Intelligence, 2023, 6: No.1023281. |
| [26] | PEETERS R, BIZER C. Dual-objective fine-tuning of BERT for entity matching[J]. Proceedings of the VLDB Endowment, 2021, 14(10): 1913-1921. |
| [27] | BELLO A, NG S C, LEUNG M F. A BERT framework to sentiment analysis of Tweets[J]. Sensors, 2023, 23(1): No.506. |
| [28] | LIAO W, LIU Z, DAI H, et al. Mask-guided BERT for few-shot text classification[J]. Neurocomputing, 2024, 610: No.128576. |
| [29] | HU H, WANG X, ZHANG Y, et al. A comprehensive survey on contrastive learning[J]. Neurocomputing, 2024, 610: No.128645. |
| [30] | CHEN X, FAN H, GIRSHICK R, et al. Improved baselines with momentum contrastive learning[EB/OL]. [2024-10-16]. |
| [31] | ZHU X Y, KONG B, CHEN H M, et al. Deep graph clustering with hard sample sampling joint contrastive augmentation[J]. Application Research of Computers, 2024, 41(6): 1769-1777. |
| [32] | CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[C]// Proceedings of the 37th International Conference on Machine Learning. New York: JMLR.org, 2020: 1597-1607. |
| [33] | YANG Z, FEI J, TAN Z, et al. CL&CD: contrastive learning and cluster description for zero-shot relation extraction[J]. Knowledge-Based Systems, 2024, 293: No.111652. |
| [34] | ZHANG K, WU L, LV G, et al. Description-enhanced label embedding contrastive learning for text classification[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(10): 14889-14902. |
| [35] | ZHAO J T, LI G Z, WANG P, et al. Continual relation extraction via supervised contrastive replay[J]. Journal of Chinese Information Processing, 2023, 37(11): 60-67, 80. |
| [36] | WANG S, ZHANG B, XU Y, et al. RCL: relation contrastive learning for zero-shot relation extraction[C]// Findings of the Association for Computational Linguistics: NAACL 2022. Stroudsburg: ACL, 2022: 2456-2468. |
| [37] | LUO D, GAN Y, HOU R, et al. Synergistic anchored contrastive pre-training for few-shot relation extraction[C]// Proceedings of the 38th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2024: 18742-18750. |
| [38] | GAO Y, JI T, WU Y B, et al. Robust few shot event detection based on label augmentation and contrastive learning[J]. Journal of Chinese Information Processing, 2023, 37(4): 98-108. |
| [39] | LIN Z, FENG M, DOS SANTOS C N, et al. A structured self-attentive sentence embedding[EB/OL]. [2024-10-15]. |
| [40] | VAN DEN OORD A, LI Y, VINYALS O. Representation learning with contrastive predictive coding[EB/OL]. [2024-10-19]. |
| [41] | LOSHCHILOV I, HUTTER F. Decoupled weight decay regularization[EB/OL]. [2024-10-15]. |
| [42] | LI G, WANG P, KE W. Revisiting large language models as zero-shot relation extractors[C]// Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg: ACL, 2023: 6877-6892. |
| [43] | CHIA Y K, BING L, PORIA S, et al. RelationPrompt: leveraging prompts to generate synthetic data for zero-shot relation triplet extraction[C]// Findings of the Association for Computational Linguistics: ACL 2022. Stroudsburg: ACL, 2022: 45-57. |
| [44] | LI Z, ZHANG F, CHENG J. AlignRE: an encoding and semantic alignment approach for zero-shot relation extraction[C]// Findings of the Association for Computational Linguistics: ACL 2024. Stroudsburg: ACL, 2024: 2957-2966. |
| [45] | SAINZ O, LOPEZ DE LACALLE O, LABAKA G, et al. Label verbalization and entailment for effective zero- and few-shot relation extraction[C]// Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2021: 1199-1212. |