Journal of Computer Applications (official website) ›› 2024, Vol. 44 ›› Issue (7): 2018-2025. DOI: 10.11772/j.issn.1001-9081.2023071051
Dianhui MAO1,2, Xuebo LI1, Junling LIU1, Denghui ZHANG1, Wenjing YAN2
Received:
2023-08-03
Revised:
2023-09-16
Accepted:
2023-09-21
Online:
2023-10-26
Published:
2024-07-10
Contact:
Wenjing YAN
About author:
MAO Dianhui, born in 1979, Ph.D., professor. His research interests include blockchain, smart financial technology and food safety, and deep learning.
Abstract:
In recent years, with the rapid development of deep learning, entity and relation extraction has made remarkable progress in many fields. However, because Chinese has complex syntactic structures and semantic relations, Chinese entity and relation extraction still faces multiple challenges, among which the overlapping-triple problem in Chinese text is one of the key difficulties. To address this problem, a Hybrid Neural Network Entity and Relation Joint Extraction (HNNERJE) model was proposed. HNNERJE fuses a sequential attention mechanism and a heterogeneous graph attention mechanism in parallel, and combines them with a gated fusion strategy to build a deeply integrated framework. The model not only captures both the word-order information and the entity-association information of Chinese text simultaneously, but also adaptively adjusts the outputs of the subject and object taggers, thereby effectively solving the overlapping-triple problem. In addition, an adversarial training algorithm was introduced to improve the model's robustness to unseen samples and noise. The SHAP (SHapley Additive exPlanations) method was applied to interpret HNNERJE, analyzing, on the basis of the model's predictions, the key features it relies on when extracting entities and relations. HNNERJE achieves F1 scores of 92.17%, 93.42%, 47.40%, and 67.98% on the NYT, WebNLG, CMeIE, and DuIE datasets, respectively. The experimental results show that HNNERJE can transform unstructured text into structured knowledge representations and effectively extract the valuable information contained in it.
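The gated fusion of the two parallel branches described above can be sketched as follows. This is a minimal NumPy illustration, assuming the sequential-attention branch and the heterogeneous-graph branch each produce a token-level hidden matrix of the same width; all names and dimensions here are hypothetical, not the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_seq, h_graph, W, b):
    """Fuse two parallel encoders with a learned gate:
    g = sigmoid([h_seq; h_graph] W + b);  h = g * h_seq + (1 - g) * h_graph."""
    g = sigmoid(np.concatenate([h_seq, h_graph], axis=-1) @ W + b)
    return g * h_seq + (1.0 - g) * h_graph

rng = np.random.default_rng(0)
n_tokens, d = 4, 8
h_seq = rng.normal(size=(n_tokens, d))    # sequential-attention branch output
h_graph = rng.normal(size=(n_tokens, d))  # heterogeneous-graph branch output
W = rng.normal(size=(2 * d, d)) * 0.1     # gate weights (learned in practice)
b = np.zeros(d)
h = gated_fusion(h_seq, h_graph, W, b)
print(h.shape)  # (4, 8)
```

Because the gate lies strictly in (0, 1), each fused value is a convex combination of the two branch values, which is what lets the model adaptively weight word-order against entity-association information per token and per dimension.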
Dianhui MAO, Xuebo LI, Junling LIU, Denghui ZHANG, Wenjing YAN. Chinese entity and relation extraction model based on parallel heterogeneous graph and sequential attention mechanism[J]. Journal of Computer Applications, 2024, 44(7): 2018-2025.
| Dataset | Training samples | Test samples | Relation types |
| --- | --- | --- | --- |
| NYT | 61 194 | 5 000 | 24 |
| WebNLG | 5 519 | 703 | 171 |
| CMeIE | 14 339 | 3 585 | 53 |
| DuIE | 18 606 | 2 067 | 48 |

Tab. 1 Statistics of dataset size
| Overlapping triple type | NYT train | NYT test | WebNLG train | WebNLG test | CMeIE train | CMeIE test | DuIE train | DuIE test |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Total | 61 194 | 5 000 | 5 519 | 703 | 14 339 | 3 585 | 18 606 | 2 067 |
| Normal | 40 718 | 3 266 | 1 930 | 246 | 5 508 | 1 425 | 11 391 | 1 274 |
| EPO | 10 631 | 978 | 243 | 26 | 189 | 40 | 722 | 83 |
| SEO | 9 845 | 1 297 | 3 346 | 457 | 8 642 | 2 120 | 6 493 | 710 |

Tab. 2 Statistics of triple types in datasets
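The Normal/EPO/SEO categories above follow the usual convention in overlapping-triple work: a sentence is EPO (EntityPairOverlap) if two of its triples share the same entity pair, SEO (SingleEntityOverlap) if two triples share exactly one entity, and Normal otherwise. A small sketch of that classification; exact tie-breaking rules vary between papers, so this is an assumption, not the paper's code:

```python
def classify_overlap(triples):
    """triples: list of (subject, relation, object) tuples for one sentence.
    Returns 'EPO', 'SEO', or 'Normal' (EPO checked first, as is customary)."""
    for i, (s1, _, o1) in enumerate(triples):
        for s2, _, o2 in triples[i + 1:]:
            if {s1, o1} == {s2, o2}:          # same entity pair -> EPO
                return "EPO"
    for i, (s1, _, o1) in enumerate(triples):
        for s2, _, o2 in triples[i + 1:]:
            if {s1, o1} & {s2, o2}:           # exactly one shared entity -> SEO
                return "SEO"
    return "Normal"

print(classify_overlap([("A", "r1", "B"), ("B", "r2", "A")]))  # EPO
print(classify_overlap([("A", "r1", "B"), ("A", "r2", "C")]))  # SEO
print(classify_overlap([("A", "r1", "B")]))                    # Normal
```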
| Model | Weight decay | Batch size | Learning rate | Epochs | Dropout rate | Word embedding dim | Relation embedding dim | Sentence length | Optimizer |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CasRel[12] | — | 6 | 1.00×10⁻⁵ | 150 | — | 768 | 768 | 150/200/300 | Adam |
| RIFRE[13] | 1.00×10⁻⁵ | 6 | 1.00×10⁻¹ | 150 | — | 768 | 768 | 150/200/300 | SGD |
| HNNERJE | 1.00×10⁻⁵ | 6 | 1.00×10⁻¹ | 150 | 0.5 | 768 | 768 | 150/200/300 | SGD |

Tab. 3 Details of experimental parameters
| Model | NYT F1 | NYT P | NYT R | WebNLG F1 | WebNLG P | WebNLG R | CMeIE F1 | CMeIE P | CMeIE R | DuIE F1 | DuIE P | DuIE R |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CasRel[12] | 89.60 | 89.70 | 89.50 | 91.80 | 93.40 | 90.10 | — | — | — | — | — | — |
| RIFRE[13] | 92.00 | 93.60 | 90.50 | 92.60 | 93.30 | 92.00 | — | — | — | — | — | — |
| CasRel* | 89.96 | 88.79 | 89.38 | 91.15 | 92.19 | 91.11 | 44.69 | 44.38 | 45.01 | 66.35 | 66.91 | 65.80 |
| RIFRE* | 91.78 | 91.63 | 91.94 | 92.38 | 92.75 | 92.00 | 45.96 | 51.47 | 41.52 | 66.96 | 68.92 | 65.11 |
| HNNERJE w/o adversarial training | 91.60 | 91.68 | 91.51 | 92.45 | 93.00 | 91.91 | 47.12 | 47.59 | 46.55 | 66.41 | 64.43 | 68.21 |
| HNNERJE | 92.17 | 92.96 | 91.48 | 93.42 | 93.13 | 93.71 | 47.40 | 48.49 | 46.37 | 67.98 | 71.35 | 67.91 |

Tab. 4 Comparison of experimental results (%) of different models on WebNLG, NYT, CMeIE and DuIE datasets
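Precision, recall, and F1 in the tables above are the standard exact-match triple metrics: a predicted triple counts as correct only if subject, relation, and object all match a gold triple. A minimal sketch of that computation, illustrative only and not the paper's evaluation script:

```python
def triple_prf(predicted, gold):
    """Exact-match micro metrics over sets of (subject, relation, object) triples."""
    pred, gold = set(predicted), set(gold)
    tp = len(pred & gold)                      # true positives: exact triple matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

pred = [("A", "r1", "B"), ("A", "r2", "C"), ("D", "r1", "B")]
gold = [("A", "r1", "B"), ("A", "r2", "C"), ("E", "r3", "F"), ("G", "r1", "B")]
p, r, f1 = triple_prf(pred, gold)
print(round(p, 2), round(r, 2), round(f1, 3))  # 0.67 0.5 0.571
```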
| Model | N (triples per sentence) | NYT | WebNLG | CMeIE | DuIE |
| --- | --- | --- | --- | --- | --- |
| CasRel[12] | 1 | 88.20 | 89.30 | — | — |
| | 2 | 90.30 | 90.80 | — | — |
| | 3 | 91.90 | 94.20 | — | — |
| | 4 | 94.20 | 92.40 | — | — |
| | ≥5 | 83.70 | 90.90 | — | — |
| RIFRE[13] | 1 | 90.70 | 90.20 | — | — |
| | 2 | 92.80 | 92.00 | — | — |
| | 3 | 93.40 | 94.80 | — | — |
| | 4 | 94.80 | 93.00 | — | — |
| | ≥5 | 89.60 | 92.00 | — | — |
| CasRel* | 1 | 88.25 | 88.85 | 32.84 | 65.31 |
| | 2 | 90.83 | 90.59 | 39.81 | 66.16 |
| | 3 | 92.58 | 93.83 | 42.47 | 67.12 |
| | 4 | 94.46 | 92.16 | 49.17 | 69.56 |
| | ≥5 | 83.48 | 90.01 | 47.12 | 65.31 |
| RIFRE* | 1 | 90.20 | 89.02 | 35.30 | 65.51 |
| | 2 | 92.43 | 91.59 | 41.85 | 66.50 |
| | 3 | 92.96 | 94.41 | 43.26 | 66.87 |
| | 4 | 95.06 | 93.35 | 50.40 | 67.99 |
| | ≥5 | 89.63 | 91.95 | 50.16 | 68.61 |
| HNNERJE | 1 | 90.92 | 90.24 | 35.99 | 66.95 |
| | 2 | 93.04 | 92.17 | 42.68 | 68.05 |
| | 3 | 93.50 | 95.06 | 45.26 | 67.70 |
| | 4 | 95.90 | 94.44 | 51.39 | 69.65 |
| | ≥5 | 90.28 | 92.38 | 52.58 | 69.08 |

Tab. 5 F1 scores (%) for extracting triples from sentences with different numbers of triples
| Model | Iteration layers | NYT | WebNLG | CMeIE | DuIE |
| --- | --- | --- | --- | --- | --- |
| HNNERJE w/o adversarial training | 1 | 91.49 | 92.21 | 46.35 | 64.14 |
| | 2 | 91.51 | 92.36 | 47.12 | 64.77 |
| | 3 | 91.46 | 92.45 | 46.38 | 66.41 |
| | 4 | 91.60 | 92.39 | 46.67 | 65.98 |
| HNNERJE | 1 | 92.06 | 92.25 | 47.06 | 66.42 |
| | 2 | 91.68 | 93.42 | 46.75 | 67.98 |
| | 3 | 92.17 | 93.06 | 47.02 | 65.98 |
| | 4 | 91.99 | 92.46 | 47.40 | 66.94 |

Tab. 6 F1 scores (%) of HNNERJE model with and without adversarial training in extracting relational triples
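Adversarial training of the kind compared above typically perturbs the input embeddings in the direction of the loss gradient (FGM-style: r = ε·g/‖g‖₂) before a second forward-backward pass. A NumPy sketch of the perturbation step; the names and the toy squared loss are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fgm_perturbation(grad, epsilon=1.0):
    """FGM-style adversarial perturbation: r = epsilon * g / ||g||_2."""
    norm = np.linalg.norm(grad)
    return epsilon * grad / norm if norm > 0 else np.zeros_like(grad)

# Toy example: gradient of a squared loss w.r.t. an embedding vector.
emb = np.array([1.0, -2.0, 0.5])
target = np.zeros(3)
grad = 2 * (emb - target)              # d/d_emb of ||emb - target||^2
r = fgm_perturbation(grad, epsilon=0.1)
emb_adv = emb + r                      # perturbed embedding for the second pass
print(np.round(np.linalg.norm(r), 3))  # 0.1
```

Normalizing the gradient keeps the perturbation's L2 norm fixed at ε, so the "attack" strength is controlled independently of the loss scale, which is why such training improves robustness to noise without destabilizing optimization.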
[1] CUI L, WU Y, LIU J, et al. Template-based named entity recognition using BART [EB/OL]. (2021-06-03) [2023-09-14].
[2] LI T H, HUO Q R, YAN Y, et al. Chinese relation extraction model based on ERNIE and attention mechanism [J]. Journal of Chinese Computer Systems, 2022, 43(6): 1226-1231.
[3] TUO M, YANG W. Review of entity relation extraction [J]. Journal of Intelligent & Fuzzy Systems, 2023, 44(5): 7391-7405.
[4] GARDENT C, SHIMORINA A, NARAYAN S, et al. Creating training corpora for NLG micro-planning [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2017: 179-188.
[5] RIEDEL S, YAO L, McCALLUM A. Modeling relations and their mentions without labeled text [C]// Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases. Berlin: Springer, 2010: 148-163.
[6] ZHENG S, WANG F, BAO H, et al. Joint extraction of entities and relations based on a novel tagging scheme [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2017: 1227-1236.
[7] ZENG X, ZENG D, HE S, et al. Extracting relational facts by an end-to-end neural model with copy mechanism [C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2018: 506-514.
[8] FU T-J, LI P-H, MA W-Y. GraphRel: modeling text as relational graphs for joint entity and relation extraction [C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2019: 1409-1418.
[9] ZHAI S P, BAI X X, ZHANG Y H, et al. Triple extraction combining dependency analysis and graph attention network [J]. Computer Engineering and Applications, 2023, 59(12): 148-156.
[10] ZHU X B, ZHOU G, CHEN J, et al. Single-stage joint entity and relation extraction method based on enhanced sequence annotation strategy [J]. Computer Science, 2023, 50(8): 184-192.
[11] BEKOULIS G, DELEU J, DEMEESTER T, et al. Joint entity recognition and relation extraction as a multi-head selection problem [J]. Expert Systems with Applications, 2018, 114: 34-45.
[12] WEI Z, SU J, WANG Y, et al. A novel cascade binary tagging framework for relational triple extraction [C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: ACL, 2020: 1476-1488.
[13] ZHAO K, XU H, CHENG Y, et al. Representation iterative fusion based on heterogeneous graph neural network for joint entity and relation extraction [J]. Knowledge-Based Systems, 2021, 219: 106888.
[14] MIWA M, BANSAL M. End-to-end relation extraction using LSTMs on sequences and tree structures [C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2016: 1105-1116.
[15] KATIYAR A, CARDIE C. Going out on a limb: joint extraction of entity mentions and relations without dependency trees [C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: ACL, 2017: 917-928.
[16] MIWA M, SASAKI Y. Modeling joint entity and relation extraction with table representation [C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: ACL, 2014: 1858-1869.
[17] GUPTA P, SCHÜTZE H, ANDRASSY B. Table filling multi-task recurrent neural network for joint entity and relation extraction [C]// Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers. Stroudsburg: ACL, 2016: 2537-2547.
[18] HONG Y, LIU Y, YANG S, et al. Improving graph convolutional networks based on relation-aware attention for end-to-end relation extraction [J]. IEEE Access, 2020, 8: 51315-51323.
[19] WANG X, JI H, SHI C, et al. Heterogeneous graph attention network [C]// Proceedings of the 2019 World Wide Web Conference. New York: ACM, 2019: 2022-2032.
[20] CHEN H, HONG P, HAN W, et al. Dialogue relation extraction with document-level heterogeneous graph attention networks [J]. Cognitive Computation, 2023, 15: 793-802.
[21] KAMBER M E Z N, ESMAEILZADEH A, TAGHVA K. Chemical-gene relation extraction with graph neural networks and BERT encoder [C]// Proceedings of the 2022 International Conference on Innovations in Computing Research. Cham: Springer, 2022: 166-179.
[22] QIN Y, CARLINI N, COTTRELL G, et al. Imperceptible, robust, and targeted adversarial examples for automatic speech recognition [C]// Proceedings of the 36th International Conference on Machine Learning. New York: PMLR, 2019: 5231-5240.
[23] CHEN H, LU G, WU X, et al. Joint extraction of entities and relations by adversarial training and mixup data augmentation [C]// Proceedings of the 2021 7th International Conference on Computer and Communications. Piscataway: IEEE, 2021: 1486-1490.
[24] DEVLIN J, CHANG M-W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding [EB/OL]. (2018-10-12) [2023-09-14].
[25] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 770-778.
[26] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook: Curran Associates Inc., 2017: 6000-6010.
[27] LI S J, HE W, SHI Y B, et al. DuIE: a large-scale Chinese dataset for information extraction [C]// Proceedings of the 8th CCF International Conference on Natural Language Processing and Chinese Computing. Cham: Springer, 2019: 791-800.
[28] GUAN T F, ZAN H Y, ZHOU X B, et al. CMeIE: construction and evaluation of Chinese medical information extraction dataset [C]// Proceedings of the 9th CCF International Conference on Natural Language Processing and Chinese Computing. Cham: Springer, 2020: 270-282.