Journal of Computer Applications ›› 2023, Vol. 43 ›› Issue (5): 1438-1444. DOI: 10.11772/j.issn.1001-9081.2022040625
Special Issue: Artificial Intelligence
Joint entity and relation extraction based on contextual semantic enhancement
Jingsheng LEI1, Kaijun LA1, Shengying YANG1, Yi WU2
Received: 2022-05-07
Revised: 2022-07-28
Accepted: 2022-08-02
Online: 2022-09-29
Published: 2023-05-10
Contact: Shengying YANG
About author: LEI Jingsheng, born in 1966, Ph. D., professor. His research interests include data science and big data, machine learning, and artificial intelligence.
Jingsheng LEI, Kaijun LA, Shengying YANG, Yi WU. Joint entity and relation extraction based on contextual semantic enhancement[J]. Journal of Computer Applications, 2023, 43(5): 1438-1444.
URL: https://www.joca.cn/EN/10.11772/j.issn.1001-9081.2022040625
| Dataset | Model | Entity recognition P | R | F1 | Relation extraction P | R | F1 |
|---|---|---|---|---|---|---|---|
| CoNLL04 | Relation-Metric | 84.46 | 84.67 | 84.57 | 67.97 | 58.18 | 62.68 |
| | MTQA | 89.00 | 86.60 | 87.80 | 69.20 | 68.20 | 68.90 |
| | SpERT | 88.25 | 89.64 | 88.94 | 73.04 | 70.00 | 71.47 |
| | ERIGAT | 89.88 | 83.97 | 86.82 | 75.87 | 68.02 | 71.73 |
| | eRPR MHS | 86.85 | 85.62 | 86.23 | 64.20 | 64.69 | 64.44 |
| | MRC4ERE++ | 89.04 | 87.99 | 88.51 | 72.22 | 70.75 | 71.48 |
| | TriMF | 89.35 | 89.22 | 89.28 | 72.64 | 70.72 | 71.67 |
| | JERCE | 90.23 | 90.42 | 90.32 | 74.98 | 70.86 | 72.86 |
| ADE | Relation-Metric | 86.16 | 88.08 | 87.11 | 77.36 | 77.25 | 77.29 |
| | SpERT | 88.99 | 89.59 | 89.28 | 77.77 | 79.96 | 78.84 |
| | ERIGAT | 90.53 | 87.12 | 88.79 | 84.71 | 75.79 | 80.09 |
| | eRPR MHS | 86.65 | 86.03 | 86.34 | 74.35 | 86.12 | 79.80 |
| | TriMF | 89.08 | 90.27 | 89.67 | 74.19 | 84.38 | 78.96 |
| | JERCE | 89.99 | 89.62 | 89.80 | 79.22 | 80.99 | 80.10 |
| ACE05 | MRC4ERE++ | 87.03 | 86.96 | 86.99 | 62.02 | 62.31 | 62.16 |
| | SpERT | 85.48 | 84.77 | 85.12 | 61.32 | 60.03 | 60.67 |
| | MTQA | 84.70 | 84.90 | 84.80 | 64.80 | 56.20 | 60.20 |
| | eRPR MHS | 86.26 | 84.66 | 85.45 | 60.60 | 60.84 | 60.72 |
| | TriMF | 87.66 | 87.47 | 87.56 | 61.98 | 62.87 | 62.42 |
| | JERCE | 89.25 | 90.11 | 89.68 | 66.05 | 59.97 | 62.86 |
Tab. 1 Experimental results of different models on CoNLL04, ADE and ACE05
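Each task block in Tab. 1 reports precision (P), recall (R) and F1, so the third value of each triple should be the harmonic mean of the first two; for instance, JERCE's CoNLL04 entity scores of 90.23 and 90.42 give 2 × 90.23 × 90.42 / (90.23 + 90.42) ≈ 90.32, matching the table. The snippet below is a minimal, illustrative sketch of how such micro-averaged scores are typically computed from predicted and gold tuples; the function name and the tuple layout in the docstring are assumptions for illustration, not the paper's evaluation code.

```python
def precision_recall_f1(predicted, gold):
    """Micro-averaged P/R/F1 over sets of predicted and gold tuples.

    `predicted` and `gold` are sets of hashable items, e.g.
    (sentence_id, start, end, entity_type) for entity recognition or
    (sentence_id, head_span, tail_span, relation_type) for relation
    extraction. The representation is illustrative, not the paper's code.
    """
    tp = len(predicted & gold)                      # correctly predicted tuples
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```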
| Ablation method | Entity recognition F1 | Relation extraction F1 |
|---|---|---|
| JERCE | 89.68 | 62.86 |
| -ContextEnhanced | 89.01 | 60.77 |
| -SentenceEnhanced | 87.75 | 61.58 |
| both | 86.80 | 59.89 |
Tab. 2 F1 values in semantic enhancement ablation experiments
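Tab. 2 indicates that removing either the context-level or the sentence-level semantic enhancement lowers both F1 scores, and removing both is worst. The cited contrastive representation learning work ([12]-[14] in the reference list) suggests one plausible form of such an enhancement objective; as a hedged illustration only, the sketch below shows an InfoNCE-style contrastive loss in PyTorch, a common way sentence- or context-level representation objectives are implemented. The function name, temperature value and batch layout are assumptions, not the paper's actual module.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of representation pairs.

    anchor, positive: (batch, dim) tensors whose i-th rows are two views of
    the same sentence/context; all other rows in the batch act as negatives.
    Illustrative only; the paper's semantic enhancement module may differ.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)             # i-th anchor should match i-th positive
```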
| Weighted loss | Entity recognition F1 | Relation extraction F1 |
|---|---|---|
| JERCE | 89.68 | 62.86 |
| - (without weighted loss) | 88.94 | 61.12 |
Tab. 3 Influence of weighted loss on model F1 values
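Tab. 3 shows that weighting the two task losses improves both entity recognition and relation extraction over the unweighted combination. Reference [15] (multi-task learning using uncertainty to weigh losses) offers one standard scheme for learning such weights; the sketch below is a minimal PyTorch illustration of that scheme under the assumption that the two task losses are the entity-recognition and relation-extraction losses. Class and attribute names are illustrative, not the paper's exact formulation.

```python
import torch
from torch import nn

class UncertaintyWeightedLoss(nn.Module):
    """Weighs two task losses by learned log-variances (cf. reference [15]).

    total = exp(-s_ner) * L_ner + s_ner + exp(-s_re) * L_re + s_re,
    where s_* = log(sigma_*^2) are trainable scalars. Illustrative sketch only.
    """
    def __init__(self):
        super().__init__()
        self.log_var_ner = nn.Parameter(torch.zeros(()))   # s_ner, learned during training
        self.log_var_re = nn.Parameter(torch.zeros(()))    # s_re, learned during training

    def forward(self, loss_ner: torch.Tensor, loss_re: torch.Tensor) -> torch.Tensor:
        return (torch.exp(-self.log_var_ner) * loss_ner + self.log_var_ner
                + torch.exp(-self.log_var_re) * loss_re + self.log_var_re)
```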
| Error type | Error example |
|---|---|
| Boundary ambiguity | |
| Logical error | Miller is also scheduled to meet with Crimean Deputy |
| Missing logic | |
Tab. 4 Common error examples
References
[1] E H H, ZHANG W J, XIAO S Q, et al. Survey of entity relationship extraction based on deep learning [J]. Journal of Software, 2019, 30(6): 1793-1818 (in Chinese). DOI: 10.13328/j.cnki.jos.005817.
[2] CHI R J, WU B, HU L M, et al. Enhancing joint entity and relation extraction with language modeling and hierarchical attention [C]// Proceedings of the 2019 Asia-Pacific Web and Web-Age Information Management Joint International Conference on Web and Big Data, LNCS 11641. Cham: Springer, 2019: 314-328.
[3] EBERTS M, ULGES A. Span-based joint entity and relation extraction with Transformer pre-training [C]// Proceedings of the 24th European Conference on Artificial Intelligence. Amsterdam: IOS Press, 2020: 2006-2013. DOI: 10.18653/v1/2021.eacl-main.319.
[4] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations [C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA: ACL, 2018: 2227-2237. DOI: 10.18653/v1/n18-1202.
[5] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding [C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: ACL, 2019: 4171-4186. DOI: 10.18653/v1/n18-2.
[6] ZHANG S W, WANG X, CHEN Z R, et al. Survey of supervised joint entity relation extraction methods [J]. Journal of Frontiers of Computer Science and Technology, 2022, 16(4): 713-733 (in Chinese). DOI: 10.3778/j.issn.1673-9418.2107114.
[7] GUPTA P, SCHÜTZE H, ANDRASSY B. Table filling multi-task recurrent neural network for joint entity and relation extraction [C]// Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers. [S.l.]: The COLING 2016 Organizing Committee, 2016: 2537-2547.
[8] ZHAO T Y, YAN Z, CAO Y B, et al. Entity relative position representation based multi-head selection for joint entity and relation extraction [C]// Proceedings of the 19th Chinese National Conference on Computational Linguistics. Beijing: Chinese Information Processing Society of China, 2020: 962-973. DOI: 10.1007/978-3-030-63031-7_14.
[9] SUI D B, CHEN Y B, LIU K, et al. Joint entity and relation extraction with set prediction networks [EB/OL]. (2020-11-05) [2022-03-20]. DOI: 10.1109/tnnls.2023.3264735.
[10] SHEN Y L, MA X Y, TANG Y C, et al. A trigger-sense memory flow framework for joint entity and relation extraction [C]// Proceedings of the Web Conference 2021. New York: ACM, 2021: 1704-1715. DOI: 10.1145/3442381.3449895.
[11] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality [C]// Proceedings of the 26th International Conference on Neural Information Processing Systems, Volume 2. Red Hook, NY: Curran Associates Inc., 2013: 3111-3119.
[12] SAUNSHI N, PLEVRAKIS O, ARORA S, et al. A theoretical analysis of contrastive unsupervised representation learning [C]// Proceedings of the 36th International Conference on Machine Learning. New York: JMLR.org, 2019: 5628-5637.
[13] FANG H C, WANG S C, ZHOU M, et al. CERT: contrastive self-supervised learning for language understanding [EB/OL]. (2020-06-18) [2022-03-20]. DOI: 10.36227/techrxiv.12308378.v1.
[14] ITER D, GUU K, LANSING L, et al. Pretraining with contrastive sentence objectives improves discourse performance of language models [C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 4859-4870. DOI: 10.18653/v1/2020.acl-main.439.
[15] CIPOLLA R, GAL Y, KENDALL A. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics [C]// Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2018: 7482-7491. DOI: 10.1109/cvpr.2018.00781.
[16] DODDINGTON G, MITCHELL A, PRZYBOCKI M, et al. The Automatic Content Extraction (ACE) program — tasks, data, and evaluation [C]// Proceedings of the 4th International Conference on Language Resources and Evaluation. Paris: European Language Resources Association, 2004: 837-840.
[17] ROTH D, YIH W T. A linear programming formulation for global inference in natural language tasks [C]// Proceedings of the 8th Conference on Computational Natural Language Learning at HLT-NAACL 2004. Stroudsburg, PA: ACL, 2004: 1-8.
[18] GURULINGAPPA H, RAJPUT A M, ROBERTS A, et al. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports [J]. Journal of Biomedical Informatics, 2012, 45(5): 885-892. DOI: 10.1016/j.jbi.2012.04.008.
[19] LI Q, JI H. Incremental joint extraction of entity mentions and relations [C]// Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: ACL, 2014: 402-412. DOI: 10.3115/v1/p14-1038.
[20] TRAN T, KAVULURU R. Neural metric learning for fast end-to-end relation extraction [EB/OL]. (2019-08-27) [2022-03-20]. DOI: 10.1093/database/bay092.
[21] LI X Y, YIN F, SUN Z J, et al. Entity-relation extraction as multi-turn question answering [C]// Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2019: 1340-1350. DOI: 10.18653/v1/p19-1129.
[22] LAI Q H, ZHOU Z H, LIU S. Joint entity-relation extraction via improved graph attention networks [J]. Symmetry, 2020, 12(10): No.1746. DOI: 10.3390/sym12101746.
[23] ZHAO T Y, YAN Z, CAO Y B, et al. Asking effective and diverse questions: a machine reading comprehension based framework for joint entity-relation extraction [C]// Proceedings of the 29th International Joint Conference on Artificial Intelligence. California: ijcai.org, 2021: 3948-3954. DOI: 10.24963/ijcai.2020/546.