Journal of Computer Applications ›› 2021, Vol. 41 ›› Issue (12): 3637-3644.DOI: 10.11772/j.issn.1001-9081.2021010090
• Artificial intelligence •
Bowen YAO, Biqing ZENG, Jian CAI, Meirong DING
Received: 2021-01-18
Revised: 2021-04-27
Accepted: 2021-04-29
Online: 2021-12-28
Published: 2021-12-10
Contact: Biqing ZENG
About author: YAO Bowen, born in 1997 in Ganzhou, Jiangxi, M. S. candidate, CCF member. His research interests include natural language processing and relation extraction.
Supported by:
CLC Number:
Bowen YAO, Biqing ZENG, Jian CAI, Meirong DING. Chinese character relation extraction model based on pre-training and multi-level information[J]. Journal of Computer Applications, 2021, 41(12): 3637-3644.
URL: http://www.joca.cn/EN/10.11772/j.issn.1001-9081.2021010090
| Parameter | Value |
| --- | --- |
| Batch size | 32 |
| Maximum text length | 85 |
| Learning rate | 5E-5 |
| Training epochs | 10 |
| Dropout rate | 0.3 |
| BiLSTM hidden dimension | 768 |
| BiLSTM layers | 2 |
| Neighboring word window size | 1 |

Tab. 1 Hyperparameter setting
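The settings in Tab. 1 can be gathered into a single configuration object. Below is a minimal sketch in plain Python; the key names are illustrative assumptions, since the paper does not publish its code:

```python
# Hyperparameters from Tab. 1, collected into one configuration dict.
# Key names are illustrative, not taken from the paper's implementation.
CONFIG = {
    "batch_size": 32,         # sentences per training batch
    "max_seq_len": 85,        # maximum text length in tokens
    "learning_rate": 5e-5,    # a typical BERT fine-tuning rate
    "epochs": 10,             # training epochs
    "dropout": 0.3,           # dropout rate
    "lstm_hidden_size": 768,  # BiLSTM hidden dimension (matches BERT's 768)
    "lstm_layers": 2,         # stacked BiLSTM layers
    "window_size": 1,         # neighboring-word window around each entity
}
```

Keeping all hyperparameters in one place makes the ablation variants (e.g. changing the window size) a one-line edit.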
| Environment | Configuration |
| --- | --- |
| GPU | Tesla T4 |
| Operating system | Windows 10 |
| Programming language | Python 3.6 |
| Deep learning framework | PyTorch 1.7 |

Tab. 2 Experiment environment
| Model | Embedding dimension | Precision/% | Recall/% | F1/% |
| --- | --- | --- | --- | --- |
| CCREPMI-BERT | 768 | 81.5 | 82.3 | 81.9 |
| CCREPMI-BERT-wwm | 768 | 79.0 | 79.7 | 79.3 |
| CCREPMI-ERNIE | 768 | 79.3 | 80.0 | 79.6 |

Tab. 3 Result comparison of different pre-trained models
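Tab. 3 varies only the pre-trained encoder; all three variants keep the same 768-dimensional embedding. In a Hugging Face-style setup this comparison usually amounts to swapping one checkpoint name. The mapping below is a sketch: the checkpoint identifiers are the commonly published ones, an assumption rather than names taken from the paper:

```python
# Candidate Chinese pre-trained encoders compared in Tab. 3.
# Checkpoint identifiers are the commonly published Hugging Face names
# (an assumption; the paper does not name specific checkpoints).
ENCODERS = {
    "CCREPMI-BERT": "bert-base-chinese",
    "CCREPMI-BERT-wwm": "hfl/chinese-bert-wwm",
    "CCREPMI-ERNIE": "nghuyong/ernie-1.0-base-zh",
}

# All three produce token embeddings of the same dimension,
# so the downstream BiLSTM and classifier need no changes.
EMBED_DIM = 768
```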
| Model category | Model | Precision/% | Recall/% | F1/% |
| --- | --- | --- | --- | --- |
| Baseline models | CNN | 45.3 | 44.9 | 45.1 |
| | CRCNN | 52.1 | 46.1 | 48.9 |
| | BiLSTM-Att | 58.3 | 57.6 | 57.9 |
| BERT-based | BERT | 72.5 | 73.7 | 73.0 |
| | BERT-LSTM | 73.3 | 74.3 | 73.7 |
| | RBERT | 81.0 | 81.5 | 81.2 |
| Proposed models | CCREPMI-S | 80.6 | 81.2 | 80.6 |
| | CCREPMI-G | 81.1 | 81.9 | 81.5 |
| | CCREPMI | 81.5 | 82.3 | 81.9 |

Tab. 4 Performance comparison of different models
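The precision, recall, and F1 columns in Tab. 4 follow the standard definitions for relation classification. A small self-contained sketch of how such scores can be computed from gold and predicted relation labels is shown below; treating "NA" as the negative (no-relation) class is an illustrative assumption, since the paper does not specify how the null relation is handled:

```python
def precision_recall_f1(gold, pred, negative_label="NA"):
    """Micro-averaged precision/recall/F1 over all non-negative labels.

    Counting "NA" as the negative class is an assumption for illustration;
    the paper does not state its exact evaluation protocol.
    """
    # True positives: correct predictions of a real relation.
    tp = sum(1 for g, p in zip(gold, pred) if g == p and p != negative_label)
    pred_pos = sum(1 for p in pred if p != negative_label)  # predicted relations
    gold_pos = sum(1 for g in gold if g != negative_label)  # gold relations

    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example: 2 of 3 predicted relations are correct, all gold found.
p, r, f = precision_recall_f1(
    gold=["wife", "NA", "friend"],
    pred=["wife", "friend", "friend"],
)
# p ≈ 0.667, r = 1.0, f = 0.8
```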
| Model | F1/% |
| --- | --- |
| CNN | 78.9 |
| MVRNN | 79.1 |
| FCM | 80.6 |
| CCREPMI | 81.2 |

Tab. 5 Result comparison of different models on English dataset SemEval2010-task8
1 | DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: Association for Computational Linguistics, 2019: 4171-4186. 10.18653/v1/n19-1423 |
2 | RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. [2020-09-07]. |
3 | SOCHER R, HUVAL B, MANNING C D, et al. Semantic compositionality through recursive matrix-vector spaces[C]// Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Stroudsburg, PA: Association for Computational Linguistics, 2012: 1201-1211. |
4 | ZENG D J, LIU K, LAI S W, et al. Relation classification via convolutional deep neural network[C]// Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers. Stroudsburg, PA: Association for Computational Linguistics, 2014: 2335-2344. |
5 | SANTOS C N DOS, XIANG B, ZHOU B W. Classifying relations by ranking with convolutional neural networks[C]// Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics/ the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg, PA: Association for Computational Linguistics, 2015: 626-634. 10.3115/v1/p15-1061 |
6 | XU Y, MOU L L, LI G, et al. Classifying relations via long short term memory networks along shortest dependency paths[C]// Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2015: 1785-1794. 10.18653/v1/d15-1206 |
7 | LEE J, SEO S, CHOI Y S. Semantic relation classification via bidirectional LSTM networks with entity-aware attention using latent entity typing[J]. Symmetry, 2019, 11(6): No.785. 10.3390/sym11060785 |
8 | MINTZ M, BILLS S, SNOW R, et al. Distant supervision for relation extraction without labeled data[C]// Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2009: 1003-1011. 10.3115/1690219.1690287 |
9 | LIN Y K, SHEN S Q, LIU Z Y, et al. Neural relation extraction with selective attention over instances[C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: Association for Computational Linguistics, 2016: 2124-2133. 10.18653/v1/p16-1200 |
10 | PENG N Y, POON H, QUIRK C, et al. Cross-sentence n-ary relation extraction with graph LSTMs[J]. Transactions of the Association for Computational Linguistics, 2017, 5: 101-115. 10.1162/tacl_a_00049 |
11 | JI G L, LIU K, HE S Z, et al. Distant supervision for relation extraction with sentence-level attention and entity descriptions[C]// Proceedings of the 31st AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2017: 3060-3066. |
12 | LI Y, LONG G D, SHEN T, et al. Self-attention enhanced selective gate with entity-aware embedding for distantly supervised relation extraction[C]// Proceedings of the 34th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2020: 8269-8276. 10.1609/aaai.v34i05.6342 |
13 | BANKO M, CAFARELLA M J, SODERLAND S, et al. Open information extraction from the Web[C]// Proceedings of the 20th International Joint Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press, 2007: 2670-2676. 10.3115/1614164.1614177 |
14 | AKBIK A, LÖSER A. KrakeN: N-ary facts in open information extraction[C]// Proceedings of the 2012 Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction. Stroudsburg, PA: Association for Computational Linguistics, 2012: 52-56. |
15 | WANG M B, WANG Z, QIU X L. Character relationship extraction based on bidirectional GRU and PCNN[J]. Electronic Design Engineering, 2020, 28(10): 160-165. |
16 | LIU J, ZHANG Y, ZHANG Y. Chinese relationship extraction based on bidirectional LSTM and self-attention mechanism[J]. Journal of Shanxi University (Natural Science Edition), 2020, 43(1): 8-13. |
17 | CUI Y M, CHE W X, LIU T, et al. Pre-training with whole word masking for Chinese BERT[EB/OL]. (2019-10-29) [2020-10-09]. 10.1109/taslp.2021.3124365 |
18 | SUN Y, WANG S H, LI Y K, et al. ERNIE: enhanced representation through knowledge integration[EB/OL]. (2019-04-19) [2020-09-11]. |
19 | ZHOU P, SHI W, TIAN J, et al. Attention-based bidirectional long short-term memory networks for relation classification[C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA: Association for Computational Linguistics, 2016: 207-212. 10.18653/v1/p16-2034 |
20 | SHI P, LIN J. Simple BERT models for relation extraction and semantic role labeling[EB/OL]. (2019-04-10) [2020-09-21]. |
21 | WU S C, HE Y F. Enriching pre-trained language model with entity information for relation classification[C]// Proceedings of the 28th ACM International Conference on Information and Knowledge Management. New York: ACM, 2019: 2361-2364. 10.1145/3357384.3358119 |