Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (1): 87-93. DOI: 10.11772/j.issn.1001-9081.2021020272
• Artificial intelligence •
Xiaopeng WANG, Yuanyuan SUN, Hongfei LIN
Received: 2021-02-21
Revised: 2021-06-27
Accepted: 2021-07-08
Online: 2021-07-29
Published: 2022-01-10
Contact: Yuanyuan SUN
About author: WANG Xiaopeng, born in 1996 in Tianshui, Gansu, M. S. candidate. His research interests include natural language processing.
Xiaopeng WANG, Yuanyuan SUN, Hongfei LIN. Encoding-decoding relationship extraction model based on criminal Electra[J]. Journal of Computer Applications, 2022, 42(1): 87-93.
URL: http://www.joca.cn/EN/10.11772/j.issn.1001-9081.2021020272
| Comparison setting | Model | Precision/% | Recall/% | F1 score/% |
| --- | --- | --- | --- | --- |
| Pre-training model comparison | XBLCN | 77.86 | 83.33 | 80.51 |
| | MBLCN | 76.73 | 78.08 | 76.58 |
| | ELCN | 75.72 | 81.51 | 77.95 |
| Feature extraction model comparison | CERCN | 75.13 | 81.57 | 78.21 |
| | CECCN | 76.64 | 78.72 | 77.26 |
| | CECN | 76.89 | 80.98 | 79.47 |
| Feature clustering model comparison | CELAP | 72.52 | 80.82 | 76.15 |
| | CELMP | 76.48 | 78.31 | 76.35 |
| Proposed model | CELCN | 77.26 | 82.68 | 79.88 |
Tab. 1 Performance comparison of different models
| Model | Precision/% | Recall/% | F1 score/% |
| --- | --- | --- | --- |
| CELAP | 39.78 | 45.16 | 42.30 |
| CELMP | 42.97 | 34.65 | 38.65 |
| CELCN | 43.88 | 41.32 | 42.56 |
Tab. 2 Experimental results on the subset of multi-label overlapping relation data
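Both tables report precision, recall, and F1 score as percentages. As a point of reference, the sketch below shows how micro-averaged scores of this kind are commonly computed over predicted relation triples; the function name, data layout, and example triples are purely illustrative and are not taken from the paper or its dataset.

```python
# A minimal, illustrative sketch (not the authors' evaluation code) of the
# micro-averaged precision/recall/F1 typically reported for relation extraction.

def micro_prf(gold, pred):
    """gold, pred: lists of per-sentence sets of (head, relation, tail) triples."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # triples predicted correctly
        fp += len(p - g)   # spurious predictions
        fn += len(g - p)   # gold triples that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return 100 * precision, 100 * recall, 100 * f1  # percentages, as in the tables


# Usage with two hypothetical sentences and made-up triples:
gold = [{("甲", "贩卖", "毒品")}, {("乙", "持有", "枪支"), ("乙", "运输", "毒品")}]
pred = [{("甲", "贩卖", "毒品")}, {("乙", "持有", "枪支")}]
print(micro_prf(gold, pred))  # ≈ (100.0, 66.67, 80.0)
```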