Journal of Computer Applications ›› 2022, Vol. 42 ›› Issue (1): 87-93.DOI: 10.11772/j.issn.1001-9081.2021020272

• Artificial intelligence •

Encoding-decoding relation extraction model based on criminal Electra

Xiaopeng WANG, Yuanyuan SUN, Hongfei LIN

  1. School of Computer Science and Technology,Dalian University of Technology,Dalian Liaoning 116024,China
  • Received: 2021-02-21 Revised: 2021-06-27 Accepted: 2021-07-08 Online: 2021-07-29 Published: 2022-01-10
  • Contact: Yuanyuan SUN
  • About author:WANG Xiaopeng, born in 1996, M. S. candidate. His research interests include natural language processing.
    SUN Yuanyuan, born in 1979, Ph. D., professor. Her research interests include natural language processing.
    LIN Hongfei, born in 1962, Ph. D., professor. His research interests include natural language processing.
  • Supported by:
    National Key Research and Development Program of China(2018YFC0830603)



Aiming at the problems that models in the judicial-domain relation extraction task understand sentence context insufficiently and recognize overlapping relations poorly, an encoding-decoding relation extraction model based on CriElectra (Criminal-Efficiently learning an encoder that classifies token replacements accurately) was proposed. Firstly, following the training method of Chinese Electra, CriElectra was trained on a dataset of one million criminal documents. Then, the word vectors produced by CriElectra were fed into a Bidirectional Long Short-Term Memory (BiLSTM) model for feature extraction from judicial texts. Finally, the extracted features were clustered as vectors by a Capsule Network (CapsNet) to extract the relations between entities. Experimental results show that on a self-built relation dataset of intentional injury crimes, compared with the pretrained language model Chinese Electra, the retraining of CriElectra on judicial texts makes the learned word vectors carry richer domain information and raises the F1-score by 1.93 percentage points. Compared with a model based on pooling clustering, CapsNet effectively prevents the loss of spatial information through vector operations and improves the recognition of overlapping relations, raising the F1-score by 3.53 percentage points.

Key words: judicial field, relation extraction, pretrained language model, Bidirectional Long Short-Term Memory (BiLSTM), Capsule Network (CapsNet)
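The decoding step the abstract describes, replacing pooling with capsule-style vector clustering so that overlapping relations can each activate their own capsule, can be illustrated with a minimal NumPy sketch of dynamic routing and the squash nonlinearity. All names, dimensions, and the random inputs below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Squash nonlinearity: keeps the vector's direction,
    # maps its norm into [0, 1) so length can act as a probability.
    sq = np.sum(v * v, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

def capsule_route(u, W, n_iter=3):
    # u: (n_in, d_in)  lower-level feature capsules (e.g. BiLSTM states)
    # W: (n_out, n_in, d_in, d_out)  per-pair transformation matrices
    # Prediction vectors: u_hat[j, i] = u[i] @ W[j, i]
    u_hat = np.einsum('id,jide->jie', u, W)        # (n_out, n_in, d_out)
    b = np.zeros(u_hat.shape[:2])                  # routing logits
    for _ in range(n_iter):
        # Softmax over output capsules: coupling coefficients.
        c = np.exp(b) / np.exp(b).sum(axis=0, keepdims=True)
        s = np.einsum('ji,jie->je', c, u_hat)      # weighted sum per output
        v = squash(s)                              # (n_out, d_out)
        b += np.einsum('jie,je->ji', u_hat, v)     # agreement update
    return v

# Relation score = capsule length; with overlapping relations,
# several relation capsules can be long at once.
rng = np.random.default_rng(0)
u = rng.normal(size=(6, 8))           # 6 feature capsules of dim 8 (hypothetical)
W = rng.normal(size=(4, 6, 8, 16))    # 4 relation classes (hypothetical)
v = capsule_route(u, W)
probs = np.linalg.norm(v, axis=-1)    # per-relation activation in [0, 1)
```

Because each relation class gets its own output capsule whose length is scored independently, this decoding does not force a single winner the way max-pooling plus softmax does, which is the property the abstract credits for better overlapping-relation recognition.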


