[1] 宁尚明, 滕飞, 李天瑞. 基于多通道自注意力机制的电子病历实体关系抽取[J]. 计算机学报, 2020, 43(5):916-929.(NING S M, TENG F, LI T R. Multi-channel self-attention mechanism for relation extraction in clinical records[J]. Chinese Journal of Computers, 2020, 43(5):916-929.) [2] SOCHER R, HUVAL B, MANNING C D, et al. Semantic compositionality through recursive matrix-vector spaces[C]//Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Stroudsburg, PA:Association for Computational Linguistics, 2012:1201-1211. [3] MARRERO M, URBANO J, SÁNCHEZ-CUADRADO S, et al. Named entity recognition:fallacies, challenges and opportunities[J]. Computer Standards and Interfaces, 2013, 35(5):482-489. [4] KUMAR S. A survey of deep learning methods for relation extraction[EB/OL]. (2017-05-10)[2020-11-10]. https://arxiv.org/pdf/1705.03645.pdf. [5] MIWA M, BANSAL M. End-to-end relation extraction using LSTMs on sequences and tree structures[C]//Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA:Association for Computational Linguistics, 2016:1105-1116. [6] KATIYAR A, CARDIE C. Going out on a limb:joint extraction of entity mentions and relations without dependency trees[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA:Association for Computational Linguistics, 2017:917-928. [7] ZHENG S C, WANG F, BAO H Y, et al. Joint extraction of entities and relations based on a novel tagging scheme[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA:Association for Computational Linguistics, 2017:1227-1236. [8] ZENG X R, ZENG D J, HE S Z, et al. Extracting relational facts by an end-to-end neural model with copy mechanism[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA:Association for Computational Linguistics, 2018:506-514. [9] DAI D, XIAO X Y, LYU Y J, et al. Joint extraction of entities and overlapping relations using position-attentive sequence labeling[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence, Palo Alto, CA:AAAI Press, 2019:6300-6308. [10] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[EB/OL]. (2013-09-07)[2020-11-11]. https://arxiv.org/pdf/1301.3781.pdf. [11] PENNINGTON J, SOCHER R, MANNING C D. GloVe:global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:Association for Computational Linguistics, 2014:1532-1543. [12] DEVLIN J, CHANG M W, LEE K, et al. BERT:pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies. Stroudsburg, PA:Association for Computational Linguistics, 2019:4171-4186. [13] 张秋颖, 傅洛伊, 王新兵. 基于BERT-BiLSTM-CRF的学者主页信息抽取[J]. 计算机应用研究, 2020, 37(S1):47-49. (ZHANG Q Y, FU L Y, WANG X B. Information extraction from scholar homepage based on BERT-BiLSTM-CRF[J]. Application Research of Computers, 2020, 37(S1):47-49.) [14] GRAVES A, SCHMIDHUBER J. Framewise phoneme classification with bidirectional LSTM and other neural network architectures[J]. Neural Networks, 2005, 18(5/6):602-610. [15] SUNDERMEYER M, SCHLÜTER R, NEY H. LSTM neural networks for language modeling[C]//Proceedings of the 13th Annual Conference of the International Speech Communication Association. Belfast:International Speech Communication Association, 2012:194-197. [16] MIKOLOV T, KARAFIÁT M, BURGET L, et al. Recurrent neural network based language model[C]//Proceedings of the 11th Annual Conference of the International Speech Communication Association. Belfast:International Speech Communication Association, 2010:1045-1048. [17] LAFFERTY J D, McCALLUM A, PEREIRA F C N. Conditional random fields:Probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the 18th International Conference on Machine Learning. San Francisco:Morgan Kaufmann Publishers Inc., 2001:282-289. [18] 李荣陆, 王建会, 陈晓云, 等. 使用最大熵模型进行中文文本分类[J]. 计算机研究与发展, 2005, 42(1):94-101.(LI R L, WANG J H, CHEN X Y, et al. Using maximum entropy model for Chinese text categorization[J]. Journal of Computer Research and Development, 2005, 42(1):94-101.) [19] EDDY S R. Hidden Markov models[J]. Current Opinion in Structural Biology, 1996, 6(3):361-365. |