[1] HERMANN K M, KOČISKÝ T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[C]// Proceedings of the 28th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2015: 1693-1701.
[2] CUI Y M, LIU T, CHE W X, et al. A span-extraction dataset for Chinese machine reading comprehension[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2019: 5883-5889. DOI: 10.18653/v1/d19-1600.
[3] WANG X J, BAI Z W, LI K, et al. Survey on machine reading comprehension[J]. Journal of Beijing University of Posts and Telecommunications, 2019, 42(6): 1-9 (in Chinese). DOI: 10.13190/j.jbupt.2019-111.
[4] RAJPURKAR P, ZHANG J, LOPYREV K, et al. SQuAD: 100,000+ questions for machine comprehension of text[C]// Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2016: 2383-2392. DOI: 10.18653/v1/d16-1264.
[5] KADLEC R, SCHMID M, BAJGAR O, et al. Text understanding with the attention sum reader network[C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: Association for Computational Linguistics, 2016: 908-918. DOI: 10.18653/v1/p16-1086.
[6] SEO M, KEMBHAVI A, FARHADI A, et al. Bi-directional attention flow for machine comprehension[EB/OL]. (2018-06-21) [2020-12-22]. https://arxiv.org/abs/1611.01603.
[7] DHINGRA B, LIU H X, YANG Z L, et al. Gated-attention readers for text comprehension[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: Association for Computational Linguistics, 2017: 1832-1846. DOI: 10.18653/v1/p17-1168.
[8] CUI Y M, CHEN Z P, WEI S, et al. Attention-over-attention neural networks for reading comprehension[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA: Association for Computational Linguistics, 2017: 593-602. DOI: 10.18653/v1/p17-1055.
[9] RAJPURKAR P, JIA R, LIANG P. Know what you don’t know: unanswerable questions for SQuAD[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA: Association for Computational Linguistics, 2018: 784-789. DOI: 10.18653/v1/p18-2124.
[10] TRISCHLER A, WANG T, YUAN X D, et al. NewsQA: a machine comprehension dataset[C]// Proceedings of the 2nd Workshop on Representation Learning for NLP. Stroudsburg, PA: Association for Computational Linguistics, 2017: 191-200. DOI: 10.18653/v1/w17-2623.
[11] YU A W, DOHAN D, LUONG M T, et al. QANet: combining local convolution with global self-attention for reading comprehension[EB/OL]. (2018-04-23) [2020-12-22]. https://arxiv.org/abs/1804.09541.
[12] WANG S H, JIANG J. Machine comprehension using match-LSTM and answer pointer[EB/OL]. (2016-11-07) [2020-12-22]. https://arxiv.org/abs/1608.07905.
[13] PENNINGTON J, SOCHER R, MANNING C D. GloVe: global vectors for word representation[C]// Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA: Association for Computational Linguistics, 2014: 1532-1543. DOI: 10.3115/v1/d14-1162.
[14] LE Q, MIKOLOV T. Distributed representations of sentences and documents[C]// Proceedings of the 31st International Conference on Machine Learning. New York: JMLR.org, 2014: 1188-1196.
[15] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]// Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA: Association for Computational Linguistics, 2018: 2227-2237. DOI: 10.18653/v1/n18-1202.
[16] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Stroudsburg, PA: Association for Computational Linguistics, 2019: 4171-4186. DOI: 10.18653/v1/n19-1423.
[17] LAN Z Z, CHEN M D, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[EB/OL]. (2020-02-09) [2020-12-22]. https://arxiv.org/abs/1909.11942.
[18] HU M H, WEI F R, PENG Y X, et al. Read + verify: machine reading comprehension with unanswerable questions[C]// Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2019: 6529-6537. DOI: 10.1609/aaai.v33i01.33016529.
[19] ZHANG Z S, YANG J J, ZHAO H. Retrospective reader for machine reading comprehension[C]// Proceedings of the 35th AAAI Conference on Artificial Intelligence. Palo Alto, CA: AAAI Press, 2021: 14506-14514.
[20] CAI J, ZHU Z Z, NIE P, et al. A pairwise probe for understanding BERT fine-tuning on machine reading comprehension[C]// Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2020: 1665-1668. DOI: 10.1145/3397271.3401195.
[21] SUN C, QIU X P, XU Y G, et al. How to fine-tune BERT for text classification?[C]// Proceedings of the 18th China National Conference on Chinese Computational Linguistics, LNCS 11856. Cham: Springer, 2019: 194-206.
[22] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY: Curran Associates Inc., 2017: 6000-6010.