[1] BAHDANAU D, CHO K H, BENGIO Y. Neural machine translation by jointly learning to align and translate[EB/OL].[2018-03-20]. https://arxiv.org/pdf/1409.0473v7.pdf.
[2] BAHDANAU D, CHOROWSKI J, SERDYUK D, et al. End-to-end attention-based large vocabulary speech recognition[C]//Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway, NJ:IEEE, 2016:4945-4949.
[3] VENUGOPALAN S, ROHRBACH M, DONAHUE J, et al. Sequence to sequence - video to text[C]//Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway, NJ:IEEE, 2015:4534-4542.
[4] RUSH A M, CHOPRA S, WESTON J. A neural attention model for abstractive sentence summarization[EB/OL].[2018-02-23]. https://arxiv.org/pdf/1509.00685.pdf.
[5] CHOPRA S, AULI M, RUSH A M. Abstractive sentence summarization with attentive recurrent neural networks[EB/OL].[2018-03-21]. http://aclweb.org/anthology/N/N16/N16-1012.pdf.
[6] NALLAPATI R, ZHOU B W, dos SANTOS C N, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond[C]//Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning. Stroudsburg, PA:ACL, 2016:280-290.
[7] ABADI M, BARHAM P, CHEN J M, et al. TensorFlow:a system for large-scale machine learning[C]//Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation. Berkeley, CA:USENIX, 2016:265-283.
[8] BRITZ D, GOLDIE A, LUONG M-T, et al. Massive exploration of neural machine translation architectures[EB/OL].[2018-04-05]. https://arxiv.org/pdf/1703.03906.pdf.
[9] GEHRING J, AULI M, GRANGIER D, et al. Convolutional sequence to sequence learning[EB/OL].[2018-04-23]. https://arxiv.org/pdf/1705.03122.pdf.
[10] LI P J, LAM W, BING L D, et al. Cascaded attention based unsupervised information distillation for compressive summarization[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:ACL, 2017:2081-2090.
[11] CHUNG J Y, GULCEHRE C, CHO K H, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[EB/OL].[2018-04-23]. https://arxiv.org/pdf/1412.3555v1.pdf.
[12] LOPYREV K. Generating news headlines with recurrent neural networks[EB/OL].[2018-03-20]. https://arxiv.org/pdf/1512.01712.pdf.
[13] MNIH V, HEESS N, GRAVES A. Recurrent models of visual attention[EB/OL].[2018-04-08]. https://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf.
[14] LUONG M-T, PHAM H, MANNING C D. Effective approaches to attention-based neural machine translation[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:ACL, 2015:1412-1421.
[15] JEAN S, CHO K H, MEMISEVIC R, et al. On using very large target vocabulary for neural machine translation[C]//Proceedings of the 53rd Annual Meeting of the ACL and the 7th International Joint Conference on Natural Language Processing. Stroudsburg, PA:ACL, 2015:1-10.
[16] AYANA, SHEN S Q, ZHAO Y, et al. Neural headline generation with sentence-wise optimization[EB/OL].[2018-03-23]. https://arxiv.org/pdf/1604.01904.pdf.
[17] LIN C Y, HOVY E. Automatic evaluation of summaries using n-gram co-occurrence statistics[C]//Proceedings of the 2003 Conference of the North American Chapter of the ACL on Human Language Technology. Stroudsburg, PA:ACL, 2003:71-78.
[18] HU B T. Deep neural networks for text representation and application[D]. Harbin:Harbin Institute of Technology, 2016:91-94.
[19] HU B T, CHEN Q C, ZHU F Z. LCSTS:a large scale Chinese short text summarization dataset[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg, PA:ACL, 2015:1967-1972.