1. Khatri C., Singh G., and Parikh N. Abstractive and extractive text summarization using document context vector and recurrent neural networks. arXiv preprint arXiv:1807.08000, 2018.
2. Kouris P., Alexandridis G., and Stafylopatis A. Abstractive text summarization based on deep learning and semantic content generalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5082-5092, 2019.
3. Yang M., Qu Q., Tu W., Shen Y., Zhao Z., and Chen X. Exploring human-like reading strategy for abstractive text summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, 33(1), pp. 7362-7369, 2019.
4. Nallapati R., Zhou B., Gulcehre C., and Xiang B. Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023, 2016.
5. Paulus R., Xiong C., and Socher R. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.
6. Zhang Y., Li D., Wang Y., Fang Y., and Xiao W. Abstract text summarization with a convolutional Seq2seq model. Applied Sciences, 9(8), 1665, 2019.
7. Raffel C., Shazeer N., Roberts A., Lee K., Narang S., Matena M., Zhou Y., Li W., and Liu P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
8. Rush A.M., Chopra S., and Weston J. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
9. Chopra S., Auli M., and Rush A.M. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 93-98, 2016.
10. Bahdanau D., Cho K., and Bengio Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
11. Chung J., Gulcehre C., Cho K., and Bengio Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
12. Song S., Huang H., and Ruan T. Abstractive text summarization using LSTM-CNN based deep learning. Multimedia Tools and Applications, 78(1), pp. 857-875, 2019.
13. Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A.N., Kaiser Ł., and Polosukhin I. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
14. Devlin J., Chang M.W., Lee K., and Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
15. Dai Z., Yang Z., Yang Y., Carbonell J., Le Q.V., and Salakhutdinov R. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
16. Lewis M., Liu Y., Goyal N., Ghazvininejad M., Mohamed A., Levy O., Stoyanov V., and Zettlemoyer L. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
17. Hoang A., Bosselut A., Celikyilmaz A., and Choi Y. Efficient adaptation of pretrained transformers for abstractive summarization. arXiv preprint arXiv:1906.00138, 2019.
18. Zhang H., Xu J., and Wang J. Pretraining-based natural language generation for text summarization. arXiv preprint arXiv:1902.09243, 2019.
19. Yang Z., Dai Z., Yang Y., Carbonell J., Salakhutdinov R.R., and Le Q.V. XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32, 2019.
20. Song K., Wang B., Feng Z., Liu R., and Liu F. Controlling the amount of verbatim copying in abstractive summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, 34(5), pp. 8902-8909, 2020.
21. Gunel B., Zhu C., Zeng M., and Huang X. Mind the facts: Knowledge-boosted coherent abstractive text summarization. arXiv preprint arXiv:2006.15435, 2020.
22. Lin C.Y. and Och F.J. Looking for a few good metrics: ROUGE and its evaluation. In NTCIR Workshop, 2004.