[1] N. Nazari and M. A. Mahdavi, "A survey on automatic text summarization," Journal of AI and Data Mining, vol. 7, no. 1, pp. 121-135, 2019.
[2] D. Radev, E. Hovy, and K. McKeown, "Introduction to the special issue on summarization," Computational Linguistics, vol. 28, no. 4, pp. 399-408, 2002.
[3] M. Zhang, G. Zhou, W. Yu, N. Huang, and W. Liu, "A comprehensive survey of abstractive text summarization based on deep learning," Computational Intelligence and Neuroscience, vol. 2022, no. 1, pp. 1-21, 2022.
[4] S. Mehrabi, S. A. Mirroshandel, and H. Ahmadifar, "DeepSumm: A Novel Deep Learning-Based Multi-Lingual Multi-Documents Summarization System," Journal of Information Systems and Telecommunication (JIST), vol. 7, pp. 204-214, 2019.
[5] K. Kaku, M. Kikuchi, T. Ozono, and T. Shintani, "Development of an extractive title generation system using titles of papers of top conferences for intermediate English students," in Proceedings of the 10th International Congress on Advanced Applied Informatics, 2021, pp. 59-64.
[6] W. Li, X. Xiao, Y. Lyu, and Y. Wang, "Improving neural abstractive document summarization with explicit information selection modeling," in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 1787-1796.
[7] M. Molaei and D. Mohamadpur, "Distributed Online Pre-Processing Framework for Big Data Sentiment Analytics," Journal of AI and Data Mining, vol. 10, pp. 197-205, 2022.
[8] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30 (NIPS), 2017, pp. 10-21.
[9] S. Islam, H. Elmekki, A. Elsebai, J. Bentahar, N. Drawel, and W. Pedrycz, "A comprehensive survey on applications of transformers for deep learning tasks," Expert Systems with Applications, vol. 241, p. 122666, 2024.
[10] J. Zhang, Y. Zhao, M. Saleh, and P. Liu, "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization," in Proceedings of the International Conference on Machine Learning, 2020, pp. 11328-11339.
[11] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," Journal of Machine Learning Research, vol. 21, pp. 1-67, 2020.
[12] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," arXiv preprint arXiv:1910.13461, 2019.
[13] T. Zhang, I. C. Irsan, F. Thung, D. Han, D. Lo, and L. Jiang, "Automatic pull request title generation," in Proceedings of the IEEE International Conference on Software Maintenance and Evolution (ICSME), 2022, pp. 71-81.
[14] F. Zhang, J. Liu, Y. Wan, X. Yu, X. Liu, and J. Keung, "Diverse title generation for Stack Overflow posts with multiple-sampling-enhanced transformer," Journal of Systems and Software, vol. 200, p. 111672, 2023.
[15] F. Zhang, X. Yu, J. Keung, F. Li, Z. Xie, Z. Yang, C. Ma, and Z. Zhang, "Improving Stack Overflow question title generation with copying enhanced CodeBERT model and bi-modal information," Information and Software Technology, vol. 148, p. 106922, 2022.
[16] T. Zhang, I. C. Irsan, F. Thung, D. Han, D. Lo, and L. Jiang, "iTiger: an automatic issue title generation tool," in Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), 2022, pp. 1637-1641.
[17] K. Liu, G. Yang, X. Chen, and C. Yu, "SOTitle: A transformer-based post title generation approach for Stack Overflow," in Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 2022, pp. 577-588.
[18] S. Abdel-Salam and A. Rafea, "Performance study on extractive text summarization using BERT models," Information, vol. 13, p. 67, 2022.
[19] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, 2014, pp. 1532-1543.
[20] S. Bhargav, A. Choudhury, S. Kaushik, R. Shukla, and V. Dutt, "A comparison study of abstractive and extractive methods for text summarization," in Proceedings of the International Conference on Paradigms of Communication, Computing and Data Sciences, 2022, pp. 601-610.
[21] C. Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, 2004, pp. 74-81.