[1] K. Hyland, Second Language Writing. Cambridge University Press, 2019.
[2] OpenAI et al., “GPT-4 Technical Report,” arXiv preprint, arXiv:2303.08774, Mar. 2023. [Online]. Available: https://arxiv.org/abs/2303.08774
[3] A. Castellanos-Gomez, “Good Practices for Scientific Article Writing with ChatGPT and Other Artificial Intelligence Language Models,” Nanomanufacturing, vol. 3, no. 2, pp. 135–138, 2023.
[4] S. Izadi and M. Ghasemzadeh, “Use of Generalized Language Model for Question Matching,” International Journal of Engineering, 2013.
[5] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig, “Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing,” ACM Computing Surveys, vol. 55, no. 9, pp. 1–35, 2023.
[6] Gemma Team et al., “Gemma: Open Models Based on Gemini Research and Technology,” arXiv preprint, arXiv:2403.08295, 2024.
[7] M. Lytvyn, A. Shevchenko, and D. Lider, “Grammarly,” Grammarly Inc., 2023. [Online]. Available: https://www.grammarly.com/
[8] C. Banks, “ProWritingAid,” ProWritingAid Ltd., 2023. [Online]. Available: https://prowritingaid.com/
[9] G. Heidorn, “Intelligent Writing Assistance,” in A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text, R. Dale, H. Moisl, and H. Somers, Eds., Marcel Dekker, 2000, ch. 8.
[10] E. S. Atwell and S. Elliot, “Dealing with Ill-formed English Text,” in The Computational Analysis of English: A Corpus-Based Approach, Longman, 1987, pp. 120–138.
[11] A. Radford et al., “Language Models Are Unsupervised Multitask Learners,” OpenAI Blog, vol. 1, no. 8, pp. 1–10, 2019.
[12] T. B. Brown et al., “Language Models Are Few-shot Learners,” arXiv preprint, arXiv:2005.14165, 2020.
[13] A. Cohan et al., “A Discourse-aware Attention Model for Abstractive Summarization of Long Documents,” arXiv preprint, arXiv:1804.05685, 2018.
[14] S. Rose, D. Engel, N. Cramer, and W. Cowley, “Automatic Keyword Extraction from Individual Documents,” in Text Mining: Applications and Theory, Wiley, 2010, pp. 1–20.
[15] C. Florescu and C. Caragea, “PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017, pp. 1105–1115.
[16] J. D. Velásquez-Henao, C. J. Franco-Cardona, and L. Cadavid-Higuita, “Prompt Engineering: A Methodology for Optimizing Interactions with AI-Language Models in the Field of Engineering,” Dyna (Medellín), vol. 90, no. 230, pp. 9–17, 2023.
[17] T. Luther, J. Kimmerle, and U. Cress, “Teaming Up with an AI: Exploring Human–AI Collaboration in a Writing Scenario with ChatGPT,” AI, vol. 5, no. 3, pp. 1357–1376, 2024.
[18] P. Fernandes et al., “Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation,” Transactions of the Association for Computational Linguistics, vol. 11, pp. 1643–1668, 2023.
[19] Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing, “Toward Controlled Generation of Text,” in Proceedings of the International Conference on Machine Learning, 2017, pp. 1587–1596.
[20] N. S. Keskar, B. McCann, L. R. Varshney, C. Xiong, and R. Socher, “CTRL: A Conditional Transformer Language Model for Controllable Generation,” arXiv preprint, arXiv:1909.05858, 2019.
[21] M. Khalifa and M. Albadawy, “Using Artificial Intelligence in Academic Writing and Research: An Essential Productivity Tool,” Computer Methods and Programs in Biomedicine Update, vol. 5, Art. no. 100145, 2024.
[22] J. M. Swales and C. B. Feak, Academic Writing for Graduate Students: Essential Tasks and Skills, 3rd ed., University of Michigan Press, 2012.
[23] J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, and A. Bouchachia, “A Survey on Concept Drift Adaptation,” ACM Computing Surveys, vol. 46, no. 4, pp. 1–37, 2014.
[24] J. Cohen, “A Coefficient of Agreement for Nominal Scales,” Educational and Psychological Measurement, vol. 20, no. 1, pp. 37–46, 1960.
[25] J. Zobel, Writing for Computer Science, 2nd ed., Springer, 2004.