Document Type: Original Research Paper
Author
Department of English, Faculty of Literature, Alzahra University, Tehran, Iran
Abstract
Background and Objectives: The rapid integration of generative AI tools like ChatGPT and Microsoft Copilot into education has opened new opportunities for feedback, idea generation, and revision support in academic writing. However, their impact on EFL learners’ engagement remains underexplored. Engagement in language learning spans behavioral, cognitive, emotional, and agentic dimensions, each playing a crucial role in learning effectiveness. Behavioral engagement involves active participation in writing tasks, cognitive engagement refers to mental effort and strategy use, emotional engagement captures learners' emotional responses, and agentic engagement reflects their active role in shaping instruction. Despite growing interest in AI-assisted learning, little is known about how learners engage with AI feedback across these dimensions, and few studies compare ChatGPT and Copilot regarding functional and pedagogical capabilities, user experience, and challenges and ethical concerns. This study examines how EFL learners engage with AI tools during academic writing and investigates their comparative experiences with ChatGPT and Microsoft Copilot.
Methods: This qualitative study was conducted in an academic writing course with 18 Iranian undergraduate EFL students over a full semester at a national university in Tehran. Students wrote essays in five genres (classification, process, extended definition, problem-solution, and argumentative), using ChatGPT and Microsoft Copilot for support during drafting and revision. Data were collected through reflective journals, semi-structured interviews, and records of some students' prompt use. Thematic analysis following a six-phase process, involving both deductive and inductive coding strategies, was applied to examine the nature of engagement and comparative perceptions of the two AI tools.
Findings: Results revealed dynamic and multi-dimensional engagement with AI tools across all four engagement domains. Behaviorally, students actively revised multiple drafts, showing a shift from broad, general prompts to genre-specific and purpose-driven ones. They frequently used both ChatGPT and Copilot in cycles of immediate and delayed revision, demonstrating growing independence in managing the pace and focus of their work without teacher support. Cognitively, learners critically evaluated the feedback, selectively adopting suggestions that enhanced logic, clarity, and coherence. Many reported recognizing recurring writing issues, allowing them to anticipate needed revisions before receiving feedback, indicating increasing awareness of writing patterns and conventions. Emotionally, students described both confidence-building experiences through constructive feedback and moments of frustration when facing vague or excessive suggestions. Overall, AI tools reduced revision anxiety for many and made the process feel more manageable and encouraging. Agentically, students exhibited ownership over their writing by accepting or rejecting AI-generated suggestions based on their intent. They developed more precise prompting skills over time and used additional resources (e.g., dictionaries, teacher comments) to supplement AI feedback, demonstrating a move beyond AI dependence toward personalized writing strategies.
When comparing Microsoft Copilot and ChatGPT, participants highlighted clear distinctions in their functional and pedagogical capabilities. Copilot was primarily valued for its effectiveness in grammar correction, formatting, and citation management, making it especially useful during the final stages of writing. In contrast, ChatGPT was more frequently used in the early and middle stages of the writing process due to its strength in idea generation, content development, and structural reorganization. In terms of user experience, Copilot was appreciated for being fast and easy to access, offering straightforward, predictable support for surface-level improvements. ChatGPT, on the other hand, was described as more interactive and flexible, enabling more dynamic engagement with content and fostering deeper reflection on writing choices. Despite their benefits, both tools raised ethical and practical concerns. Participants noted that each could generate generic or inaccurate content, with ambiguity surrounding authorship and intellectual ownership. Some learners expressed concerns about becoming overly dependent on AI tools, potentially undermining their voice and critical engagement in the writing process.
Conclusion: This study provides a nuanced understanding of how EFL learners engage with AI tools in academic writing. It highlights that engagement is not passive interaction but an active, reflective, and agentic process shaped by the affordances and limitations of the technology. The findings suggest that while both ChatGPT and Copilot can support academic writing, they serve different pedagogical purposes. Educators should guide students in using AI tools critically and ethically, promoting engagement that enhances rather than replaces learners’ writing agency.
COPYRIGHTS
© 2025 The Author(s). This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)