Abstract
Artificial Intelligence (AI) technologies are transforming educational settings by offering tools that enhance learning experiences. AI-powered applications such as ChatGPT and Poe provide real-time assistance, fostering learner autonomy and self-determined motivation. However, limited research has explored their impact on undergraduate students’ learning strategies and motivation. This mixed-methods quasi-experimental study investigated the effectiveness of AI-powered educational applications in enhancing metacognitive and social learning strategies, as well as self-determined motivation, among 310 Chinese undergraduates (45% female, 55% male; M age = 21) at the Criminal Investigation Police University of China. Participants were assigned to an AI-integrated experimental group (n = 139) or a control group (n = 171). Validated questionnaires assessed metacognitive and social strategies (SILL) and autonomous motivation (RAI), and qualitative data from 834 reflective journals were thematically analyzed. ANCOVA was used to compare post-test outcomes while controlling for pre-test scores; journals provided experiential insights. Ethical approval and informed consent were obtained. ANCOVA revealed significant improvements in the AI group (p < .001), with effect sizes for metacognition (η² = 0.39) and motivation (η² = 0.31) that are large by Cohen’s benchmarks. Qualitative analysis of the journals highlighted themes of autonomy, support for metacognitive strategies, and reduced anxiety, although risks of superficial application were noted. Together, the mixed-methods findings confirm AI’s effectiveness in enhancing strategic learning.
AI applications facilitate independent academic exploration and enhance learners’ motivation by providing immediate support and personalized learning experiences. These findings highlight the potential of AI-powered tools to foster learner autonomy. However, successful integration into educational settings requires strategic pedagogical approaches to maximize benefits while addressing potential challenges.
Introduction
Artificial Intelligence (AI) has evolved as a multifaceted domain, starting with early discussions on machine cognition and programming limitations from the 1940s. The exploration of AI began with early thinkers such as Turing, who proposed the imitation game, and Asimov, who introduced the Three Laws of Robotics, providing foundational frameworks for understanding intelligent machine behavior and assessing machine cognition against human capacity1,2,3. McLuhan’s perspective, which views technology as an extension of human faculties, further situates AI as an extension of human intelligence, allowing machines to complement human thinking and decision-making4.
However, the rapid integration of AI into daily life, especially in education, has raised concerns about the convergence of human and machine intelligence and its implications for traditional communication and advisory systems5,6. While AI holds significant transformative potential, its widespread application in educational contexts has raised questions regarding the obsolescence of educator roles and the risk of covert AI-assisted academic work7,8. Despite AI’s growing presence in the workplace during the Fourth Industrial Revolution, its role in higher education is still debated, with varying perspectives on its utility and impact on traditional teaching paradigms9,10.
With the emergence of advanced AI technologies such as Chat Generative Pre-Trained Transformer (ChatGPT), the implications for educational contexts are profound, offering new opportunities and challenges. ChatGPT, using supervised and reinforcement learning, aids human language comprehension and extends its application across diverse educational domains11,12. It has increasingly been used to assist English for Academic Purposes (EAP) learners in generating feedback, paraphrasing, summarizing texts, and modeling academic discourse structures13,14. Despite the growing presence of AI, its adoption in educational settings remains a topic of debate, requiring further scholarly exploration to understand its impact on student learning and educational practices fully.
While AI offers numerous benefits for student empowerment, particularly through cognitive and metacognitive scaffolding, it also raises concerns about ethical implications, privacy issues, and the influence of corporate interests on educational practices15,16,17. Some studies have suggested that the over-reliance on AI could lead to a reduction in learners’ autonomy and critical thinking skills18,19,20. A critical analysis of these mixed findings highlights the need to balance the positive potential of AI with its risks, ensuring that the integration of AI tools promotes deep learning rather than fostering dependency21,22.
Efforts to address these ethical challenges emphasize the importance of fostering technological literacy, ethical awareness, and critical engagement with AI tools23,24,25. These discussions are essential for understanding the broader implications of AI in education, particularly concerning potential biases in AI systems and the unintended ideological indoctrination that may arise from their use26,27,28. As AI tools become increasingly prevalent in educational settings, it is crucial to ensure that their integration is done thoughtfully and responsibly.
In the context of second language acquisition, recent studies have highlighted the intersection between AI and Self-Determination Theory (SDT), suggesting that AI tools, particularly conversational AI, can enhance intrinsic motivation and learner autonomy by offering self-paced, need-satisfying learning experiences29,30,31. SDT posits that motivation varies along a continuum from intrinsic to external forms, with autonomous motivation being linked to greater learning success and persistence32,33,34. The integration of AI in EAP contexts has been shown to foster learner autonomy, enabling students to take more control of their learning processes and tailor their language learning experience to meet their individual needs and goals35,36,37.
Furthermore, AI applications can enhance learner motivation by providing timely feedback, personalized learning pathways, and engaging interactive experiences. These tools, by promoting self-regulated learning and enhancing intrinsic motivation, align with SDT’s focus on supporting learners’ psychological needs for autonomy, competence, and relatedness38. This study aims to explore how AI-powered applications in EAP instruction influence metacognitive strategies, social strategies, and self-determined motivation, with particular focus on their potential to support autonomous learning. By connecting SDT and Basic Psychological Needs (BPN) theory with AI integration, this research seeks to provide valuable insights into how AI tools can facilitate enhanced learning outcomes for EAP learners25,38.
Basic psychological needs (BPN)
In addition to motivational orientations, Self-Determination Theory (SDT) highlights the importance of Basic Psychological Needs (BPN)—autonomy, competence, and relatedness—in fostering autonomous motivation25,26. Autonomy refers to learners’ need for agency over their actions, competence reflects their effectiveness in learning, and relatedness pertains to the sense of connection and care from others26. When these needs are met, learners are expected to act autonomously in their language-learning endeavors26,27,28. Conversational AI tools may support the satisfaction of these needs by offering instant feedback (supporting competence), learner-controlled pace (supporting autonomy), and simulation of collaborative interaction (supporting relatedness)34,35.
Research indicates that L2 learners with fulfilled basic needs exhibit higher engagement and achievement in language learning than those with frustrated needs26,27,28. These findings corroborate previous studies that explored the relationship between BPN and learning emotions, suggesting that self-perceived BPN is associated with positive life and study emotions26. Additionally, competence and autonomy were negatively correlated with life and study fear emotions, whereas relatedness was linked to higher levels of life and study fear emotions26.
Learners’ use of learning strategies
The second focus of this study, learners’ use of learning strategies, is closely related to learner autonomy. Scholars have conceptualized learning strategies in various ways; broadly, they are the operations learners undertake to make sense of their learning process. Oxford and Nyikos characterize learning strategies as operations learners use to facilitate information acquisition, storage, retrieval, and utilization, elaborating that these are specific actions to make learning easier, faster, more self-directed, more enjoyable, more effective, and more transferable to new contexts29. Oxford similarly describes learning strategies as actions undertaken by learners to expedite the learning process and increase self-direction, enjoyment, effectiveness, and transferability to new contexts30. Ortega perceives learning strategies as conscious perceptual and behavioral actions learners take to gain control over their learning process31. Oxford later extends this definition, portraying language learning strategies as complex, dynamic actions that learners use in specific situations to accomplish language tasks and foster language learning development32.
Apart from differing definitions, learning strategies have been categorized in various ways. Rubin classifies learning strategies into direct strategies, which involve activities such as clarifying and practicing, and indirect strategies, including planning, evaluating, and organizing one’s learning33. Similarly, Oxford categorizes learning strategies into six groups: memory, cognitive, compensation, metacognitive, affective, and social29. Each category serves distinct functions, from aiding in retaining new information to managing emotions and social interactions related to language learning29.
Research highlights the interplay between learning strategies and learner autonomy. Learners who effectively use a range of strategies tend to exhibit higher levels of independence, as they are better equipped to take control of their learning process29. Autonomous learners proactively select strategies that align with their goals, monitor their progress, and adjust their approaches as needed31. This strategic flexibility is crucial for language learners, who must navigate various linguistic and cultural contexts.
Recent work has begun to illustrate how AI-powered tools—such as intelligent writing assistants, grammar checkers, and interactive chatbots—can scaffold the use of metacognitive and cognitive strategies by enabling learners to reflect on errors, revise their output, and receive immediate, context-sensitive guidance35,36. Furthermore, the integration of AI in language learning offers new dimensions to the use of learning strategies. AI-powered tools can provide personalized feedback, suggest tailored learning activities, and offer real-time language practice opportunities, thus enhancing the effectiveness of learning strategies34,35. These tools can also track learners’ progress and adapt to their evolving needs, fostering a more autonomous and engaging learning experience. However, the ethical implications of AI in education, particularly in language learning, cannot be overlooked. Issues related to data privacy, algorithmic bias, and the potential for overreliance on technology must be carefully considered36,37. Ensuring that AI tools are designed and implemented with ethical considerations is essential to maximize their benefits while mitigating potential risks.
Research objectives
The increasing integration of Artificial Intelligence (AI) in education has generated significant interest in its potential to enhance learning experiences, particularly in English for Academic Purposes (EAP). However, research examining the direct effects of AI-powered applications on EAP learners’ metacognitive and social learning strategies and self-determined motivation remains limited. While previous studies have explored AI’s role in adaptive learning, feedback provision, and engagement, there is a gap in understanding how AI tools such as ChatGPT and Poe influence learners’ autonomy, self-regulation, and motivation. This study addresses this gap by examining the impact of AI-driven applications on the learning strategies and motivation of undergraduate students in China. Specifically, the research seeks to investigate (1) the effect of AI-powered applications on EAP learners’ metacognitive strategies, (2) the impact of AI-powered applications on undergraduate students’ use of social learning strategies, and (3) the influence of AI-powered applications on students’ self-determined motivation. By doing so, this study contributes to the growing body of literature on AI in education and offers practical insights into how AI tools can be leveraged to enhance autonomous learning and motivate undergraduate students. In line with these objectives, we state the following hypothesis:
Hypothesis
The integration of AI-powered applications into instruction results in statistically significant gains in undergraduates’ metacognitive and social learning strategies and enhances their self-determined motivation compared with conventional instruction.
Methodology
Participants
Participants in this study were 310 undergraduate students from the Criminal Investigation Police University of China. The sample included 45% female students (n = 139) and 55% male students (n = 171), aged between 19 and 26 years (M = 21, SD = 2.53). Students were assigned at the class level, across ten intact classes, to either an experimental group (n = 139) or a control group (n = 171); no matching or balancing procedures were applied to minimize pre-existing differences between the groups. The experimental group received AI-integrated instruction, while the control group received regular instruction without exposure to AI tools. This group-based assignment supported a quasi-experimental design with both quantitative and qualitative components.
Research instruments
Two validated questionnaires were used to collect quantitative data. The first instrument was an adapted version of Oxford’s Strategy Inventory for Language Learning (SILL) (Oxford, 1990), which assesses learners’ use of metacognitive and social strategies. The instrument demonstrated strong reliability, with Cronbach’s alpha values of 0.87 for metacognitive strategies and 0.84 for social strategies. The second instrument measured autonomous motivation using an 18-item scale developed by Clément and Pelletier (2001), covering both intrinsic and extrinsic motivation subtypes. Items were rated on a 7-point Likert scale, and a Relative Autonomy Index (RAI) was calculated by weighting and summing the motivational subscales. The internal consistency of the subscales was acceptable, with Cronbach’s alpha values greater than 0.82.
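The exact weighting scheme for the RAI is not reported here; the sketch below assumes the conventional four-subscale weights (external −2, introjected −1, identified +1, intrinsic +2) to illustrate how a weighted sum of subscale means yields the index. The subscale names and scores are hypothetical.

```python
# Illustrative Relative Autonomy Index (RAI) computation.
# The weights below follow the conventional four-subscale scheme
# (external -2, introjected -1, identified +1, intrinsic +2);
# they are an assumption for illustration, not the study's reported values.

RAI_WEIGHTS = {
    "external": -2,
    "introjected": -1,
    "identified": 1,
    "intrinsic": 2,
}

def relative_autonomy_index(subscale_means: dict) -> float:
    """Weighted sum of motivational subscale means (7-point Likert scale)."""
    return sum(RAI_WEIGHTS[name] * score for name, score in subscale_means.items())

# Example: a learner high in intrinsic and identified regulation
scores = {"external": 2.0, "introjected": 3.0, "identified": 5.5, "intrinsic": 6.0}
print(relative_autonomy_index(scores))  # 10.5
```

Higher (more positive) RAI values indicate more autonomous motivation; a learner dominated by external regulation would score negative on this index.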
In addition to the surveys, qualitative data were collected from 834 reflective journals submitted biweekly by students in the experimental group. These journals documented students’ frequency of AI usage, perceived impact, and reflections on their learning experiences throughout the semester. Because both instruments were originally validated in Western contexts, cultural and linguistic adaptations for the Chinese EAP context were made during implementation. Thematic analysis of the journal data was conducted to complement the quantitative findings.
Procedure
Ethical approval was obtained from the university’s Institutional Review Board, and participants provided informed consent before participating in the study. During the first week of the semester, both groups completed pre-tests on motivation and strategy use under supervised, in-person conditions. Students in the experimental group then participated in a structured 6-hour workshop delivered over two sessions. The workshop provided training on using AI tools, including ChatGPT, Poe, and Google Bard, for academic English tasks. The sessions included live demonstrations, guided exercises, and collaborative tasks to support the adoption of AI tools for metacognitive and communicative purposes.
After the workshop, students in the experimental group were encouraged to use AI tools for various language learning activities, such as summarizing texts, revising essays, expanding vocabulary, and setting goals. Students documented their experiences in reflective journals every two weeks over 12 weeks. Instructors monitored engagement and provided feedback through the course’s learning management system (LMS). The control group received standard instruction without AI integration and completed the same tasks without the support of generative tools. Both groups completed the post-tests at the end of the semester, allowing for between-group comparisons.
Research design
A mixed-methods, quasi-experimental design with a pretest-posttest control group structure was employed to assess the effectiveness of AI-integrated instruction. The quantitative phase used ANCOVA to compare post-test scores while controlling for baseline differences, supporting cautious causal inferences about the impact of the AI intervention within the limits of non-randomized assignment. Paired t-tests were also conducted as supplementary analyses to verify within-group changes.
The qualitative phase followed a phenomenological approach to thematic analysis. A coding framework was applied to 834 journal entries from the experimental group, with entries coded according to (1) frequency of AI use (low, medium, high), (2) purpose of AI use (metacognitive vs. performance-focused), and (3) self-perceived effectiveness. Eight key themes emerged from this analysis, providing deeper insight into learner experiences, perceptions, and strategic engagement with AI tools.
Data analysis
Quantitative data were analyzed using ANCOVA, with pre-test scores serving as covariates to examine group differences in post-test scores for metacognitive strategies, social strategies, and motivation. The assumptions for ANCOVA—including homogeneity of variance, regression slope homogeneity, linearity, and normality—were met. Descriptive statistics, F-values, p-values, and effect sizes (partial η²) were reported. Paired t-tests were used for within-group verification.
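The ANCOVA logic described above can be sketched as a model comparison: post-test scores are regressed on the pre-test covariate alone, then on the covariate plus a group dummy, and the reduction in residual error attributable to group yields the F statistic and partial η². The data below are simulated (the study’s dataset is not public), and the effect size of the simulated gain is an arbitrary illustration.

```python
# Minimal ANCOVA sketch on simulated data, implemented as an OLS model
# comparison with numpy. Group sizes mirror the study (139 AI, 171 control);
# the 0.5-point simulated gain for the AI group is a hypothetical value.
import numpy as np

rng = np.random.default_rng(0)
n_ai, n_ctrl = 139, 171
n = n_ai + n_ctrl
group = np.r_[np.ones(n_ai), np.zeros(n_ctrl)]      # 1 = AI group, 0 = control
pre = rng.normal(3.5, 0.6, n)                       # pre-test scores (covariate)
post = pre + 0.5 * group + rng.normal(0, 0.4, n)    # simulated post-test scores

def sse(X, y):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
sse_reduced = sse(np.column_stack([ones, pre]), post)        # covariate only
sse_full = sse(np.column_stack([ones, pre, group]), post)    # covariate + group

# F test for the group effect, adjusted for pre-test scores
f_stat = (sse_reduced - sse_full) / (sse_full / (n - 3))
partial_eta_sq = (sse_reduced - sse_full) / sse_reduced
print(f"F(1, {n - 3}) = {f_stat:.1f}, partial eta^2 = {partial_eta_sq:.3f}")
```

The homogeneity-of-regression-slopes assumption reported in the Results can be checked the same way, by adding a pre-test-by-group interaction column and testing whether it improves the fit.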
Thematic analysis of journal data was conducted using NVivo to ensure systematic coding. Codes were reviewed iteratively to ensure inter-rater agreement and conceptual saturation. The reliability of the coding was assessed using Cohen’s kappa, with an agreement rate of 0.85, ensuring high inter-rater reliability. Excerpts representing the eight extracted themes were used to triangulate the quantitative findings, highlighting patterns of consistent use, strategic engagement, motivational gains, and sociocultural development.
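The inter-rater reliability check mentioned above can be reproduced from first principles: Cohen’s kappa compares observed agreement between two coders against the agreement expected by chance from their marginal code frequencies. The rater labels below are hypothetical and only illustrate the computation.

```python
# From-scratch Cohen's kappa for two raters' categorical codes.
# The excerpt labels are hypothetical; they mirror the coding framework's
# metacognitive vs. performance-focused distinction for illustration.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    n = len(rater_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

rater_a = ["metacognitive", "performance", "metacognitive", "performance",
           "metacognitive", "metacognitive", "performance", "metacognitive",
           "performance", "metacognitive"]
rater_b = ["metacognitive", "performance", "metacognitive", "metacognitive",
           "metacognitive", "metacognitive", "performance", "metacognitive",
           "performance", "metacognitive"]

print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.78
```

Values around 0.85, as reported for the journal coding, are conventionally interpreted as strong agreement beyond chance.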
Results
Before conducting the ANCOVA, all assumptions were rigorously tested. The normality of the dependent variables was assessed using the Shapiro-Wilk test and confirmed for both groups across all dependent measures. Levene’s Test for Equality of Variances confirmed homogeneity of variance (p > .05) for each dependent variable. Linearity between covariates (pre-test scores) and dependent variables (post-test scores) was verified using scatterplot matrices. Homogeneity of regression slopes was tested via interaction terms between group and pre-test scores, with all interaction terms found to be nonsignificant (p > .05), thereby meeting the assumption for ANCOVA.
The ANCOVA was used to compare post-test scores between the experimental and control groups on each dependent variable (metacognitive strategies, social strategies, and self-determined motivation) while statistically controlling for pre-test scores as covariates. Because only two groups were compared on each outcome, no Bonferroni correction was applied, and the analysis followed standard ANCOVA procedures without additional adjustments. Descriptive statistics and adjusted means are reported for each outcome domain.
ANCOVA results for metacognitive strategies controlling for pre-test scores
Table 1 presents the ANCOVA results for metacognitive strategy use, comparing the experimental and control groups while controlling for pre-test scores.
ANCOVA results revealed significant differences between the experimental and control groups across all metacognitive strategy items after controlling for pre-test scores. The experimental group consistently outperformed the control group, with effect sizes ranging from moderate (η² = 0.21) to large (η² = 0.39). The most pronounced effect was observed for the item “I look for opportunities to read as much as possible in English” (η² = 0.39), indicating that AI tools were particularly effective in fostering self-directed reading habits. These findings support the hypothesis that AI-enhanced learning significantly boosts the use of metacognitive strategies. Learners in the experimental group demonstrated greater awareness of their progress, increased goal-setting behavior, and improved reflective practices, consistent with previous research on technology-driven strategic self-regulation.
AI-empowered applications instruction on social strategies
Table 2 summarizes the ANCOVA outcomes for social strategy use, comparing the experimental and control groups while accounting for pre-test scores.
Statistically significant differences were found between the experimental and control groups across all social strategy items. The largest effect was observed for “I ask for help from English speakers” (η² = 0.29), indicating that AI-supported practice helps reduce communication anxiety and encourages help-seeking behavior. Significant improvements were also observed for “I practice English with other students” (η² = 0.25) and “I ask questions in English” (η² = 0.23), suggesting that the intervention fostered interactive and collaborative engagement. Smaller but significant effects were found for “I ask English speakers to correct me” and “I try to learn about the culture of English speakers,” indicating that AI tools also positively influenced sociolinguistic awareness and cultural engagement.
AI-empowered applications instruction on self-determined motivation
Table 3 displays the ANCOVA results for self-determined motivation, comparing the experimental and control groups while controlling for pre-test scores.
ANCOVA results showed significant positive effects of the AI intervention on all dimensions of self-determined motivation. The largest effect was observed for intrinsic motivation (η² = 0.31), indicating that AI-enhanced learning experiences promoted enjoyment, curiosity, and engagement. Autonomous motivation (η² = 0.27) also showed notable growth, suggesting that students in the experimental group developed stronger internal regulation in pursuing English learning goals. Finally, improvements in extrinsic motivation (η² = 0.25) suggest that AI applications served as external motivators by providing immediate feedback and perceived performance benefits.
Qualitative findings
The qualitative data, derived from 834 reflective journal entries from students in the experimental group, provide deeper insight into learners’ actual usage patterns, the purposes of their engagement with AI tools, and their perceived outcomes. Themes are organized hierarchically, with primary themes supported by secondary themes and subthemes. Overall, eight key themes emerged, illustrating how students integrated AI into academic routines, used it for metacognitive development, experienced motivational shifts, and navigated both surface-level and deep-learning behaviors. The following narrative presents each theme with integrated, anonymized excerpts; individual quotes are attributed as Student 1, Student 2, and so forth.
Routine AI integration into study habits
Approximately 68% of entries reflect medium to high tool use across the semester, indicating systematic engagement. To exemplify, Student 1 stated, “I used ChatGPT every Sunday night to prepare for my weekly assignments.” Similarly, Student 2 reported, “Bard became part of my daily vocabulary routine.” When comprehension slips, Student 3 noted, “Whenever I did not understand a reading, I turned to Poe to simplify it.” Over time, normalization replaces hesitation; as Student 4 reflected, “I felt strange at first, but now using AI tools is just part of my learning process.” Collectively, these accounts show AI use as habitual rather than ad hoc.
AI as a metacognitive scaffold
Students depict AI as a partner for planning, monitoring, and evaluating learning. To exemplify, Student 5 stated, “ChatGPT helped me make a checklist for improving my weak grammar points.” Similarly, Student 6 explained, “Using Bard, I could evaluate my writing by comparing AI feedback to my teacher’s.” In addition, Student 7 observed, “The tool showed me how to structure my thoughts before writing,” while Student 8 added, “I started using Poe to reflect on why I made certain mistakes.” Together, these excerpts illustrate AI’s role in externalizing strategy use and cultivating reflective self-regulation.
Confidence and reduced language anxiety
A recurring thread concerns lower anxiety and higher confidence. To exemplify, Student 9 stated, “I’m no longer afraid of writing because I know I can test my drafts first.” Similarly, Student 10 shared, “The AI makes me feel like I have a safety net while practicing speaking,” and Student 11 added, “I used to avoid asking questions in class—now I try ideas with ChatGPT first.” Consequently, as Student 12 put it, “It feels less risky to experiment with English when AI is there to help,” indicating a reduced affective filter and greater willingness to participate.
Strategic use for writing and revision
Learners describe a shift from passive generation toward iterative, strategic revision. To exemplify, Student 13 stated, “I rewrote my paragraphs after asking Bard to point out unclear ideas.” Similarly, Student 14 observed, “ChatGPT showed me how to improve my transitions between sentences.” Furthermore, Student 15 explained, “I corrected my errors after the AI showed examples of better word choice,” and Student 16 added, “I asked the AI to give me feedback and then edited based on that advice.” Collectively, these patterns highlight AI as a strategy amplifier that strengthens cohesion, clarity, and accuracy.
Enhanced motivation and enjoyment
Many students frame AI use as intrinsically engaging. To exemplify, Student 17 stated, “Using AI made learning feel more fun and interesting.” Similarly, Student 18 shared, “I was excited to try new writing styles after experimenting with ChatGPT,” while Student 19 reflected, “It felt like I was playing with the language, not just studying it.” When fatigue threatens persistence, Student 20 noted, “Bard kept me motivated when I felt tired—it gave instant support.” Thus, enjoyment becomes a driver of sustained effort.
Increased autonomy and self-regulation
As routines solidify, students assume greater control over when and how to engage. The weekly and daily routines described above, such as Sunday-night preparation with ChatGPT, daily vocabulary practice with Bard, and turning to Poe when a reading proved difficult, recur in later entries as deliberate self-management rather than mere habit. Over time, familiarity embeds AI within goal setting, monitoring, and evaluation, the hallmarks of self-regulated learning.
Surface-level use for task completion
A smaller subgroup admits convenience-driven, shallow use. To exemplify, Student 25 stated, “I just copied the summary it gave me without thinking much.” Similarly, Student 26 shared, “Sometimes I asked for answers quickly before class without reading,” and Student 27 added, “I mostly used AI to finish tasks faster, not to learn deeply.” In the same vein, Student 28 noted, “I did not reflect much—I just wanted the output.” These admissions caution against over-reliance that bypasses deeper processing.
Cultural awareness and curiosity
Students also leverage AI to explore cultural dimensions of language. To exemplify, Student 29 stated, “Poe showed me examples from different cultures, which I found interesting.” Similarly, Student 30 shared, “I used AI to learn idioms and sayings from English-speaking countries,” while Student 31 explained, “I asked Bard to explain cultural traditions in English texts we read.” In turn, Student 32 reflected, “Learning about how people think in other places was new and fun,” positioning AI as a bridge to cultural literacy alongside linguistic growth.
In sum, the narratives cohere around a dual arc: on one side, AI becomes routine, strategic, motivating, and autonomy-enhancing; on the other, a minority gravitates toward surface-level shortcuts. Accordingly, pedagogical framing is pivotal—when instructors channel use toward metacognitive growth and iterative revision, students are more likely to appropriate AI for deeper learning rather than mere task completion.
Discussion
The findings of this study highlight the significant potential of AI-powered applications in enhancing learners’ metacognitive and social strategies as well as their self-determined motivation in the context of English for Academic Purposes (EAP) instruction. These improvements were evident both statistically and thematically, providing a nuanced understanding of AI’s impact on language learning. Quantitative data demonstrated substantial improvements across all metacognitive strategy items in the experimental group. The most significant effects were seen in areas such as planning, self-monitoring, and reflective evaluation, which are key components of metacognitive development. These findings align with existing research emphasizing the central role of metacognition in language proficiency and learner autonomy1,2. Complementing these findings, qualitative data revealed that learners not only increased their use of metacognitive strategies but also integrated AI tools into their daily academic routines. Routine use of tools like ChatGPT and Bard became part of their study habits, providing a consistent framework for metacognitive development over the semester. This reflects the notion that sustained engagement with AI tools can promote self-regulation, a key factor in effective language learning3.
The theme of AI as a metacognitive scaffold was central in demonstrating how these tools helped students plan, monitor, and assess their language learning. Students frequently described AI as a “thinking partner” that facilitated deeper reflection and strategic learning. This mirrors the effectiveness of strategic language learning interventions recommended in prior studies3, further reinforcing AI’s role in supporting self-regulated learning. The tools provided timely feedback, allowing students to evaluate their progress, correct errors, and make adjustments to their learning strategies.
In terms of social strategies, the experimental group showed statistically significant improvements in initiating conversation, seeking clarification, and interacting with peers and native speakers. Qualitative reflections elaborated on these findings, highlighting that AI tools provided a low-pressure environment for students to practice and gain confidence. The theme of confidence and reduced language anxiety indicated that AI tools acted as a safety net, allowing students to take risks in their language production without the fear of immediate judgment. This environment facilitated authentic language use, which is often difficult to achieve in traditional classroom settings4.
AI tools also supported students’ writing and revision strategies. Rather than relying solely on AI for content generation, students engaged in iterative writing, using AI for feedback on structure, grammar, and lexical choices. This strategic use of writing and revision suggests that AI can shift learners from passive consumption to active engagement, promoting recursive revision and self-improvement. These findings are consistent with research emphasizing the importance of writing as a process rather than a product, which is a core aspect of EAP instruction4.
Motivational outcomes were also significantly improved. The largest effect was observed for intrinsic motivation, indicating that AI-supported learning fostered curiosity, enjoyment, and engagement in the learning process. Autonomous motivation also showed substantial growth, suggesting that AI tools facilitated students’ ability to regulate their own learning, set goals, and track progress. This is crucial in promoting lifelong learning habits, as it aligns with theories of self-determined motivation, which argue that intrinsic and autonomous motivation are key drivers of sustained learning success5,6.
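Because the study assessed autonomous motivation via the Relative Autonomy Index (RAI), a brief sketch may help readers interpret these motivation scores. The RAI combines self-determination-theory subscale means with weights reflecting each regulation type’s position on the autonomy continuum; the weighting below follows one common scheme (2, 1, −1, −2) and is an assumption for illustration, not necessarily the exact scoring used by this study’s instrument.

```python
def relative_autonomy_index(intrinsic: float, identified: float,
                            introjected: float, external: float) -> float:
    """Compute a Relative Autonomy Index (RAI) from four SDT subscale means.

    Uses one common weighting (autonomous regulations weighted positively,
    controlled regulations negatively); instruments vary, so these weights
    are illustrative, not the study's confirmed scoring.
    """
    return 2 * intrinsic + identified - introjected - 2 * external

# Hypothetical learner with predominantly autonomous motivation
# (subscale means on a 7-point scale); positive RAI = more autonomous.
print(relative_autonomy_index(6.0, 5.5, 3.0, 2.5))  # 9.5
```

A positive index indicates that autonomous regulation outweighs controlled regulation, which is how gains such as those reported here would surface in the composite score.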
Despite these positive outcomes, the study also identified limitations and potential challenges. A subset of students reported using AI tools primarily for surface-level task completion, such as quickly generating answers without engaging in more profound reflection. This raises important questions about the potential for over-reliance on AI tools, which could limit students’ ability to engage in critical thinking and independent problem-solving. To mitigate these risks, explicit instruction on the responsible use of AI tools and digital literacy is essential.
Furthermore, while AI tools can foster cultural awareness, the depth of this engagement varies. Some students used AI to explore cultural nuances and idiomatic expressions, while others did not fully capitalize on these opportunities. This highlights the importance of designing AI-supported tasks that not only promote linguistic proficiency but also encourage cultural exploration and intercultural competence, which are vital components of academic communication7.
In summary, the study demonstrates that AI-powered tools, when thoughtfully integrated into language learning contexts, can significantly enhance metacognitive strategies, social strategies, and motivation. However, the effectiveness of AI is contingent upon how it is used and the pedagogical strategies employed to support its integration. Future research should further explore the long-term effects of AI on language learning, particularly in terms of developing students’ critical thinking skills, and examine the ethical and pedagogical risks, such as plagiarism and over-reliance on technology, that may arise as AI becomes increasingly integrated into educational settings8.
Implications
The findings of this study offer valuable implications for EAP instructors, curriculum designers, and educational policymakers who seek to integrate AI tools into language learning contexts effectively. The results highlight the need for a structured and strategic approach to incorporating AI-powered applications such as ChatGPT, Bard, and Poe into language instruction.
First, AI tools should not be treated as isolated technologies but rather as integral components of a reflective, metacognitive learning process. To ensure their practical use, instructors can design curricula where AI is embedded into reflective tasks such as goal-setting, planning, and self-assessment. These tasks should guide students to use AI-generated content as a springboard for deeper thinking and self-regulation. Additionally, AI tools can be used throughout the writing process to support iterative drafting, coherence checking, and grammar revision—extending the principles of process writing pedagogy. Curriculum design should provide specific activities where AI tools are part of a cyclical process, encouraging students to reflect on their progress with each iteration.
In terms of social strategies, AI can play a crucial role in language practice, especially in creating safe, low-pressure environments for students to rehearse conversations, script questions, and practice clarification requests before engaging in live interactions. This can help build confidence, reduce language anxiety, and improve overall communication skills. However, to mitigate the risks of surface-level engagement, curriculum designers must include explicit digital literacy instruction, teaching students how to evaluate AI-generated content critically, avoid over-reliance on AI for superficial task completion, and use the tools ethically.
Moreover, the practice of reflective journaling emerged as a key element in this study, serving both as a rich source of qualitative data and as a pedagogical tool. Regular journaling or guided reflections should be incorporated into EAP curricula, enabling students to document their learning process and fostering a deeper understanding of how AI tools enhance their language development. Incorporating structured reflection into the curriculum can help sustain learner autonomy and ensure that AI integration is purposeful and aligned with pedagogical goals.
Finally, while these practices can be highly effective in individual contexts, scaling AI interventions across diverse institutional or cultural settings presents additional challenges. The feasibility of widespread AI adoption depends on institutional readiness, faculty training, and the technological infrastructure available. Further research is needed to explore how AI can be successfully implemented at scale in different educational environments, considering local contexts, cultural factors, and resource limitations.
Taken together, these implications suggest that AI can be a robust scaffold for fostering self-regulated, motivated, and competent academic language users. However, its integration requires careful planning, critical reflection, and ongoing support to ensure that AI serves as an educational tool that complements, rather than replaces, human instruction.
Limitations and future research
Despite the meaningful insights yielded by this study, several limitations should be acknowledged. While a quasi-experimental design with both experimental and control groups was employed, random assignment at the individual level was not feasible, and intact class structures may have introduced uncontrolled classroom dynamics that influenced outcomes. Although the inclusion of a control group strengthens causal interpretations, other confounding variables—such as teacher effects, course content variation, or the Hawthorne effect—cannot be entirely ruled out. Furthermore, while statistically significant differences were observed, the study relied on self-reported survey instruments, which may be subject to social desirability bias and inaccuracies in learner self-perception.
To address this limitation, qualitative data from 834 reflective journals were analyzed to provide deeper insight into learner behavior and tool engagement. However, these data, while rich, were also self-reported. Objective usage data—such as platform-generated analytics or timestamped logs—were not available due to platform privacy constraints. As a result, while students described using AI tools regularly and strategically, their actual engagement could not be independently verified. Future research could integrate learning analytics from AI platforms to provide objective data on tool usage, offering a more accurate picture of student engagement and behavior.
Additionally, the study did not conduct a subgroup analysis to examine whether levels of AI usage (low, medium, high) correlated with learning gains. Such analyses could reveal differential impacts based on learner profiles or the intensity of AI usage. This approach could provide insights into how varying frequencies of AI engagement affect learner outcomes, helping to identify the most effective levels of interaction for optimizing learning.
Finally, while the study focused on short-term changes over a single academic semester, the long-term effects of AI integration on learner autonomy, motivation, and academic performance remain unknown. Future research should incorporate longitudinal tracking to explore how AI influences learning over multiple semesters, providing valuable data on sustained engagement and its long-term impact on academic performance. Additionally, behavioral analytics and task-based performance assessments could offer a more comprehensive understanding of how AI influences learning outcomes over time and across diverse educational contexts.
Conclusion
Findings show that integrating AI-powered applications into EAP instruction yields reliable post-test advantages—after controlling for pre-test scores—in metacognitive strategies, social strategies, and self-determined motivation, with moderate-to-large effects (partial η² ≈ 0.21–0.39). Gains are strongest for self-directed reading within metacognition, help-seeking within social strategies, and intrinsic motivation, indicating that AI support not only sharpens strategy use but also energizes engagement. Reflective journals corroborate these results: students routinely and strategically embed AI into study cycles, use it as a metacognitive scaffold, report lower anxiety and higher confidence, and display greater autonomy, while a minority rely on surface-level shortcuts. Given the single-institution setting and its specific cultural context, generalizability remains tentative; future work should test the approach across diverse settings and specify guardrails that steer learners from convenience use toward deeper, self-regulated learning.
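For readers interpreting the reported effect sizes, partial eta squared is computed from the ANCOVA sums of squares as SS_effect / (SS_effect + SS_error). A minimal sketch follows, using hypothetical sums of squares chosen only to fall in the reported range, not the study’s actual values:

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta squared: variance attributable to an effect relative to
    effect-plus-error variance. By Cohen's benchmarks, ~0.01 is small,
    ~0.06 medium, and ~0.14 large."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical ANCOVA sums of squares (illustrative only): an effect of
# this relative magnitude yields a large effect size near the upper end
# of the range reported above.
print(round(partial_eta_squared(39.0, 61.0), 2))  # 0.39
```

Values of 0.21–0.39 therefore sit well above the 0.14 threshold conventionally treated as large.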
Data availability
Data is provided within the manuscript.
References
Bozkurt, A. & Sharma, R. C. Exploring the learning analytics equation: what about the carpe diem of teaching and learning? Asian J. Distance Educ. 17 (2), 1–10 (2022).
Bozkurt, A. & Sharma, R. C. Digital transformation and the way we (mis) interpret technology. Asian J. Distance Educ. 17 (1), 56–68 (2022).
Bozkurt, A., Karadeniz, A., Baneres, D., Guerrero-Roldán, A. E. & Rodríguez, M. E. Artificial intelligence and reflections from the educational landscape: A review of AI studies in half a century. Sustainability 13 (2), 800 (2021).
Dale, R. GPT-3: what’s it good for? Nat. Lang. Eng. 27 (1), 113–118 (2021).
Devlin, J., Chang, M. W., Lee, K. & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186 (2019).
Li, G., Zarei, M. A., Alibakhshi, G. & Labbafi, A. Teachers and educators’ experiences and perceptions of artificial-powered interventions for autism groups. BMC Psychol. 12 (1), 199 (2024).
Doraiswamy, P. M., Blease, C. & Bodner, K. Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif. Intell. Med. 102, 101753 (2020).
D’Alfonso, S. et al. Artificial intelligence-assisted online social therapy for youth mental health. Front. Psychol. 8, 796 (2017).
Xiao, J., Alibakhshi, G., Zamanpour, A., Zarei, M. A., Sherafat, S. & Behzadpoor, S. F. How AI literacy affects students’ educational attainment in online learning: testing a structural equation model in higher education context. Int. Rev. Res. Open Distrib. Learn. 25 (3), 179–198 (2024).
Farrow, R. The possibilities and limits of explicable artificial intelligence (XAI) in education: a socio-technical perspective. Learn. Media Technol. 48 (2), 266–279 (2023).
Goksel, N. & Bozkurt, A. Artificial intelligence in education: Current insights and future perspectives. In Handbook of Research on Learning in the Age of Transhumanism 224–236 (IGI Global, 2019).
Chaka, C. Fourth industrial revolution—a review of applications, prospects, and challenges for artificial intelligence, robotics and blockchain in higher education. Res. Pract. Technol. Enhanc Learn. 18, 002–002 (2023).
Humble, N. & Mozelius, P. The threat, hype, and promise of artificial intelligence in education. Discover Artif. Intell. 2 (1), 22 (2022).
Jandrić, P. Biology, information, society. Postdigit Sci. Educ. 3 (2), 261–265 (2021).
Zhang, Y., Luo, H., Peng, S. & Han, X. Virtual reality or videoconferencing for online learning? Evidence from comparative meta-analyses. Appl. Sci. 15 (11), 2076–3417 (2025).
Al-Jaghoub, S., Al-Qora’n, L. F., Al-Odat, A. M. & Alheet, A. F. Educators’ perceptions on artificial intelligence in higher education: insights from the Jordanian higher education. Int J Inf Educ Technol. 15(4), 716–731 (2025).
Thalji, N. J. & Alkhasawneh, S. How can artificial intelligence shape the future of sustainable education? Challenges and opportunities. J. Theor. Appl. Inf. Technol. 103(9), 3836–3850 (2025).
Gordijn, B. & Have, H. T. ChatGPT: evolution or revolution? Med. Health Care Philos. 26 (1), 1–2 (2023).
Gore, P. A. Jr Academic self-efficacy as a predictor of college outcomes: two incremental validity studies. J. Career Assess. 14 (1), 92–115 (2006).
Gómez-Trigueros, I. M., Ruiz-Bañuls, M. & Ortega-Sánchez, D. Digital literacy of teachers in training: moving from ICTs (information and communication technologies) to LKTs (learning and knowledge technologies). Educ. Sci. 9 (4), 274 (2019).
Graham, S. et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr. Psychiatry Rep. 21 (11), 116 (2019).
Rosenfeld, A., Benrimoh, D., Armstrong, C. et al. Big data analytics and AI in mental healthcare. Preprint at https://doi.org/10.48550/arXiv.1903.12071 (2019).
Alamer, A. & Lee, J. Language achievement predicts anxiety and not the other way around: A cross-lagged panel analysis approach. Lang. Teach. Res. 28 (4), 1572–1593 (2024).
Alamer, A. Basic psychological needs, motivational orientations, effort, and vocabulary knowledge: A comprehensive model. Stud. Second Lang. Acquis. 44 (1), 164–184 (2022).
Gagné, M. & Deci, E. L. Self-determination theory and work motivation. J. Organ. Behav. 26 (4), 331–362 (2005).
Oga-Baldwin, W. Q., Nakata, Y., Parker, P. & Ryan, R. M. Motivating young language learners: A longitudinal model of self-determined motivation in elementary school foreign language classes. Contemp. Educ. Psychol. 49, 140–150 (2017).
McEown, K. & Sugita-McEown, M. The role of positive and negative psychological factors in predicting effort and anxiety toward languages other than English. J. Multiling. Multicult. Dev. 43 (8), 746–758 (2022).
Oxford, R. & Nyikos, M. Variables affecting choice of language learning strategies by university students. Mod. Lang. J. 73 (3), 291–300 (1989).
Oxford, R. L. Language Learning Strategies: What Every Teacher Should Know (Newbury House, 1990).
Ortega, L. Sequences and processes in language learning. In The Handbook of Language Teaching (eds Long, M. H. & Doughty, C. J.) 81–105 (Wiley-Blackwell, 2009). https://doi.org/10.1002/9781444315783.ch6
Oxford, R. L. Teaching and Researching Language Learning Strategies: Self-regulation in Context (Routledge, 2016).
Rubin, J. Study of cognitive processes in second language learning. Appl. Linguist. 2 (2), 117–131 (1981).
García-López, C., Tabuenca-Cuevas, M. & Navarro-Soria, I. A systematic review of the use of AI in EFL and EL classrooms for gifted students. Trends High. Educ. 4 (3), 33 (2025).
Sánchez, G. P., Escudero-Nahón, A., Corona, M. A. I. & Mandujano, M. M. Bridging pedagogy and technology: A conceptual analysis for software design in educational platforms. Edelweiss Appl. Sci. Technol. 9 (6), 1293–1306 (2025).
Jones, M. Ethical considerations in AI-driven education. J. Educ. Ethics. 12 (1), 23–45 (2019).
Yun, G., Lee, K. M. & Choi, H. H. Empowering student learning through artificial intelligence: A bibliometric analysis. J. Educ. Comput. Res. 62 (8), 1822–1855 (2025).
Noels, K. C., Clément, R. & Pelletier, L. G. Intrinsic, extrinsic, and integrative orientations of French-Canadian learners of English. Can. Mod. Lang. Rev. 57 (3), 424–442 (2001).
Sulistiyo, U. & Kamil, D. Language learning strategies and learner autonomy: the case of Indonesian tertiary EFL students. LEARN. J. Lang. Educ. Acquis Res. Netw. 15 (1), 257–281 (2022).
Acknowledgements
We would like to express our gratitude to all the esteemed experts and teachers who participated in the interviews, completed the questionnaire, and assisted us in conducting this research.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
YZ conceived and designed the experiments; YZ performed the experiments; BZ analyzed and interpreted the data. YZ and BZ contributed reagents, materials, and analysis tools or data; BZ wrote the paper and proofread the manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
The Institutional Review Board of the Criminal Investigation Police University of China reviewed and approved the studies involving human participants. The IRB confirmed that all methods were performed in accordance with the relevant guidelines and regulations. The researchers explained the research objectives to the participants in the introductory section of the questionnaire. All participants provided their consent and completed the informed consent form. The data was distributed anonymously, compiled, and analyzed, and the results were provided to the authorities.
Competing interests
The authors declare no competing interests.
Consent to publish
Not applicable.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zhai, Y., Nezakatgoo, B. Evaluating AI-Powered Applications for Enhancing Undergraduate Students’ Metacognitive Strategies, Self-Determined Motivation, and Social Learning in English Language Education. Sci Rep 15, 35199 (2025). https://doi.org/10.1038/s41598-025-19118-z
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s41598-025-19118-z