Introduction

The integration of artificial intelligence (AI) into English as a Foreign Language (EFL) education is rapidly transforming pedagogical approaches (Chen et al., 2025), offering unprecedented opportunities for personalized and interactive learning (Alhusaiyan, 2025). AI-driven tools, from intelligent tutoring systems to conversational agents, provide immediate feedback and adaptive content (Özçelik, 2025), potentially revolutionizing language acquisition (Al-Abdullatif, 2024). However, the mere presence of technology does not guarantee its effective adoption (Zhang & Umeanowai, 2025); success is contingent upon students' willingness to accept and meaningfully engage with AI (Guo & Wang, 2025), a process heavily influenced by their digital literacy and, crucially, their trust in these systems (Fan & Zhang, 2024).

While the technology acceptance model (TAM) provides a robust framework for understanding adoption through perceived usefulness (PU) and ease of use (PEOU), its application in educational AI research exhibits two critical limitations. First, it has predominantly prioritized utilitarian outcomes such as efficiency, satisfaction, and academic performance (Ayanwale et al., 2024; Zhou et al., 2023). This focus overlooks the potential of AI to cultivate higher-order 21st-century skills, particularly student creativity. Although AI tools are increasingly capable of supporting creative language tasks, the pathway from technology acceptance to creative output remains a nascent and empirically underexplored area, creating a significant gap in both TAM and AI-in-education literature (Wu & Yu, 2024).

At the same time, trust in AI is widely treated as a unidimensional construct centered on technical reliability. In learning environments, and especially in Chinese EFL classrooms, where AI-based tutoring systems are being rapidly deployed (Lin et al., 2025) and usage exceeds 60 percent in urban districts (Jiang et al., 2025), trust is more complicated. Emergent studies show that human-like attributes such as benevolence and integrity (Kim et al., 2024) are essential for relational engagement. Thus, a second gap exists: TAM lacks a multidimensional trust lens that can differentiate Functionality Trust (competence), which drives initial adoption, from Human-like Trust (benevolence, integrity), which is pivotal for sustaining interaction and creative risk-taking (Lammers & Lasch, 2023).

To address these dual gaps, this study proposes and tests an extended TAM framework. We investigate how digital literacy serves as an antecedent to PEOU and PU, which in turn foster two distinct dimensions of trust: Functionality and Human-like. Ultimately, we examine how these trust dimensions differentially predict attitudes, behavioral intentions, and, crucially, student creativity. Empirically, we leverage large-scale survey data from two studies (n = 460 and n = 640) with EFL students to provide robust evidence for these relationships.

This research makes several pivotal contributions to the literature on technology acceptance in education. Theoretically, it extends the TAM framework in three ways. First, it reconceptualizes a key outcome, demonstrating that the model's pathways ultimately enhance student creativity, a significant yet previously underexplored non-utilitarian outcome in AI-enabled learning. Second, it introduces and validates a dual-dimensional trust model, offering a more nuanced understanding of the psychological mechanisms driving AI acceptance in learning environments. Third, it positions digital literacy as a key foundational precursor in this acceptance process. Practically, our findings offer clear guidance for educators and developers, emphasizing that AI tools must be not only functionally reliable to ensure sustained use but also designed with empathetic, human-like qualities to promote the relational engagement necessary for creative exploration.

This paper is organized into six sections. Section 1 (Introduction) establishes the research context, identifies gaps in the literature, and outlines the study's theoretical and practical contributions. Section 2 (Literature Review) synthesizes existing scholarship on the TAM, digital literacy, and trust in AI. Section 3 (Study One) covers the first phase of the research: Section 3.1 details the methodology, including sample selection, data collection instruments, and analytical procedures, and the discussion of the Study 1 findings interprets the initial results, highlighting how digital literacy and trust dimensions influence EFL students' AI acceptance and creativity. Section 4 (Study Two) expands on the first study with a larger sample and a refined trust construct: Section 4.1 explains the adjustments to the research design, including the incorporation and operationalization of a multidimensional trust model (Human-like Trust and Functionality Trust), and Section 4.2 presents the empirical outcomes, emphasizing the differential impacts of the trust dimensions on behavioral intentions and creative engagement. Section 5 discusses the findings of Study 2. Finally, Section 6 offers a general conclusion summarizing the study's key contributions.

Literature review

The integration of AI into EFL pedagogy heralds a shift from standardized instruction towards personalized, adaptive learning ecosystems (Börekci & Çelik, 2024; Namaziandost & Rezai, 2024). AI-driven tools, such as intelligent tutoring systems and conversational agents, provide unparalleled opportunities for immersive language practice and immediate feedback (Pillai et al., 2024; Wu et al., 2024), potentially accelerating proficiency gains (Li, 2022). However, the efficacy of these technologies is not inherent (Wicaksono et al., 2024); it is contingent upon students' acceptance and meaningful engagement (Vidarshika et al., 2025), which are themselves mediated by two pivotal factors: students' competency to navigate digital tools (digital literacy) and their psychological willingness to rely on them (trust) (Fan & Zhang, 2024). While the TAM offers a foundational framework for understanding adoption through PU and PEOU (Davis, 1989), its application in educational AI research requires critical expansion (Ali et al., 2025).

Moreover, digital literacy, extending beyond basic operational skills to encompass the critical evaluation and creative application of digital tools (Baskara, 2025), serves as the foundational bedrock for AI acceptance (Haroud & Saqri, 2025). In the specific context of EFL, digitally literate students are better equipped to leverage AI functionalities (Sudirman, 2025), resulting in a lower cognitive load and higher perceptions of both the usefulness and ease of use of AI applications (Börekci & Çelik, 2024; Yang & Lou, 2024). This alignment with TAM is well-established; digital literacy enhances PEOU and PU, which in turn foster positive attitudes and behavioral intentions toward technology (Saihi et al., 2024). Consequently, digital literacy is not merely a supplementary skill but a critical prerequisite that determines a student’s initial willingness to engage with innovative AI-driven learning solutions (Dilzhan, 2024).

However, while digital literacy may facilitate initial adoption, sustained and deep engagement, particularly for complex tasks like creative language use, requires trust (Ahmed & Akyıldız, 2022). Trust is a multifaceted psychological construct that is critical for understanding student engagement with AI in educational settings (Pitts & Motamedi, 2025). As learners increasingly turn to AI systems for guidance and feedback, their willingness to rely on these tools under conditions of uncertainty is fundamentally shaped by the presence of trust (Jiang et al., 2025). Traditional technology acceptance research often treated trust as a unidimensional construct synonymous with technical reliability, or "Functionality Trust" (Chu, 2025), encompassing perceived competence, accuracy, and dependability (Liu et al., 2024; Zou et al., 2023).

Drawing from interdisciplinary research, we posit a second, critical dimension, "Human-like Trust" (Choung et al., 2023). This construct moves beyond the Computers as Social Actors (CASA) paradigm, which primarily examines how humans respond to anthropomorphic cues (Seok et al., 2025). Instead, Human-like Trust focuses on the user's psychological state of assured belief in the AI's benevolent intent, integrity, and ethical alignment (Pitts & Motamedi, 2025; Kim et al., 2025). It is the perception that the AI agent is not only competent but also operates with a sense of goodwill towards the user, creating a safe psychological environment for learning (Schroeder et al., 2021). In educational settings, empirical evidence suggests that Human-like Trust is a powerful predictor of relational engagement and motivation (Chu, 2025), as students begin to view the AI not merely as a tool but as a supportive partner in learning (Li, 2022).

The integration of this dual-trust model within the TAM framework addresses a significant limitation in the current literature. While TAM robustly predicts utilitarian outcomes (Balaskas et al., 2025), its conventional application has overlooked affect-driven processes essential for creativity (Ayanwale et al., 2024). Recent studies have begun to incorporate trust into TAM (Al-Mamun, 2025; Gong et al., 2025), confirming its role as a critical mediator between perceptions and adoption intentions (Yao & Wang, 2024; Liu et al., 2024). However, a predominant focus remains on cognitive learning outcomes, such as comprehension and retention, neglecting the affective and creative dimensions vital for holistic language acquisition (Huang et al., 2024). Creativity in language learning, manifested as risk-taking, metaphorical thinking, and personalized expression, requires an environment of psychological safety (Schroeder et al., 2021; Xiao et al., 2024).

The measurement of creativity within computer-assisted language learning (CALL) research has evolved to capture its multifaceted nature (Tafazoli, 2025), moving beyond solely product-oriented metrics (originality) to include crucial cognitive and confluential processes (Mohsen et al., 2025; Khoso et al., 2025). This study adopts a comprehensive four-dimensional scale assessing Originality (uniqueness of ideas), Fluency (volume of ideas generated), Flexibility (diversity of ideas and perspectives), and Elaboration (ability to develop and detail ideas) (Govindasamy et al., 2024). This approach is aligned with leading frameworks in creativity research (Guilford, 1967) and is empirically validated in digital learning environments (Kaufman & Beghetto, 2009).

Therefore, this study identifies a critical gap at the intersection of digital literacy, multidimensional trust, and creative outcomes within TAM. While the individual components have been studied, their integrative pathways remain empirically underexplored. Specifically, the literature lacks a model that explains how digital literacy facilitates the development of distinct trust dimensions, and how these dimensions, in turn, differentially predict sustained usage intention (driven by Functionality Trust) versus creative engagement (driven by Human-like Trust).

Figure 1. Research model: a flowchart illustrating the relationship between digital literacy, PEOU, PU, trust in AI, attitude toward AI, intention to use AI, and EFL students' creativity.

Study one

Methodology for study 1

Study 1 sought to understand the foundational role of trust in influencing the acceptance and use of AI technologies among EFL students. Specifically, it tested the initial hypotheses concerning the relationships between Digital Literacy, PEOU, PU, and trust in AI technologies. Given the expanding presence of AI-driven smart technologies within educational contexts, including interactive language learning applications and virtual assistants, understanding the determinants of students' acceptance and trust in these technologies is both timely and essential (Terzopoulos & Satratzemi, 2020; McLean & Osei-Frimpong, 2019; Reeves & Nass, 1996).

Participants and procedure

A cross-sectional online survey was conducted in September 2024 to collect data from EFL students at a large public university in China. Participants were selected using a convenience sampling approach from a pool of undergraduate students enrolled in compulsory English language courses across various academic majors. The inclusion criteria required participants to be: (1) undergraduate students, (2) currently enrolled in an EFL course, and (3) experienced in using at least one AI-driven educational tool (intelligent tutoring systems, AI-powered writing assistants, or conversational chatbots) for language learning. Students enrolled in graduate programs or with no experience using AI for educational purposes were excluded from the study. Prior to participating, all students were presented with a digital informed consent form on the first page of the online survey. The form detailed the study's purpose, procedures, potential risks and benefits, the voluntary nature of participation, and the right to withdraw at any time without penalty. Participants indicated their consent by clicking "I Agree" before proceeding to the survey questions. All data were anonymized and stored securely on a password-protected server.

Recruitment was conducted through a university-administered email listserv targeting eligible departments such as English Language Teaching (ELT), Applied Linguistics, English Literature, and Translation Studies. The survey instrument was designed in English (see Appendix A). To ensure conceptual accuracy and comprehension for all participants, the researchers conducted a pilot test with a small group of EFL students (n = 50) to identify and rectify any ambiguous items. The final survey was administered in English, and participants required approximately 15 minutes to complete the questionnaire. From the initial 480 responses, 20 were removed due to incomplete data or patterned responding (straight-lining), resulting in a final valid sample of n = 460 participants. The sample consisted of 57% female and 43% male students, with ages ranging from 18 to 25 years; for more details, see Appendix Table B1.

Ethical consideration

Ethical considerations were strictly maintained during this research to protect the integrity of the study and the well-being of the participants. Each respondent was given a fully informed consent form that articulated the purpose of the study, the procedures, and the risks and benefits involved, and made clear that participation was voluntary and could be withdrawn at any time without penalty. Anonymity and confidentiality were strictly maintained: all retrieved data were de-identified so they could not be traced to any respondent and were stored on secure, encrypted servers. Psychological discomfort and response bias were minimized by designing the survey questions such that they were neither sensitive nor leading, and participants were assured that there were no correct or incorrect responses.

Measures

The measures used in the survey were adapted from established scales to fit the context of AI acceptance and creativity among EFL students. Digital Literacy was operationalized using the comprehensive scale developed by Rodríguez-de-Derecho et al. (2016). This instrument encompasses six distinct dimensions: Technological Literacy (7 items), Personal Security Literacy (5 items), Critical Literacy (5 items), Device Security Skill (4 items), Informational Skill (5 items), and Communication Skill Literacy (3 items), providing a holistic view of a student's digital capabilities. "Perceived Ease of Use" and "Perceived Usefulness" were each measured with five items adapted from the original TAM scale (Davis, 1989), specifically contextualized for AI technologies in learning. "Trust in AI" was assessed with four items. Additionally, "Attitude Toward AI" (4 items) and "Behavioral Intention to Use AI" (6 items) were measured to capture students' overall acceptance of AI technologies. The dependent variable, "EFL Students' Creativity," was adapted from Govindasamy et al. (2024) and assessed across four dimensions (Originality, Flexibility, Fluency, and Elaboration), with three items each.

In this initial study, the four-item scale for "Trust in AI" was designed to capture the core, cognitive dimension of trust, namely the perceived reliability and dependability of the AI tool, as this is the fundamental driver of initial technology adoption within the TAM paradigm (Gefen et al., 2003). This parsimonious operationalization was strategically chosen to provide a clear baseline understanding of trust's role within the core model before introducing its multifaceted nature. This foundational step was essential for model clarity in Study 1 and directly informed the design of Study 2, which explicitly addresses these complexities by decomposing trust into its multidimensional facets (Functionality and Human-like Trust) in a larger sample. The absence of qualitative triangulation is a limitation of this study, acknowledged in the limitations and directions for future research.

Analytic approach

This research used Partial Least Squares Structural Equation Modeling (PLS-SEM), implemented in SmartPLS 4, to evaluate the proposed relationships in the research model (Fig. 1). PLS-SEM is well suited to exploratory studies and complex research frameworks, making it an appropriate technique for this investigation. Internal consistency was assessed via composite reliability, with values above 0.70 indicating adequate reliability. Convergent validity was evaluated using average variance extracted (AVE), with each construct required to capture at least 0.50 of its indicators' variance. The heterotrait-monotrait (HTMT) ratio was used to confirm discriminant validity, establishing that the constructs are empirically distinct. After establishing the validity of the measurement model, we evaluated path coefficients and tested the hypotheses. Model fit was assessed with the standardized root mean square residual (SRMR), where values below 0.08 indicate satisfactory fit. The bootstrapping procedure used 5000 resamples and derived 95% confidence intervals to examine both direct and indirect effects.

The selection of PLS-SEM was guided by the study’s primary objectives, as PLS-SEM is particularly suitable for this research due to its ability to handle complex models with multiple latent constructs and its less stringent requirements regarding data distribution and sample size (Hair et al., 2019; Ringle et al., 2020). Given the exploratory nature of examining the multidimensional trust construct and its relationship with digital literacy and creativity, PLS-SEM offers superior statistical power for identifying key driver relationships and testing the extended TAM framework without imposing distributional assumptions. Furthermore, the method’s emphasis on maximizing explained variance in the endogenous constructs, specifically behavioral intention and creativity, aligns with the study’s goal of identifying impactful predictors of AI adoption and creative outcomes in EFL learning contexts.
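To make the reliability and validity thresholds above concrete, the composite reliability and AVE computations can be sketched directly from standardized loadings. This is a minimal illustration, not the SmartPLS implementation, and the loadings below are hypothetical rather than values from this study:

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (rho_c) from standardized indicator
    loadings, assuming uncorrelated measurement errors."""
    loadings = np.asarray(loadings, dtype=float)
    errors = 1 - loadings**2
    return loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    loadings = np.asarray(loadings, dtype=float)
    return float(np.mean(loadings**2))

# hypothetical standardized loadings for a four-item construct
l = [0.82, 0.78, 0.85, 0.74]
print(round(composite_reliability(l), 3))  # 0.875 > 0.70: adequate reliability
print(round(ave(l), 3))                    # 0.638 > 0.50: convergent validity
```

Both values clear the thresholds used in this study (0.70 for composite reliability, 0.50 for AVE).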

Figure 2. Pathway model illustrating constructs leading to EFL students' creativity: digital literacy influences PEOU and PU, followed by trust in AI, attitude toward AI, and behavioral intention to use AI.

Findings of study one

Descriptive statistics

Descriptive statistics provide a crucial summary of the central tendencies, variability, and distribution shapes of all key constructs measured in the study. They offer an initial understanding of how participants generally perceived the various factors, such as digital literacy, trust in AI, and creativity, before examining the complex relationships between them. The patterns observed in these statistics help contextualize the sample's overall inclinations and provide a baseline for interpreting the subsequent explanatory and predictive analyses. Correlations among the Study 1 variables were also examined (see Appendix Table B2).

Table 1 displays the descriptive statistics for the key constructs measured in Study 1. All variables show high mean scores (ranging from 3.76 to 4.15 on a 5-point scale), indicating generally positive perceptions toward AI among participants. The negative skewness and positive kurtosis values for all constructs suggest that responses are clustered toward the higher end of the scale with a left-skewed distribution. High Cronbach’s alpha values (0.84-0.90) confirm strong internal consistency reliability for all measurement scales. Figure 3 below shows the mean and reliability (α) for the descriptive statistics of the study 1 variables.

Figure 3. Mean and reliability (α) for descriptive statistics of study 1 variables.

Table 1 Descriptive statistics (study 1).
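The Cronbach's alpha values reported in Table 1 can be reproduced from raw item responses with the standard formula. A minimal sketch using hypothetical Likert-scale data (not this study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# hypothetical 5-point Likert responses: 6 respondents x 4 items
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(responses), 2))  # high internal consistency
```

Values at or above 0.70 are conventionally treated as acceptable; the 0.84 to 0.90 range reported here indicates strong internal consistency.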

Common method variance (CMV) bias

CMV bias arises when variations in responses are attributed more to the measurement method than to the actual constructs of interest, potentially inflating relationships among variables (Podsakoff et al., 2003). To address CMV bias, we implemented both procedural and statistical remedies. Procedurally, participant anonymity was ensured to mitigate social desirability bias and respondent fatigue; survey items were randomized, particularly for closely related constructs such as engagement and creativity, to prevent sequential response patterns; and reverse coding was applied to specific items to further counteract response biases. Statistically, we first used Harman's single-factor test to screen for CMV. This approach involves an exploratory factor analysis (EFA) to assess whether a single factor accounts for most of the variance across all measured items. Table 2 shows that the first factor accounted for less than 50% of the total variance, suggesting that CMV is unlikely to be a substantial issue. Table 3 presents the statistical assessment of common method bias across factors 1–4.

Table 2 Exploratory factor analysis for common method variance.
Table 3 Statistical assessment of common method bias.
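Harman's single-factor screen amounts to extracting the first unrotated factor and inspecting its share of total variance. A minimal sketch, approximating the first unrotated factor by the first principal component of the item correlation matrix and using hypothetical random response data rather than this study's items:

```python
import numpy as np

def harman_single_factor_share(X):
    """Share of total variance captured by the first unrotated factor,
    approximated by the largest eigenvalue of the item correlation
    matrix divided by the sum of all eigenvalues."""
    X = np.asarray(X, dtype=float)
    corr = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
    return eigvals[0] / eigvals.sum()

# hypothetical item responses: 200 respondents x 12 items
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
share = harman_single_factor_share(X)
print(f"first factor explains {share:.1%} of variance")  # < 50%: CMV unlikely
```

If the first factor's share exceeded 50%, common method variance would be a plausible concern and further remedies (such as the full collinearity test below) would be warranted.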

Full collinearity test

To rigorously address the potential for CMV bias, a multifaceted statistical approach was employed, moving beyond Harman's foundational single-factor test. In addition to the procedural remedies of respondent anonymity, item randomization, and reverse coding, two advanced statistical techniques were implemented. First, a full collinearity test was conducted following Kock's (2015) recommendation, a robust marker for CMV bias in variance-based SEM. This test assesses the variance inflation factors (VIFs) of all latent constructs in the structural model; VIF values below the stringent threshold of 3.3 indicate that the model is free from the pathological collinearity indicative of significant CMV bias. Second, we employed the latent method factor (LMF) approach (Podsakoff et al., 2003), adding a first-order common method factor to the PLS model onto which all principal indicators were loaded.
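Kock's full collinearity test reduces to regressing each construct's scores on all the other constructs and checking the resulting VIFs against 3.3. A minimal sketch with hypothetical construct scores (not this study's data):

```python
import numpy as np

def full_collinearity_vifs(scores):
    """Full collinearity VIFs (Kock, 2015): regress each construct's
    score on all the others and compute VIF_j = 1 / (1 - R_j^2).
    Values below 3.3 suggest no pathological collinearity."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    vifs = []
    for j in range(k):
        y = scores[:, j]
        X = np.column_stack([np.ones(n), np.delete(scores, j, axis=1)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
        vifs.append(1.0 / (1.0 - r2))
    return vifs

# hypothetical construct scores: 500 respondents x 4 latent constructs
rng = np.random.default_rng(42)
scores = rng.normal(size=(500, 4))
vifs = full_collinearity_vifs(scores)
print([round(v, 2) for v in vifs])  # all well below the 3.3 threshold
```

In SmartPLS this is obtained from the inner-model VIF output; the sketch only illustrates the underlying regression logic.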

Measurement model

Following PLS-SEM guidelines, the first analytical step was to estimate the measurement model, testing the reliability and validity of the constructs before testing the hypotheses. Supporting convergent validity, each item's loading on its corresponding construct exceeded 0.5, and the AVE was above 0.5 for all constructs, implying that the constructs captured adequate variance.

Table 4 shows that the measurement model demonstrates strong internal consistency reliability for all constructs, as both Cronbach's alpha and composite reliability (rho_c) values exceed the recommended threshold of 0.7, with many above 0.9, indicating excellent reliability. Convergent validity is also established, as the AVE for every construct surpasses the 0.5 benchmark, confirming that the indicators adequately represent their respective constructs; the overall "Digital Literacy" (AVE = 0.620) and "Students' Creativity" (AVE = 0.554) constructs show the lowest values, but these remain acceptable, particularly given their high composite reliability, suggesting the constructs are well defined.

Table 4 Construct validation and reliability metrics in the measurement model.

Figure 4. Established measurement model of the present study: a path diagram displaying the relationships among variables such as Digital Literacy, PU, Trust in AI, Attitude Toward AI, Intention to Use AI, and EFL Students' Creativity, together with their indicators.

Table 5 assesses discriminant validity by presenting the HTMT ratios for each construct pair. All values fall below the HTMT threshold of 0.85, indicating that each construct is distinct from the others, confirming adequate discriminant validity within the model and supporting the uniqueness of each construct in measuring its intended aspect of the study.

Table 5 Discriminant validity by HTMT 0.85.
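For readers who wish to verify HTMT values outside SmartPLS, the ratio can be computed directly from raw item responses: the mean between-construct item correlation divided by the geometric mean of the mean within-construct correlations. A minimal sketch with simulated data for two hypothetical constructs, not this study's items:

```python
import numpy as np

def htmt(X1, X2):
    """Heterotrait-monotrait ratio for two constructs, given their
    item-response matrices (n_respondents x n_items each)."""
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    k1, k2 = X1.shape[1], X2.shape[1]
    R = np.corrcoef(np.hstack([X1, X2]), rowvar=False)
    hetero = R[:k1, k1:].mean()                           # between-construct
    mono1 = R[:k1, :k1][np.triu_indices(k1, 1)].mean()    # within construct 1
    mono2 = R[k1:, k1:][np.triu_indices(k2, 1)].mean()    # within construct 2
    return hetero / np.sqrt(mono1 * mono2)

# hypothetical data: two moderately correlated latent factors, 3 items each
rng = np.random.default_rng(7)
f = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=1000)
X1 = f[:, [0]] + 0.5 * rng.normal(size=(1000, 3))
X2 = f[:, [1]] + 0.5 * rng.normal(size=(1000, 3))
print(round(htmt(X1, X2), 3))  # well below the 0.85 threshold
```

Values approaching or exceeding 0.85 would signal that two constructs are empirically indistinguishable.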

Table 6 reports the path analysis results. The model confirms that students' digital literacy provides a crucial foundation, shaping their perceptions of AI's usefulness and ease of use. However, the most significant finding is the paramount role of Trust in AI (β = 0.56), which emerges as the strongest direct predictor of a positive attitude, significantly outweighing the influence of PU and PEOU alone. This trust-driven attitude then powerfully propels the intention to use (β = 0.58). The final path to creativity (β = 0.50) is substantial, indicating a strong perceived link between the intention to engage with AI and creative self-efficacy.

Table 6 Path analysis results.

Table 7 reveals a nuanced hierarchy of influence within the AI acceptance process for EFL learners. The strong support for H1 and H2 (β = 0.45 and 0.40) confirms that digital literacy is a critical enabler, providing the necessary skills to navigate AI tools, which in turn shapes perceptions of their usability and value. While PU (H6: β = 0.38) and ease of use (H7: β = 0.33) directly influence attitude, their impact is overshadowed by trust (H5: β = 0.56). This trust-based attitude is then a strong predictor of the intention to use (H8: β = 0.58). The support for H9 (β = 0.50) indicates a strong perceived link between the intention to use AI and creative self-efficacy.

Table 7 Hypotheses testing.

Moreover, Table 8 provides the crucial insight that moves this research beyond a standard TAM application: trust is not just an additive factor but a fundamental psychological mechanism that explains how perceptions translate into attitudes. The significant indirect effects (β = 0.26 and 0.24) demonstrate that a substantial portion of the influence of PEOU and PU on attitude is channeled through Trust in AI. This means that the AI's usefulness and ease of use matter primarily because they foster a sense of reliability and confidence in the technology.

Table 8 Mediation analysis.

Predictive validity of the inner model using PLS Predict

To assess the predictive validity of the inner model, we employed the PLS Predict approach. Using SmartPLS, we applied a k-fold cross-validation procedure to obtain Q²_predict values, which compare the model's out-of-sample prediction errors with those of a naive benchmark, following the guidelines of Shmueli et al. (2016).

Table 9 shows the predictive validity of the inner model. The Q²_predict values, all greater than zero (range: 0.19–0.29), confirm the model's predictive relevance for all constructs. Low prediction errors (MAE: 0.128–0.152; RMSE: 0.165–0.185) indicate accurate out-of-sample forecasts. Creativity (Q²_predict = 0.29) and Attitude (Q²_predict = 0.27) show the strongest predictive power, while PU (Q²_predict = 0.19) remains acceptable.

Table 9 Predictive validity of inner model (PLS predict results).
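The Q²_predict statistic compares the model's holdout prediction error with that of a naive benchmark that always predicts the training-sample mean; values above zero indicate predictive relevance. A minimal sketch of that comparison, using toy numbers rather than this study's data:

```python
import numpy as np

def q2_predict(y_true, y_pred, y_train_mean):
    """Q2_predict: 1 minus the ratio of the model's out-of-sample
    squared prediction error to that of a naive training-mean
    benchmark. Values > 0 indicate predictive relevance."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    sse_model = ((y_true - y_pred) ** 2).sum()
    sse_naive = ((y_true - y_train_mean) ** 2).sum()
    return 1.0 - sse_model / sse_naive

# toy holdout fold: observed scores, model predictions, training mean
q2 = q2_predict([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8], 2.5)
print(round(q2, 3))  # 0.98
```

SmartPLS aggregates this comparison across the k cross-validation folds; the sketch shows the computation for a single fold.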

Figure 5. Predictive validity assessment of inner model constructs using PLS Predict: a bar and line chart in which the bars indicate the Q²_predict values and the line shows the RMSE for each construct related to EFL students' creativity.

Discussion on the findings of study 1

The findings of this study underscore the pivotal role of digital literacy as a foundational enabler of AI acceptance among EFL students, directly enhancing both PEOU and PU of AI tools. More notably, the results reveal that trust in AI serves as a critical psychological mechanism through which these perceptions translate into positive attitudes. Interestingly, the mediating effect of trust is stronger for the path from PU to attitude than for the path from PEOU. This suggests that for students, trusting an AI tool is more strongly tied to believing it delivers genuine value and learning benefits (PU) than merely finding it easy to operate (PEOU). In other words, students are willing to navigate minor usability challenges if they are confident the tool is ultimately useful and competent, highlighting that functional reliability is a greater driver of trust than interface simplicity in educational AI contexts.

This trust-based attitude significantly predicts behavioral intention, affirming the core TAM relationships. Furthermore, the positive path from behavioral intention to self-reported creativity suggests that students' willingness to engage with AI is closely associated with their perceived creative capacity. However, an alternative theoretical ordering must be considered: it is plausible that students with higher pre-existing creative self-efficacy may be more predisposed to form positive attitudes toward open-ended, AI-driven learning tools, indicating potential for reverse causality. While our model tests a specific directional hypothesis, this reciprocal relationship should be explored in future longitudinal research. These insights hold important implications for educators and designers, emphasizing that, beyond promoting digital literacy, cultivating students' trust in AI is essential for translating acceptance into creative engagement.

Section two

Methodology for study two

Building upon insights from Study 1, Study 2 delved deeper into the multidimensional aspects of trust within the TAM framework to provide a more nuanced understanding of how EFL students perceive and engage with AI-driven educational technologies. This phase replicated the core hypotheses concerning digital literacy, PEOU, PU, and behavioral intention, while also expanding the analysis by exploring trust as a complex, multidimensional construct. Recognizing trust’s critical role in technology acceptance, Study 2 examined distinct aspects of trust, namely Human-like Trust (integrating benevolence and integrity) and Functionality Trust (reflecting competence). The trust dimensions are grounded in the frameworks established by Mayer et al. (1995) and McKnight et al. (2011) on trust in technology acceptance. The research procedure was also expanded to include a broader sampling strategy, which strengthened the evaluation of the trust dimensions and enabled the investigation of acceptance behaviors across different AI applications.

Design and hypotheses extension

Building on Study 1, Study 2 examines trust within the TAM framework among EFL students, specifically exploring how Digital Literacy and Creativity interact with new dimensions of trust. This study replicates the initial hypotheses (H1–H9) and expands the analysis by treating trust as a multidimensional construct. By studying trust as a nuanced construct, this research offers a detailed view of how Human-like Trust (benevolence and integrity) and Functionality Trust (competence) each uniquely contribute to fostering creativity and enhancing digital literacy within the EFL context.

Participants and procedure

An online survey was conducted in November 2024, targeting a larger and more diverse sample of EFL students from multiple universities across China. Sampling was stratified by gender, academic level, and major field of study within the participating universities to improve the balance of the sample and enhance generalizability. A total of n = 640 valid responses were obtained (54% female, 46% male; mean age 21). The sample primarily comprised Chinese nationals (90%), along with a smaller proportion of international students (10%) representing various academic disciplines. Invitations were sent through university channels, with course incentives offered to participants. The survey took approximately 12 minutes to complete. For further details, see Appendix Table B3.

While convenience sampling from multiple universities enhanced the diversity and size of the sample, it is important to acknowledge its limitations regarding generalizability. Participants were not randomly selected from a national registry of all EFL students, meaning the sample may not be fully representative of the broader population. Consequently, the findings may be most readily generalizable to students in similar institutional contexts within China, rather than to all EFL learners universally. This limitation is partially mitigated by the large sample size and the strategic inclusion of participants from varied academic disciplines and university types, which improves the capture of a wider range of experiences and strengthens the external validity within the defined context of this study. Future research would benefit from employing stratified random sampling techniques to enhance national representativeness further.

Measures

To maintain consistency, core constructs from Study 1 were retained and adapted, while new measures for trust were introduced to reflect its multidimensionality. The TAM-related items Digital Literacy, PEOU, PU, Attitude Toward AI, and Behavioral Intention to Use AI were adapted to fit the context of EFL students’ interactions with a broader set of AI-driven educational tools. For this study, trust was operationalized as a multidimensional construct to capture students’ complex perceptions of AI-driven educational technologies. Following the frameworks of Mayer et al. (1995) and McKnight et al. (2011), trust was conceptualized along two distinct dimensions. Human-like Trust in AI (combining benevolence and integrity) reflects students’ perceptions of AI tools as supportive and ethically grounded agents and was captured with six items. Functionality Trust in AI (competence) represents students’ confidence in the functional effectiveness of AI to deliver quality educational content and was measured with five items.

Analytic approach

PLS-SEM was implemented in SmartPLS 4 to analyze the complex relationships within the research model, selected for its predictive robustness and suitability for exploratory research involving both reflective and formative constructs. The analysis followed a two-step approach: first, the measurement model was validated by confirming indicator reliability (outer loadings >0.70), internal consistency (Composite Reliability >0.70), convergent validity (AVE > 0.50), and discriminant validity (HTMT ratio < 0.85); second, the structural model was evaluated by examining hypothesized paths using bootstrapping (5000 resamples) for significance testing, assessing model fit (SRMR < 0.08), and establishing predictive relevance (Q² > 0) via blindfolding procedures.
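As a concrete illustration of the first-step criteria, composite reliability and AVE can be computed directly from standardized outer loadings. The sketch below uses hypothetical loadings for a five-item scale; SmartPLS 4 reports these statistics automatically, so this is only a minimal restatement of the formulas.

```python
# Minimal restatement of the measurement-model formulas using
# hypothetical standardized outer loadings for a five-item scale.

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)  # indicator error variances
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    return sum(l ** 2 for l in loadings) / len(loadings)

ft_loadings = [0.78, 0.81, 0.75, 0.84, 0.79]  # hypothetical values

cr = composite_reliability(ft_loadings)
ave = average_variance_extracted(ft_loadings)
print(f"CR = {cr:.3f} (> 0.70), AVE = {ave:.3f} (> 0.50)")
```

With these illustrative loadings, all three criteria (outer loadings > 0.70, CR > 0.70, AVE > 0.50) would be met.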

Justification of analytical rigor and model specification

The application of PLS-SEM, including bootstrapping and blindfolding procedures, was selected based on its appropriateness for the study’s predictive and explanatory objectives. Bootstrapping (with 5000 resamples) is a non-parametric technique that does not assume normal data distribution, providing robust estimates of the standard errors and confidence intervals for evaluating the significance of path coefficients, a critical step for hypothesis testing in complex models (Hair & Alamer, 2022). Similarly, blindfolding was employed to generate the Q² statistic, a necessary measure to assess the model’s predictive relevance for endogenous constructs and to ensure that the path model possesses substantive explanatory power beyond data fitting. While the model incorporates multiple pathways to test the extended TAM framework thoroughly, the specification of relationships was theoretically grounded rather than saturated; each hypothesized path was derived from established TAM literature and our specific research questions concerning the multidimensional nature of trust.
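The case-resampling logic behind bootstrapping can be sketched in a few lines. The example below bootstraps the slope of a simple standardized regression on synthetic data (2,000 resamples here for brevity; the analysis itself used 5,000). SmartPLS applies the same resampling idea to every coefficient of the full path model.

```python
# Case-resampling bootstrap for a single regression slope on synthetic
# standardized data; 2,000 resamples here for brevity (the study used 5,000).
import random

random.seed(42)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]  # true slope = 0.5

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

boot = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]  # resample cases with replacement
    boot.append(slope([x[i] for i in idx], [y[i] for i in idx]))

boot.sort()
lo, hi = boot[50], boot[1949]  # approximate 2.5th and 97.5th percentiles
print(f"beta = {slope(x, y):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
# The path is deemed significant at alpha = .05 if the CI excludes zero.
```

Because the percentile interval is built from the empirical distribution of resampled estimates, no normality assumption is required, which is the property the text refers to.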

Multidimensional trust analysis

This section of the analysis focused on the contributions of Human-like Trust and Functionality Trust to students’ attitudes toward AI and their behavioral intentions. Treating trust as a multidimensional construct allowed us to determine how each dimension, in its own way, influences EFL students’ acceptance of, and creative involvement with, AI technologies.

Operationalization of multidimensional trust

The operationalization of trust in Study 2 was rigorously designed to capture its multifaceted nature. Moving beyond a unidimensional construct, trust was explicitly modeled as a second-order construct comprised of two distinct dimensions, Functionality Trust and Human-like Trust. This approach is grounded in the established theoretical frameworks of Mayer et al. (1995) and McKnight et al. (2011), which posit that trust is a holistic judgment formed by distinct perceptual dimensions. Functionality Trust was operationalized as a reflective first-order construct measuring students’ cognitive assessment of the AI’s competence, reliability, and performance. This dimension was measured using five items (see appendix-A). Human-like Trust was operationalized as a reflective first-order construct capturing the affective and relational perceptions of the AI’s benevolence, integrity, and ethical alignment. This dimension was measured using six items (see appendix-A). This operationalization ensures that the multidimensional framework is not merely claimed but is structurally embedded within the analytical model.

Findings for study 2

Descriptive statistics

Descriptive statistics and demographic characteristics of the research-relevant variables are reported below, together with a correlation matrix. The descriptive statistics provide an initial characterization of the sample, and the correlation matrix introduces the relationships among the core variables before the main analyses begin. Correlations between the study variables for Study 2 are reported in Appendix Table B4.

Table 10 reveals a consistently positive predisposition among EFL students towards AI technologies, a crucial contextual finding that shapes the interpretation of all subsequent analyses. The high mean scores across all constructs (ranging from M = 4.12 to M = 4.32 on a 5-point scale) indicate a sample that is already digitally literate, finds AI tools useful and easy to use, and holds broadly trusting and positive attitudes toward them.

Table 10 Descriptive statistics.

Figure 6 presents a radar chart displaying the factors influencing EFL students’ creativity. The blue area represents the relative value of each factor, with each axis corresponding to one of the variables.

Fig. 6

Comparison of mean values across key study variables.

Common method bias assessment

To rigorously address the potential for common method bias (CMB), a multifaceted statistical approach was employed, moving beyond the foundational Harman’s single-factor test. In addition to the procedural remedies of respondent anonymity, item randomization, and reverse coding, two advanced statistical techniques were implemented. First, a full collinearity test was conducted per Kock’s (2015) recommendation, a robust marker for CMB in variance-based SEM. This test assesses the VIFs of all latent constructs in the structural model; a VIF value below the stringent threshold of 3.3 indicates that the model is free from the pathological collinearity indicative of significant CMB. Second, we employed the latent method factor (LMF) approach (Podsakoff et al., 2003).
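A minimal sketch of the full collinearity test follows, using synthetic construct scores (all construct names and values are hypothetical): each construct is regressed on all the others, and VIF = 1 / (1 − R²) is checked against the 3.3 cutoff.

```python
# Illustrative full collinearity test (Kock, 2015) on synthetic construct
# scores: regress each construct on the others and check VIF = 1/(1 - R^2).
import random

random.seed(1)
n = 300
f = [random.gauss(0, 1) for _ in range(n)]          # shared common factor
dl = [0.6 * fi + random.gauss(0, 0.8) for fi in f]  # hypothetical digital literacy scores
pu = [0.6 * fi + random.gauss(0, 0.8) for fi in f]  # hypothetical PU scores
tr = [0.6 * fi + random.gauss(0, 0.8) for fi in f]  # hypothetical trust scores

def r_squared(y, x1, x2):
    # R^2 of y on two predictors via the normal equations (Cramer's rule)
    def center(v):
        m = sum(v) / len(v)
        return [vi - m for vi in v]
    y, x1, x2 = center(y), center(x1), center(x2)
    s11 = sum(a * a for a in x1)
    s22 = sum(a * a for a in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * b for a, b in zip(x1, y))
    s2y = sum(a * b for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    b1 = (s1y * s22 - s12 * s2y) / det
    b2 = (s11 * s2y - s12 * s1y) / det
    ss_res = sum((yi - b1 * a - b2 * b) ** 2 for yi, a, b in zip(y, x1, x2))
    ss_tot = sum(yi ** 2 for yi in y)
    return 1 - ss_res / ss_tot

for name, target, others in [("DL", dl, (pu, tr)),
                             ("PU", pu, (dl, tr)),
                             ("Trust", tr, (dl, pu))]:
    vif = 1 / (1 - r_squared(target, *others))
    print(f"VIF({name}) = {vif:.2f} (threshold < 3.3)")
```

In the study itself these VIFs are computed across all latent constructs of the structural model; the sketch simply makes the 1/(1 − R²) logic explicit.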

Table 11 consistently indicates that CMB is not a pervasive issue in this study. Harman’s single-factor test revealed that the variance accounted for by the first factor was below the 50% threshold. More robustly, the full collinearity test confirmed that all VIF values for the latent constructs were well below the stringent cutoff of 3.3 (range: 1.77–2.31). Finally, the LMF analysis demonstrated that the average substantive variance (0.71) was markedly greater than the average method variance (0.03), yielding a ratio of 23.7:1. Convergent evidence from these three techniques confirms that the relationships observed in the model are substantially driven by the constructs of interest.

Table 11 Statistical assessment of common method bias.

Table 12 shows that all values are below the HTMT 0.85 threshold, confirming that the constructs meet the discriminant validity criterion. This table provides evidence that each construct is conceptually distinct from the others, further supporting the robustness of the model’s construct validity.

Table 12 Discriminant validity by HTMT 0.85.
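For readers unfamiliar with the HTMT statistic, it is the ratio of the mean heterotrait (between-construct) item correlation to the geometric mean of the average monotrait (within-construct) item correlations. The sketch below uses hypothetical item correlations for two three-item constructs.

```python
# Illustrative HTMT calculation for two constructs (A, B) with three
# items each; all item correlations below are hypothetical.
from math import sqrt

within_A = [0.62, 0.58, 0.60]  # monotrait correlations among A's items
within_B = [0.66, 0.61, 0.64]  # monotrait correlations among B's items
between = [0.38, 0.35, 0.40, 0.36, 0.41, 0.37, 0.39, 0.34, 0.42]  # all A-B item pairs

def htmt(between, within_a, within_b):
    hetero = sum(between) / len(between)    # mean heterotrait correlation
    mono_a = sum(within_a) / len(within_a)  # mean monotrait correlation, A
    mono_b = sum(within_b) / len(within_b)  # mean monotrait correlation, B
    return hetero / sqrt(mono_a * mono_b)

print(f"HTMT(A, B) = {htmt(between, within_A, within_B):.3f} (< 0.85)")
```

Values near or above 0.85 would indicate that the two constructs are empirically hard to distinguish.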

Structural model and path analysis

The structural model analysis provides insights into the hypothesized relationships between constructs. Each dimension of trust (Human-like and Functionality) is analyzed for its influence on students’ Attitude Toward AI, Behavioral Intention to Use AI, and Creativity.

Table 13 reveals a compelling narrative about the drivers of AI integration in EFL learning, moving beyond established TAM relationships. While the significant paths from Digital Literacy to PEOU (β = 0.42) and PU (β = 0.37) confirm that technical competence is a foundational precursor, the critical finding lies in the formation of Human-like Trust. The influence of PU (β = 0.54) on Human-like Trust, even greater than that of PEOU (β = 0.50), suggests that for students to trust an AI as a benevolent, ethically grounded partner, it is not enough for the tool to be easy to use; it must be perceived as genuinely useful for their specific language learning goals. This robust trust subsequently becomes the primary driver of a positive Attitude Toward AI (β = 0.58), indicating that confidence in the AI as a supportive agent is a more powerful affective catalyst than its ease of use or PU alone. Finally, the model demonstrates that the intention to use AI, fueled by this trust-based attitude, is strongly linked to students’ self-reported creative capacity (β = 0.59).

Table 13 Path analysis results with human-like trust in AI.

Figure 7 shows a path diagram illustrating the relationships between the observed and latent variables in the structural equation model (SEM), including the hypothesized paths between constructs.

Fig. 7

Established measurement model for Human-like Trust in AI.

Table 14 results further substantiate the pivotal role of Functionality Trust as the crucial linchpin in the AI acceptance process for EFL learners. The analysis confirms that digital literacy cultivates perceptions of the usefulness (β = 0.36) and ease of use (β = 0.41) of AI tools, yet it is the translation of these perceptions into trust in the AI’s competence that is most critical for adoption. Notably, PU exerts a stronger influence on Functionality Trust (β = 0.53) than ease of use does (β = 0.49), underscoring that students prioritize the AI’s tangible value and effectiveness for language learning over its simplicity. This hard-earned trust in the tool’s reliability is the primary antecedent to a positive attitude (β = 0.57), which in turn strongly predicts the intention to use (β = 0.61). The final path to self-reported creativity (β = 0.58) suggests that the behavioral intention to engage with a functionally trustworthy AI is a significant predictor of students’ confidence in their creative abilities.

Table 14 Path analysis results with functionality trust.

Figure 8 shows a path diagram of the relationships between the observed and latent variables in the structural equation model (SEM), including the paths between constructs and their respective indicators.

Fig. 8

Established measurement model for functionality trust in AI.

Indirect analysis

Table 15 reveals a crucial psychological pathway in AI acceptance that extends beyond conventional TAM findings. The significant indirect effect (PU → Human-like Trust → Attitude, β = 0.37) demonstrates that students’ perception of an AI’s benevolence and integrity, its “human-like” qualities, functions as a key mechanism through which usefulness translates into positive attitude formation. This suggests that for EFL students, believing an AI tool is designed with ethical integrity and supportive intent is not merely ancillary; it is a fundamental prerequisite for developing a favorable disposition toward it. The indirect path to creativity (β = 0.29), while based on self-report measures, implies that this sense of relational safety and ethical assurance may reduce apprehension.

Table 15 Indirect effects with human-like trust in AI (Study 2).

Table 16 highlights a parallel, yet distinct, competency-based pathway. The strongest indirect effect in the model (PU → Functionality Trust → Attitude, β = 0.38) underscores that PU builds trust primarily through confidence in the AI’s competence and reliability. The comparable strength of indirect effects for both trust dimensions (0.37 for Human-like vs. 0.38 for Functionality in the PU-Attitude path) is a novel contribution; it indicates that both perceived competence (Functionality Trust) and perceived ethical integrity (Human-like Trust) are equally vital yet psychologically distinct drivers of acceptance.

Table 16 Analysis of indirect effects with functionality trust in AI (Study 2).
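The logic of a specific indirect effect can be restated in a short sketch: it is the product of its component paths, and its significance is judged from the bootstrap confidence interval of that product. The coefficients below are illustrative, not the exact model output.

```python
# A specific indirect effect is the product of its component paths;
# all coefficients here are illustrative, not the exact model output.
a = 0.54       # hypothetical path: PU -> Trust
b = 0.58       # hypothetical path: Trust -> Attitude
direct = 0.10  # hypothetical residual direct path: PU -> Attitude

indirect = a * b           # mediated (indirect) effect
total = direct + indirect  # total effect of PU on Attitude
print(f"indirect = {indirect:.2f}, total = {total:.2f}")
# In PLS-SEM, the significance of a*b is judged from the bootstrap
# confidence interval of the product, not from a and b separately.
```

This product-of-paths logic is why the indirect effects in Tables 15 and 16 can differ in strength even when the component paths look similar across the two trust models.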

Predictive validity of the inner model

To assess the predictive validity of the inner model, the out-of-sample predictive accuracy of the key constructs was estimated using the PLS Predict procedure. Beyond convergent and discriminant validity, this step ensures that the model can generate meaningful predictions for data outside the estimation sample.

Table 17 demonstrates the model’s strong predictive validity, with all Q²_predict values well above zero (0.26–0.30), confirming its relevance for forecasting outcomes. Creativity shows the highest predictive power (Q²_predict = 0.30), while low error metrics (MAE: 0.137–0.150; RMSE: 0.175–0.185) underscore the accuracy of these predictions. These findings support the model’s robustness in explaining and predicting key constructs in AI-enabled learning environments.

Table 17 Predictive validity of the inner model using PLS predict.
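The PLS Predict metrics reported above follow simple definitions: Q²_predict compares the model’s out-of-sample squared errors with those of a naive training-mean benchmark, while MAE and RMSE summarize absolute and squared prediction errors. A minimal sketch with synthetic holdout values (all numbers hypothetical):

```python
# Minimal sketch of PLS Predict-style metrics on synthetic holdout data:
# Q2_predict compares model errors with a naive training-mean benchmark.
from math import sqrt

y_true = [4.1, 4.3, 3.9, 4.5, 4.0, 4.2, 4.4, 3.8]  # hypothetical holdout values
y_pred = [4.0, 4.2, 4.0, 4.4, 4.1, 4.2, 4.3, 3.9]  # hypothetical model predictions
train_mean = 4.15                                   # benchmark from training folds

sse_model = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
sse_bench = sum((t - train_mean) ** 2 for t in y_true)

q2_predict = 1 - sse_model / sse_bench
mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
rmse = sqrt(sse_model / len(y_true))

print(f"Q2_predict = {q2_predict:.3f}, MAE = {mae:.3f}, RMSE = {rmse:.3f}")
# Q2_predict > 0 means the model outperforms the naive mean benchmark.
```

Any Q²_predict above zero, as in Table 17, indicates that the model predicts the construct better than simply guessing the training mean.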

Figure 9 presents a bar chart comparing the predictive relevance of the key constructs: Attitude Toward AI, Behavioral Intention to Use AI, and EFL students’ creativity. The bars show the Q²_predict values (blue), with the MAE (mean absolute error) values in red and the RMSE (root mean squared error) values in green.

Fig. 9

Predictive validity comparison of key constructs using PLS predict.

Multidimensional trust analysis

The multidimensional trust analysis examines the independent influence of Human-like Trust and Functionality Trust on the primary outcomes of the study: Attitude Toward AI, Behavioral Intention to Use AI, and EFL students’ creativity. Analyzing the dimensions separately clarifies the role each aspect of trust plays in students’ engagement with AI-based educational tools. The perceived benevolence and integrity underlying Human-like Trust are expected to foster a sense of social relatedness with AI tools, encouraging deeper engagement and inventiveness. Conversely, the competence and reliability embodied in Functionality Trust are predicted to increase confidence in the tool’s effectiveness and thereby strengthen attitudes and behavioral intentions to use AI for learning.

Table 18 provides a critical comparative analysis of the differential effects of the trust dimensions on key educational outcomes, albeit with self-reported creativity as the measure. The findings reveal an interesting nuance: Functionality Trust (competence, reliability) exerts a stronger influence on Attitude (f² = 0.22) and Creativity (f² = 0.20) than Human-like Trust does, implying that students emphasize technical reliability over empathy when forming attitudes toward, and engaging creatively with, AI. Conversely, Human-like Trust (benevolence, integrity) exhibits the strongest influence on Behavioral Intention (f² = 0.18), suggesting that perceived goodwill may be more decisive in motivating the intent to use AI than in producing a particular creative result.

Table 18 Multidimensional trust outcomes.
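The f² effect sizes in Table 18 follow Cohen’s formula, which compares the R² of the endogenous construct with and without the focal predictor. A sketch with hypothetical R² values:

```python
# Cohen's f^2 effect size: the predictor's contribution measured as the
# change in R^2 when it is excluded (R^2 values below are hypothetical).
def f_squared(r2_included, r2_excluded):
    return (r2_included - r2_excluded) / (1 - r2_included)

r2_full = 0.55        # hypothetical R^2 for Attitude with Functionality Trust
r2_without_ft = 0.451 # hypothetical R^2 for Attitude without it

f2 = f_squared(r2_full, r2_without_ft)
print(f"f2 = {f2:.2f}")  # ~0.22; 0.02/0.15/0.35 = small/medium/large (Cohen, 1988)
```

By this convention, the effects in Table 18 (0.18–0.22) all fall in the medium range.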

Figure 10 depicts the comparative impact of Human-like Trust and Functionality Trust on the key outcomes. The visualization illustrates that the trust dimensions do not function uniformly; multiple elements of trust interact whenever students engage with AI-based learning technologies.

Fig. 10

Comparative impact of human-like and functionality trust on key outcomes.

Furthermore, Fig. 11 shows how Human-like Trust and Functionality Trust flow to the outcomes of Attitude, Behavioral Intention, and Creativity. The wheel-shaped structure, in which line intensity indicates the strength of each relation, highlights the positions and relative contributions of the trust dimensions.

Fig. 11

Trust dimension contribution wheel.

Discussion on the findings of study 2

The findings from Study 2 provide a nuanced understanding of AI acceptance by moving beyond the monolithic treatment of trust prevalent in prior TAM literature. Our results confirm that trust is not a single construct but operates through two distinct psychological pathways: one relational and one competency-based. The significant influence of Human-like Trust (benevolence and integrity) on attitudes and behavioral intentions reveals a critical relational dimension. This suggests that for EFL students, the acceptance of AI is not solely a calculated decision based on utility but is also an affective response to the perceived character of the AI tool. When students perceive AI as having integrity and acting with benevolent intent towards their learning, they are more likely to develop a positive attitude and intend to use it. This finding extends TAM by integrating principles from relationship marketing and ethical AI, positing that in educational contexts, technologies that mimic supportive human interactions can foster a sense of psychological safety. This safety, in turn, is shown to be a significant precursor to self-reported creative engagement, implying that students are more willing to experiment and take linguistic risks when they feel supported by a non-judgmental, ethically grounded partner.

Conversely, functionality trust (competence and reliability) emerged as the bedrock of sustained intention and creative confidence. Its stronger predictive power across key outcomes underscores a fundamental reality: no amount of empathetic design can compensate for a tool that is perceived as unreliable or ineffective. This dimension of trust validates and deepens core TAM constructs by demonstrating that PU and ease of use are ultimately synthesized into a global judgment of the tool’s capability, which is the immediate driver of usage decisions. The strong link between Functionality Trust and creativity is particularly insightful; it indicates that students’ confidence in their creative output is contingent on their confidence in the tool’s operational integrity. This finding critically acknowledges that creativity in an AI-mediated context is not purely spontaneous but is built upon a foundation of technical reliability.

General conclusion

Synthesizing findings across both studies, this research establishes that digital literacy and multidimensional trust form a critical pathway not just to AI acceptance, but to the goal of fostering EFL students’ creativity. Study 1 confirmed that digital literacy primes perceptions of usefulness and ease of use, which foster the trust necessary to form positive attitudes and, crucially, the behavioral intention to use AI creatively. Study 2 refined this model by demonstrating that creativity is differentially supported by two distinct trust dimensions: Functionality Trust (competence, reliability) builds the confidence needed for creative experimentation by ensuring the AI is a dependable partner, while Human-like Trust (benevolence, integrity) creates the psychological safety required for creative risk-taking by assuring students of the AI’s supportive and ethical intent.

These insights translate into actionable design principles for developing AI as a catalyst for creativity. To bolster Functionality Trust, systems should implement explainable feedback loops that clarify how AI-generated suggestions work, alongside demonstrable reliability in tasks like error correction or content generation. To cultivate Human-like Trust, interfaces should incorporate empathetic interaction patterns such as supportive messages and ethically transparent operations that reinforce perceived benevolence and integrity. Furthermore, instructional support should aim to strengthen digital literacy not only to improve tool usability but also to help students critically evaluate AI outputs, thereby fostering informed and creative use.

Theoretical and practical implications

Theoretically, this study makes a significant contribution by successfully decomposing the trust construct within the TAM framework, offering a more precise mechanism to explain technology acceptance in educational settings. We move from asking if trust matters to explaining how different types of trust matter in distinct ways: Human-like Trust facilitates the relational adoption of AI, while Functionality Trust ensures its sustainable and confident use. This dual-trust model bridges the gap between purely utilitarian TAMs and theories of relational communication, providing a holistic framework for future research. Practically, these findings offer actionable guidance for educators, instructional designers, and AI developers. For developers, the implications are clear: AI tool design must achieve a synergy of competence and character. Interfaces should be engineered not only for algorithmic accuracy and reliability but also to communicate transparency, ethical operation, and supportive intent through features like explainable AI and empathetic feedback tones. For educators and institutions, this means that fostering AI acceptance requires more than just demonstrating a tool’s usefulness; it necessitates building students’ digital literacy to understand AI’s functionality, thereby bolstering Functionality Trust, while also creating a classroom culture that encourages open exploration with AI as a collaborative partner, thereby nurturing Human-like Trust.

Limitations and future research

This study has several key limitations that provide a foundation for future inquiry. Firstly, the cross-sectional design precludes definitive causal inferences regarding the relationships between trust, acceptance, and creativity. Secondly, the operationalization of creativity relied on self-report measures, which capture perceived creative self-efficacy rather than objectively assessed creative performance. Consequently, while the observed relationships are statistically significant, they reflect students’ beliefs about their capabilities rather than verified creative outcomes. Finally, the cultural homogeneity of the sample (80% Chinese students) may limit the generalizability of the findings across diverse cultural and educational contexts.

Future research should address these limitations through methodologically robust designs. Longitudinal studies, such as multi-wave panel designs, or experimental interventions like trust induction experiments, could establish causal pathways and temporal dynamics between trust, AI use, and creative development. To better capture creativity, future work should integrate behavioral measures, such as assessing originality, flexibility, and elaboration in student-generated outputs like essays and dialog transcripts with AI chatbots, complemented by instructor evaluations or automated linguistic creativity analyses. Additionally, cross-cultural replications comparing collectivist and individualist settings would help elucidate the role of cultural values in shaping trust and creative engagement with AI.