Abstract
The integration of artificial intelligence (AI) into educational settings has the potential to transform learning experiences; however, its adoption among students is still shaped by various factors. The current study assesses the determinant factors that shape students' willingness to adopt AI applications. A quantitative approach was applied, and data were collected from 211 undergraduate students at the University of Ha'il. The data were analysed via structural equation modelling (SEM) with the AMOS software. The results revealed that readiness, interactivity, and ethical awareness significantly affect students' adoption of AI applications, whereas trust and performance did not have a significant effect. These findings suggest that fostering AI adoption among students requires a holistic approach that addresses students' technical preparedness while also considering trust-building and ethical educational use. The implications of this study are significant for educational institutions such as universities, as well as for policy makers, educators and developers aiming to integrate AI applications effectively into students' learning and curricula. University administrators could benefit by developing the necessary infrastructure and training, while educators could leverage these insights in designing AI-supported learning environments. Students would benefit from improved academic experiences that are tailored to their needs. This research contributes to the growing discourse on AI in education and offers a holistic conceptual framework to maximize its potential.
Introduction
Artificial intelligence (AI) has transformed higher education globally, revolutionizing traditional educational processes and providing new learning opportunities. The rapid advancement of AI has improved several disciplines, including the education sector. The emergence of generative AI applications such as ChatGPT has drawn significant interest from academia and international organizations in the education field1. As an example of an AI application, ChatGPT can generate text and code; it offers real-time feedback, personalized tutoring, and improved learning experiences across many subjects1,2. Applications of AI in education include automated procedures that can be used to offer personalized learning experiences, with effects on learning, teaching and institutional and organizational management3,4. In higher education, key AI applications include adaptive learning platforms, personalized learning and intelligent virtual learning environments3,5. This advancement has shifted the paradigm of traditional education from a focus on conventional classes to enhanced AI learning spaces, which may improve students' learning and learning outcomes and can optimize management and administrative tasks6,7.
These emerging trends have also involved international organizations. For instance, WIPO8 provided clear definitions of AI technology and explained its prominence in several areas, including education. UNESCO9 produced reports and documents offering guidance on the appropriate use of AI applications in education, as well as on the status of currently used AI applications, evolving trends and ways to improve the use of AI in the field. These reports also demonstrate the importance of AI technologies to the future of the education system. Along with this, the EDUCAUSE Horizon report10 notes the transformative effect of AI on education. Additionally, research on AI-enhanced education has grown alongside the increasing use of such applications by many international organizations focused on education.
As a result, perspectives on the use of AI have advanced significantly across various educational domains, such as language11, programming12, (intelligent) learning13, online learning14,15 and even higher education16. Emerging technologies are increasingly being adopted by institutes and universities in the higher education sector. For this reason, university students are primarily affected by the advancement of ICT, as they are the largest demographic with the greatest access to and opportunity to use and interact with AI applications in their learning17,1. In addition, educational practitioners hold the view that AI applications have been used successfully in this field and can play a very important role in almost all educational processes, such as teaching, learning, management, assessment and testing, ultimately modifying and changing nearly all of them. In general, AI trends and their implementation in the educational process are irreversible18.
However, new studies indicate that there are a number of problems with using AI within universities, driven by different factors pulling in distinct directions. Perceived usefulness, capability and perceptions of the technology can drive or hinder the use of AI19,20. Furthermore, privacy and ethical issues may pose barriers to its utilization21. In addition, the impacts of AI on students' psychological well-being are mixed, as it can provide personalized support but may also generate anxiety and stress22. Cost effectiveness and learning outcomes have emerged as influencing factors in students' adoption and acceptance21, while the personalization and usability of AI applications are seen as pivotal drivers of AI use23. In addition, cognitive trust in AI in education, which is built on ethical considerations, reliability and transparency, plays a considerable role in AI adoption24. Contextual factors, for instance, expectations regarding effort and performance, facilitating conditions and social influence, also affect the intention to utilise AI25. Students' perceptions of and attitudes toward technologies significantly affect their use25. These variables interact in complicated ways, and their interplay and relevance may vary across educational contexts26, highlighting the importance of further research to develop a comprehensive understanding of the dynamics of AI adoption in different educational settings and with various factors.
Although several studies have examined the influence of psychological factors on students' adoption of AI, a significant knowledge gap remains. A study27 showed a considerable knowledge gap regarding AI and its utilization in education, suggesting that the lack of knowledge about using AI applications may limit the ability to use these tools effectively. While research in other settings and contexts has focused on certain factors, such as the enabling environment, psychological attitudes and epistemic capability, as critical determinants of AI adoption28, a comprehensive understanding of the variability and interaction of other factors in different educational and cultural settings is lacking. Other factors, such as readiness, performance, trust, interactivity, and ethical awareness, could also influence students' adoption of AI. Despite the significance of these factors, no existing study appears to have integrated them into a single conceptual model to explain their effects on students' AI adoption in educational settings. The value of combining these factors lies in their ability to offer a holistic understanding of the drivers of AI adoption by capturing the multifaceted nature of students' engagement and interaction with AI applications. Readiness relates to students being equipped to engage with AI; performance reflects the perceived value of learning through AI; trust concerns students' confidence in the reliability of using AI; interactivity relates to students' active engagement; and ethical awareness concerns the responsible utilization of AI, addressing its attitudinal, practical and ethical dimensions. Understanding the interplay among these factors provides a holistic conceptual model for AI adoption and utilization in education, thus offering insight for educators, policy makers and AI developers to design and deploy more ethical, engaging and effective AI applications that support the learning environment and lead to the utilisation of AI. This study aims to fill this gap by offering a holistic conceptual model and utilizing structural equation modelling (SEM) to understand which factors affect students' utilisation of AI applications for their learning, thereby enhancing students' utilization of AI applications.
Literature review
Artificial intelligence (AI)
Various methods proposed in AI applications are considered to enhance students' learning experience29. Research suggests that these AI applications can provide some level of personalized feedback, which students can then use to revise their responses to open-ended questions and academic research30. Additionally, as generative AI conversational chatbots are introduced, they can serve as virtual teaching assistants through interactive learning, enhancing students' shared understanding and thereby facilitating greater collaboration and engagement31,32. AI also offers learning through augmented reality, the metaverse and simulation33. AI can assist students by arranging their learning tasks, monitoring learning achievement and progress, recommending courses and prioritizing and providing learning tasks34. AI could benefit instructors in grading homework and adjusting their teaching methods to meet students' requirements35. Advanced AI could also respond to students' inquiries, even outside regular class hours, and could even assist them with admission queries and administrative questions about chosen courses36. In addition, AI is useful in creating smart content, including digitized textbooks that can be customized based on learners' needs across all levels and various courses37. Nevertheless, despite the considerable benefits of AI applications and their proven effectiveness in supporting the learning environment, students are often reluctant to utilize these tools, a critical issue that must be addressed by understanding the variables that may affect learners' attitudes toward utilizing AI tools in their learning38,39.
Hypothesis development for the conceptual framework
AI readiness
Being ready implies knowing what action to take and how to execute it40. Thus, if a person is ready to adopt a new technology, this readiness encourages them to do so41. Dai et al.42 discovered that students' motivation to learn AI is associated with their ability to apply it. The way a learner is prepared for AI affects how they use and interact with AI systems. Therefore, the following hypothesis is proposed:
H1
AI readiness has a positive effect on students’ willingness to utilize AI.
AI performance
Performance refers to the degree to which students believe that employing a particular technology can enhance their success and help them complete the required work43. AI allows accurate and quick processing of information and data. According to Sudaryanto et al.44, when learners see AI as useful for their learning, they tend to use it more often. For this reason, the following hypothesis is proposed:
H2
AI performance has a positive effect on students’ willingness to utilize AI.
Trust
Users trust a system or technology when they are certain it will protect their interests45. Trust can guide users to adopt and utilize a given technology46. If a technology is seen as credible, people are more likely to intend to use it. Trust might also encourage users to provide and share their personal information when using a technology, which could yield more helpful and accurate responses to users' prompts47. Within the context of AI, utilization is affected by many factors and depends on the specific context of implementation38. Most studies have focused on examining individuals' attitudes towards adopting AI. Nevertheless, despite the critical role of trust, only a few studies have included it in their student-based models, highlighting that trust remains an underexplored aspect of AI in education. Most studies have employed a TAM model that incorporates trust48,49,50. However, assessing trust alongside other factors such as readiness, performance, interactivity, and ethical awareness in a single conceptual model is needed to gain a deeper understanding of which factors play a critical role in enhancing the utilization of AI applications among students. Thus, the following hypothesis is proposed:
H3
Trust has a positive effect on students’ willingness to utilize AI.
AI interactivity
Interactive AI applications such as virtual assistants, chatbots such as ChatGPT, and intelligent tutoring platforms are increasingly utilised in educational settings. The interactivity of AI applications offers students personalized learning interactions with live responses, feedback, explanations and hints. Ng et al.51 suggested that AI tutoring systems could offer personalized guidance that increases students' interest in learning. Because AI applications can adapt instruction on the basis of student responses, AI interactivity could help students learn better. A study by Mahmoud and Othman19 revealed that such AI applications provide not only tasks but also corresponding materials and content to augment students' learning with relevant information and knowledge52. Interaction in AI applications can support immersive and personalized learning by creating objectives whose complexity can be modified and adjusted and by providing students with instant feedback53. Thus, the following hypothesis is proposed:
H4
AI interactivity has a positive effect on students’ willingness to utilize AI.
Ethical awareness
Ethical awareness refers to students’ consciousness and understanding of the ethical responsibilities and implications associated with the utilisation of AI in educational contexts54.
This factor has received increasing attention as universities and institutions have grown concerned about the academic liability and accountability associated with the use of AI25. It has also been shown that ethical awareness affects students' behavioural intentions. Zhu et al.55 found that students' ethical awareness positively affected their behavioural intention to use AI applications. Understanding ethical policies and the impacts of AI usage plays a significant role in improving students' adoption of AI applications. Recent studies of AI literacy programmes also confirm that ethical consciousness is relevant to shaping behavioural intention; such programmes have been shown to positively affect participants' ethical awareness, which is notable given how essential ethical awareness is to AI utilisation. These findings indicate that understanding the ethics of AI directly affects students' use of AI applications. The following hypothesis is therefore proposed:
H5
Ethical awareness has a positive effect on students’ willingness to utilize AI.
The conceptual framework model is presented in Fig. 1:
Methodology
Research design
The effects of readiness, performance, interactivity, trust and ethical awareness on students' willingness to use AI applications were investigated using a quantitative approach. This approach was used because it is well suited to analysing relationships among variables and testing research hypotheses56. The data were collected from students at a single point in time, a cross-sectional strategy that allows the hypothesized relationships among the variables, and how they interact with one another, to be tested.
Sampling and procedures
The participants were University of Ha'il undergraduate students who had previously used AI applications such as ChatGPT, virtual assistants, learning management systems and other AI-powered educational applications. To obtain informed responses about their perceptions and adoption behaviours, participants who had been exposed to AI and possessed AI knowledge were purposively sampled. First-year students taking computer and information classes formed the selection group for this research. The study sought 350 student participants in accordance with SEM guidelines; Hair et al.57 suggested that 200–400 respondents are necessary to attain reliable parameter estimates. An online survey, distributed by email and on the Blackboard learning management system, was used for data collection to ensure ease of participation. The research received approval from the institutional review board at the University of Ha'il. The survey was distributed from January to March 2025, in the second semester of the 2025 academic year. A total of 226 responses were initially received; after 15 inconsistent, incomplete or outlier cases were removed, a final analytic sample of 211 validated responses remained.
Instrumentation
The questionnaire consists of two sections. The first section contains self-designed items used to gather respondents' demographic data, including their gender, college, and experience with AI learning applications. The second section is a structured questionnaire that evaluates the constructs of interest: AI readiness, performance, interactivity, trust, ethical awareness, and AI adoption. The validity and reliability of the instrument were supported by adapting established research scales that have been used in assessments of AI in education. Participants responded on a 5-point Likert scale for all construct measures. AI readiness was measured via items adapted from Sanusi et al.58; performance items were drawn from Nazaretsky et al.59; the interactivity measure came from Yaseen et al.60; trust measures were derived from Rahman et al.61; ethical awareness items were sourced from Acosta-Enriquez et al.62; and AI adoption was assessed via items from Sanusi et al.58.
Data analysis techniques
Two approaches were adopted for data analysis. The first involved generating descriptive statistics in SPSS to evaluate respondents' demographic characteristics. The second involved SEM via the AMOS software. SEM provided a framework for assessing both the theoretical connections among AI readiness, performance, interactivity, trust, ethical awareness, and AI adoption and the appropriateness of the proposed structural model. The SEM analysis comprised two steps. First, confirmatory factor analysis (CFA) was used to assess construct, convergent and discriminant validity in the measurement model. Then, the structural model was estimated to test the causal pathways among AI readiness, performance, interactivity, trust, ethical awareness and AI adoption. The SEM analysis assessed path coefficients together with their corresponding significance values to establish the directional relationships between variables and test the hypotheses. This study adopted SEM as its core analytical method because it can analyse intricate relationships among many latent constructs simultaneously and thus suits the research framework. Kline63 demonstrated that SEM represents a superior alternative to traditional regression because it accounts for measurement error while evaluating complete theoretical models, making it appropriate for studying AI readiness, performance, interactivity, trust, ethical awareness and adoption within a unified system. The AMOS software was used because it supports strong estimation methods such as maximum likelihood estimation and produces detailed model fit diagnostics64. Its graphical modelling interface allows users to visualize the model while refining its structural components, which aids the interpretation of outcomes. The analysis was therefore performed with SEM in AMOS, as this approach provides an appropriate statistical method for working with latent measures while validating data-driven models.
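Although the analysis in this study was performed in AMOS, the following is a minimal sketch of how a comparable measurement and structural model could be specified in Python using the semopy package; the data file and item names (e.g., READY1, ADOPT3) are hypothetical placeholders rather than the study's actual indicators.

```python
# Illustrative only: the study used IBM SPSS AMOS. This sketch shows how a
# comparable measurement + structural model could be specified in Python with
# semopy. The CSV file and item names (READY1 ... ADOPT3) are hypothetical
# placeholders, not the study's actual questionnaire items.
import pandas as pd
import semopy

MODEL_DESC = """
# measurement model (CFA part)
READY =~ READY1 + READY2 + READY3
PERF  =~ PERF1 + PERF2 + PERF3
TRUST =~ TRUST1 + TRUST2 + TRUST3
INTER =~ INTER1 + INTER2 + INTER3
ETHIC =~ ETHIC1 + ETHIC2 + ETHIC3
ADOPT =~ ADOPT1 + ADOPT2 + ADOPT3

# structural model: the paths tested by H1-H5
ADOPT ~ READY + PERF + TRUST + INTER + ETHIC
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical data file
model = semopy.Model(MODEL_DESC)
model.fit(data)                             # maximum likelihood by default

print(model.inspect())                      # estimates, SEs, z-values, p-values
print(semopy.calc_stats(model).T)           # fit indices: chi2, CFI, TLI, RMSEA, ...
```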
Results
A total of 211 students responded to the survey. Most respondents were female (108; 51.2%), with fewer being male (103; 48.8%). In terms of their academic programs, most respondents were registered in a bachelor’s degree program (196; 92.9%), followed by master’s degree programs (10; 4.7%) or a diploma (5; 2.4%). In terms of their colleges, most students were enrolled in the College of Art (77; 36.5%), followed by the College of Business Administration (68; 32.2%), and then the College of Education (30; 14.2%), while the fewest students were enrolled in applied colleges (5; 2.4%). In regard to their experience of using AI applications in their learning, most had already used them (185; 87.7%), while a few had not yet used them (26; 12.3%). Table 1 presents the demographic information.
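As a brief sketch of the descriptive step (which the methodology notes was carried out in SPSS), the frequencies and percentages above could be tabulated as follows; the file and column names are hypothetical placeholders.

```python
# Sketch of the descriptive statistics step; file and column names are
# hypothetical placeholders for the survey's demographic variables.
import pandas as pd

data = pd.read_csv("survey_responses.csv")

for col in ["gender", "program", "college", "ai_experience"]:
    counts = data[col].value_counts()
    percent = data[col].value_counts(normalize=True).mul(100).round(1)
    print(pd.DataFrame({"n": counts, "%": percent}), end="\n\n")
```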
CFA
CFA was applied to the measurement model. Construct validity is confirmed when all indices in the proposed model achieve the threshold values suggested by previous scholars65. Certain items with low loadings, namely readiness item 5 and performance items 4, 5 and 7, were eliminated, and CFA was performed again. The index values are shown in Fig. 2.
As shown in Fig. 2, all indices in the model achieved the values suggested in previous literature. Thus, construct validity is confirmed. Table 2 presents the fitness index values for the model.
Following CFA, convergent validity must be assessed before discriminant validity is ascertained65. Convergent validity is confirmed when the value of composite reliability (CR) is greater than 0.6 and the value of average variance extracted (AVE) exceeds 0.566. The values of both CR and AVE, as shown in Table 3, confirm convergent validity.
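For reference, both indices follow directly from the standardized factor loadings: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ² / n. The short sketch below computes them; the loadings shown are illustrative values, not those obtained in this study.

```python
# Composite reliability (CR) and average variance extracted (AVE) from
# standardized factor loadings. The loadings below are illustrative only.
def composite_reliability(loadings):
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)           # item error variances
    return total ** 2 / (total ** 2 + error)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

readiness = [0.78, 0.81, 0.74, 0.69]                    # hypothetical loadings
print(round(composite_reliability(readiness), 3))       # 0.842 -> above 0.6
print(round(average_variance_extracted(readiness), 3))  # 0.572 -> above 0.5
```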
Finally, discriminant validity is an important step to ensure that each construct in the model is distinct from the others. Discriminant validity is confirmed when the AVE square root values (presented in bold) have greater values than other values in their rows and columns65. As shown in Table 4, all values in bold exceed other values in their respective rows and columns. Thus, discriminant validity is confirmed.
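The Fornell-Larcker comparison described above can also be checked programmatically. In the sketch below, the AVE values and the latent correlation matrix are illustrative placeholders, not the values reported in Tables 3 and 4.

```python
# Fornell-Larcker check: sqrt(AVE) of each construct must exceed its
# correlations with every other construct. All values are illustrative only.
import numpy as np
import pandas as pd

constructs = ["READY", "INTER", "ADOPT"]
ave = pd.Series([0.57, 0.63, 0.60], index=constructs)          # illustrative
corr = pd.DataFrame([[1.00, 0.48, 0.55],
                     [0.48, 1.00, 0.62],
                     [0.55, 0.62, 1.00]],
                    index=constructs, columns=constructs)      # illustrative

sqrt_ave = np.sqrt(ave)
for c in constructs:
    max_corr = corr[c].drop(c).abs().max()
    verdict = "supported" if sqrt_ave[c] > max_corr else "violated"
    print(f"{c}: sqrt(AVE)={sqrt_ave[c]:.3f}, max corr={max_corr:.2f} -> {verdict}")
```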
Furthermore, the Mahalanobis distance was computed for all cases to identify possible multivariate outliers. Observations with p1 and p2 values below 0.001 were regarded as significant outliers. According to this criterion, a number of extreme cases were identified and eliminated to increase the robustness of the model; examples of deleted cases include observations 4, 43, 86 and 88, as shown in Table 5. After these outliers were removed, 211 valid cases remained for further analysis out of the original 226 responses.
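AMOS reports the squared Mahalanobis distance with its p1 and p2 probabilities directly; as a minimal sketch under the usual multivariate-normality assumption (not the authors' exact procedure), the distance and a p1-style chi-square tail probability can be computed as follows.

```python
# Multivariate outlier screening via squared Mahalanobis distance.
# p1 is the chi-square upper-tail probability of d^2 with df equal to the
# number of observed variables, mirroring the p < 0.001 cutoff used here.
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, alpha=0.001):
    X = np.asarray(X, dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances
    p1 = chi2.sf(d2, df=X.shape[1])                     # upper-tail p-values
    return np.where(p1 < alpha)[0], d2, p1

# usage (item_scores is an n-cases x k-items array):
# flagged, d2, p1 = mahalanobis_outliers(item_scores)
```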
Standardized estimate
Assessing the standardized estimates is important for measuring the strength of the relationships, the factor loadings of the items, and the R-square of the dependent variable in the model. The output of the standardized estimates is shown in Fig. 3.
The R-square of the dependent construct, namely behavioural intention to use AI applications, is 0.70, indicating that 70% of the variance in students' behavioural intention to utilise AI applications is explained by the independent constructs, namely readiness, performance, trust, interactivity and ethical awareness. Cohen67 suggested that R-square values greater than 0.26 indicate that a model has high explanatory power. Therefore, the proposed model, with an R-square value of 0.70, robustly explains the factors that affect students' willingness to use AI applications.
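For additional context, an R-square of 0.70 corresponds to Cohen's f² = R²/(1 − R²) ≈ 2.33, far above the 0.35 benchmark for a large effect (which itself corresponds to Cohen's R² threshold of roughly 0.26); the one-line check below works this out.

```python
# Effect size implied by the reported R-square: Cohen's f^2 = R^2 / (1 - R^2).
r2 = 0.70
print(round(r2 / (1 - r2), 2))  # 2.33, well above the 0.35 cutoff for a large effect
```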
Unstandardized estimate
Assessing the unstandardized estimates is also important for obtaining the beta weights and critical ratios needed to test the research hypotheses. Therefore, the unstandardized estimates were assessed, and the output is presented in Fig. 4.
Hypothesis testing results
The results, as shown in Table 6, revealed that AI readiness affected students’ behavioural intention to use AI applications (β = 0.339, p < 0.05). Thus, H1 is supported. Surprisingly, both perceived performance and trust had insignificant effects on students’ behavioural intention to use AI applications (β = − 0.138, p > 0.05; β = 0.059, p > 0.05). Thus, H2 and H3 are rejected. Furthermore, both interactivity with AI and ethical awareness had a significant effect on students’ behavioural intention to utilize AI applications (β = 0.475, p < 0.05; β = 0.203, p < 0.05). Therefore, H4 and H5 are confirmed.
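For completeness, the decision rule behind Table 6 can be shown compactly: AMOS reports an unstandardized estimate, its standard error and a critical ratio (C.R. = estimate / S.E.), and the two-tailed p-value follows from the standard normal distribution. In the sketch below, the coefficients are those reported above, but the standard errors are hypothetical placeholders included only to demonstrate the calculation; they are not the values in Table 6.

```python
# Critical ratios and two-tailed p-values for the five structural paths.
# Standard errors are hypothetical placeholders for illustration only.
from scipy.stats import norm

paths = {  # path: (coefficient reported above, hypothetical SE)
    "READY -> ADOPT (H1)": (0.339, 0.120),
    "PERF  -> ADOPT (H2)": (-0.138, 0.110),
    "TRUST -> ADOPT (H3)": (0.059, 0.095),
    "INTER -> ADOPT (H4)": (0.475, 0.115),
    "ETHIC -> ADOPT (H5)": (0.203, 0.090),
}

for path, (est, se) in paths.items():
    cr = est / se                        # critical ratio
    p = 2 * norm.sf(abs(cr))             # two-tailed p-value
    verdict = "supported" if p < 0.05 else "not supported"
    print(f"{path}: estimate={est:+.3f}, C.R.={cr:+.2f}, p={p:.3f} -> {verdict}")
```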
Discussion
The aim of the study was to determine how readiness, performance, trust, interactivity and ethical awareness influence students' desire to use AI applications. The results showed that students' interest in using AI applications is correlated with AI readiness. These findings are consistent with the results of Dai et al.42, Shirahada et al.68 and Tang et al.41. The extent to which students embrace and use new AI applications depends on their level of AI readiness. It was also established that students who were more prepared to use AI tools were more likely to actually use them. The integration of technology is a complicated phenomenon69 that requires preparation and readiness. Therefore, students need to be taught how to use AI before they can be expected to take it up.
The results also indicated a positive relationship between AI interactivity in learning and the willingness to use or adopt AI applications in learning. These results are consistent with previous studies70,71,72. Virtual assistants and chatbots are among the main uses of AI in educational settings. AI applications with an interactive aspect make students, even initially reluctant ones, more willing to learn and interact with such applications when the applications can adapt to student responses and instructions. Interactive AI applications further provide real-time support to students, enhancing their engagement and involvement in learning73. Such applications can offer immediate help that assists students in learning the material and motivates them to engage more actively in problem solving through personalized, direct and immediate learning experiences.
The findings also revealed that ethical awareness of AI affected students' willingness to adopt and use AI applications. This finding aligns with most previous studies, which have confirmed the important role of ethical awareness in affecting students' behavioural intentions to use AI tools74,75,54,76,21,55. These findings imply that students' understanding of the ethical implications of using AI can support the ways in which they appropriate and adopt such technologies. Moreover, ethical awareness of AI applications directly informs students' comprehension, which increases their willingness to use them. It thus follows that increasing students' awareness of the ethical applications of AI in their learning could influence their willingness to adopt these AI applications.
Interestingly, students' willingness to use AI applications was not influenced by performance. This result is consistent with some studies77 but not others, as most past studies have found that the performance of AI applications affects users' willingness to use them78,79,80. One explanation may be that students do not yet understand or perceive the performance improvements gained from using AI applications in their learning tasks, so this factor has no significant effect on their decision to adopt and use AI applications. AI applications have the potential to increase efficiency and productivity, but students who have experimented with and explored such applications may not yet appreciate the performance advantage. Another reason is that students may regard performance improvement as a long-term outcome, while the adoption decision is influenced by more immediate factors, including personalization, responsiveness or social acceptability. Another explanation is that students are not aware of how much more AI performance can help their learning relative to traditional methods, so they see AI applications as supplementary rather than critical. Additionally, instructor guidance, course design and the novelty of AI applications might have overshadowed performance as motivating factors for adoption, as students might be more concerned with the interactivity, readiness and ethical issues of the AI tools.
Furthermore, the results also indicated that trust did not affect students’ willingness to use the AI applications. This finding is in line with certain previous studies81 but inconsistent with most previous studies38,39. The insignificant effect of trust on students’ willingness to adopt AI applications could be attributed to the nature of students’ interaction and familiarity with AI applications, as many students might not yet perceive the threat or risk associated with using AI applications in their learning, especially when these AI applications are provided by trusted platforms at universities or widely used AI technologies. Thus, trust might not emerge as a salient factor affecting students’ willingness to adopt these tools. Another possible explanation is that students are more focused on the immediate benefits and user experiences of AI applications than on data privacy, reliability, or ethical implications, thus diminishing the value of trust in shaping their decision to adopt these AI applications. Finally, another possible explanation is that students might prioritize tangible, immediate benefits of AI applications, such as interactivity or ease of use (readiness), over the trust factor, especially if they have not had any negative experiences with AI that could affect trust when using these AI applications.
Implications
Theoretical implications
This study contributes to the theoretical understanding of technology adoption in educational settings by focusing on the important roles of readiness, interactivity and ethical awareness in affecting students’ willingness to adopt AI applications. In contrast to previous technology acceptance models, which have revealed the roles of performance expectancy and trust as main predictors, the findings of this study suggest a shift in what students value when interacting with AI applications. In particular, this study confirms the relevance of incorporating ethical and affective dimensions, such as ethical awareness and perceived readiness, into conceptual frameworks for technology adoption. These findings call for further theoretical and conceptual research, suggesting that in the context of adopting AI applications in learning and educational settings, other factors such as preparedness, readiness, interactivity and ethical awareness should be considered and integrated into conceptual frameworks, as they might carry more weight than performance and risk-based evaluation. Therefore, future models should consider incorporating AI readiness, interactivity and ethical awareness as the main factors that shape users’ intentions towards using AI applications.
Practical implications
The findings can be used by instructors, designers, educators and policy makers to promote AI in learning and teaching. First, unpreparedness for AI can be overcome with training and workshops, according to these results. Second, the significant role of interactivity suggests that developers of AI applications should prioritize designing features that support personalization, real-time feedback, and interaction to sustain students' engagement with these AI applications. Third, the effect of ethical awareness implies that the transparency of data usage, fairness, and responsible design of AI applications should be explained clearly to students to build confidence in their ethical use. Conversely, the insignificant effect of performance implies that educators and developers should not focus solely on academic outcomes and reliability claims of AI applications but also on user experiences, ease of use, and immediate benefits, such as a user-friendly interface or quick access to learning content and resources, which may encourage students' adoption. Similarly, the insignificant effect of trust implies that universities and institutions should maintain the baseline security and reliability of AI applications to prevent any trust-related barriers; this could be accomplished by providing official endorsements or certification of these AI applications. By considering these insights, stakeholders could foster the smooth acceptance of these AI applications in students' learning, thus enhancing student learning experiences in diverse educational contexts.
Limitations
Even though this study has generated valuable findings and contributions, it has some limitations. First, the findings are based on one specific context, which might limit their generalizability to other institutions, educational contexts or culturally different settings. Additionally, the cross-sectional survey approach does not allow the relationships among the variables to be assessed over time. Therefore, future research should use a longitudinal design to better understand what causes students to be willing to adopt AI applications over a longer period. Furthermore, a self-reported questionnaire was used to collect data, which may result in biased responses due to social desirability and overestimation of ethical awareness and readiness. Additionally, this study assessed the effects of certain variables, namely readiness, performance, trust, interactivity, and ethical awareness, but other factors could also affect students' willingness to adopt AI applications. Thus, future research may assess other external factors, such as digital AI literacy, AI anxiety or peer influence.
Conclusion
This study investigated the relationships between readiness, performance, trust, interactivity, ethical awareness and students' willingness to use AI applications. The investigation utilized SEM to analyse data collected from 211 students. The results showed that students' readiness, level of interaction and understanding of ethics influenced their use of AI applications, whereas performance and trust did not. A key outcome of the study is that preparing students, creating interactions around learning and raising awareness about ethics matter more than trust and performance in shaping how students respond to using AI in class. Through this study, we can better understand which factors affect students' interest in using AI applications. The results can also support and motivate those working in education to implement useful strategies to encourage the adoption of AI by students. Designers of AI applications and teachers should pay attention to making their systems easy for everyone to use, guiding students with training programmes and clarifying what is right and wrong when using AI in education.
Data availability
Data are provided within the manuscript or supplementary information files; no identifiable information was collected.
References
Chen, X., Hu, Z. & Wang, C. Empowering education development through AIGC: A systematic literature review. Educ. Inform. Technol. 29 (13), 17485–17537 (2024).
Mogavi, R. H. et al. ChatGPT in education: A blessing or a curse? A qualitative study exploring early adopters’ utilization and perceptions. Computers Hum. Behavior: Artif. Hum. 2 (1), 100027 (2024).
Ghnemat, R., Shaout, A. & Abrar, M. Higher education transformation for artificial intelligence revolution: transformation framework. Int. J. Emerg. Technol. Learn. 17 (19), 224–241 (2022).
Santos, A. I. & Serpa, S. Artificial intelligence and higher education. Int. Soc. Technol. Educ. Sci. 1, 1 (2023).
Huraj, L., Pospíchal, J. & Luptáková, I. D. Learning enhancement with AI: From idea to implementation. In 2023 21st International Conference on Emerging eLearning Technologies and Applications (ICETA) 212–219 (IEEE, 2023).
Gallegos, M. D. C. J., Chisag, W. D. A., Valencia, D. A. Z. & Saltos, N. E. C. Impacto de La inteligencia artificial En La educación superior: percepciones de alumnos y profesores sobre El Uso de IA En El Aprendizaje y La evaluación. Reincisol 3 (6), 7008–7033 (2024).
Kuka, L., Hörmann, C. & Sabitzer, B. Teaching and learning with AI in higher education: A scoping review. In Learning with Technologies and Technologies in Learning: Experience, Trends and Challenges in Higher Education 551–571 (2022).
WIPO. Artificial Intelligence. https://www.wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf (2019).
UNESCO. ChatGPT and Artificial Intelligence in Higher Education. https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf (2023).
Pelletier, K. et al. 2024 Educause Horizon Report: Teaching and Learning Edition. https://library.educause.edu/resources/2024/5/2024-educause-horizon-report-teaching-and-learning-edition (Accessed 12 November 2024) (2024).
Cai, Q., Lin, Y. & Yu, Z. Factors influencing learner attitudes towards ChatGPT-assisted Language learning in higher education. Int. J. Hum. Comput. Interact. 40 (22), 7112–7126 (2024).
Jing, Y., Wang, H., Chen, X. & Wang, C. What factors will affect the effectiveness of using ChatGPT to solve programming problems? A quasi-experimental study. Humanit. Social Sci. Commun. 11 (1), 1–12 (2024).
Chen, X., Zou, D., Xie, H. & Wang, F. L. Past, present, and future of smart learning: a topic-based bibliometric analysis. Int. J. Educational Technol. High. Educ. 18 (1), 2 (2021).
Hwang, G. J., Tu, Y. F. & Lin, C. J. Advancements and hot research topics of artificial intelligence in mobile learning: A review of journal publications from 1995 to 2019. Int. J. Mob. Learn. Organisation. 15 (4), 427–447 (2021).
Tang, K. Y., Chang, C. Y. & Hwang, G. J. Trends in artificial intelligence-supported e-learning: A systematic review and co-citation network analysis (1998–2019). Interact. Learn. Environ. 31 (4), 2134–2152 (2023).
Bond, M. et al. A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. Int. J. Educational Technol. High. Educ. 21 (1), 4 (2024).
Alajmi, Q., Al-Sharafi, M. A. & Abuali, A. Smart learning gateways for Omani HEIs towards educational technology: benefits, challenges and solutions. Int. J. Inform. Technol. Lang. Stud. 4 (1), 12–17 (2020).
Wang, C., Chen, X., Yu, T., Liu, Y. & Jing, Y. Education reform and change driven by digital technology: a bibliometric study from a global perspective. Humanit. Social Sci. Commun. 11 (1), 1–17 (2024).
Mahmoud, M. M. H. & Othman, R. Performance management system in developing countries: A case study in Jordan. J. Public. Affairs 23 (4), e2864 (2023).
Sharma, H., Soetan, T., Farinloye, T., Mogaji, E. & Noite, M. D. F. AI adoption in universities in emerging economies: prospects, challenges and recommendations. In Re-imagining Educational Futures in Developing Countries: Lessons from Global Health Crises 159–174 (Springer, 2022).
Rodzi, Z. M. et al. Unraveling the drivers of artificial intelligence (AI) adoption in higher education. In. 2023 International Conference on University Teaching and Learning (InCULT) 1–6 (IEEE, 2023).
Velastegui, D., Pérez, M. L. R. & Garcés, L. F. S. Impact of artificial intelligence on learning behaviors and psychological well-being of college students. Salud Ciencia Y Tecnologia-Serie De Conferencias. 2, 343 (2023).
Rodzi, Z. M. et al. Unraveling the factors influencing the adoption of artificial intelligence (AI) in education. In. 2023 4th International Conference on Artificial Intelligence and Data Sciences (AiDAS) 186–193 (IEEE, 2023).
Aladi, C. C. IT higher education teachers and trust in AI-enabled Ed-Tech: Implications for adoption of AI in higher education. In Proceedings of the 2024 Computers and People Research Conference 1–16 (2024).
Widyaningrum, R., Wulandari, F., Zainudin, M., Athiyallah, A. & Rizqa, M. Exploring the factors affecting ChatGPT acceptance among university students. Multidisciplinary Sci. J. 6 (12), 2024273 (2024).
Morales-García, W. C. et al. Development and validation of a scale for dependence on artificial intelligence in university students. Front. Educ. 9, 1323898 (2024).
Estrada-Araoz, E. G. et al. Assessment of the level of knowledge on artificial intelligence in a sample of university professors: a descriptive study. Data Metadata. 3, 285 (2024).
Chai, C. S., Yu, D., King, R. B. & Zhou, Y. Development and validation of the artificial intelligence learning intention scale (AILIS) for university students. Sage Open. 14 (2), 21582440241242188 (2024).
Kasneci, E. et al. ChatGPT for good? On opportunities and challenges of large Language models for education. Learn. Individual Differences. 103, 102274 (2023).
Dai, W. et al. Assessing the proficiency of large Language models in automatic feedback generation: an evaluation study. Computers Education: Artif. Intell. 7, 100299 (2024).
Kuhail, M. A., Alturki, N., Alramlawi, S. & Alhejori, K. Interacting with educational chatbots: A systematic review. Educ. Inform. Technol. 28 (1), 973–1018 (2023).
Wambsganss, T., Kueng, T., Soellner, M. & Leimeister, J. M. ArgueTutor: An adaptive dialog-based learning system for argumentation skills. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems 1–13 (2021).
Ahuja, A. S., Polascik, B. W., Doddapaneni, D., Byrnes, E. S. & Sridhar, J. The digital metaverse: applications in artificial intelligence, medical education, and integrative health. Integr. Med. Res. 12 (1), 100917 (2023).
Frej, J., Shah, N., Knezevic, M., Nazaretsky, T. & Käser, T. Finding paths for explainable mooc recommendation: A learner perspective. In Proceedings of the 14th Learning Analytics and Knowledge Conference 426–437 (2024).
Celik, I., Dindar, M., Muukkonen, H. & Järvelä, S. The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends 66 (4), 616–630 (2022).
Chrisinger, D. The solution Lies in education: artificial intelligence & the skills gap. Horizon 27 (1), 1–4 (2019).
Ahmad, T. Scenario based approach to re-imagining future of higher education which prepares students for the future of work. High. Educ. Skills Work-Based Learn. 10 (1), 217–238 (2020).
Kelly, S., Kaye, S. A. & Oviedo-Trespalacios, O. What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics Inform. 77, 101925 (2023).
Yan, L. et al. Practical and ethical challenges of large Language models in education: A systematic scoping review. Br. J. Edu. Technol. 55 (1), 90–112 (2024).
Smith, P. J. Learning preferences and readiness for online learning. Educational Psychol. 25 (1), 3–12 (2005).
Tang, Y. M. et al. Comparative analysis of student’s live online learning readiness during the coronavirus (COVID-19) pandemic in the higher education sector. Comput. Educ. 168, 104211 (2021).
Dai, Y. et al. Promoting students’ well-being by developing their readiness for the artificial intelligence age. Sustainability 12 (16), 6597 (2020).
Venkatesh, V., Morris, M. G., Davis, G. B. & Davis, F. D. User acceptance of information technology: toward a unified view. MIS Quarterly, 425–478. (2003).
Sudaryanto, M., Hendrawan, R. A. M. & Andrian, T. The effect of technology readiness, digital competence, perceived usefulness, and ease of use on accounting students' artificial intelligence technology adoption. E3S Web Conf. 388, 04055 (2023).
Falcone, R. & Castelfranchi, C. Social trust: A cognitive approach. Trust and Deception in Virtual Societies 55–90 (2001).
Kesharwani, A. & Singh Bisht, S. The impact of trust and perceived risk on internet banking adoption in india: an extension of technology acceptance model. Int. J. Bank. Mark. 30 (4), 303–322 (2012).
Kim, J. & Gambino, A. Do we trust the crowd or information system? Effects of personalization and bandwagon cues on users’ attitudes and behavioral intentions toward a restaurant recommendation website. Comput. Hum. Behav. 65, 369–379 (2016).
Bikanga Ada, M. It helps with crap lecturers and their low effort: investigating computer science students’ perceptions of using Chatgpt for learning. Educ. Sci. 14 (10), 1106 (2024).
Bubaš, G., Čižmešija, A. & Kovačić, A. Development of an assessment scale for measurement of usability and user experience characteristics of Bing chat conversational AI. Future Internet. 16 (1), 4 (2023).
Kamoun, F., El Ayeb, W., Jabri, I., Sifi, S. & Iqbal, F. Exploring students’ and faculty’s knowledge, attitudes, and perceptions towards chatgpt: a cross-sectional empirical study. J. Inform. Technol. Education: Res. 23, 1 (2024).
Ng, D. T. K. et al. A review of AI teaching and learning from 2000 to 2020. Educ. Inform. Technol. 28 (7), 8445–8501 (2023).
Baidoo-Anu, D. & Ansah, L. O. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. J. AI. 7 (1), 52–62 (2023).
De la Vall, R. R. F. & Araya, F. G. Exploring the benefits and challenges of AI-language learning tools. Int. J. Social Sci. Humanit. Invention. 10 (01), 7569–7576 (2023).
Cisneros, J. D. D. et al. Adjustment of Peruvian university students to artificial intelligence. Arts Educ. 36, 1 (2023).
Zhu, W. et al. Could AI ethical anxiety, perceived ethical risks and ethical awareness about AI influence university students’ use of generative AI products? An ethical perspective. Int. J. Hum. Comput. Interact. 41 (1), 742–764 (2025).
Creswell, J. W. & Creswell, J. D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches (Sage Publications, 2017).
Hair, J. F., Black, W. C., Babin, B. J. & Anderson, R. E. Multivariate Data Analysis (2019).
Sanusi, I. T., Ayanwale, M. A. & Chiu, T. K. Investigating the moderating effects of social good and confidence on teachers’ intention to prepare school students for artificial intelligence education. Educ. Inform. Technol. 29 (1), 273–295 (2024).
Nazaretsky, T., Mejia-Domenzain, P., Swamy, V., Frej, J. & Käser, T. The critical role of trust in adopting AI-powered educational technology for learning: an instrument for measuring student perceptions. Computers Education: Artif. Intell. 1, 100368 (2025).
Yaseen, H. et al. The impact of adaptive learning technologies, personalized feedback, and interactive AI tools on student engagement: the moderating role of digital literacy. Sustainability 17 (3), 1133 (2025).
Rahman, M. S., Sabbir, M. M., Zhang, J., Moral, I. H. & Hossain, G. M. S. Examining students’ intention to use chatgpt: does trust matter? Australasian J. Educational Technol. 39 (6), 51–71 (2023).
Acosta-Enriquez, B. G. et al. What is the influence of psychosocial factors on artificial intelligence appropriation in college students? BMC Psychol. 13 (1), 7 (2025).
Kline, R. B. Principles and Practice of Structural Equation Modeling (Guilford Publications, 2023).
Byrne, B. M. Structural Equation Modeling with Mplus: Basic Concepts, Applications, and Programming (Routledge, 2013).
Awang, P. SEM Made Simple: A Gentle Approach To Learning Structural Equation Modeling (MPWS Rich Publication, 2015).
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E. & Tatham, R. L. Multivariate Data Analysis (Prentice Hall, 2010).
Cohen, J. Statistical Power for the Behavioural Sciences, vol. 58, 7–19 (Lawrence Erlbaum, 1988).
Shirahada, K., Ho, B. Q. & Wilson, A. Online public services usage and the elderly: assessing determinants of technology readiness in Japan and the UK. Technol. Soc. 58, 101115 (2019).
Blut, M. & Wang, C. Technology readiness: a meta-analysis of conceptualizations of the construct and its impact on technology usage. J. Acad. Mark. Sci. 48, 649–669 (2020).
Cyr, D., Head, M. & Ivanov, A. Perceived interactivity leading to e-loyalty: development of a model for cognitive–affective user responses. Int. J. Hum. Comput. Stud. 67 (10), 850–869 (2009).
Onesi-Ozigagun, O., Ololade, Y. J., Eyo-Udo, N. L. & Ogundipe, D. O. Revolutionizing education through AI: A comprehensive review of enhancing learning experiences. Int. J. Appl. Res. Social Sci. 6 (4), 589–607 (2024).
Owan, V. J., Abang, K. B., Idika, D. O., Etta, E. O. & Bassey, B. A. Exploring the potential of artificial intelligence tools in educational measurement and assessment. Eurasia J. Math. Sci. Technol. Educ. 19 (8), 2307 (2023).
Chen, X. & Ibrahim, Z. A comprehensive study of emotional responses in ai-enhanced interactive installation Art. Sustainability 15 (22), 15830 (2023).
Alshammari, S. H., Almankory, A. Z. & Alrashidi, M. E. The effects of awareness and trust on students' willingness to use chatgpt: an integrated TAM-ECM model. Rev. Iberoam. Educ. Dist. 28 (2), 1.
Cheng, I. H. & Lee, S. T. The impact of ethics instruction and internship on students’ ethical perceptions about social media, artificial intelligence, and ChatGPT. J. Media Ethics. 39 (2), 114–129 (2024).
Kong, S. C., Cheung, W. M. Y. & Zhang, G. Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical awareness. Educational Technol. Soc. 26 (1), 16–30 (2023).
Awal, M. R. & Haque, M. E. Revisiting university students’ intention to accept AI-powered chatbot with an integration between TAM and SCT: a South Asian perspective. J. Appl. Res. High. Educ. 17 (2), 594–608 (2025).
Bouteraa, M. et al. Understanding the diffusion of AI-generative (ChatGPT) in higher education: does students’ integrity matter? Computers Hum. Behav. Rep. 14, 100402 (2024).
Chocarro, R., Cortiñas, M. & Marcos-Matás, G. Teachers’ attitudes towards chatbots in education: a technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Stud. 49 (2), 295–313 (2023).
Du, L. & Lv, B. Factors influencing students’ acceptance and use generative artificial intelligence in elementary education: an expansion of the UTAUT model. Educ. Inform. Technol. 1, 1–20 (2024).
Pusposari, D., Rachman, O. A. & Kusumadewi, A. W. The determinants of students’ intentions to use artificial intelligence-based mobile investment apps. Int. J. Acc. Bus. Soc. 32 (3), 249–256 (2024).
Author information
Authors and Affiliations
Contributions
S.H.A. and M.E.A. conceptualized the study and designed the research framework. M.H.A. conducted the data collection and analysis using SEM with AMOS. A.E.A.A. and A.F.A contributed to the literature review and methodology sections. S.H.A., M.E.A. and A.F.A wrote the main manuscript text. All authors reviewed and approved the final version of the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
All methods were carried out in accordance with relevant guidelines and regulations. The experimental protocol was reviewed and approved by the Institutional Review Board (IRB) at the University of Ha’il. Informed consent was obtained from all participants prior to their inclusion in the study.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Alshammari, S.H., Alrashidi, M.E., Alshammari, M.H. et al. Determinants of student adoption of artificial intelligence applications in higher education. Sci Rep 15, 35921 (2025). https://doi.org/10.1038/s41598-025-19851-5
DOI: https://doi.org/10.1038/s41598-025-19851-5