Introduction

Artificial Intelligence (AI) continues to transform industry sectors such as healthcare, finance, and transportation, and its influence is now increasingly visible in everyday life. In education, AI has the potential to enhance teaching and learning, streamline administrative processes, and support decision-making (Bozkurt et al., 2024; Bhullar et al., 2024). However, the integration of AI in higher education has been relatively slow and cautious, especially among educators (Cukurova et al., 2023; Kizilcec, 2023). This caution stems from concerns about the complexity of AI systems (Castelvecchi, 2016), trust issues (Gillespie et al., 2023), and insufficient institutional support (Duah & McGivern, 2024; Gimpel et al., 2023). In this study, the term ‘educators’ refers to faculty members, instructors, lecturers, and research staff who engage in teaching activities.

The arrival of Generative AI (GenAI) has intensified research and institutional interest in AI adoption within higher education, generating reactions that combine enthusiasm about potential benefits with concerns over risks and challenges (Crompton & Burke, 2024; Kasneci et al., 2023). Although GenAI has the capability to augment rather than replace educators’ expertise (Wang et al., 2024), its integration into higher education is still in its early stages (Lee et al., 2024; Ogunleye et al., 2024), mainly because of educators’ perceived benefits of and concerns about AI (Viberg et al., 2024). Trust is thus an important yet underexamined factor influencing whether and how educators decide to integrate GenAI into their teaching practices. Yet ethical issues and uncertainties often undermine trust in this technology (Dabis & Csáki, 2024; Holmes & Porayska-Pomsta, 2023).

Existing research indicates that trust in technology is a dynamic multidimensional concept shaped by personal experiences, technical attributes, and institutional support structures (Kaplan et al., 2021; McKnight et al., 2011). In the context of AI, trust extends beyond interpersonal relationships to include human-AI interaction (Glikson & Woolley, 2020) and serves as a mechanism for dealing with uncertainty and complexity (Bach et al., 2022; Choung et al., 2022). For educators, trust reflects their willingness to rely on AI in ways that are consistent with their professional judgement, pedagogical values, and emotional comfort (Chiu et al., 2023; Crompton & Burke, 2024). The educational context specifically calls for investigations into how institutional policies, leadership approaches, and support structures mediate trust development, a research area that remains largely unexplored (Niedlich et al., 2021). Although many higher education institutions (HEIs) have begun developing policies and support structures to address GenAI (Dabis & Csáki, 2024; Yan et al., 2024), few have implemented comprehensive support systems or incentives to encourage educators’ adoption (Duah & McGivern, 2024; Kamoun et al., 2024; Luo, 2024). However, these efforts reveal a deeper challenge: existing theoretical frameworks for understanding AI trust formation fail to capture the unique dynamics of higher education contexts.

Current trust-in-AI frameworks (Kaplan et al., 2021; Qin et al., 2020; Li et al., 2024; Yang & Wibowo, 2022; Lukyanenko et al., 2022) conceptualise the multidimensionality of trust in AI in terms of human, technical, and contextual factors but do not sufficiently explain how educators’ pedagogical orientations and institutional strategies interact in higher education (Nazaretsky et al., 2022). With the exception of Lukyanenko et al. (2022), most also neglect ethics as a distinct factor. Moreover, GenAI’s unique ability to generate human-like content raises novel issues of bias, academic integrity, and accountability that these models do not fully address (Reinhardt, 2023; Wang et al., 2024).

These research gaps, combined with the absence of systematic studies examining educators’ trust in GenAI adoption, underscore the timeliness and importance of our investigation. Therefore, this study examines how factors from broader AI trust frameworks apply within the distinctive context of GenAI in higher education through the introduction of a new conceptual model. Our model makes three key theoretical contributions: (1) integrating pedagogical elements as educator-specific trust factors, (2) positioning institutional strategies as foundational drivers, and (3) elevating socio-ethical concerns as a distinct analytical dimension. By incorporating these overlooked dimensions, the model captures the interplay between individual, institutional, and socio-ethical factors, offering a more comprehensive lens for analysing educators’ trust in GenAI. As a result, this research contributes a conceptual model of trust in GenAI tailored to higher education, bridging theoretical gaps in existing AI trust frameworks and providing a foundation for future empirical validation.

Guided by this model, this systematic review aims to address the following research question: What are the factors and institutional strategies influencing educators’ trust in adopting GenAI for educational purposes? To answer this question, the following two specific sub-questions are explored:

  • RQ1. What are the trust factors influencing educators’ GenAI adoption for educational purposes?

  • RQ2. How do institutional strategies, including leadership support, policies, and training, interact with the educators’ trust factors in adopting GenAI in higher education?

The remainder of this paper reviews the literature on trust in AI and education before presenting our proposed conceptual model and systematic review methodology. We then report results addressing our research questions, discuss key findings with their theoretical and practical implications, and conclude by acknowledging study limitations and proposing recommendations for future research.

Literature review

The landscape of trust in AI research draws from diverse academic disciplines, including trust theory, technology acceptance, organisational systems, and educational psychology. Across these areas, scholars widely recognise that trust plays an important role in the acceptance and adoption of technology in real life (Afroogh et al., 2024; Kelly et al., 2023).

Trust in AI

To account for trust as a dynamic multidimensional concept, we use a system-based definition of trust in AI as “a human mental and physiological process that considers the properties of a specific AI-based system, a class of such systems or other systems in which it is embedded or with which it interacts, to control the extent and parameters of the interaction with these systems” (Lukyanenko et al., 2022, p. 12). We adopt this definition because it frames trust as a psychological mechanism that helps individuals manage uncertainty and optimise their interactions with AI systems (Lockey et al., 2021). At the institutional level, trust emerges through policies and support structures, which act as assurance mechanisms to manage uncertainty and risk (Mayer et al., 1995; McKnight et al., 2011). For instance, McKnight et al. (2011, p. 8) define structural assurances as “guarantees, contracts, support, or other safeguards that exist in the general type of technology that make success likely”, a component of “institution-based trust in technology”. In this paper, institutional strategies refer to leadership support, policies, guidelines, and professional support, which together represent the structural assurances for institution-based trust in GenAI technology. At the individual level, educators’ trust in adopting GenAI for teaching is mediated by these institutional strategies while being shaped by personal experiences and professional judgement (Niedlich et al., 2021; Ofosu-Ampong, 2024). Additionally, trust is a dynamic concept that evolves gradually and changes with interactive experiences (Mayer et al., 1995).

Recent research highlights the need to examine trust in AI across two dimensions: cognitive trust and emotional trust (Glikson & Woolley, 2020). Cognitive trust is based on the logical evaluation of AI functions, whereas emotional trust encompasses the affective component of trust, including feelings of safety, ease of use, and confidence in the technology (Yang & Wibowo, 2022). From the perspective of GenAI in education, we suggest that cognitive trust is built through analysis of the system’s output and its accuracy with regard to pedagogical practices. In turn, the ease with which educators integrate GenAI into traditional teaching practices and the level of psychological safety they feel when trying out new GenAI tools indicate their emotional trust. Thus, emotional trust develops over time through positive experiences within supportive institutional environments that address and respond to educators’ concerns.

Previous systematic reviews

Existing systematic reviews and meta-analyses on trust in AI were conducted before the public release of ChatGPT in November 2022. For instance, Kaplan et al.’s (2021) meta-analysis of 65 empirical studies, with a revised version in 2023, introduced a tri-dimensional framework categorising trust factors into human (trustor), AI technology (trustee), and contextual categories, but largely overlooked institutional influences. In line with this, Bach et al.’s (2022) review suggested the need to integrate ethical aspects into technical and individual characteristics. Afroogh et al. (2024) conceptualised trust in the current AI literature and investigated its impact on technology acceptance across various domains. However, none of these studies discussed building trust within the education domain. In the higher education context, Herdiani et al. (2024) conducted a focused narrative review on the technical, ethical, and societal factors that influence trust in AI-based educational systems without addressing specific GenAI issues. Also, Jameson et al. (2023) analysed the institutional dimension, examining trust issues among different categories of educators and staff members in higher education. Although their review did not directly address trust in GenAI, it examined how institutional roles and relationships affect trust formation in academic contexts.

To explore how trust influences the adoption of AI in education, previous research has incorporated trust as a central element within technology adoption frameworks such as the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Choung et al., 2022; Wu et al., 2011). For example, Wu et al.’s (2011) meta-analysis explored the impact of trust on TAM utilitarian constructs such as perceived usefulness in the e-commerce context. The results showed that the level of trust in a new technology varies with the type of individual (trustor) and context, and educational contexts therefore need dedicated investigation. A recent review by Kelly et al. (2023) identified TAM as the framework most used by researchers across different fields, including education. In contrast, Scherer et al.’s (2019) meta-analysis on TAM in educational contexts examined contradictory findings surrounding educators’ intentions to use technology.

Moreover, Celik (2023) and Choi et al. (2023) have pointed out that these models fail to capture the nuances of AI adoption in education because of limited pedagogical and ethical awareness. Two pedagogical frameworks have been considered to address these shortcomings: pedagogical beliefs (Pajares, 1992) and Technological Pedagogical Content Knowledge (TPACK) (Mishra & Koehler, 2006). In this regard, studies by Liu (2011) and Cambra-Fierro et al. (2024) have shown that educators’ constructivist beliefs enabled them to integrate AI and GenAI successfully. The TPACK framework identifies the different knowledge areas that educators need for the effective integration of digital technology into teaching and learning, and it has been found useful in explaining how educators view GenAI as compatible with their pedagogical approaches (Celik, 2023; Mishra et al., 2024).

Despite these valuable contributions, there is a lack of systematic understanding of how educators’ trust in GenAI develops within the unique pedagogical, ethical, and institutional contexts of higher education, highlighting the need for a focused, systematic literature review investigation.

Trust in GenAI: proposed conceptual model for analysis

This section presents our proposed conceptual model serving as the analytical framework guiding our systematic review methodology.

As mentioned before, several existing theoretical frameworks examine the concept of trust in AI from a multidimensional perspective across different domains. The “foundational trust framework” of Lukyanenko et al. (2022) explains, from a systems theory perspective, how organisational assurances are important in building trust. Kaplan et al. (2021) and Li et al. (2024) provide a tri-dimensional (trustor, trustee, context-related) framework, which Yang & Wibowo (2022) further develop to include organisational and social factors. Regarding educational systems, Qin et al. (2020) developed a tri-dimensional trust model comprising technological elements (such as system functionality), contextual factors (for example, the “benevolence of educational organisations”, p. 1702), and personal factors (such as familiarity and pedagogical beliefs) as key elements for building trust in educational AI systems.

Despite these contributions, existing frameworks exhibit three critical limitations in understanding GenAI trust in higher education: insufficient attention to pedagogical factors unique to educators, a limited examination of institutional strategies, and a failure to address GenAI-specific ethical concerns. Our proposed model builds on these frameworks while addressing these gaps, adopting a tri-dimensional structure comprising Trustor/Educator, Socio-Ethical Context, and Trustee (GenAI technology), with Institutional Strategies underpinning these dimensions as enabling conditions (see Fig. 1). Although traditional trust models include trustee (technology) characteristics such as system competence and transparency (Glikson & Woolley, 2020; Gulati et al., 2017), our research questions focus specifically on trustor (educator) factors and institutional strategies. For this reason, the Trustee (GenAI technology) factors are considered outside the scope of this study.

Fig. 1

Trust in GenAI proposed conceptual model used for coding and analysis.

This structure directly addresses our research questions by identifying the factors influencing educators’ trust in GenAI adoption (RQ1) and examining how institutional strategies interact with these factors (RQ2). Together, these dimensions highlight how individual orientations, socio-ethical considerations, and institutional factors intersect in building educators’ trust in using GenAI in their practice.

Institutional Strategies, including leadership support, policies and guidelines, and professional support, are introduced as a foundational component that provides the “structural assurances” underpinning the trust in GenAI factors, following McKnight et al.’s (2011) framework. This component aligns with the “foundational trust framework” by Lukyanenko et al. (2022) for exploring how institutional systems interact with other AI systems.

The Trustor (Educator) category builds on the frameworks proposed by Li et al. (2024) and Yang & Wibowo (2022), incorporating the following elements: cognitive trust characteristics (including familiarity, self-efficacy, and sense of control), emotional trust characteristics (such as positive/negative emotional experience and hedonic motivation), and psychological characteristics such as trust propensity. Both studies included demographics, familiarity, sense of control, emotional experience, hedonic motivation, and trust propensity as ‘trustor’ factors. We extended this category by integrating pedagogical belief factors (Qin et al., 2020) and pedagogical knowledge represented by the TPACK framework (Mishra et al., 2024), since these are educator-specific factors overlooked by existing frameworks.

The Socio-Ethical Context category is based on the context-related and social factors suggested by Yang & Wibowo’s (2022) framework. For this study, we focus on utilitarian features, such as perceived ease of use and usefulness, and on social influence factors, which are recognised as the primary drivers of AI adoption in the TAM and UTAUT models (Kelly et al., 2023). In educational settings, these features are understood as the ways in which GenAI’s perceived capabilities bring value to educators’ teaching practices. The social influence factors refer to how colleagues, industry leaders, and professional networks influence an educator’s decision to use GenAI. Noting that trust in GenAI adoption presents nuances that cannot be explained only by utilitarian and social factors, we also include ethical use factors (Bach et al., 2022; Yan et al., 2024). In this study, these factors refer to educators’ ethical concerns related to academic integrity, plagiarism, and bias in GenAI-generated content (Bozkurt et al., 2024; Wang et al., 2024).

Conceptual framework comparisons

To clarify the distinct contribution of our study, Table 1 compares established trust frameworks with our model. This structured overview demonstrates how our proposed model addresses gaps and advances the field by embedding pedagogical orientations and institutional strategies that prior frameworks overlooked.

Table 1 Framework comparison table.

These framework differences have practical consequences for institutional GenAI adoption strategies. While traditional models might suggest focusing primarily on system usability and individual training, our framework predicts that institutional leadership engagement and pedagogical alignment are equally critical for trust development.

Methodology

The systematic review methodology used in this study follows the PRISMA 2020 guidelines (Page et al., 2021) and a structured, systematic approach based on best practices recommended by Alexander (2020), Gough et al. (2017), and Punch & Oancea (2014). This approach includes five main stages: searching, screening, organising, analysing, and reporting. The PRISMA workflow diagram displays the searching and screening stages (see Fig. 2).

Fig. 2

PRISMA flow diagram for selecting studies for the review.

Searching

We conducted electronic searches across five major academic databases that cover multidisciplinary sources relevant to educational research: ERIC, EBSCOhost, Web of Science (WoS), ProQuest, and Scopus, using the following Boolean search query: (“generative AI” OR “artificial intelligence” OR ChatGPT OR “large language models”) AND (“higher education” OR university) AND (trust OR trustworthy). These database searches, executed between July 28 and August 4, 2024, initially retrieved 1380 articles.

Only peer-reviewed journal articles were considered to ensure high confidence in the quality of the studies selected (Gough et al., 2017). Filters for language, publication year (2019–2024), and document type were applied for each database, reducing the results to 447 articles. All searches maintained the same conceptual structure while adapting to database-specific syntax requirements. For example, this query syntax with filters from the Scopus database retrieved 45 articles: TITLE-ABS-KEY (“Generative AI” OR “Generative Artificial Intelligence” OR “ChatGPT” OR “Large Language Models” OR LLM) AND TITLE-ABS-KEY (trust OR trustworthy) AND TITLE-ABS-KEY (“Higher Education” OR university) AND PUBYEAR AFT 2019 AND (LIMIT-TO (DOCTYPE, “ar”)) AND (LIMIT-TO (LANGUAGE, “English”)). A list of search queries and strategies for each database is provided in Appendix A.
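
To illustrate how the same conceptual structure can be preserved across databases, the following sketch composes the generic query from the three concept groups reported above. It is illustrative only; the exact syntax submitted to each database is documented in Appendix A.

```python
# Illustrative composition of the generic Boolean query from the three
# concept groups reported above. Each database's actual syntax differs;
# see Appendix A for the exact queries submitted.
ai_terms = ['"generative AI"', '"artificial intelligence"', 'ChatGPT', '"large language models"']
education_terms = ['"higher education"', 'university']
trust_terms = ['trust', 'trustworthy']

def or_group(terms):
    """Join terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

query = " AND ".join(or_group(g) for g in (ai_terms, education_terms, trust_terms))
print(query)
# ("generative AI" OR "artificial intelligence" OR ChatGPT OR "large language models")
# AND ("higher education" OR university) AND (trust OR trustworthy)
```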

In addition to the electronic database search, a manual search was performed using Google Scholar to locate specific articles through “referential backtracking, researcher checking, and journal scouring or hand searching” (Alexander, 2020, p. 14). This step helped capture the latest research in this rapidly evolving field, yielding 77 relevant journal articles. All search results were exported from each database in RIS format, imported into Zotero and then exported to an Excel file to remove duplicate articles, resulting in 444 articles for further screening.
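
As a minimal sketch of this deduplication step, assuming the merged Zotero export contains DOI and title columns (the file and column names below are hypothetical; the review carried out this step in Excel):

```python
# Minimal sketch of the duplicate-removal step. The file and column
# names are hypothetical; the review performed this step in Excel
# after exporting the merged Zotero library.
import pandas as pd

records = pd.read_csv("merged_search_results.csv")  # 524 rows: 447 database + 77 manual

# Normalise identifiers so formatting differences do not hide duplicates.
records["doi_norm"] = records["doi"].str.lower().str.strip()
records["title_norm"] = (
    records["title"].str.lower().str.replace(r"\s+", " ", regex=True).str.strip()
)

# Prefer the DOI as the duplicate key; fall back to the title when absent.
records["dedup_key"] = records["doi_norm"].fillna(records["title_norm"])
deduped = records.drop_duplicates(subset="dedup_key")

print(f"{len(records)} records reduced to {len(deduped)} unique articles")
```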

Screening

At this stage, the inclusion and exclusion criteria (see Table 2) were applied consistently to ensure alignment with our objectives and research questions. The screening stage was conducted in several steps and involved a collaborative and iterative process that included all authors. At every step, meetings were held to review and discuss decisions, ensuring consistency and alignment with the research questions. The four screening steps included (1) title and abstract screening, (2) eligibility assessment, (3) full-text screening, and (4) quality appraisal.

Table 2 Rationale for the inclusion and exclusion criteria.

First, the title and abstract screening step involved an initial screening by the first author, who reviewed titles and abstracts and classified the articles into three groups: include, exclude, and to be reviewed. This step identified studies to be further screened for eligibility and resulted in 266 articles being excluded due to missing keywords in the title and abstract. Second, for the eligibility assessment step, two independent researchers assessed the eligibility of the remaining articles (n = 178). A scoring mechanism was used to evaluate the quality and relevance of each article: a score of ‘1’ indicated high relevance, signifying a clear focus on trust or adoption of GenAI in higher education, and a score of ‘2’ indicated moderate relevance. Using this scoring system, the research team held two rounds of consensus-building discussions to resolve any disagreements. Inter-rater reliability was calculated using both percentage agreement and Cohen’s Kappa coefficient, which corrects for chance agreement. This resulted in a 92% agreement rate and substantial agreement beyond chance (κ = 0.77) (Belur et al., 2021).
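
To make the reliability computation concrete, the sketch below derives both metrics from two raters' relevance scores; the rating lists are hypothetical placeholders rather than the study's actual assessments.

```python
# Illustrative computation of the eligibility step's reliability metrics.
# The two rating lists are hypothetical placeholders (1 = high relevance,
# 2 = moderate relevance), not the actual assessments.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 2, 1, 2, 2, 1, 1, 2, 1, 1, 2]
rater_b = [1, 1, 2, 1, 2, 1, 1, 1, 2, 1, 1, 2]

# Raw percentage agreement.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's Kappa corrects the raw agreement for chance.
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"Percentage agreement: {agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```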

Third, the full-text screening step involved a third researcher examining the full-text articles with conflicting eligibility assessments (n = 44). This step involved a meticulous review to resolve disagreements, and 37 articles achieved 100% agreement. An example of an ambiguous case was Karkoulian et al. (2024), which examined faculty and student perceptions of ChatGPT and academic integrity. One researcher questioned its inclusion since the focus was on ethics rather than trust, but after discussion with the third researcher, the team agreed to include it, recognising that integrity concerns are important to educators’ trust formation in GenAI.

Finally, for the quality appraisal step, the selected articles were examined using Gough et al.’s (2017) quality standards. The criteria comprised four key elements: a clear study objective or research question; a description of the methodology (for empirical studies); results and discussion; and concluding remarks with limitations. Each criterion was scored (1 = yes; 0 = no), and the quality score for each publication was calculated by dividing the study’s total by 4 (the maximum score). All 37 articles passed the quality appraisal with a score of at least 0.75 and were included in the review. Only two conceptual papers (Crawford et al., 2023; Hall, 2024) lacked a dedicated methodology section, which lowered their appraisal scores. However, they were retained because they offered critical ethical, pedagogical, and socio-political perspectives that directly informed debates on trust in AI in higher education.
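
The scoring rule can be expressed directly; in this sketch the criterion grouping and article entries are illustrative.

```python
# Sketch of the quality-appraisal scoring rule. The article entries are
# hypothetical; each criterion is scored 1 (yes) or 0 (no) and the total
# is divided by the maximum score of 4.
CRITERIA = ("objective", "methodology", "results_discussion", "conclusions_limitations")

appraisals = {
    "Empirical study": {"objective": 1, "methodology": 1, "results_discussion": 1, "conclusions_limitations": 1},
    "Conceptual paper": {"objective": 1, "methodology": 0, "results_discussion": 1, "conclusions_limitations": 1},
}

for article, scores in appraisals.items():
    quality = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    print(f"{article}: quality score = {quality:.2f}")
# Empirical study: quality score = 1.00
# Conceptual paper: quality score = 0.75
```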

Analysis

To analyse the final set of 37 included documents, we used a deductive approach based on the proposed conceptual model described above (Cruzes & Dyba, 2011). The coding scheme (see Appendix C) included operational definitions for all model elements, which are also provided in the description column of the results tables in the next section. The software programme ATLAS.ti v. 25 (available at www.atlasti.com) was used to organise the documents and the corresponding coding scheme needed for the deductive analysis (Paulus et al., 2017). Recent systematic reviews in education have also emphasised structured methodological designs and theory-driven coding. For example, Abuhassna et al. (2024) and Alhammad et al. (2024) demonstrate how the integration of theory and the use of transparent procedures enhance the robustness of SLRs, which informed our own review design and analysis.
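
Although the coding itself was performed in ATLAS.ti, the frequency counts reported in the results tables amount to a per-study tally of assigned codes. A minimal sketch, with hypothetical study-to-code assignments, is:

```python
# Sketch of tallying deductive code assignments into the per-factor
# counts reported in Tables 3-5. The assignments below are hypothetical;
# the actual coding was performed in ATLAS.ti.
from collections import Counter

codings = {
    "Study 1": {"familiarity", "self_efficacy", "policies_guidelines"},
    "Study 2": {"trust_propensity", "ethical_use"},
    "Study 3": {"familiarity", "professional_support"},
}

# Each study contributes at most one count per code.
counts = Counter(code for codes in codings.values() for code in codes)

total = len(codings)
for code, n in counts.most_common():
    print(f"{code}: n = {n} ({n / total:.0%})")
```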

Results

This section reports the systematic review findings. It begins with a descriptive overview of the selected studies, followed by a discussion of the findings for each research question.

Descriptive characteristics of included studies

The explosive growth of GenAI research in higher education is evident in our sample of 37 papers, with 73% (n = 27) published within the eight months leading up to August 2024. This surge in research output reflects the urgent need to understand GenAI integration in higher education, as confirmed by previous studies (Baig & Yadegaridehkordi, 2024; Ogunleye et al., 2024). The studies originate from 19 countries, with China and India leading, followed by the USA, Germany, and the UK. The emergence of multinational collaborations (19% of studies) shows an encouraging trend toward collaborative research in this area requiring global perspectives (Ivanova et al., 2024). Furthermore, five journals published two or more relevant articles (see Appendix B), including Education and Information Technologies and the International Journal of Educational Technology in Higher Education (n = 3, 8% each), and Computers and Education: Artificial Intelligence, TechTrends, and Education Sciences (n = 2, 5% each).

Compared with earlier systematic reviews that focused largely on Western or technical contexts (Kaplan et al., 2021; Yang & Wibowo, 2022), our dataset shows that trust in GenAI is now being examined through diverse methodological lenses across underrepresented regions such as the Middle East, Africa, and Latin America.

Our methodological analysis revealed that, despite the importance of understanding educators’ trust in GenAI adoption, no comprehensive literature review has addressed this specific angle. Of the 37 articles analysed, the majority (n = 24, 65%) were empirical studies, including quantitative (n = 14, 38%), qualitative (n = 6, 16%), and mixed-method approaches (n = 4, 11%).

As shown in Fig. 3, the temporal distribution of the empirical studies reveals an increase in qualitative and mixed-method studies in 2024, addressing the call of several researchers (Baig & Yadegaridehkordi, 2024; Kizilcec, 2023) for more qualitative GenAI studies focused on educators and HEIs. Sample sizes range from fewer than 50 participants (Espartinez, 2024; Karkoulian et al., 2024) to studies engaging 150 or more educators (Brandhofer & Tengler, 2024; Chan & Lee, 2023; Kamoun et al., 2024). This combination of deeper qualitative insights and broader quantitative reach reflects the growing depth of enquiry into GenAI adoption in education. At the same time, clear regional and methodological patterns are visible, with studies from India, Oman, and China predominantly using quantitative survey-based approaches, whereas research from Europe and the UK more often relied on qualitative or mixed-method designs (see Appendix B).

Fig. 3

Empirical studies distribution per year.

Trust factors influencing educators’ GenAI adoption for educational purposes (RQ1)

Our analysis reveals a complex web of trust factors beyond simple technological acceptance. Based on our conceptual model, these factors cluster into two main categories: individual trustor/educator characteristics (see Table 3) and socio-ethical context factors (see Table 4).

Table 3 Trustor/Educator trust factors descriptions, counts, and references.
Table 4 Socio-Ethical trust factors descriptions, counts, and references.

Trustor (Educator) factors

Our findings indicate that trust in AI and GenAI varies significantly across demographic factors such as age, gender, and teaching experience, with contradictory results. While Kaplan et al. (2021) found that male users are more likely to trust AI than female users, Cabero-Almenara et al. (2024) reported contrasting results, suggesting that gender is not a decisive factor for educators’ AI acceptance. Regarding age, younger people have been found more likely to trust GenAI than older groups (Chan & Lee, 2023; Jain & Raghuram, 2024), and with respect to teaching experience, Brandhofer & Tengler (2024, p. 1110) revealed that the group least inclined to integrate AI are “teachers with between 30 and 39 years of teaching experience, followed by those with 0–9 years”. Our analysis indicates that additional research is needed to help educators and policymakers prioritise tailored training programmes to mitigate demographic trust issues (Jain & Raghuram, 2024; Ofosu-Ampong, 2024; Yusuf et al., 2024).

Building on these demographic patterns, we found several studies suggesting that familiarity with GenAI (n = 9, 24%) and self-efficacy (n = 7, 19%) link demographic characteristics with actual use and trust formation. For example, Brandhofer & Tengler (2024) and Chan & Lee (2023) discovered that familiarity is linked with teaching experience and generational differences. Likewise, research showed that individuals with higher self-efficacy in using GenAI tend to trust the technology and adopt it for their teaching practices (Bhaskar & Rana, 2024; Bhat et al., 2024), backing up previous research in this area (Nazaretsky et al., 2022; Wang et al., 2021).

Findings from several studies (Cambra-Fierro et al., 2024; Hyun Baek & Kim, 2023; Salah, 2024) suggest that the success of GenAI integration depends less on the technology itself and more on the educator’s perceived sense of control, mainly through human-in-the-loop approaches. For example, Salah’s (2024) empirical study revealed that feeling in control of GenAI has a substantial impact on both trust development and psychological well-being, particularly for individuals experiencing job anxiety. Supporting these findings, Yan et al.’s (2024) systematic review indicated that the limited involvement of educators in GenAI development reduced their sense of control, hindering trust formation.

While familiarity, self-efficacy, and sense of control reflect the cognitive aspect of trust, our analysis identified twenty-two studies (n = 22, 59.5%; emotional experience, n = 7; hedonic motivation, n = 6; trust propensity, n = 8) that explored how emotional and psychological factors affect educators’ decisions to adopt and trust GenAI. A comprehensive sentiment analysis of 3559 educators’ social media comments by Mamo et al. (2024) found that 40% expressed positive emotional responses toward ChatGPT, with trust and joy emerging as the main sentiments. These results align with studies on emotional factors in technology adoption (Fütterer et al., 2023; Ghimire et al., 2024), which have shown that feelings of safety, ease, and confidence play a crucial role in establishing trust. Also, hedonic motivation, a UTAUT construct referring to the perception of GenAI as enjoyable and engaging, is positively correlated with educators’ adoption (Bhat et al., 2024; Kelly et al., 2023). Furthermore, our findings show that trust propensity, a key psychological factor, differs among educators, with tech-savvy individuals and those with positive prior experiences showing a higher propensity to trust (Baig & Yadegaridehkordi, 2024; Wölfel et al., 2023). In their study, Wölfel et al. (2023) demonstrated the importance of using teaching-specific data, such as lecture materials, to create custom GenAI agents that can be trusted. These findings suggest that institutions can cultivate trust in GenAI by developing training programmes that not only teach technical skills but also create positive emotional experiences with the technology.

Our proposed conceptual model included pedagogical beliefs and pedagogical knowledge factors based on the adoption models proposed by Celik (2023) and Choi et al. (2023). In their empirical studies, Choi et al. (2023) and Cabero-Almenara et al. (2024) established that educators with constructivist teaching philosophies are more likely to integrate AI into their teaching practice. This suggests that constructivist educators may see GenAI as a tool that supports active learning and collaboration for students (Bozkurt et al., 2024). Also, Cabero-Almenara et al. (2024) pointed out that the difference between educators with constructivist and transmissive beliefs could be explained by their different expectations of how GenAI would affect their roles. Although Jain & Raghuram (2024) confirmed that educators’ technological, pedagogical, and content knowledge (TPACK) significantly influences their trust in GenAI, our analysis suggests that traditional TPACK competencies may not be enough. Instead, as Mishra et al. (2024, p. 207) argue, GenAI requires educators to develop “new mindsets and contextual knowledge” that transcend traditional TPACK frameworks. Therefore, trust in GenAI requires an extension of existing pedagogical expertise to include AI literacy: the knowledge and skills needed to use GenAI responsibly and effectively in teaching practice, including an understanding of the principles and ethics surrounding AI use (Long & Magerko, 2020; Yang et al., 2025).

Socio-Ethical Context factors

While individual characteristics and pedagogical factors create the foundation for trust, this review analysed three key socio-ethical context factors (Table 4).

Drawing from ten studies (n = 10, 27%), our analysis reveals that the TAM/UTAUT utilitarian factors can either amplify or diminish the impact of individual trust factors. For instance, Brandhofer & Tengler (2024) and Jain & Raghuram (2024) found that perceived usefulness was more likely to be associated with increased trust when aligned with educators’ existing pedagogical beliefs and sense of control. The interaction between individual and utilitarian factors becomes even more evident when considering social influence, identified in eight studies (n = 8, 21.6%). Studies on GenAI adoption in higher education (Baig & Yadegaridehkordi, 2024; Ofosu-Ampong, 2024) indicate that peer influence and institutional policies play a more significant role than they do with traditional educational technologies. This means that the experiential knowledge of trusted colleagues serves as evidence for managing GenAI challenges and concerns in actual classroom settings. Empirical results from Bhaskar et al. (2024), Bhat et al. (2024), and Camilleri (2024) confirm this finding and reinforce the emphasis on social influence factors in existing AI trust frameworks (Yang & Wibowo, 2022).

Finally, our analysis revealed unprecedented scrutiny regarding the ethical use of GenAI in HEIs, driven by educators’ concerns (Bhaskar & Rana, 2024; Yusuf et al., 2024). These ethical concerns emerge as fundamental barriers to trust that cannot be resolved by social influence or ease of use alone. Yusuf et al. (2024) emphasise the need for educational strategies and ethical guidelines to accommodate cultural diversity, while Bhaskar & Rana (2024) highlight educators’ responsibility to prevent AI misuse in academic settings.

Institutional strategies influencing educators’ trust in adopting GenAI (RQ2)

Our analysis of institutional strategies (see Table 5) reveals that meaningful leadership engagement remains absent despite widespread policy development and training initiatives.

Table 5 Institutional Strategies descriptions, counts, and references.

Leadership support, including transparent communication and active involvement in GenAI-driven projects, was identified in only four (n = 4, 10.8%) of the 37 studies. This signals a fundamental weakness in current implementation approaches (Chan, 2023; Ofosu-Ampong, 2024) that undermines the factors identified in RQ1. For instance, Chan (2023) stressed that “senior management will be the initiator […] developing and enforcing policies, guidelines, and procedures that address the ethical concerns surrounding AI use in education” (p. 21). However, studies suggest this leadership role often remains conceptual rather than enacted, leaving educators uncertain about institutional support for GenAI adoption. Moreover, emotional and psychological factors, as well as pedagogical beliefs, require institutional validation through active leadership support (Crawford et al., 2023; Espartinez, 2024).

Despite the leadership gap, institutions are making substantial efforts in policy development, and institutional policies and guidelines have emerged as a primary trust-building mechanism (Aler Tubella et al., 2024; Bannister et al., 2024; Wang et al., 2024). For example, Bannister et al. (2024) reported that “clear institutional policies and guidelines enhance trust by addressing educators’ ethical concerns, such as fears about bias and inequality in student evaluations and grading” (p. 14). Additionally, Aler Tubella et al. (2024) provided practical recommendations for both educators and policymakers on implementing AI trust principles. Still, our findings from twelve studies (n = 12, 32%) reveal a collective call for HEIs to provide clear policies and guidelines to help educators cope with rapid GenAI developments (Chan, 2023; Duah & McGivern, 2024). Furthermore, traditional institutional structures, characterised by slow-moving policy development processes, struggle to keep pace with technological change, creating a disconnect between policy frameworks and practical needs and potentially undermining trust (Luo, 2024; Xiao et al., 2023). A gradual implementation approach should therefore be considered (Kurtz et al., 2024).

While policies provide a foundation, our analysis reveals that educators require professional support and training to integrate GenAI into their teaching effectively (Kamoun et al., 2024; Kurtz et al., 2024), with seventeen selected studies (n = 17, 45.9%) reflecting this need. However, empirical evidence suggests an important implementation gap. For instance, Kamoun et al. (2024) found that “63.4% of surveyed faculty reported that they lack the requisite training and resources to integrate ChatGPT into their pedagogical practices” (p. 9). This difference between recognised need and actual implementation suggests that institutions should address educators’ AI literacy and pedagogical issues, such as authentic digital assessments (Lelescu & Kabiraj, 2024). Additionally, educators need to be proactive in finding GenAI training opportunities that align with their pedagogical goals (Chan, 2023). Nevertheless, our findings indicate that well-designed training programmes can positively influence educators’ trust factors, although future interdisciplinary research is needed to understand their effectiveness (Baig & Yadegaridehkordi, 2024; Mamo et al., 2024; Wang et al., 2024).

To complement the tabular data, Fig. 4 presents a heatmap-style treemap illustrating the frequency of trust factors across the three dimensions of our conceptual model.

Fig. 4

Frequency of trust factors heatmap.

The visualisation highlights that professional support and training (n = 17) and policies and guidelines (n = 12) emerged most frequently, whereas leadership support (n = 4) had the fewest mentions. The trustor (educator) factors show a balanced distribution, with a total of 10 studies related to pedagogical aspects. Socio-ethical concerns remain prominent, with utilitarian value (n = 10), social influence (n = 8), and ethical use (n = 6) demonstrating their relevance to educators’ trust formation.
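
For readers wishing to reproduce such a figure from the reported counts, the sketch below uses matplotlib with the squarify package; the tooling is an assumption, as the paper does not state how Fig. 4 was produced.

```python
# Sketch of a frequency treemap like Fig. 4, drawn from the counts
# reported above. squarify is one common treemap option; the tooling
# actually used for Fig. 4 is not stated in the paper.
import matplotlib.pyplot as plt
import squarify  # pip install squarify

factor_counts = {
    "Professional support & training": 17,
    "Policies & guidelines": 12,
    "Utilitarian value": 10,
    "Social influence": 8,
    "Ethical use": 6,
    "Leadership support": 4,
}

labels = [f"{name}\n(n = {n})" for name, n in factor_counts.items()]
squarify.plot(sizes=list(factor_counts.values()), label=labels, pad=True)
plt.axis("off")
plt.title("Frequency of trust factors across the included studies")
plt.show()
```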

Discussion

This systematic review aimed to identify the factors and institutional strategies influencing educators’ trust in adopting GenAI for educational purposes. Our analysis of 37 studies from 19 countries reveals multiple interactions between individual trust factors and institutional approaches that extend beyond conventional technology acceptance models, challenging how we conceptualise GenAI integration in higher education.

Summary of key findings

Our investigation reveals that educators’ trust in GenAI emerges from an interconnected, dynamic system that includes individual characteristics, pedagogical values, socio-ethical contexts, and institutional strategies. Unlike previous research, our analysis, which focused on higher education educators, demonstrates that trust formation is context-dependent and pedagogically mediated. This study shows that trust in GenAI involves a complex socio-technical system requiring ongoing interactions between human values, institutional structures, and technological capabilities.

Addressing our first research sub-question (RQ1) on the trust factors influencing educators’ adoption of GenAI, four categories describe the individual trust factors. Demographic factors show contradictory patterns across studies, with inconsistent findings regarding gender influences (Cabero-Almenara et al., 2024; Kaplan et al., 2021) and complex relationships between teaching experience and trust (Brandhofer & Tengler, 2024). These contradictions suggest that demographic factors operate differently across institutional and cultural contexts, highlighting the need for context-specific approaches rather than broad demographic generalisations. Specifically, institutional context variations may play a significant role, as studies from institutions with strong AI support policies, such as those examined by Chan (2023) and Kurtz et al. (2024), showed weaker demographic effects than studies conducted in less supportive environments. Also, disciplinary differences matter considerably, with STEM-focused studies (Jain & Raghuram, 2024) exhibiting different demographic patterns than studies with multidisciplinary representation (Yusuf et al., 2024). Similarly, national policy contexts create additional variation, as studies from countries with established AI frameworks, such as China (Chan & Lee, 2023), showed more consistent trust patterns than those from countries such as Oman (Bhat et al., 2024).

Cognitive factors (familiarity, self-efficacy, sense of control) emerged as bridges between demographics and trust formation, with our findings revealing that a perceived sense of control through human-in-the-loop approaches significantly influences trust development (Salah, 2024). This suggests that preserving educators’ professional autonomy is essential for successful GenAI integration. The emotional-psychological factors of trust (examined in 59.5% of studies) demonstrated that trust-building strategies should incorporate opportunities for positive emotional engagement with technology, moving beyond purely technical training approaches (Bhat et al., 2024; Mamo et al., 2024). Particularly noteworthy is that pedagogical orientation emerges as an important factor in trust formation, suggesting that institutional strategies should focus on aligning GenAI integration with educators’ pedagogical goals.

Furthermore, the socio-ethical context shapes educators’ trust formation, with utilitarian factors translating to increased trust only when aligned with pedagogical beliefs (Jain & Raghuram, 2024). This finding helps explain the contradictory results we observe across studies examining perceived usefulness as a trust predictor. Studies that explicitly measured pedagogical alignment alongside utilitarian perceptions (Choi et al., 2023; Jain & Raghuram, 2024) consistently found strong usefulness-trust relationships, while those treating usefulness as an independent factor (Bhat et al., 2024; Camilleri, 2024) showed weak or inconsistent effects. A particularly revealing example comes from Choi et al. (2023), who found that constructivist educators showed strong correlations between perceived usefulness and trust, while those with transmissive pedagogical beliefs showed minimal relationships despite having similar technology exposure. This pattern suggests that traditional technology acceptance models may produce misleading results when applied without considering the pedagogical factors that mediate the relationship between utility and trust.

Social influence mechanisms (identified in 21.6% of studies) suggest that peer networks function as trust intermediaries, while ethical considerations form fundamental barriers to trust that cannot be overcome through traditional adoption incentives alone (Bhaskar & Rana, 2024; Yusuf et al., 2024).

Regarding the second research sub-question (RQ2) on institutional strategies, our analysis identified several gaps in the interactions between institutional strategies and educators’ trust factors in GenAI adoption. Although leadership support is a critical trust element that can validate educators’ emotional experiences and reinforce pedagogical factors, it is not adequately addressed (Ofosu-Ampong, 2024). Institutional policies and guidelines aim to address academic misconduct and ethical concerns, which are particularly important for educators with lower trust propensity, but fail to account for demographic differences and pedagogical diversity (Bannister et al., 2024; Jain & Raghuram, 2024). Finally, training initiatives show the potential for several interactions with educators’ trust factors by enhancing familiarity, self-efficacy, and sense of control while creating positive emotional experiences with GenAI. However, the effectiveness of these interactions is compromised by significant implementation gaps (Kamoun et al., 2024).

In sum, our findings support the relevance of the proposed conceptual framework and its novelty in addressing persistent gaps in existing AI trust models. While the deductive structure of the review reflects the framework’s coherence with current literature, its empirical validation remains a task for future studies. The prominence of pedagogical considerations (27% of studies) and the notable leadership gap (10.8% of studies) highlight dimensions that prior models inadequately address. These findings suggest that trust in educational AI contexts operates differently from general technology adoption, requiring domain-specific theoretical approaches that account for professional identity, pedagogical values, and institutional mediation.

Theoretical implications

From a theoretical perspective, this review challenges the adequacy of existing AI trust and acceptance models, suggesting the need for new conceptual models that better account for the role of pedagogical philosophy and institutional strategies in understanding trust in GenAI. This review contributes to trust theory by developing an integrated framework that addresses three critical limitations identified in existing AI trust models: insufficient attention to pedagogical factors, limited examination of institutional strategies, and failure to address GenAI ethical concerns.

Unlike prior frameworks, which primarily investigated technical AI trust factors (e.g., competence, privacy, explainability), our conceptual model considers higher education institutional strategies as the key structural assurances (McKnight et al., 2011) that interact with educators’ trust factors in GenAI. Our evidence shows that professional support appears in 45.9% of studies, while leadership support appears in only 10.8%, demonstrating that institutional strategies operate unevenly in practice. This gap between theoretical importance and empirical attention suggests that future trust models must account for implementation challenges within institutional hierarchies, not just the presence or absence of policies.

Our Trustor (Educator) category extended traditional frameworks by integrating pedagogical belief and knowledge as educator-specific factors overlooked by existing models (Cabero-Almenara et al., 2024; Choi et al., 2023). The finding that perceived usefulness correlates with trust only when aligned with pedagogical beliefs challenges the universal applicability of TAM/UTAUT models. This suggests that trust frameworks for professional contexts require integration of domain-specific identity factors that mediate relationships between utility perceptions and trust formation.

The socio-ethical context category, including social influence and ethical considerations, further extends existing theoretical frameworks by addressing GenAI in educational settings (Bhaskar et al., 2024; Yan et al., 2024). Unlike technical AI frameworks that treat ethics as system characteristics, our findings show that ethical concerns (16.2% of studies) operate as fundamental barriers that cannot be overcome through traditional adoption incentives.

These theoretical contributions suggest that trust in professional AI contexts requires frameworks that integrate professional identity, institutional mediation, and value alignment as core constructs rather than contextual considerations. However, our deductive approach limits claims about theoretical innovation. Future work should test whether this integrated framework provides better predictive power than existing models or whether the added complexity obscures more fundamental trust mechanisms.

Next, we discuss how higher education institutions and policymakers can translate these findings into strategies for trustworthy GenAI adoption.

Implications for policy and practice

To support institutions and educators, five possible key policy actions could be derived from the analysed articles: (1) strengthen institutional leadership and structural assurances by closing leadership gaps, fostering positive social influence mechanisms, and co-creating policies with educators to ensure pedagogical and ethical relevance (Bhat et al., 2024; Crawford et al., 2023; Yusuf et al., 2024); (2) adopt phased and inclusive GenAI integration strategies that allow gradual experimentation across disciplines while embedding safeguards for academic integrity, data ethics, and cultural inclusivity (Karkoulian et al., 2024; Dabis & Csáki, 2024); (3) promote active educator participation in governance by involving faculty in committees and policy design processes, which strengthens both individual trust and institutional legitimacy (Chan, 2023; Yan et al., 2024); (4) establish clear institutional policies on academic integrity and ethical GenAI use, including guidelines on plagiarism, citation of AI-generated content, and culturally sensitive data practices (Baig & Yadegaridehkordi, 2024; Dabis & Csáki, 2024); and (5) consider aligning higher education policy with global trustworthy AI frameworks and adapt them to local educational and cultural contexts (Aler Tubella et al., 2024).

The prominence of training initiatives in our analysis shows that building trust depends on educators’ capacity to evaluate, adapt, and responsibly apply GenAI in their teaching. Our findings suggest the need to develop dedicated AI literacy policies, as the UNESCO AI Competency Framework for Teachers (UNESCO, 2024) also recommends. Thus, the following actions might be considered: (1) develop AI ethics and bias awareness training for educators, mandated as part of professional development (Bozkurt et al., 2024; Cukurova et al., 2023); (2) integrate AI literacy into curricular standards and accreditation requirements to ensure that both staff and students achieve baseline competencies (Yang et al., 2025); and (3) encourage train-the-trainer models, where early adopters mentor peers and build collective capacity (Long & Magerko, 2020).

Conclusion, limitations, and future research

This systematic review examined trust factors influencing higher education educators’ adoption of GenAI through an analysis of 37 peer-reviewed studies published between 2019 and August 2024. Our research contributes to advancing a trust-centred approach to GenAI integration in HEIs by proposing a new conceptual model that extends existing AI frameworks, emphasising the interaction between educators’ trust factors and institutional strategies. This model positions trust as an ongoing relational process between educators, technology, and institutional context, challenging approaches that treat GenAI integration as merely a technical implementation challenge.

Study limitations

Despite the strengths of the proposed model, our study has limitations. First, the focus on English-language publications published after 2019 yielded findings predominantly about ChatGPT, the most widely used GenAI application in education during this period (Wang et al., 2024). This approach may have excluded perspectives from non-English speaking contexts and specific findings about other GenAI tools (e.g., Google Gemini, Microsoft Copilot). Nevertheless, we believe this review’s conclusions apply to GenAI tools beyond ChatGPT because our analysis focused on trust factors rather than the technical aspects of any particular tool. Second, our deductive coding methodology may have missed new trust factors that are particular to GenAI. Future research should employ inductive approaches to better capture educators’ GenAI perceptions and their impact on trust formation. Third, the contradictory demographic patterns we observed suggest our framework may oversimplify how individual characteristics operate across different institutional and cultural contexts. Our finding that leadership support appears in only 10.8% of studies, despite being theoretically positioned as foundational, raises questions about whether our theoretical emphasis matches empirical reality or reveals a significant research gap. Finally, a broader methodological limitation arises from the fast-evolving nature of GenAI itself. The rapid pace of technological innovation and institutional responses means that findings represent a snapshot in time and may need continual updating as new tools, policies, and practices emerge. This volatility highlights the importance of ongoing empirical research to ensure that trust frameworks remain relevant and responsive to the changing educational landscape.

Recommendations for future research

As previously mentioned, one important direction is to provide empirical evidence of the model’s robustness. This requires the identification and development of survey instruments that operationalise constructs such as familiarity, self-efficacy, sense of control, and ethical concerns across higher education. The quantitative insights should be complemented by qualitative case studies involving interviews and focus groups with educators and institutional leaders to understand how institutional policies, training programmes, and leadership approaches shape trust development over time. Given the leadership gap we identified, comparative case studies examining different institutional approaches to GenAI adoption would provide evidence about which combinations of leadership support, policies, and professional development most effectively build trust. Future research should investigate targeted interventions aimed at strengthening institutional capacity (Luo, 2024). For instance, evaluating the effectiveness of AI literacy initiatives (UNESCO, 2024), policy frameworks, and professional development programmes can provide evidence-based strategies for institutions to build sustainable trust in GenAI (Li et al., 2024). Furthermore, research should investigate how trust formation varies across disciplinary contexts, pedagogical approaches, and institutional types to determine whether our framework applies broadly or requires context-specific modifications. This disciplinary analysis would be particularly valuable for understanding how STEM versus humanities educators respond differently to GenAI integration.

An equally important direction involves ethnographic studies that examine how educators navigate challenges related to academic integrity, bias, and fairness in real classroom settings, providing insights into the practical operation of our socio-ethical context dimension. These studies should focus on documenting decision-making processes when educators encounter ethical dilemmas with GenAI, informing both theoretical refinement and evidence-based policy development that support trust-based GenAI integration.

These research priorities ultimately serve a broader purpose by enabling higher education to navigate GenAI integration as a thoughtful process of trust building rather than a race towards technological adoption. Our findings demonstrate that successful institutions are those that invest in building authentic trust through alignment with educators’ professional identities and institutional support. This trust-centred approach promises a fundamental reconceptualisation of GenAI’s role in education, transforming it from a mere tool into a collaborative partner in the educational process, guided by human wisdom and pedagogical purpose.