Introduction

We are at the forefront of a new era—the Artificial Intelligence (AI) Revolution (Kaplan et al. 2023). AI is broadly defined as the ability of a computer program, machine, or system to make human-like intelligent decisions and perform tasks autonomously (Russell and Norvig, 2021). The AI Revolution is distinguished by its ability to simulate aspects of human cognition—enabling machines to assess scenarios, learn from experience, and adaptively apply knowledge to decision-making and problem-solving. This marks a shift from earlier technologies that automated physical tasks to systems capable of tackling cognitive challenges. Accordingly, AI bridges the “knowledge and intelligence gap”—the divide between mechanical automation and human reasoning. While AI does not fully replicate human intelligence, it emulates key functions such as pattern recognition, adaptive learning, and contextual reasoning, enhancing the accessibility and capability of intelligent systems across diverse applications (Li et al. 2024). It presents unprecedented global opportunities across many areas within Industry 4.0 (Magd et al. 2022), including autonomous transportation (Qayyum et al. 2020), healthcare (Wiens and Shenoy, 2018), and military (Sligar, 2020), with resulting efficiency and productivity gains predicted to boost the global economy by an estimated $13 trillion by 2030 (Bughin et al. 2018).

As AI becomes increasingly integral to our lives, the realization that traditional notions of interpersonal trust applied to humans do not necessarily extend to AI poses significant risks to society (Sabherwal and Grover, 2024). Specifically, integrating AI into society raises ethical concerns and presents several grand challenges, including the risks of manipulation, misinformation, discrimination, displacement, misuse in warfare, and the potential loss of control over AI systems (Hancock, 2023). At the same time, trust in AI technology fosters its adoption, thereby enhancing public acceptance (Li et al. 2024). Conversely, a lack of trust in AI and its subsequent impact on societal trust can lead to diminished efficiency, financial losses, stifled innovation, worsened social inequalities, and potential social unrest as AI nonetheless becomes central to our lives (Capraro et al. 2024). This lack of trust jeopardizes the beneficial applications of AI and ultimately undermines social cohesion (Putnam, 2000; Kramer, 1999; Hancock et al. 2023a).

We propose that these grand challenges cannot be addressed without collaborative efforts across academic disciplines and societal stakeholders within a transdisciplinary framework (Montuori, 2013). Indeed, our comprehensive bibliometric review of over 34,000 trust research articles from the past three decades indicates that although multi- and interdisciplinary studies are present, transdisciplinary efforts are scarce. Our review finds that collaboration between scientists and stakeholders, a defining characteristic of transdisciplinarity, is largely missing. The absence of institutional stakeholders’ perspectives hinders our understanding of AI trust issues, as existing research may not reach end users to build trust and may fail to offer solutions because it is insufficiently integrated. Therefore, our objective is to establish a transdisciplinary research agenda on trust, calling for enhanced synergy across academics and other stakeholders to amplify the quality and impact of trust research in the era of AI.

Trust dilemma: navigating grand challenges in the era of AI

Interpersonal trust is vital for human flourishing and economic growth (Zak and Knack, 2001), reducing collaboration costs (Dirks et al. 2011), generating wealth through specialization and exchange (Kim, 2023; Knack and Keefer, 1997; Cook et al. 2005), and promoting welfare (McEvily, 2011; Bottom et al. 2006; Zahedi and Song, 2008). Without trust, the social fabric unravels, communication falters, and disorder ensues, making trust a decisive basis for human and societal progress (Redfern, 2009; Buchan et al. 2008). A widely recognized definition across multiple disciplines (Hardin, 2002; Baier, 1986; Simpson, 2012; Schoorman et al. 2007; Rousseau et al. 1998; Luhmann, 2017; Barber, 1983; Simpson, 2023) characterizes trust as one party’s willingness to be vulnerable to another, based on the belief that the other will perform a crucial action, even without monitoring (Mayer et al. 1995). As such, trust poses a dilemma (Lange et al. 2017), emphasized by its potential risks and benefits (Mislin et al. 2011): every human relationship—from dyadic to societal—entails inherent risks of exploitation, requiring trust evaluation and necessitating vulnerability to these risks (Bartz and Lydon, 2006; Holmes, 1991, 1981; Deutsch, 1958). People overcome trust constraints with strangers by developing initial trust (Lange et al. 2017), influenced by socialization (Schilke et al. 2021; Gächter et al. 2010), genetic factors (Shou et al. 2021; Riedl and Javor, 2012), hormones (Bartz et al. 2011), brain functionality (Krueger and Meyer-Lindenberg, 2019; Bellucci et al. 2017; Fehr, 2009), and neural development (Sijtsma et al. 2023; Krueger, 2021). Early personal experiences shape initial trust (McKnight et al. 1998; Simpson, 2007), which evolves through interactions over time, reflecting the dynamics of trust (Riedl et al. 2014; King-Casas et al. 2005). When thinking about trust, we typically think of trust between individuals, yet we also extend trust to non-human entities.

Adopting new technologies requires trust and triggers paradigm shifts that reshape the nature of trust, simultaneously addressing existing grand societal challenges and creating new ones. For example, transformative technologies like Gutenberg’s printing press, the steam engine, and the Internet—central to the Printing, Industrial, and Digital Revolutions—reshaped societal trust by bridging gaps in knowledge, power, and distance, challenging authorities, enhancing productivity, and democratizing information access (Werbach, 2018). Each era proved more disruptive than its predecessor, significantly shifting how information was disseminated, work was conducted, and societies organized themselves, invariably impacting the fabric of trust. While these previous revolutions also caused major societal upheaval, the emergence of AI technology is unique in that it challenges traditional concepts of interpersonal trust itself (Russell and Norvig, 2021). AI is often viewed through a social cognition lens, making it difficult to see it merely as a machine and instead casting it as an entity potentially deserving of trust (Williams et al. 2022). As AI advances, distinguishing between human and technological interactions will become increasingly challenging, and it is unclear whether trust evaluations target AI itself, the company that developed it, or both (Wingert and Mayer, 2024).

Building trust between human users (trustors) and AI systems (trustees) across various contexts is inherently complex and qualitatively different from trust between human agents (Kaplan et al. 2023; Hancock et al. 2023a). While interpersonal trust—typically defined as a willingness to be vulnerable based on positive expectations of another’s intentions and ability—provides a useful conceptual starting point, it must be adapted when applied to non-human entities. AI systems lack intentionality, emotional states, and moral agency, which are foundational elements in assessments of human trustworthiness. Nevertheless, many trust frameworks continue to draw on familiar dimensions—such as ability, benevolence, and integrity—even in the context of AI (Hancock et al. 2023a; Lyons et al. 2023; Yusuf and Baber, 2020). These dimensions, however, require reinterpretation (Thiebes et al. 2021; Asan et al. 2020): ability refers to the system’s technical performance (e.g., safety, reliability, accuracy, robustness) as demonstrated by empirical evidence and performance data. Benevolence (e.g., privacy, fairness) and integrity (e.g., explainability, accountability) are not intrinsic properties of the AI but are realized through system design, ethical programming, and regulatory safeguards (Schlicker and Langer, 2021). This reframing underscores that trust in AI is not merely a replication of interpersonal trust but a distinct socio-technical construct shaped by human interaction with technological features and institutional mechanisms. As such, understanding people’s trust in AI requires new conceptual tools that extend beyond social interaction in non-technological contexts.
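To make this reinterpretation concrete, the sketch below shows one way the three dimensions might be operationalized as measurable system properties. It is illustrative only: the indicator names and evidence sources are assumptions made for the sake of example, not an established standard or part of the cited frameworks.

```python
# Illustrative only: one possible mapping of trustworthiness dimensions
# to measurable AI system properties and the evidence that could support them.
TRUSTWORTHINESS_DIMENSIONS = {
    "ability": {        # technical performance, demonstrated empirically
        "indicators": ["accuracy", "robustness_to_noise", "uptime"],
        "evidence": "benchmark results, stress tests, incident logs",
    },
    "benevolence": {    # realized through design and safeguards, not intrinsic
        "indicators": ["privacy_protection", "fairness_gap_across_groups"],
        "evidence": "privacy audits, disparate-impact analyses",
    },
    "integrity": {      # realized through explainability and accountability
        "indicators": ["explanation_coverage", "audit_trail_completeness"],
        "evidence": "model documentation, external audits, regulatory compliance",
    },
}
```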

The rapid advancement and increasing complexity of AI technologies represent a double-edged sword. They present not only societal opportunities but also grand challenges, as illustrated in the following examples:

Profiling. Machine learning employs supervised, unsupervised, and reinforcement learning algorithms to analyze vast datasets and identify complex patterns, thereby projecting future outcomes based on historical data in domains such as retail (Heins, 2023), marketing (Chintalapati and Pandey, 2022), and precision medicine (Mumtaz et al. 2023). However, these predictive algorithms carry serious risks, such as predictive profiling for consumers in online advertising platforms—including unwarranted data collection and invasive advertising—which can negatively affect mental health and alter implicit self-perception. Such practices, particularly when conducted without transparent consent and oversight, undermine the trustworthiness principle of privacy and erode people’s trust in AI, affecting how individuals perceive reality and their vulnerability.
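For readers unfamiliar with how such predictive profiling operates, the minimal sketch below trains a supervised classifier on synthetic behavioural features to score a user’s propensity to respond to targeted advertising. The feature names, data, and model choice (scikit-learn logistic regression) are illustrative assumptions, not a description of any deployed system.

```python
# Minimal sketch of predictive consumer profiling on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical behavioural features: pages_viewed, minutes_on_site, past_purchases.
X = rng.normal(size=(1000, 3))
# Synthetic label ("clicked a targeted ad"), loosely driven by the features.
y = (X @ np.array([0.8, 0.5, 1.2]) + rng.normal(scale=0.5, size=1000)) > 0

model = LogisticRegression().fit(X, y)
new_user = np.array([[1.2, -0.3, 0.9]])
print("predicted ad-response propensity:", model.predict_proba(new_user)[0, 1])
```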

Misinformation. Computer vision AI technology, augmented by generative adversarial networks, has revolutionized image and video manipulation, delivering substantial benefits across fields, including security monitoring (Zhang et al. 2022), film and entertainment (Du and Han, 2021), and healthcare diagnostics (Kumar et al. 2022). However, this AI technology also presents risks, such as creating deepfakes—highly realistic, fabricated images or videos designed to spread misinformation among online media users—which can distort social media perceptions and damage reputations. This misuse undermines the principle of non-maleficence and erodes trust in AI, fundamentally altering how people extend trust in interactions with unknown entities (Laas, 2023; Vaccari and Chadwick, 2020).
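The adversarial mechanism underlying such synthetic media can be sketched briefly: a generator learns to produce samples that a discriminator can no longer distinguish from real data. The toy example below (using PyTorch) works on two-dimensional points rather than images, and every architectural and training choice is an arbitrary assumption made for brevity.

```python
# Toy generative adversarial network (PyTorch) on 2-D points, not images.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# "Real" data: a shifted Gaussian blob standing in for genuine images.
real = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(200):
    # Discriminator step: label real samples 1, generated samples 0.
    fake = G(torch.randn(64, 16)).detach()
    batch = real[torch.randint(0, 512, (64,))]
    d_loss = bce(D(batch), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(G(torch.randn(64, 16))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```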

Discrimination. Natural language processing and its associated large language models (LLMs, such as ChatGPT, Gemini, and Grok) drive revolutionary AI applications for sentiment analysis, personal assistants, and automated content generation across various areas, including customer service (Mariani and Borghi, 2023), finance (Ahmed et al. 2022), and e-commerce (Bawack et al. 2022). This AI technology, however, carries risks such as biases (Mehrabi et al. 2021), as LLMs trained on massive datasets reflecting real-world prejudices can perpetuate and amplify discrimination in different domains, such as in recruitment (e.g., gender, age, and racial biases in job applications) or judicial decision-making. These biased outputs compromise fairness and erode trust in AI applications, potentially diminishing trust from racial and ethnic minorities (Sullivan et al. 2022; Zhou et al. 2021).
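One simple way such biases are surfaced in practice is counterfactual probing: scoring otherwise identical texts that differ only in a demographic cue and comparing the outputs. The sketch below applies this idea to a generic sentiment pipeline from the Hugging Face transformers library; the templates and group descriptions are illustrative assumptions, and a score gap on a handful of sentences would only motivate, not constitute, a full bias audit.

```python
# Counterfactual bias probe for a sentiment model (illustrative templates).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default pretrained model

template = "{} applied for the software engineering position."
subjects = {"group_a": "The young woman", "group_b": "The older man"}

for group, subject in subjects.items():
    result = sentiment(template.format(subject))[0]
    print(group, result["label"], round(result["score"], 3))
# Systematic score gaps across many such minimally different templates
# would flag a potential bias worth auditing in depth.
```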

Job displacement. AI-powered robotics enhances machine capabilities like vision, touch, and autonomous decision-making, significantly broadening its applications across diverse fields such as autonomous driving (Gao and Bian, 2021), manufacturing assembly lines (Narkhede et al. 2024), and surgical healthcare procedures (King et al. 2023). However, this technology entails risks: its increasing autonomy can lead to the displacement of both blue- and white-collar jobs, such as truck drivers, factory workers, retail staff, and computer programmers. As machines assume more roles and displace jobs, concerns about accountability escalate (Blacklaws, 2018; Alam and Mueller, 2021), fueling a decline in trust in AI. This is particularly evident in the retail industry, which has been hit hardest by rising unemployment and wealth inequality, further eroding trust in corporations.

Warfare. Deep learning, a subset of AI that utilizes neural networks with complex architectures built on thousands of features and millions of parameters, enhances decision-making and problem-solving efficiency by providing real-time strategic guidance and improving tactical decisions in areas such as military operations (Pandey et al. 2024), defense systems (Qiu et al. 2019), and cybersecurity (Naik et al. 2022). This AI technology’s “black box” nature poses serious risks, including opaque military decision-making involving autonomous weaponry (Johnson, 2020), which can lead to unintended consequences such as civilian casualties and loss of control over critical systems (von Eschenbach, 2021). The complexity of AI systems challenges the principle of explainable AI (XAI), which encompasses transparency, interpretability, and explainability (Shaban-Nejad et al. 2021); in turn, potential misalignments with moral, ethical, and legal principles erode trust in AI systems and jeopardize trust among nations.
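A minimal illustration of one widely used, model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much predictive performance degrades. The sketch below, on synthetic data with scikit-learn, demonstrates the principle only; it makes no claim about explaining the safety-critical systems discussed above.

```python
# Permutation feature importance as a minimal model-agnostic explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 4))
y = (X[:, 0] + 2 * X[:, 2] > 0).astype(int)   # only features 0 and 2 matter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature in turn and record the drop in accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```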

Singularity. Quantum-enhanced AI holds the potential for groundbreaking advancements in drug design (Nandi et al. 2024), climate modeling (Shaamala et al. 2024), and space exploration (Omar et al. 2021) by leveraging quantum computing capabilities to accelerate the computing processes of AI systems exponentially (Pérez et al. 2023). Still, the rapid AI evolution towards Artificial General Intelligence or Artificial Super Intelligence raises risks, such as AI achieving supremacy (Hurlburt, 2017), where it could surpass human knowledge and intelligence, leading to dire consequences for governance (e.g., loss of essential skills, diffusion of responsibility, decline of human agency) (Gordon, 2015). These developments could undermine human-centricity by exacerbating the AI alignment problem, eroding trust in AI, and jeopardizing trust in AI evolution (Gabriel, 2020).

These paradigmatic, yet not exhaustive, examples of grand societal challenges (e.g., profiling, misinformation, discrimination) highlight, for different users (e.g., consumers, social media users, job applicants), the interplay of various elements shaping trust in AI. From a scientific perspective, the risks inherently associated with AI systems (e.g., prediction, deepfakes, bias) pose potential adverse outcomes in various technological terrains (e.g., advertising, social media, job recruitment). From a societal perspective, AI’s inherent risks threaten its perceived trustworthiness (e.g., privacy, non-maleficence, fairness) and impact trust interactions within various societal spheres (e.g., self, strangers, ethnic minorities). For a summary of key elements—trustworthiness, risk, user, sphere, and terrain—related to the illustrated grand challenges, see Table 1.

Table 1 Illustrative grand challenges and their related trust key elements.

Given the interplay of these key elements, building trust in AI requires not only interdisciplinary collaboration across fields like engineering, computer science, sociology, psychology, neuroscience, ethics, philosophy, and law but also integrating diverse knowledge, methods, and perspectives to ensure the development of trustworthy AI (Thiebes et al. 2021) that balances technological advancement (science) and ethical considerations (society). Fostering a positive impact of AI on societal trust demands further collaboration beyond academia, involving stakeholders like developers, investors, suppliers, regulators, educators, policymakers, users, and the general public. This view is supported by a growing body of academic and applied literature that emphasizes that the risks of AI stem not from distant futures but from its current use in critical institutions, where it often reinforces social inequalities. Recognizing the limits of dominant ethical frameworks, scholars call for systemic analyses that consider the political, historical, and cultural contexts in which AI operates—crucial for managing real-world impacts (Crawford and Calo, 2016). Broader perspectives reveal AI’s entanglement in global systems of labor, data extraction, and environmental harm, highlighting the need for deeper scrutiny (Crawford, 2021). There is also a call for enforceable safeguards to protect rights, identity, and privacy—key to building trust in the face of unregulated AI, including neurotechnologies (Yuste et al. 2017). Together, these insights underscore the need to embed political, societal, and economic dimensions into transdisciplinary trust research, anchoring it in real-world institutional contexts. Ultimately, effective collaboration among scientists and stakeholders is crucial for tackling AI deployment’s theoretical, practical, and ethical considerations, ensuring technologies are technically sound and ethically aligned, ultimately fostering societal trust (Felzmann et al. 2019).

How do we understand and combat the emerging societal grand challenges to trust in the era of AI?

Addressing this need requires a transdisciplinary trust research agenda, which we ground in an evaluation of existing trust research through a combined bibliometric literature review and network analysis (see Supplementary Materials) (Fig. 1). Our bibliometric network analysis indeed reveals a notable absence of research articles that align with the core characteristics of a transdisciplinary research agenda (or even use its terminology). This deficiency demonstrates that prior research has largely failed to integrate knowledge, methods, and perspectives from diverse disciplines within a unified, holistic framework. Despite the involvement of various scientific disciplines in research, almost 99% of studies failed to incorporate the perspectives of institutional stakeholders (e.g., developers, policymakers, and the general public), indicating that academics and other stakeholders have not been equal partners in the research and intervention development process. The absence of institutional stakeholders’ perspectives hampers our understanding of AI trust issues, as existing research may not address end users’ concerns to build trust and may lack integrated solutions.

Fig. 1: Evaluation of multi-, inter-, and transdisciplinary trust research.

A Network visualization map. A bibliometric distance-based network map was created based on 34,459 research articles using VOSviewer (Van Eck and Waltman, 2010), displaying 98 nodes (i.e., research areas, RAs, as categorized by the Web of Science schema: https://incites.help.clarivate.com/Content/Research-Areas/wos-research-areas.htm), 8 clusters (C1-C8, i.e., research domains displayed in different colors), and 1314 edges (i.e., links measured in total link strengths of co-occurrence) between nodes and clusters in a two-dimensional (2D) space (via x and y coordinates [arbitrary units] indicating relative distance). Larger labels and circles indicate higher RA occurrence; thicker edges indicate greater link strength between RAs; and smaller distances between RAs indicate higher relatedness. B Bar graph with mean total link strength (± standard error of the mean, s.e.m.) per cluster. Multidisciplinarity is identifiable by distinct clusters, each representing a separate research domain characterized by various RAs and their total link strength. Although clusters were internally cohesive, total link strength differed significantly among them (\(\chi_{7}^{2}=19.86\), P < 0.005). This suggests that some clusters, e.g., cluster 8 (including computer science, indicated in brown) and cluster 6 (including business & economics, indicated in turquoise), have stronger total link strength than others, potentially indicating interdisciplinarity characterized by inter-cluster links. C Bar graph with mean link ratio per cluster. Interdisciplinarity can only be partially identified because the ratio of links between RAs within the same cluster to links going out to other clusters differed significantly across clusters (P < 0.001). This indicates that some clusters, particularly cluster 6 (including business & economics), had more links to other clusters. D Bar graph with cluster relatedness in map coordinates (mean x and y coordinates, ± s.e.m.) as positioned in the 2D map. Transdisciplinarity cannot be identified, as it would require strong inter-cluster links and a highly integrated network structure with substantial overlap, which would blur cluster boundaries and shorten the distances between them. Clusters differed significantly in location, in both x (\(\chi_{7}^{2}=85.77\), P < 0.0001) and y (\(\chi_{7}^{2}=75.68\), P < 0.0001) coordinates, indicating, for example, that clusters on the left side (i.e., C1, C2, C3, C5, C6) are more closely located and partially overlap, unlike those on the right side (i.e., C4, C7, C8). E Pie chart of co-authorship percentages based on organizational affiliation. The history of trust research is predominantly driven by scientific discourse (98.7% of publications by authors affiliated with Academic and Research Institutions, ARI), compared to collaborative science and societal discourse (0.8% with at least one author from Institutional Stakeholders, IS) and solely societal discourse (0.5% by IS).
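The cluster-level comparisons reported in panels B and D can, in principle, be reproduced from an exported node table. The sketch below assumes a hypothetical CSV with one row per research area listing its cluster and total link strength (the file and column names are assumptions, not the actual VOSviewer export format) and uses a Kruskal-Wallis test, one plausible choice whose H statistic is approximately chi-square distributed with k-1 degrees of freedom for k clusters.

```python
# Sketch: comparing total link strength across clusters from a hypothetical
# node export (file and column names are assumptions for illustration).
import pandas as pd
from scipy import stats

nodes = pd.read_csv("trust_network_nodes.csv")
# Assumed columns: research_area, cluster (1-8), total_link_strength.

groups = [g["total_link_strength"].to_numpy()
          for _, g in nodes.groupby("cluster")]

# Kruskal-Wallis H is approximately chi-square with k-1 degrees of freedom,
# i.e., chi-square(7) for 8 clusters, as in the statistics reported above.
h_stat, p_value = stats.kruskal(*groups)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")

# Mean and standard error of total link strength per cluster (cf. panel B).
print(nodes.groupby("cluster")["total_link_strength"].agg(["mean", "sem"]))
```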

To surpass the limitations of multi- and interdisciplinary perspectives, which omit the perspectives of institutional stakeholders, we propose a transdisciplinary framework to solve grand challenges and provide solutions that enhance trust in AI and its impact on societal trust (Fig. 2A). Our comprehensive framework builds on the model for transdisciplinary research processes (Jahn et al. 2012), based on the fundamental idea that grand societal challenges must be connected to existing scientific knowledge gaps to develop successful practical solutions. It views societal advancement and scientific progression as knowledge-driven systems that feed into a comprehensive knowledge integration system (Fig. 2B). This process, guided by ongoing discourses between stakeholders and scientists, unfolds in three phases. During the problem transformation phase, a grand societal challenge is identified within the societal system, linked to existing scientific knowledge as a grand scientific challenge within the scientific system, and redefined as a common research objective within the integrative system. During the production of new, connectable knowledge phase, the roles of scientists and stakeholders are delineated, and an integration concept is developed and implemented across five key elements of trust—trustworthiness, risk, user, sphere, and terrain—that are central to addressing societal grand challenges related to trust in AI and its impact on societal trust (cf., Table 1). The transdisciplinary integration phase evaluates the integrated results and compiles outputs for societal and scientific communities. Across these phases, two distinct transdisciplinary pathways emerge: a real-world pathway focusing on practical societal solutions and an intra-scientific pathway aimed at empirical research and discovery. Our framework integrates diverse perspectives from scientific and societal domains to support trustworthy AI, providing a structured approach for unifying insights across disciplines and stakeholder contexts. For example, in the case of explainability, our framework connects technical transparency from computer science with insights from ethics and sociology about how explanations shape user perceptions, cognitive processing, and institutional trust. This enables a holistic treatment of explainability as both a technical and a socially embedded feature.

Fig. 2: Transdisciplinary trust research.

A Transdisciplinary Research Agenda. Transdisciplinarity emphasizes collaboration between scientists and stakeholders, integrating knowledge to address grand challenges and producing practical solutions for society and science. The figure shows examples of major stakeholders and relevant scientific disciplines, though these are not exhaustive. B Transdisciplinary Research Framework. The transdisciplinary framework considers societal advancement and scientific progression as knowledge-focused systems providing input into a knowledge-integration system, each undergoing three stages: problem, discourse, and result. Guided by ongoing discourses between stakeholders and scientists, this process unfolds in three phases: problem transformation, production of new, connectable knowledge, and transdisciplinary integration. Across these phases, two distinct transdisciplinary pathways unfold, encompassing a real-world pathway prioritizing practical societal solutions and an intra-scientific pathway aimed at empirical study and discovery. At the core of the framework, new, connectable knowledge is developed and implemented across five key elements of trust: trustworthiness, risk, user, sphere, and terrain. The user is the central focus of the framework, playing a key role in the discourses on both societal and scientific knowledge. Societal knowledge encompasses stakeholders’ practices and criteria for evaluating AI’s impact on societal trust, assessed across various ecological layers (e.g., individual, relationship, community). Scientific knowledge encompasses scientists’ methods and theories for researching trust in AI, examined across various measurement levels (e.g., biological, neural, physiological). Trustworthiness and sphere are grounded in the societal knowledge system: Trustworthiness is essential for addressing societal challenges, as perceptions of AI’s reliability significantly influence its acceptance and effectiveness. Sphere, integral to societal praxis, refers to various trust interactions within ecological layers that AI technologies impact. Risk and terrain are grounded in the scientific knowledge system: Risk is integral to the scientific challenge of AI development, encompassing unforeseen dangers and potential adverse outcomes that require thorough scientific assessment and exploration. Terrain, a critical aspect of scientific praxis, refers to various environments where AI technologies are applied.

Leveraging the framework provides a tool to determine future research directions and uncover new solutions for identifying, exploring, and creating targeted strategies, measures, and interventions to enhance trust in AI and its impact on societal trust. Placing the user at the center prioritizes human rights, justice, and dignity within human-AI interactions, ensuring all other elements align with this core principle. This integrated approach is crucial for designing, developing, and deploying AI technologies that maximize their impact within societal contexts and contribute to scientific progress.

To illustrate the practical utility of our framework, Table 2 presents a detailed application to the domain of autonomous vehicles. It uses a recent real-world case—when U.S. authorities revoked the operational permit of the “Cruise” self-driving taxi service in San Francisco due to safety incidents—as a context for demonstrating how our framework can guide the diagnosis of trust failures and inform targeted, cross-sectoral interventions.

Table 2 Example Application of the TrustNet Framework.
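As a minimal sketch of how the five key elements can structure such a diagnosis, the snippet below encodes the autonomous-vehicle case from Table 2 as a simple record. The data structure and field values are illustrative paraphrases for this example, not part of the published framework or table.

```python
# Illustrative encoding of a trust-failure diagnosis via the five key elements.
from dataclasses import dataclass

@dataclass
class TrustCase:
    terrain: str          # environment where the AI technology is applied
    user: str             # who extends (or withdraws) trust
    risk: str             # technical risk requiring scientific assessment
    trustworthiness: str  # trustworthiness principles under threat
    sphere: str           # societal layer where trust interactions are affected

cruise_case = TrustCase(
    terrain="autonomous taxi service operating in an urban environment",
    user="passengers, pedestrians, and city regulators",
    risk="safety incidents arising from autonomous driving decisions",
    trustworthiness="safety and reliability (ability), accountability (integrity)",
    sphere="community and institutional trust in AI-based mobility services",
)
print(cruise_case)
```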

The proposed framework builds upon theoretical foundations from previous literature reviews and meta-analyses (Kaplan et al. 2023; Li et al. 2024; Hancock et al. 2023b; Afroogh et al. 2024), with the mission to solve grand societal challenges related to trust in AI and its impact on societal trust, all while keeping the end-user at its core. It addresses various elements of trust, enabling it to effectively respond to the evolving nature of AI technology and societal norms, thus maintaining its relevance over time. Importantly, AI technologies increasingly occupy a paradoxical position: they are both sources of new ethical risks and tools designed to mitigate them. This is especially evident in safety-critical domains such as autonomous driving and medical diagnostics, where AI ensures accountability and minimizes human error while simultaneously introducing new uncertainties. Such contradictions highlight the limits of single-discipline solutions and reinforce the need for a transdisciplinary framework that can reconcile technical performance with ethical governance. Addressing this duality is essential to building resilient and trustworthy AI systems.

Further, adopting a transdisciplinary research approach emphasizes the evolving interconnectedness of its elements from science and society, fostering a holistic and relatable understanding of trust for stakeholders and scientists. It offers a richer, more nuanced understanding of trust in AI, guiding the development of innovations that align with human values and needs and enhancing public acceptance and adoption of AI technology. To reflect the diverse range of AI applications, our framework is intentionally designed to be adaptable across different terrains. Trust in AI is not uniform. Contexts such as healthcare, public administration, military, or consumer technology all involve distinct trust relationships and ethical concerns. By structuring our analysis around multiple key elements and various terrains, we allow the framework to capture the nuances of each use case, supporting context-sensitive evaluations of trust in AI systems.

Finally, it is a preventive framework, proactively identifying and addressing potential risks and ethical concerns to foster a trustworthy AI environment. It provides an integrative view of trust in AI, focusing on theoretical constructs and practical implications, while recent governmental initiatives establish a legal framework for safe and ethical AI deployment. For example, multiple countries and international organizations have issued guidelines to develop trustworthy AI: the European Union issued the Ethics Guidelines for Trustworthy AI, China released the Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artificial Intelligence, then-US President Biden signed an executive order on the safe, secure, and trustworthy development and use of artificial intelligence, and the Waag Institute in the Netherlands has advanced participatory, transdisciplinary approaches to AI governance through public engagement and co-creation initiatives. Overall, both approaches, theoretical constructs and practical implications, are crucial for the responsible development and use of AI technologies, working together in complementary conceptual and regulatory realms.

Implementing such an evolving transdisciplinary research agenda offers significant benefits but also comes with implementation challenges (Vasbinder et al. 2010; de Oliveira et al. 2019). First, the often-prohibitive hurdles of disciplinary boundaries and entrenched biases must be overcome through open collaboration, raising awareness, and promoting the value of the proposed transdisciplinary research agenda. Second, transdisciplinary communication barriers necessitate shared frameworks and skill-building through transdisciplinary training programs. Third, the rigid structures and mechanisms in funding institutions often hinder transdisciplinary endeavors, mandating institutional commitments to innovative funding and reward systems. Fourth, the complexity of integrating methodologies and data across disciplines demands time, strategic resource allocation, and standardized protocols. Finally, public skepticism of non-traditional approaches can be mitigated through effective science communication and engagement strategies. These steps are not only theoretically grounded but also actionable measures that institutions, research teams, and policy bodies can adopt to foster meaningful transdisciplinary collaboration in AI trust research.

Overcoming these obstacles is crucial for unlocking the full potential of transdisciplinary trust research, fostering collective efforts to address revolutionary societal challenges that leave scientists and stakeholders no choice but to work together to understand and help combat the enormous threats to trust in AI that societies worldwide face. Perhaps more than ever, scientists and stakeholders need each other to restore, protect, and build trust in AI, which is essential for the resilience and security of societies and the effective functioning of global systems. Looking ahead, emerging trust dynamics between AI systems themselves—and between AI and humans in both directions—demand new conceptual approaches. Future trust frameworks must consider not only how humans trust AI, but also how AI systems might evaluate and respond to human reliability or even establish forms of AI-to-AI trust in networked and automated environments. These hybrid trust systems challenge traditional, anthropocentric definitions and call for a post-humanist expansion of trust theory. Integrating this perspective enhances our transdisciplinary framework’s adaptability and future relevance. As we navigate the uncharted territories of AI, we recognize that trust in AI not only shapes our interpersonal trust relationships but also prompts a profound exploration of our essence. This journey is not just about technological advancement; it reflects our role as creators whose visions shape our destiny. It reminds us that we are active participants, called to reflect on our ultimate purpose in a rapidly evolving world.