
Fig. 2: Transdisciplinary trust research.

From: A call for transdisciplinary trust research in the artificial intelligence era

A Transdisciplinary Research Agenda. Transdisciplinarity emphasizes collaboration between scientists and stakeholders, integrating knowledge to address grand challenges and producing practical solutions for society and science. The figure shows examples of major stakeholders and relevant scientific disciplines, though these are not exhaustive.

B Transdisciplinary Research Framework. The transdisciplinary framework treats societal advancement and scientific progression as knowledge-focused systems that feed into a knowledge-integration system, each passing through three stages: problem, discourse, and result. Guided by ongoing discourse between stakeholders and scientists, the process unfolds in three phases: problem formation, production of new, connectable knowledge, and transdisciplinary integration. Across these phases, two distinct transdisciplinary pathways emerge: a real-world pathway prioritizing practical societal solutions and an intra-scientific pathway aimed at empirical study and discovery.

At the core of the framework, new, connectable knowledge is developed and implemented across five key elements of trust: trustworthiness, risk, user, sphere, and terrain. The user is the central focus of the framework, playing a key role in the discourses on both societal and scientific knowledge. Societal knowledge encompasses stakeholders' practices and criteria for evaluating AI's impact on societal trust, assessed across ecological layers (e.g., individual, relationship, community). Scientific knowledge encompasses scientists' methods and theories for researching trust in AI, examined across measurement levels (e.g., biological, neural, physiological). Trustworthiness and sphere are grounded in the societal knowledge system: Trustworthiness is essential for addressing societal challenges, as perceptions of AI's reliability strongly influence its acceptance and effectiveness. Sphere, integral to societal praxis, refers to the trust interactions within the ecological layers that AI technologies affect. Risk and terrain are grounded in the scientific knowledge system: Risk is integral to the scientific challenge of AI development, encompassing unforeseen dangers and potential adverse outcomes that require thorough scientific assessment and exploration. Terrain, a critical aspect of scientific praxis, refers to the environments in which AI technologies are applied.