Abstract
Artificial intelligence (AI) and robotics are rapidly transforming geriatric psychiatry, offering powerful tools for early detection, personalized treatment, and enhanced care delivery. As the global population ages, these technologies promise not only greater efficiency but also new avenues for delivering scalable, accessible mental health support. However, as AI increasingly engages with domains once considered uniquely human—emotional intelligence, decision-making, and interpersonal connection—it raises deeper questions about the boundary between AI-simulated interaction and authentic human connection. This review examines the intersection of computational precision and existential complexity, emphasizing how theoretical frameworks such as the Theory of Computing (TOC) and Theory of Mind (ToM) can guide ethical and human-centered integration. While AI systems may convincingly simulate empathy or companionship, they cannot share subjective experience, vulnerability, or existential depth. By contrasting computational precision with the irreducible aspects of human complexity, we advocate for a transdisciplinary approach that embraces both the transformative potential of technology and the irreplaceable richness of human connection—especially in the later stages of life, when questions of purpose, mortality, and selfhood become most profound.
Introduction
Recent advancement of AI in geriatric psychiatry
There is no question that recent advances in computational tools, particularly artificial intelligence (AI), have had a significant impact on both medical research and everyday life. In our globally aging society, AI offers hope—guiding us towards managing societal costs and enabling personalized and precision medicine. As we embrace unprecedented technological advancements, important questions arise: Where are these powerful tools taking us? Are we heading in the right direction? Are we investing our efforts in research questions that hold genuine meaning for humanity? What clinical value do we gain from a modest improvement, such as a 5% increase in predictive accuracy for cognitive decline in Alzheimer’s disease (AD) models? AI is increasingly demonstrating emotional intelligence, previously believed to be a uniquely human trait central to our humanity. For example, AI can be a supportive tool for caregivers of AD patients, not only by providing relevant medical information but also by offering emotional support [1]. However, does this imply that human interactions, essential for addressing existential concerns about meaning, connection, and purpose, will eventually be replaced by AI? This review explores transdisciplinary* (*Transdisciplinary science (Reynolds and Weissman, 2022 [2]): an integrative approach that transcends traditional disciplinary boundaries and involves the co-creation of knowledge across various fields. While interdisciplinary and multidisciplinary science utilize methods and insights from multiple disciplines to address complex problems, transdisciplinary research aims to synthesize and transform perspectives into entirely new frameworks of understanding and application.) perspectives on these critical questions by examining the relationships between computational precision and holistic human experiences.
Juxtaposition – precision to imperfect mind
As the global geriatric population grows rapidly, there is an urgent need to accelerate our understanding of late-life mental health and develop innovative approaches to address its challenges. A critical gap in current research lies in reconciling the inherently subjective and undecidable nature of existential meaning (e.g., “If I lose my memory, who am I?”, “Why do I feel so empty about my life?”) with deterministic, biomarker-based models of disease. For instance, computational approaches can enhance the detection of slight increases in amyloid plaques at earlier disease stages [3, 4], enabling timely interventions with amyloid-targeted infusion therapies [5, 6]. These early-stage applications hold promise for allowing individuals to benefit from treatments before amyloid accumulation becomes excessive and irreversible—a factor contributing to the current modest effectiveness of such therapies in slowing disease progression. In contrast to such advanced computational approaches, personalized attention and behavioral management demonstrate a transformative impact on quality of life, aligning more closely with existential psychology by addressing the emotional, social, and existential dimensions of living with Alzheimer’s disease (AD) [7, 8]. This juxtaposition between computational precision and the holistic human experience highlights a broader paradox: while AI excels at analyzing data and identifying patterns, it lacks the deeply human capacity for an interpersonal existential relationship. These fundamental differences between AI’s computational power and human experience underscore the need for a structured framework to navigate such complexities.
New avenue to geriatric psychiatry
The Theory of Computing (TOC) provides a powerful language to address these dualities and contradictions. Rooted in the set theory of mathematics and the propositional logic of philosophy, it offers a robust foundation for integrating computational approaches with human-centered care. By bridging these paradigms, we can better navigate the multifaceted challenges of geriatric mental health, ensuring that technological advancements are harmonized with the deeply human aspects of aging. These relationships will be examined through a conceptual framework highlighting intersections between theory and practice across human (mind/body) and machine (computer/AI) domains, evaluating how theoretical constructs translate into practical advances for late-life mental health care, as well as generating new perspectives on geriatric psychiatry by contrasting the two approaches. The aim of this paper is to develop a conceptual framework, grounded in the Theory of Mind (ToM) and TOC, for understanding both the capabilities and limitations of AI in addressing the psychological and existential dimensions of geriatric psychiatry.
Structure of this review
We begin by examining the unique psychological and existential dimensions of geriatric psychiatry through the lens of ToM, emphasizing the fundamental difference between authentic human connection and AI-simulated interaction. Next, we apply TOC to explore how computational principles can translate the complexity of human cognition and emotion into tractable, operational models. Building on this foundation, we transition from theory to practice by discussing the role of robotics and AI-based clinical applications in geriatric psychiatry. Figures 1 and 2 serve not only as summaries but also as conceptual guides for navigating the review. Furthermore, evaluating the performance of artificial emotional intelligence against human capacities lies beyond the scope of this paper. Instead, we provide a conceptual platform for critically engaging with this emerging landscape from transdisciplinary perspectives. Here, “adequacy” is not defined as the replication of all human experiences, but as the capacity of AI to meet the specific therapeutic goals of geriatric psychiatry, particularly in addressing the psychological and existential dimensions of late-life mental health. Throughout, we highlight both the promise and the limitations of computational tools in geriatric psychiatry, where existential concerns often intensify with age.
Fig. 1: Conceptual framework illustrating the qualitative differences between authentic human connection (left) and AI-simulated interaction (right) in the context of geriatric psychiatry. Human connection includes existential and relational dimensions (I–Thou), whereas AI operates within algorithmic limits, offering computational precision but lacking lived experience, reciprocity, and authentic empathy.
Fig. 2: Framework connecting concepts from Theory of Mind (ToM) and Theory of Computing (TOC) to their implications in geriatric psychiatry. *NP problems: P problems can be solved quickly; NP problems can be verified quickly but may be much harder to solve. Whether P equals NP remains an open question. “P” stands for polynomial time, and “NP” stands for nondeterministic polynomial time, meaning NP problems can be solved in polynomial time on a hypothetical nondeterministic machine—essentially, by guessing a solution and then verifying it.
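As a minimal illustration of this asymmetry, consider the NP-complete subset-sum problem: checking a proposed answer is fast, while the only known general strategies for finding one must search an exponential space of candidates. The sketch below, with illustrative values, is offered as a conceptual aid to the footnote in Fig. 2, not as part of the framework itself.

```python
# Contrast between verifying and solving an NP problem (subset-sum).
from itertools import combinations

def verify(numbers, target, candidate):
    """Polynomial-time check: does the proposed subset actually sum to target?"""
    return all(c in numbers for c in candidate) and sum(candidate) == target

def solve(numbers, target):
    """Brute-force search: examines up to 2^n subsets (exponential time)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify(nums, 9, (4, 5)))  # True: checked in time linear in the input
print(solve(nums, 9))           # (4, 5): found only by exhaustive search
```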
Theory of mind in geriatric psychiatry
Defining and navigating a journey of existential questions
Theory of Mind (ToM) offers a foundational cognitive ability that underlies emotional intelligence and human connections [9]. While ToM is often discussed in the context of understanding others’ mental states (i.e., intentions, desires, beliefs, and emotions) [10, 11], the ability to recognize and interpret one’s own mental states [12, 13] is equally critical for meaningful interpersonal relationships and well-being in general. This self-awareness represents a form of meta-cognition (thinking about one’s own thinking) and allows individuals to observe, evaluate, and monitor their internal experiences. Such awareness becomes particularly relevant to mental health in aging, which presents profound philosophical challenges, often taking the form of existential questions. These inquiries manifest in various forms, including questions about the meaning of life (“What is the purpose of my/our existence?”), identity (“What defines my existence?”), mortality (“What is the meaning of death in life?”), and authenticity (“What does being true to myself mean?”). While existential questions may arise throughout life, they often intensify in later years, shaped by the unique experiences and transitions of aging.
These existential reflections rely on ToM; without the ability to represent and examine one’s own (and others’) mental states, such complex, reflective questions may not arise or be meaningfully explored [14]. Navigating these timeless existential queries can lead to a deep sense of fulfillment and an appreciation for life, yet it may also precipitate emotional and psychological struggles. In this sense, ToM is not only essential for the formation of existential questions—it also serves as a tool for processing and potentially “resolving” them. While ToM is central to the generation and resolution of existential questions, the growing presence of AI invites a reexamination of these uniquely human capacities. Some forms of emotional intelligence—particularly emotional awareness, such as identifying emotions, articulating associated thoughts and physical sensations, and understanding their underlying causes—have been increasingly modeled in AI systems, in some cases even outperforming the general population in emotional labeling tasks [15]. However, unlike humans, AI does not possess subjective experience or embodiment and thus does not engage with existential questions in the same way (Fig. 1). AI cannot feel the emotions it identifies, nor can it engage with meaning, mortality, or authenticity.
Nevertheless, the capacity of AI to emulate certain aspects of human emotional and cognitive functions serves as a reflective medium, enabling a deeper comprehension of human nature. Engaging with machines that replicate human-like mental processes, yet lack consciousness or existential depth, compels us to reassess and revalue our own abilities for self-reflection, emotional complexity, and existential exploration. Hence, AI not only poses challenges but also augments our appreciation of the uniquely human role that ToM plays in addressing life’s profound questions.
Existential questions in mind-body medicine
Irvin Yalom, a pioneering figure in existential psychotherapy, has profoundly shaped our understanding of how existential questions intersect with psychological well-being. His therapeutic framework helps individuals confront and process humanity’s ultimate concerns (death, freedom, isolation, and meaning) with clarity and resilience. In his seminal book Existential Psychotherapy (Yalom, 1980) [16], Yalom wrote, “Death and life are interdependent: though the physicality of death destroys us, the idea of death saves us” (p. 879). His approach not only offers significant insights and guidance for psychotherapists but also resonates beyond the clinical environment, encouraging both therapists and non-therapists to engage in a deeper contemplation of what it means to live authentically amidst uncertainty.
In engaging with Yalom’s approach, a key tool for exploring existential questions is ToM — the capacity to engage with our internal experiences and embrace uncertainty. ToM enables us to observe, question, and reframe our own thoughts and beliefs. This process of introspection is essential when confronting existential concerns and accompanying uncertainty. It involves reflecting not only on one’s personal history and values but also extending beyond the self — to consider connections with family, friends, ancestors, future generations we may never meet, and ultimately, the universe itself (i.e., gerotranscendence [17]). This expanded perspective helps cultivate meaning and continuity, enabling individuals to tolerate ambiguity and fear in the face of mortality and isolation. It reframes later life not as a period of decline, but as a unique opportunity for personal growth, the deepening of wisdom, and the attainment of inner peace. In this way, ToM functions as a bridge between cognitive insight and emotional resilience, supporting both the psychological and philosophical dimensions of mind-body medicine.
“Some day soon, perhaps in forty years, there will be no one alive who has ever known me. That’s when I will be truly dead - when I exist in no one’s memory. I thought a lot about how someone very old is the last living individual to have known some person or cluster of people. When that person dies, the whole cluster dies, too, vanishes from the living memory. I wonder who that person will be for me. Whose death will make me truly dead?” ― Love’s Executioner and Other Tales of Psychotherapy (p. 191), Yalom (1990) [18]
Integrating Yalom’s principles into geriatric psychiatry not only fosters an appreciation for the broader meaning of life for older adults but also addresses mental health challenges in late life. For instance, how can we objectively define existential concerns as they relate to late-life depression, a condition profoundly shaped by subjective experience? These discussions are vital for shaping diagnostic frameworks and treatment strategies in geriatric psychiatry. Practical steps, such as operational descriptions, offer clear and measurable approaches to address these challenges. For example, treatment protocols for late-life depression might incorporate assessments of depressive symptoms alongside structured pharmacological and psychotherapeutic objectives.
Recent advancements in AI have increasingly influenced mind-body medicine. AI-related methodologies (e.g., information-processing theory, cognitive psychology, robotics) have provided invaluable insights into psychological processes by modeling elements of human cognition, emotion, and behavior. However, we are now entering a different phase—one that requires us to pause and reevaluate the nature of AI itself and its distinctions from human experience. While AI can simulate certain emotional expressions or cognitive patterns, it does not share the existential concerns or personal narratives that shape human lives. The relational “we” does not emerge in human-AI interactions. Philosopher Martin Buber described this profound human connection as the “I–Thou” relationship—a dialogical bond rooted in mutual presence and recognition [19]. Such a relationship cannot exist with AI, no matter how advanced. It is therefore essential to remain mindful of this fundamental difference between engaging with a person and interacting with a machine. AI can serve as a powerful tool, but it must not be mistaken for a replacement for human connection. The (partial) dissolution of self-boundaries to form a unified relational entity—what we often describe as “we” (or I–Thou)—is a cornerstone of deep interpersonal experience. Recognizing this human-AI distinction invites ongoing reflection on how we integrate AI into mental health care—without losing sight of what makes human connection irreplaceable.
The human-AI distinction highlighted above is particularly pertinent within therapeutic contexts. Drawing upon Irvin Yalom’s seminal work on group psychotherapy [20], as well as Polster and Polster’s Gestalt Therapy (1973) [21], it becomes evident that shared vulnerability and existential exploration are integral to the healing process. Yalom’s work reminds us that the curative power of therapy often lies not in offering the “right” answer or prediction, but in the authenticity of interpersonal encounters. This notion is intimately connected to the concept of the therapeutic alliance, a fundamental component in clinical practice that denotes the trust, rapport, and collaborative relationship between therapist and client [22]. A robust therapeutic alliance is consistently associated with favorable treatment outcomes. While AI may offer emotionally validating responses—such as providing comforting words or mirroring emotional states for temporary relief—it lacks the capacity to deliver the profound psychological challenges that trained psychotherapists, peers in group therapy, or existential experiences, such as confronting mortality, can present. These challenges, albeit often discomforting, are essential opportunities for growth, transformation, and enduring well-being, rather than merely transient emotional comfort.
In this context, the application of AI in mind-body medicine should be approached with careful consideration and deliberate intent. Similar to other technological advancements, its value is not solely determined by its complexity, but rather by the manner in which it is utilized—as an enhancement of human capabilities, rather than a replacement. A more detailed discussion of AI applications in geriatric psychiatry practice is provided later in this review. Adopting this viewpoint enables us to leverage the strengths of AI while simultaneously fostering a deeper understanding of the essence of human existence.
Theory of computing provides a theory of mind
The theory of computing (TOC) originates from mathematical logic and theoretical computer science, grounded in foundational work by Alan Turing, Alonzo Church, and others in the early 20th century. It encompasses the mathematical and algorithmic principles underlying computation and information processing. This theoretical framework addresses critical questions about the nature of computability, the efficiency of computational processes, and the categories of problems that computational systems can resolve [23,24,25]. In recent years, TOC has increasingly been applied to cognitive and mental health research, particularly through its influence on computational psychiatry, cognitive modeling, and machine learning. In mental health research, the TOC serves as a foundation for designing computational models, analyzing complex data, and uncovering patterns in mental health phenomena. By leveraging these principles, psychiatry research can advance in several ways: 1) enhancing the accuracy of predictions related to symptoms and conditions, 2) interpreting high-dimensional and multi-modal datasets more effectively, and 3) developing models to simulate and predict patient outcomes.
Geriatric mental health exists at the intersection of mind and body, where biological changes and psychological experiences converge. Existential questions about life’s purpose, individual identity, and authenticity are brought into sharper focus by the reality of mortality, as previously discussed. TOC offers a compelling framework for examining the complex interplay between mind and body by conceptualizing them as a unified and abstract computational system [23]. In this framework, the mind and brain are not separate entities, but different perspectives (i.e., languages) on the same computational system. Both the psychological mechanisms of the mind and the neural processing of the brain can be described as functions or algorithms. Just as there are multiple ways to express the same function, mind and brain can be thought of as different expressions (or representations) of the same underlying person. Key relationships between TOC and mental health are listed in Fig. 2. In this section, we apply concepts from mathematical logic—such as Cantor’s diagonalization and Gödel’s incompleteness theorem (introduced in detail later)—not to evaluate empirical performance, but to use these theorems as conceptual tools for delineating the structural limits of formal systems, including computational models of the mind. Grounded in the traditions of theoretical computer science and analytic philosophy, this approach positions such proofs as boundary markers of what is possible in principle, rather than as empirical findings.
TOC has its roots in the work of Georg Cantor, a 19th-century mathematician who introduced the distinction between countable and uncountable infinities, such as the difference between natural numbers (e.g., 1, 2, 3) and real numbers (e.g., −3.4, √49, π) [26, 27]. He used an elegant argument, known as ‘diagonalization,’ to show that no counting scheme can capture all real numbers, revealing that there are multiple levels of infinity. These abstract ideas help us to conceptualize how personal experiences and emotions can be vast and immeasurable. Diagonalization serves as a metaphor for the way subjective human experiences (emotions, memories, or existential concerns) cannot be fully captured or predicted by computational or algorithmic models—just as not all real numbers can be listed by an algorithm. This idea also parallels Martin Buber’s (originally published in 1921) [19] distinction between the ‘I–Thou’ and ‘I–It’ relationships—where the infinite, transcendent quality of genuine human connection (I–Thou) stands in contrast to the countable, transactional nature of objectified interaction (I–It). In this sense, diagonalization metaphorically illustrates the limits of algorithmic reasoning in capturing the emergent, infinite depth of human connection that defines I–Thou relationships. This framework underscores the idea that AI and computational tools, no matter how advanced, have limits when it comes to modeling the richness and variability of human mental life. That variability expands even further in the realm of interpersonal relationships and our appreciation of the psychological complexity of others. This concept resonates deeply with mental health care, where subjective experiences often defy quantification and objective measurement.
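To make the diagonal construction concrete, the following sketch (finite here, though Cantor’s argument concerns infinite expansions) builds a digit string guaranteed to differ from the n-th listed entry at its n-th digit; the example digits are arbitrary.

```python
# A finite sketch of Cantor's diagonal argument: given any purported
# enumeration of decimal digit strings, construct a string that differs
# from the n-th entry at its n-th digit, so it appears nowhere in the list.
def diagonal_escape(enumeration):
    """Return a digit string differing from enumeration[n] at position n."""
    return "".join(
        "5" if row[n] != "5" else "6"   # flip the n-th digit of the n-th row
        for n, row in enumerate(enumeration)
    )

listed = ["141592653", "718281828", "414213562"]  # pretend each continues forever
print(diagonal_escape(listed))  # "555": disagrees with entry n at digit n
```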
Gödel’s incompleteness theorem [28], developed in the 1930s, showed a limitation of formal systems by demonstrating that certain true but unprovable statements, known as undecidable statements, exist within any sufficiently complex system (e.g., arithmetic or set theory). Using a diagonalization technique, Gödel showed that a system cannot prove its own consistency, highlighting the inherent incompleteness of logic (see also Turing’s Halting Problem [29], which shows no algorithm can universally determine whether another program will finish running or loop forever). Gödel’s theorem serves as a powerful metaphor for psychiatry: just as formal systems cannot capture every truth, computational models fall short in representing the full complexity of human behavior. In geriatric mental health, existential concerns parallel the concept of undecidable statements – questions about meaning, purpose, and mortality resist definitive resolution, regardless of computational power. In late life, as a person’s sense of self and life meaning naturally evolve, this underscores the importance of attending to individual lived experiences that cannot be reduced to data or predictions. People are complex systems and therefore inherently unpredictable, shaped by relationships, history, and existential concerns. Gödel’s insight urges us to embrace a more human-centered approach that respects what lies beyond algorithms.
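The self-referential structure shared by these proofs can be sketched in a few lines of code. The `halts` oracle below is hypothetical by construction; assuming it exists leads directly to contradiction, which is the essence of Turing’s argument.

```python
# Sketch of Turing's halting-problem argument. Suppose a general-purpose
# oracle `halts(program, data)` existed; the self-referential program
# below would then contradict it, so no such oracle can exist.
def halts(program, data):
    # Hypothetical oracle -- Turing proved no such algorithm can exist.
    raise NotImplementedError("no algorithm decides halting in general")

def paradox(program):
    if halts(program, program):  # if the oracle answers "halts"...
        while True:              # ...loop forever;
            pass
    return "done"                # ...otherwise, halt immediately.

# paradox(paradox) would halt if and only if it does not halt -- a
# contradiction analogous to Gödel's undecidable statements.
```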
The book Gödel, Escher, Bach (often referred to as GEB), originally published in 1979 by Douglas Hofstadter [30], explores how Gödel’s meditations on patterns and infinity also relate to music (Bach) and visual art (Escher). GEB introduced many to the ideas of Alan Turing and Noam Chomsky, highlighting overlaps between computation, cognition, and creativity. Chomsky’s hierarchy maps types of language to the complexity of machines needed to process them—from simple regular expressions handled by finite-state automata (i.e., machines with no memory that operate state-by-state) to context-free languages requiring memory (i.e., pushdown automata, which use a stack to track and process hierarchical or nested elements in language). This hierarchy mirrors the complexity of abstract Turing machines – theoretical machines that define the limits of what can be computed algorithmically. Importantly, Chomsky’s language theory framework helps explain how the brain organizes language, shaping mental representations and personal narratives. When individuals confront existential questions and concerns, their meaning-making involves interpreting their past, present, and future, and this process resembles high-level languages in Chomsky’s hierarchy: languages with access to context that accept unrestricted self-referential input. This capacity, supported by functions of the prefrontal cortex, enables infinite language expressivity—setting us apart from other species and highlighting the challenge of translating lived experience into computational algorithms.
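A minimal code sketch of the step from finite-state to pushdown recognition: matching nested parentheses requires memory that grows with nesting depth, which no finite-state machine (fixed, finite memory) can supply. Here a simple counter stands in for the pushdown automaton’s stack.

```python
# Recognizing balanced (nested) parentheses is beyond any finite-state
# machine; the counter below plays the role of a pushdown automaton's
# stack, tracking nesting of arbitrary depth.
def balanced(s):
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1          # push: one level deeper
        elif ch == ")":
            depth -= 1          # pop: close the innermost level
            if depth < 0:
                return False    # a closer with no matching opener
    return depth == 0           # accept only if every opener was closed

print(balanced("((()))"))  # True
print(balanced("(()"))     # False
```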
The TOC offers a foundational framework through the Turing Machine, which models how a simple set of rules (a finite controller) can interact with an open-ended set of possibilities (an infinite context). According to the Church-Turing Hypothesis [31], any process that can be described algorithmically can, in principle, be simulated by this type of machine. Some of these computational models are nondeterministic—meaning they can follow multiple potential paths to reach a solution, rather than a single, predictable route. This concept nicely captures the inherently unpredictable nature of the human mind, where thoughts, emotions, and decisions rarely proceed along a single fixed path. In geriatric psychiatry, such non-determinism resonates with the way individuals navigate existential concerns, where no single “correct” resolution exists. Client-therapist work in this context often embraces heuristics rather than fixed algorithms, adapting approaches to the client’s evolving life narrative, values, and circumstances. Understanding human cognition as a system that generates flexible strategies for intractable problems underscores the need for therapeutic models that can accommodate ambiguity, multiplicity, and change over time.
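A deterministic computer can simulate such nondeterministic choice only by exploring every branch. The sketch below (state names and transitions are arbitrary placeholders) accepts when any one path reaches the goal, mirroring the “multiple potential paths” described above.

```python
# Deterministic simulation of nondeterministic choice: where a
# nondeterministic machine may "guess" among several next steps, the
# simulator explores every branch and accepts if any path succeeds.
def nondet_reaches(transitions, start, goal):
    """transitions maps a state to the set of states it may move to next."""
    frontier, seen = [start], {start}
    while frontier:
        state = frontier.pop()
        if state == goal:
            return True                    # some branch succeeds
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False                           # no branch succeeds

branches = {"A": {"B", "C"}, "B": {"D"}, "C": {"D", "E"}}
print(nondet_reaches(branches, "A", "E"))  # True, via the A -> C -> E branch
```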
While machine learning and artificial intelligence present unique challenges—discussed earlier in this review—TOC remains a powerful lens for examining both the capabilities and limits of these systems. Beyond its technical applications, TOC also provides philosophical and mathematical scaffolding for rethinking the structure, function, and representation of the mind itself, as explored through the lens of Theory of Mind (ToM). Crucially, TOC helps illuminate the inherent value of the I–Thou relationship—an authentic, mutual form of human connection that exists outside of countable, transactional frameworks. This serves as a reminder that not all forms of meaning or connection can be encoded, modeled, or predicted. In geriatric psychiatry, TOC concepts such as non-determinism and undecidability help clarify the limits of computational prediction. While AI may accurately forecast the likelihood of cognitive decline or depressive symptoms, it cannot determine how such changes will affect an individual’s sense of meaning, purpose, or peace in the final stages of life. These concerns are deeply personal, context-dependent, and often inherently unknowable—mirroring the logical principle of undecidability.
AI and robotics for geriatric psychiatry
The integration of AI and robotics into geriatric psychiatry offers innovative tools for assessment, monitoring, and intervention, yet demands critical examination through the lens of computational precision versus holistic human experience. AI, particularly machine learning, excels at analyzing complex datasets to identify risk factors and create “digital biomarkers” for conditions like depression and dementia [32,33,34,35,36]. While this algorithmic approach enhances early detection and objective tracking, its foundation in TOC means it processes data devoid of subjective, lived experience. This precision operates within an “I-It” paradigm, offering valuable transactional insights but lacking a full comprehension of the existential questions central to aging and the “imperfect mind”.
From a theoretical standpoint, the deployment of AI in geriatric psychiatry reflects the intersection of computational psychiatry, emotional intelligence, and algorithmic decision-making. Diagnostic models operate within formal frameworks derived from computer science—such as probabilistic automata and neural networks—and are constrained by data quality and algorithmic complexity [33, 34]. These tools do not replicate human cognition but approximate clinical reasoning through pattern recognition and statistical inference. At the same time, emotionally intelligent (EI) agents reflect computational implementations of EI theory, with the goal of approximating empathy and responsiveness in machine form [37, 38].
Machine learning and natural language processing (NLP) enable data-driven diagnosis and monitoring by analyzing speech, sensor data, and health records. Studies have shown that AI models can identify depression in older adults with moderate to high accuracy, using inputs like sleep, frailty, voice, and Wi-Fi-based motion features [36, 39]. These digital biomarkers provide objective, continuous assessments that augment traditional clinical judgment, especially when deployed in passive sensing environments such as the home [40]. Speech-based AI offers a natural, low-burden modality for detecting affective disorders, aligning with growing interest in unobtrusive, ecological approaches to mental health [41, 42]. Conversational agents and AI-based digital therapeutics offer new tools for managing loneliness, depression, and anxiety among older adults. Chatbots built with NLP and sentiment analysis have shown high usability and engagement in older psychiatric outpatients, with some trials reporting reduced loneliness and improved emotional well-being [43,44,45].
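To illustrate the kind of pipeline these studies describe, the sketch below trains a standard classifier on passively sensed features. The feature names, synthetic data, and model choice are illustrative assumptions, not those of the cited studies.

```python
# A hedged sketch of a digital-biomarker workflow: passively sensed
# features feed a standard classifier whose screening performance is
# estimated by cross-validation. Data here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: nightly sleep hours, voice-pitch variability, daily movement index
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)  # stand-in screening labels (depressed / not)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)  # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")  # ~0.5 here: data are random
```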
These agents can be used asynchronously and at home, making them well-suited for seniors who face mobility or stigma-related barriers. AI also powers features in voice assistants and health apps, enabling emotion-aware interactions and daily monitoring. These systems increasingly draw on emotional intelligence (EI) theory, attempting to simulate empathetic responses through tone, dialogue, and adaptive feedback—a practical application of EI in software [46, 47].
Socially assistive robots (SAR) extend these benefits by providing embodied interaction. Companion robots like Paro and Joy for All pets have demonstrated reductions in depression and loneliness, especially in patients with dementia [46, 48, 49]. Robots with more human-like morphologies, such as Pepper, have been used to guide reminiscence therapy and cognitive stimulation activities [50, 51]. In these cases, the robot serves as a therapeutic interface—engaging seniors in memory tasks, conversations, or music—which has been associated with increased attention and improved mood.
Existing AI and robotic systems attempt to mimic aspects of ToM by recognizing emotional cues and responding in apparently understanding ways, offering a semblance of connection that can be comforting for individuals. However, this simulated ToM operates fundamentally differently from genuine human empathy, which is rooted in shared lived experience and mutual vulnerability. As explored previously, AI does not share in the existential concerns of aging—it does not “feel” the emotions it identifies or engage in the I–Thou relationship. SARs can offer transient comfort or re-engage older adults with memories, yet they cannot provide the profound psychological challenges or authentic interpersonal encounters essential for deep therapeutic growth, as emphasized by Yalom. Furthermore, human therapists may not share the exact lived experiences of their patients; however, both parties are grounded in the shared reality of being human—that is, they possess emotional embodiment rather than simulated empathy. While their existential concerns may differ in form or context, they are mutually subject to the same inescapable dimensions of human existence.
Current applications of SARs suggest that physical embodiment, even in simple robotic forms, can amplify emotional and cognitive engagement in geriatric care. Robots with anthropomorphic or animal-like features often elicit greater rapport and engagement from users than utilitarian devices [46, 52]. EI behavior, such as recognizing a user’s tone or facial expression and adjusting responses accordingly, enhances interaction quality. These observations align with broader psychological theories of trust and motivation, suggesting that emotionally responsive systems can more effectively meet the social-emotional needs of older adults.
However, the path forward requires a judicious integration of these technologies, recognizing them as valuable adjuncts rather than replacements for human-centered care. The clinical evidence shows promise for specific outcomes (e.g., improving cognitive function, reducing loneliness, alleviating depressive symptoms) [46, 48, 53] and for providing respite to human caregivers [46]. Yet their application must be guided by an acknowledgement of their strengths alongside their current limitations. A collective goal in the research community is to leverage AI and robotics to support and enhance the holistic, empathetic, and existentially attuned care that addresses the full spectrum of human experience in geriatric psychiatry.
Ethical considerations and recommendations
From a mental health services perspective, AI has the potential to transform access, efficiency, and personalization, particularly for populations traditionally underserved by existing systems, including older adults. AI-based technologies offer scalable solutions to address mental health workforce shortages and extend care into resource-constrained settings [54]. However, their adoption raises complex ethical, clinical, and social challenges that require careful scrutiny to ensure safe, equitable, and effective use.
Key ethical concerns include data privacy and consent, bias and equitable access, lack of transparency in data usage, empathy simulation, and unclear accountability. There are also concerns about the potential for AI to unintentionally deceive users, displace human care, or exacerbate existing disparities in access [55,56,57,58,59]. Without thoughtful design and implementation, these technologies may produce inconsistent results, reinforce discrimination, or lead to violations of privacy and conflicts over data ownership.
Data privacy and consent
Recent advancements in machine learning, deep learning, large language models (LLMs), NLP, and generative AI allow systems to process highly sensitive personal data, including intimate conversations in which users reveal deeply personal emotions, behaviors, and thoughts [60,61,62], enabling more personalized, adaptive, and sophisticated responses to mental health needs [63, 64]. These systems raise complex ethical and legal challenges related to privacy, data ownership, and informed consent, particularly among older adults and minoritized populations who may struggle to navigate conventional consent procedures and are at heightened vulnerability to data breaches and surveillance [65]. Deep learning models, in particular, are highly data-intensive, which sometimes involves pooling data from multiple institutions or countries, and have contributed to a growing demand for patient-derived datasets [58, 65, 66]. Moreover, the widespread use of cloud-based platforms for processing such data increases the risk of unauthorized access, underscoring the need for secure infrastructure [58, 67]. Therefore, ensuring robust data security and privacy protections is paramount.
Transparent data governance frameworks and stringent informed consent processes must be prioritized to safeguard against misuse or unintended disclosure of personal health information. The ethical stakes are heightened by the nature of the data: a standardized measure, such as the Montgomery-Åsberg Depression Rating Scale (MADRS) score [68], reflects symptom severity in structured form, whereas a personal narrative conveys the lived meaning of those symptoms—the relationships, cultural context, and existential concerns that shape a patient’s mental world. While safeguarding any form of data is essential in mental health research and clinical practice, narratives warrant particular protection. From a TOC perspective, they resemble high-level languages in Chomsky’s hierarchy—context-rich, self-referential, and resistant to full algorithmic capture. Misappropriation of such narratives can erode trust and psychological safety, undermining the therapeutic alliance and the authentic, reciprocal connection described by Buber as the I–Thou relationship—both of which are critical in addressing late-life existential concerns. AI systems must therefore be explicitly designed to handle sensitive data responsibly and to avoid storing information on insecure or vulnerable servers. Additionally, the use of anonymized or de-identified data does not fully resolve ethical concerns, as questions about consent, data ownership, and secondary use remain, and legal frameworks governing these issues vary widely across jurisdictions [69].
Bias and inequitable access
AI models trained on historically biased or non-representative datasets risk perpetuating and amplifying existing health disparities. Deep learning models, in particular, rely on existing datasets that often contain both implicit and explicit biases [58]. When training data disproportionately represent certain demographic groups, the resulting models may perform inadequately for individuals from underrepresented populations, leading to inaccurate diagnoses and treatment recommendations. This has been observed, for example, in the overdiagnosis of schizophrenia and underdiagnosis of affective disorders among African Americans, Afro-Caribbeans, and Latinos due to clinician prejudice and lack of contextual diagnostic analysis [70,71,72,73,74,75,76]. Similarly, NLP tools may misinterpret dialects, non-standard grammar, or culturally specific communication styles, increasing the risk of misdiagnosis or inappropriate clinical responses [77]. In geriatric psychiatry in particular, existential concerns may be perceived and expressed in diverse ways across cultures and languages, shaping how symptoms are experienced, narrated, and understood. AI systems that fail to account for this variation risk misinterpreting contextually meaningful expressions of distress or resilience, thereby limiting diagnostic accuracy and deepening existing inequities. Thus, ensuring cultural and linguistic sensitivity in both data and model design is essential for developing AI that serves the full spectrum of aging populations with equity and respect.
Facial recognition algorithms have also shown significantly lower accuracy among individuals with darker skin tones, particularly women, raising serious concerns about bias and inequitable performance across demographic groups [78]. Older adults who do not speak English may be excluded from engaging with AI-powered technologies, such as social robots, leading to missed opportunities for emotional support and further marginalization, particularly among immigrant and minoritized populations already at risk of social isolation [65]. The ubiquity of AI tools may also lead to clinicians over-relying on algorithmic outputs without critical evaluation. Because AI systems are often designed to reduce human error and enhance patient safety, this overreliance may diminish the likelihood that providers will question incorrect results, further compounding diagnostic or treatment inaccuracies [62]. These limitations underscore the critical need for inclusive data practices, rigorous fairness audits, and transparent evaluation standards throughout the AI development lifecycle, particularly when applied to diverse and vulnerable populations.
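As a minimal sketch of what such a fairness audit might look like in practice, the code below reports a model’s accuracy separately for each demographic subgroup rather than as a single aggregate figure; the data and group labels are purely illustrative.

```python
# A minimal subgroup fairness audit: compare a model's accuracy across
# demographic groups instead of reporting one aggregate number.
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group."""
    return {
        g: float((y_true[groups == g] == y_pred[groups == g]).mean())
        for g in np.unique(groups)
    }

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])
print(subgroup_accuracy(y_true, y_pred, groups))
# {'A': ~0.67, 'B': 0.6} -- a gap like this should trigger further review
```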
These challenges highlight that model accuracy alone is not a sufficient metric of value in geriatric mental health. The “right” answer as determined by an algorithm may not always be the answer a patient most needs—especially when it fails to account for the lived experience, cultural context, or existential meaning behind a person’s expression of distress. AI systems operate on patterns of past data, but care must accommodate unpredictable human nature. Just as mathematical proofs such as Cantor’s diagonalization or Turing’s halting problem reveal inherent limits of formal systems, these constraints should remind us that no amount of optimization can fully capture the complexity of human needs. In the context of aging and psychiatry, overreliance on such systems without reflection risks not only technical failure but ethical erosion. Some aspects of current AI can blur the distinction between human-human and human-computer relationships. Truly equitable AI must, therefore, be designed with a recognition of AI’s technical and existential limits.
Empathy simulation
Mental health care is fundamentally relational (i.e., an I–Thou relationship), relying on sensitivity to emotional cues such as tone of voice, facial expression, and body language [65]. Accurate diagnosis also depends on a thorough patient history and clear, detailed descriptions of symptoms [79]. Many psychiatrists have raised concerns about AI’s impact on the therapeutic relationship. While some see AI as a nonjudgmental tool that could reduce costs and improve access, its inability to convey genuine empathy limits its effectiveness [65]. Therapeutic rapport is built on emotional attunement and trust – qualities AI cannot replicate. Though AI can interpret some behavioral cues and can simulate empathy through affective language and responsive dialogue, these tools fundamentally lack emotional comprehension, remain inadequate for accurate diagnosis, and may increase the risk of clinical error [65].
A helpful analogy arises in the ethical and moral debates surrounding human reproductive cloning [80, 81]. Many see the pursuit of creating individuals through reproductive cloning as a “crime against the human species” [82]. Just as a clone may replicate genetic material without reproducing an individual’s selfhood (i.e., ToM), AI can simulate (or, more precisely, “mimic”) empathy without truly experiencing it. This challenge is compounded when AI systems (particularly generative models) lack transparency in how responses are generated, making it difficult for humans to evaluate the authenticity or appropriateness of the simulated interaction. As emphasized throughout this review, the core of healing in psychotherapy lies not in verbal simulation but in the emergence of an I–Thou relationship between therapist and client, grounded in shared existential experience. Using AI as a surrogate for human connection risks blurring the line between simulation and authenticity, raising serious concerns about the erosion of what makes human care distinct, relationally meaningful, and ethically grounded.
AI tools, therefore, should not be viewed as substitutes for human care. Rather, they must be integrated into mental health care in ways that complement and enhance human interaction. This is especially important for socially disadvantaged or cognitively impaired individuals, whose care must center on equity, dignity, and individualized support. Ethical integration of AI into these settings requires person-centered principles that safeguard agency, respect cultural and linguistic diversity, and respond to the unique preferences and needs of each individual. When deployed relationally and with appropriate human facilitation, social robots can support meaningful engagement while preserving the core human relationships that underlie effective mental health care. Without this nuance, AI may fail to meet the psychosocial needs of patients and risk exacerbating health inequities.
Unclear accountability
The increasing use of AI systems in mental health care raises critical ethical and practical questions, including who is accountable for the decisions made by these technologies, such as chatbots. When deployed in therapeutic contexts, AI must be able to appropriately handle emotional interactions, as strong emotional reactions can lead to patient harm or compromise safety [69]. Designers have an ethical responsibility to monitor and regulate emotional engagement, mitigate the risk of overattachment, and ensure a safe and supportive environment. It is equally important to consider how AI-mediated therapeutic relationships should be appropriately concluded to avoid emotional distress [69]. Moreover, the competence of intelligent machines must be carefully evaluated. Providing care beyond the limits of their training or intended scope may expose patients to harm. Highly autonomous systems must demonstrate proficiency in interpersonal communication, treatment and safety protocols, and cultural competence to be considered ethically and clinically viable in mental health settings [69].
Toward ethical and equitable AI in mental health
To promote responsible integration of AI into mental health care for older adults and underserved populations, we propose a framework informed by six core ethical principles commonly cited in healthcare AI literature: fairness, transparency, trustworthiness, accountability, privacy, and empathy [83,84,85]. Operationalizing these principles requires sustained investment in governance, multidisciplinary collaboration, and inclusive design practices. Specific recommendations include:
1. Human-in-the-Loop Design: AI tools should incorporate mechanisms for clinician oversight and allow clinicians to interpret or override outputs, ensuring that AI supports rather than supplants clinical judgment (see the sketch following this list). Clinicians must be trained to understand AI’s strengths, limitations, and ethical implications to ensure safe, informed use and effective advocacy.
2. Inclusive and Culturally Responsive Development: Diverse racial, ethnic, linguistic, and age groups should be engaged in the design, testing, and evaluation of AI systems. Community-based participatory methods can enhance cultural relevance and acceptability.
3. Enhanced Consent Protocols: Informed consent should be an iterative, accessible process, adapted to the cognitive, linguistic, and cultural needs of users. Visual aids and plain language can facilitate meaningful consent. Developers should use interpretable algorithms and provide user-friendly explanations to support informed decision-making.
4. Governance and Accountability Structures: Multidisciplinary oversight bodies that include clinicians, ethicists, technologists, and patients should review algorithmic performance, conduct equity audits, and delineate liability for adverse outcomes.
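A schematic sketch of the human-in-the-loop pattern in recommendation 1: model outputs that carry clinical risk flags or fall below a confidence threshold are routed to a clinician rather than returned directly. The class, field, and threshold names are hypothetical, not a reference implementation.

```python
# Human-in-the-loop routing: escalate risky or low-confidence outputs
# to a clinician so that AI supports, rather than supplants, judgment.
from dataclasses import dataclass

@dataclass
class Assessment:
    prediction: str    # e.g., "elevated depression risk"
    confidence: float  # the model's own confidence estimate
    risk_flag: bool    # e.g., any indication of self-harm

def route(assessment: Assessment, threshold: float = 0.85) -> str:
    """Return the model output only when it is safe to automate."""
    if assessment.risk_flag or assessment.confidence < threshold:
        return "escalate to clinician review"  # human judgment prevails
    return assessment.prediction               # high-confidence, unflagged case

print(route(Assessment("elevated depression risk", 0.70, False)))
# -> escalate to clinician review
```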
In sum, AI in mental health should be designed to complement, not replace, professional judgment. Even when full algorithmic transparency is limited, as is the case with generative AI, oversight can be maintained through independent auditing, post-deployment monitoring, escalation of complex cases to experts, and ongoing collaboration with culturally competent clinicians to detect and correct errors while reducing bias [86,87,88,89,90]. Moreover, ethical deployment requires making the algorithmic decision-making process more understandable to patients and providers through the principles of transparency and explainability. Transparency details the components of the datasets and the algorithmic decision trees so that an external expert can review them and understand what has taken place, while explainability communicates the process of how inputs lead to outputs in ways patients and providers can understand [91, 92]. These are essential for safe oversight, informed consent, and equitable care, even when full algorithmic disclosure is not possible.
Conclusions
In conclusion, this study underscores the increasing significance of artificial intelligence in the domain of geriatric mental health, encompassing theoretical frameworks, empirical research, and clinical applications. Existential questions, particularly those related to the end of life and the finite nature of human physical existence, lie at the heart of psychological theories of aging. In referencing these uniquely human existential concerns, we have discussed AI’s role as fundamentally distinct from human interpersonal connection (see Fig. 1). While AI has provided us with a sophisticated form of “artificial” intelligence capable of simulating companionship, it cannot replace the authentic relational depth of human-to-human interaction precisely because it does not share our existential experiences. In clinical practice, there have been remarkable advances in AI that allow for enhanced diagnostic accuracy and the identification of patterns across complex datasets. However, a central paradox persists: what AI can predict is often not aligned with what matters most to the individual. This paradox can be illuminated through the lens of the Theory of Computing (TOC): the TOC not only forms the foundation of AI but also provokes a reimagining of our mind’s dynamic system, where meaning, cognition, uncertainty, and existential concerns converge—offering a transformative bridge to the principles of Theory of Mind (ToM).
Future research directions
Future research should explore how the limits of AI can be constructively acknowledged to leverage its strengths without compromising the uniquely human dimensions of geriatric care—particularly those tied to existential concerns that deepen with age. A transdisciplinary research agenda is needed to investigate how computational precision can be integrated with the meaning-driven, imperfect realities of the human mind. Rather than pursuing precision as an end in itself, future work should prioritize what matters most for late-life mental health, including emotional depth and interpersonal connection. This includes empirical studies examining how older adults perceive, interact with, and are affected by AI-driven systems in real-world clinical and caregiving settings.
References
Hasan WU, Zaman KT, Wang X, Li J, Xie B, Tao C. Empowering Alzheimer’s caregivers with conversational AI: a novel approach for enhanced communication and personalized support. npj Biomed Innov. 2024;1:1–10.
Reynolds CF, Weissman MM. Transdisciplinary science and research training in psychiatry: a robust approach to innovation. JAMA Psychiatry. 2022;79:839–40.
Momota Y, Bun S, Hirano J, Kamiya K, Ueda R, Iwabuchi Y, et al. Amyloid-β prediction machine learning model using source-based morphometry across neurocognitive disorders. Sci Rep. 2024;14:7633.
Shah S, Shah M. The effects of machine learning algorithms in magnetic resonance imaging (MRI), and biomarkers on early detection of Alzheimer’s disease. Adv Biomarker Sci Technol. 2024;6:191–208.
Sims JR, Zimmer JA, Evans CD, Lu M, Ardayfio P, Sparks J, et al. Donanemab in early symptomatic Alzheimer disease: the TRAILBLAZER-ALZ 2 randomized clinical trial. JAMA. 2023;330:512–27.
Van Dyck CH, Swanson CJ, Aisen P, Bateman RJ, Chen C, Gee M, et al. Lecanemab in early Alzheimer’s disease. N Engl J Med. 2023;388:9–21.
Isaacson RS, Hristov H, Saif N, Hackett K, Hendrix S, Melendez J, et al. Individualized clinical management of patients at risk for Alzheimer’s dementia. Alzheimers Dement. 2019;15:1588–602.
Reisberg B, Shao Y, Golomb J, Monteiro I, Torossian C, Boksay I, et al. Comprehensive, individualized, person-centered management of community-residing persons with moderate-to-severe Alzheimer disease: a randomized controlled trial. Dement Geriatr Cogn Disord. 2017;43:100–17.
Premack D, Woodruff G. Does the chimpanzee have a theory of mind? Behav Brain Sci. 1978;1:515–26.
Baron-Cohen S. Precursors to a theory of mind: understanding attention in others. In: Whiten A, editor. Natural theories of mind: evolution, development and simulation of everyday mindreading. Oxford: Basil Blackwell; 1991:233–51.
Singer T, Tusche A. In: Glimcher PW, Fehr E, editors. Neuroeconomics. Elsevier; 2014:513–32.
Happé F. Theory of mind and the self. Ann N Y Acad Sci. 2003;1001:134–44.
Vogeley K, Bussfeld P, Newen A, Herrmann S, Happé F, Falkai P, et al. Mind reading: neural mechanisms of theory of mind and self-perspective. Neuroimage. 2001;14:170–81.
Bering JM. The existential theory of mind. Rev Gen Psychol. 2002;6:3–24.
Elyoseph Z, Hadar-Shoval D, Asraf K, Lvovsky M. ChatGPT outperforms humans in emotional awareness evaluations. Front Psychol. 2023;14:1199058.
Yalom ID. Existential psychotherapy. New York: Basic Books; 2020.
Tornstam L. Gerotranscendence: the contemplative dimension of aging. J Aging Stud. 1997;11:143–54.
Pressman P, van Denburg E. American Psychiatric Association; 1990.
Buber M. I and Thou (trans. Kaufmann W, Cullen P). New York: Scribner; 1958.
Yalom ID, Houts PS, Zimerberg SM, Rand KH. Prediction of improvement in group therapy: an exploratory study. Arch Gen Psychiatry. 1967;17:159–68.
Polster E, Polster M. Gestalt therapy integrated: contours of theory and practice. New York: Brunner/Mazel; 1973.
Horvath AO, Luborsky L. The role of the therapeutic alliance in psychotherapy. J Consulting Clin Psychol. 1993;61:561–73.
Hopcroft JE, Motwani R, Ullman JD. Introduction to automata theory, languages, and computation. Acm Sigact N. 2001;32:60–65.
Marks-Tarlow T, Shapiro Y, Wolf KP, Friedman HL. A fractal epistemology for a scientific psychology: bridging the personal with the transpersonal. Newcastle upon Tyne: Cambridge Scholars Publishing; 2020.
Pitt L. Introduction: Special issue on computational learning theory. Mach Learn. 1990;5:117–20. https://doi.org/10.1007/BF00116033.
Ewald WB, Ewald W. From Kant to Hilbert volume 1: a source book in the foundations of mathematics. Vol. 1, OUP Oxford; 1996.
Gray R. Georg Cantor and transcendental numbers. Am Math Monthly. 1994;101:819–32.
Gödel K. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik Phys. 1931;38:173–98.
Turing AM. On computable numbers, with an application to the Entscheidungsproblem. Proc Lond Math Soc. 1937;s2-42:230–65. https://doi.org/10.1112/plms/s2-42.1.230.
Hofstadter DR. Gödel, Escher, Bach: an eternal golden braid. New York: Basic Books; 1999.
Turing AM. On computable numbers, with an application to the Entscheidungsproblem. A correction. Proc Lond Math Soc. 1938;2:544–6.
DeSouza DD, Robin J, Gumus M, Yeung A. Natural language processing as an emerging tool to detect late-life depression. Front Psychiatry. 2021;12:719125.
Hatton CM, Paton LW, McMillan D, Cussens J, Gilbody S, Tiffin PA. Predicting persistent depressive symptoms in older adults: a machine learning approach to personalised mental healthcare. J Affect Disord. 2019;246:857–60.
Lin Y, Liyanage BN, Sun Y, Lu T, Zhu Z, Liao Y, et al. A deep learning-based model for detecting depression in senior population. Front Psychiatry. 2022;13:1016676.
Sau A, Bhakta I. Predicting anxiety and depression in elderly patients using machine learning technology. Healthc Technol Lett. 2017;4:238–43.
Song YLQ, Chen L, Liu H, Liu Y. Machine learning algorithms to predict depression in older adults in China: a cross-sectional study. Front Public Health. 2025;12:1462387.
Abdollahi H. Artificial emotional intelligence in socially assistive robots, University of Denver. IEEE Trans Affect Comput. 2023;14:2020–32.
Montemayor C, Halpern J, Fairweather A. In principle obstacles for empathic AI: why we can’t replace human empathy in healthcare. AI Soc. 2022;37:1353–9.
Nejadshamsi S, Karami V, Ghourchian N, Armanfard N, Bergman H, Grad R, et al. Development and feasibility study of HOPE model for prediction of depression among older adults using Wi-Fi-based motion sensor data: machine learning study. JMIR Aging. 2025;8:e67715.
Alberdi A, Weakley A, Schmitter-Edgecombe M, Cook DJ, Aztiria A, Basarab A, et al. Smart home-based prediction of multidomain symptoms related to Alzheimer’s disease. IEEE J Biomed Health Inform. 2018;22:1720–31.
Cummins N, Matcham F, Klapper J, Schuller B. In: Artificial intelligence in precision health. San Diego, CA: Elsevier; 2020:231–55.
Low DM, Bentley KH, Ghosh SS. Automated assessment of psychiatric disorders using speech: a systematic review. Laryngoscope Investig Otolaryngol. 2020;5:96–116.
Chou Y-H, Lin C, Lee S-H, Lee Y-F, Cheng L-C. User-friendly Chatbot to mitigate the psychological stress of older adults during the COVID-19 pandemic: development and usability study. JMIR Formative Res. 2024;8:e49462.
Denecke K, Vaaheesan S, Arulnathan A. A mental health chatbot for regulating emotions (SERMO)-concept and usability test. IEEE Trans Emerg Top Comput. 2020;9:1170–82.
Oh K-J, Lee D, Ko B, Choi H-J. In: 2017 18th IEEE International Conference on Mobile Data Management (MDM). IEEE; 2017:371–5.
Lee H, Chung MA, Kim H, Nam EW. The effect of cognitive function health care using artificial intelligence robots for older adults: systematic review and meta-analysis. JMIR Aging. 2022;5:e38896.
Schuller D, Schuller BW. The age of artificial emotional intelligence. Computer. 2018;51:38–46.
Fogelson DM, Rutledge C, Zimbro KS. The impact of robotic companion pets on depression and loneliness for older adults with dementia during the COVID-19 pandemic. J Holist Nurs. 2022;40:397–409.
Kang HS, Makimoto K, Konno R, Koh IS. Review of outcome measures in PARO robot intervention studies for dementia care. Geriatr Nurs. 2020;41:207–14.
De Carolis B, et al. In: Proceedings of ACHI: The Thirteenth International Conference on Advances in Computer-Human Interactions. Nice, France: IARIA; 2020:452–7.
Tanioka T, Yokotani T, Tanioka R, Betriana F, Matsumoto K, Locsin R, et al. Development issues of healthcare robots: compassionate communication for older adults with dementia. Int J Environ Res Public Health. 2021;18:4538.
Roesler E, Manzey D, Onnasch L. A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction. Sci Robot. 2021;6:eabj5425.
Tanveer M, Richhariya B, Khan RU, Rashid AH, Khanna P, Prasad M, et al. Machine learning techniques for the diagnosis of Alzheimer’s disease: a review. ACM Trans Multimed Comput Commun Appl. 2020;16:Article 30. https://doi.org/10.1145/3344998.
Lovejoy CA. Technology and mental health: the role of artificial intelligence. Eur Psychiatry. 2019;55:1–3.
Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689.
Gaonkar B, Cook K, Macyszyn L. Ethical issues arising due to bias in training AI algorithms in healthcare and data sharing as a potential solution. AI Ethics J. 2020;1. https://doi.org/10.47289/AIEJ20200916.
Graham S, Depp C, Lee EE, Nebeker C, Tu X, Kim HC, et al. Artificial intelligence for mental health and mental illnesses: an overview. Curr Psychiatry Rep. 2019;21:1–18.
Khanna S, Srivastava S. Patient-centric ethical frameworks for privacy, transparency, and bias awareness in deep learning-based medical systems. Appl Res Artif Intell Cloud Comput. 2020;3:16–35.
Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLoS Med. 2018;15:e1002689.
Cartwright-Smith L, Gray E, Thorpe JH. Health information ownership: legal theories and policy implications. Vand J Ent Tech L. 2016;19:207–597.
Doraiswamy PM, Blease C, Bodner K. Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif Intell Med. 2020;102:101753.
Safdar NM, Banja JD, Meltzer CC. Ethical considerations in artificial intelligence. Eur J Radiol. 2020;122:108768.
Adamopoulou E, Moussiades L. In: Artificial intelligence applications and innovations: 16th IFIP WG 12.5 International Conference, AIAI 2020. Cham: Springer; 2020:373–83.
Koutsouleris N, Hauser TU, Skvortsova V, De Choudhury M. From promise to practice: towards the realisation of AI-informed mental health care. Lancet Digital Health. 2022;4:e829–e840.
Hung L, Zhao Y, Alfares H, Shafiekhani P. Ethical considerations in the use of social robots for supporting mental health and wellbeing in older adults in long-term care. Front Robot AI. 2025;12:1560214.
Alhuwaydi AM. Exploring the role of artificial intelligence in mental healthcare: current trends and future directions–a narrative review for a comprehensive insight. Risk Manag Healthc Policy. 2024;17:1339–48.
Hall MA, Schulman KA. Ownership of medical information. JAMA. 2009;301:1282–4.
Montgomery SA, Asberg M. A new depression scale designed to be sensitive to change. Br J Psychiatry. 1979;134:382–9. https://doi.org/10.1192/bjp.134.4.382.
Singh V, Sarkar S, Gaur V, Grover S, Singh OP. Clinical practice guidelines on using artificial intelligence and gadgets for mental health and well-being. Indian J Psychiatry. 2024;66:S414–S419.
Bell CC, Mehta H. The misdiagnosis of black patients with manic depressive illness. J Natl Med Assoc. 1980;72:141–5.
Coleman D, Baker F. Misdiagnosis of schizophrenia in older, black veterans. J Nerv Ment Dis. 1994;182:527–8.
DeSouza F, Parker CB, Spearman-McCarthy EV, Duncan GN, Black RMM. Coping with racism: a perspective of COVID-19 church closures on the mental health of African Americans. J Racial Ethn Health Disparities. 2021;8:7–11. https://doi.org/10.1007/s40615-020-00887-4.
Hollar MC. The impact of racism on the delivery of health care and mental health services. Psychiatr Q. 2001;72:337–45.
Jimenez DE, Park M, Rosen D, Joo JH, Garza DM, Weinstein ER, et al. Centering culture in mental health: differences in diagnosis, treatment, and access to care among older people of color. Am J Geriatr Psychiatry. 2022;30:1234–51. https://doi.org/10.1016/j.jagp.2022.07.001.
Sashidharan SP, Francis E. Racism in psychiatry necessitates reappraisal of general procedures and Eurocentric theories. BMJ. 1999;319:254.
Williams DR, Williams-Morris R. Racism and mental health: the African American experience. Ethn Health. 2000;5:243–68. https://doi.org/10.1080/713667453.
Baclic O, Tunis M, Young K, Doan C, Swerdfeger H, Schonfeld J. Challenges and opportunities for public health made possible by advances in natural language processing. Can Commun Dis Rep. 2020;46:161–8. https://doi.org/10.14745/ccdr.v46i06a02.
Grother P, Ngan M, Hanaoka K. Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects. National Institute of Standards and Technology Interagency/Internal Report 8280, Gaithersburg, MD 2019.
Kim G, Aguado Loi CX, Chiriboga DA, Jang Y, Parmelee P, Allen RS. Limited English proficiency as a barrier to mental health service use: a study of Latino and Asian immigrants with psychiatric disorders. J Psychiatr Res. 2011;45:104–10. https://doi.org/10.1016/j.jpsychires.2010.04.031.
Ayala FJ. Cloning humans? Biological, ethical, and social considerations. Proc Natl Acad Sci USA. 2015;112:8879–86. https://doi.org/10.1073/pnas.1501798112.
Kass LR. The wisdom of repugnance: why we should ban the cloning of humans. N Repub. 1997;216:17–26.
Spurgeon B. France bans reproductive and therapeutic cloning. BMJ. 2004;329:130 https://doi.org/10.1136/bmj.329.7458.130-d.
Bukowski M, Farkas R, Beyan O, Moll L, Hahn H, Kiessling F, et al. Implementation of eHealth and AI integrated diagnostics with multidisciplinary digitized data: are we ready from an international perspective? Eur Radiol. 2020;30:5510–24. https://doi.org/10.1007/s00330-020-06874-x.
Floridi L, Luetge C, Pagallo U, Schafer B, Valcke P, Vayena E, et al. Key ethical challenges in the European medical information framework. Minds Mach. 2019;29:355–71. https://doi.org/10.1007/s11023-018-9467-4.
Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inf Assoc. 2020;27:491–7. https://doi.org/10.1093/jamia/ocz192.
Fanni R, Steinkogler VE, Zampedri G, Pierson J. Enhancing human agency through redress in Artificial Intelligence Systems. AI Soc. 2023;38:537–47. https://doi.org/10.1007/s00146-022-01454-7.
Love CS. “Just the facts Ma’am”: moral and ethical considerations for artificial intelligence in medicine and its potential to impact patient autonomy and hope. Linacre Q. 2023;90:375–94. https://doi.org/10.1177/00243639231162431.
Saeidnia HR, Hashemi Fotami SG, Lund B, Ghiasi N. Ethical considerations in artificial intelligence interventions for mental health and well-being: ensuring responsible implementation and impact. Soc Sci. 2024;13:381.
Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health. 2024;6:1280235. https://doi.org/10.3389/fdgth.2024.1280235.
Tiribelli S. The AI ethics principle of autonomy in health recommender systems. Argumenta. 2023;16:1–18.
Matheny ME, Whicher D, Israni ST. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA. 2020;323:509–10.
Tavory T. Regulating AI in mental health: ethics of care perspective. JMIR Ment Health. 2024;11:e58493.
Acknowledgements
The authors would like to thank colleagues and collaborators in the Geriatric Psychiatry Imaging Program at the University of Pittsburgh Department of Psychiatry for their valuable insights and feedback. We are especially grateful to Dr. George Stetten and his students for their thoughtful discussions on computational psychiatry and aging.
Funding
This work was supported in part by funding from the National Institute on Aging (R37AG025516, P50AG005133, P01AG025204, R01AG082157), the National Institute of Mental Health (T32MH019986, T32MH119168, P30MH133399), the National Institute on Minority Health and Health Disparities (R01MD012610), and the National Institute of Diabetes and Digestive and Kidney Diseases (P30DK111024).
Author information
Contributions
Drs. Mizuno and Aizenstein conceptualized and led the manuscript development. Dr. Mizuno contributed to the Theory of Mind discussion and drafted the initial and final versions of the manuscript. Dr. Erickson provided expertise on AI and robotics, contributed to the interpretation of technological applications in geriatric psychiatry, and reviewed and edited the manuscript. Dr. Jimenez contributed clinical and cultural perspectives on geriatric mental health and aging and assisted in refining the discussion of applied and ethical considerations. Dr. Aizenstein provided overall supervision, contributed to the integration of computational and clinical perspectives, and critically revised the manuscript for intellectual content. All authors reviewed and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Mizuno, A., Erickson, Z., Jimenez, D.E. et al. AI in geriatric psychiatry: precision meets human experience. Neuropsychopharmacol. (2026). https://doi.org/10.1038/s41386-026-02328-y