Introduction

Recent advances in AI in geriatric psychiatry

There is no question that recent advances in computational tools, particularly artificial intelligence (AI), have had a significant impact on both medical research and everyday life. In our globally aging society, AI offers hope, guiding us toward managing societal costs and enabling personalized and precision medicine. As we embrace unprecedented technological advancements, important questions arise: Where are these powerful tools taking us? Are we heading in the right direction? Are we investing our efforts in research questions that hold genuine meaning for humanity? What clinical value do we gain from a modest improvement, such as a 5% increase in predictive accuracy for cognitive decline in Alzheimer’s disease (AD) models? AI is increasingly demonstrating emotional intelligence, previously believed to be a uniquely human trait central to our humanity. For example, AI can be a supportive tool for caregivers of AD patients, not only by providing relevant medical information but also by offering emotional support [1]. However, does this imply that human interactions, essential for addressing existential concerns about meaning, connection, and purpose, will eventually be replaced by AI? This review explores transdisciplinary* (*Transdisciplinary science (Reynolds and Weissman, 2022 [2]): an integrative approach that transcends traditional disciplinary boundaries and involves the co-creation of knowledge across various fields. While interdisciplinary and multidisciplinary science utilize methods and insights from multiple disciplines to address complex problems, transdisciplinary research aims to synthesize and transform perspectives into entirely new frameworks of understanding and application.) perspectives on these critical questions by examining the relationships between computational precision and holistic human experiences.

Juxtaposition – precision to imperfect mind

As the global geriatric population grows rapidly, there is an urgent need to accelerate our understanding of late-life mental health and to develop innovative approaches to address its challenges. A critical gap in current research lies in reconciling the inherently subjective and undecidable nature of existential meaning (e.g., “If I lose my memory, who am I?”, “Why do I feel so empty about my life?”) with deterministic, biomarker-based models of disease. For instance, computational approaches can enhance the detection of slight increases in amyloid plaques at earlier disease stages [3, 4], enabling timely interventions with amyloid-targeted infusion therapies [5, 6]. These early-stage applications hold promise for allowing individuals to benefit from treatments before amyloid accumulation becomes excessive and irreversible, a factor contributing to the currently modest effectiveness of such therapies in slowing disease progression. In contrast to such advanced computational approaches, personalized attention and behavioral management demonstrate a transformative impact on quality of life, aligning more closely with existential psychology by addressing the emotional, social, and existential dimensions of living with AD [7, 8]. This juxtaposition between computational precision and the holistic human experience highlights a broader paradox: while AI excels at analyzing data and identifying patterns, it lacks the deeply human capacity for an interpersonal existential relationship. These fundamental differences between AI’s computational power and human experience underscore the need for a structured framework to navigate such complexities.

New avenue to geriatric psychiatry

The Theory of Computing (TOC) provides a powerful language for addressing these dualities and contradictions. Rooted in mathematical set theory and propositional logic, it offers a robust foundation for integrating computational approaches with human-centered care. By bridging these paradigms, we can better navigate the multifaceted challenges of geriatric mental health, ensuring that technological advancements are harmonized with the deeply human aspects of aging. We examine these relationships through a conceptual framework that highlights intersections between theory and practice across the human (mind/body) and machine (computer/AI) domains, evaluating how theoretical constructs translate into practical advances for late-life mental health care and generating new perspectives on geriatric psychiatry by contrasting the two approaches. The aim of this paper is to develop a conceptual framework, grounded in the Theory of Mind (ToM) and TOC, for understanding both the capabilities and limitations of AI in addressing the psychological and existential dimensions of geriatric psychiatry.

Structure of this review

We begin by examining the unique psychological and existential dimensions of geriatric psychiatry through the lens of ToM, emphasizing the fundamental difference between authentic human connection and AI-simulated interaction. Next, we apply TOC to explore how computational principles can translate the complexity of human cognition and emotion into tractable, operational models. Building on this foundation, we transition from theory to practice by discussing the role of robotics and AI-based clinical applications in geriatric psychiatry. Figures 1 and 2 serve not only as summaries but also as conceptual guides for navigating the review. Evaluating the performance of artificial emotional intelligence against human capacities lies beyond the scope of this paper; instead, we provide a conceptual platform for critically engaging with this emerging landscape from transdisciplinary perspectives. Here, the “adequacy” of AI is defined not as the replication of all human experiences but as the capacity of AI to meet the specific therapeutic goals of geriatric psychiatry, particularly in addressing the psychological and existential dimensions of late-life mental health. Throughout, we highlight both the promise and the limitations of computational tools in geriatric psychiatry, where existential concerns often intensify with age.

Fig. 1: Conceptual model contrasting authentic human connection and AI-simulated interaction.

Conceptual framework illustrating the qualitative differences between authentic human connection (left) and AI-simulated interaction (right) in the context of geriatric psychiatry. Human connection includes existential and relational dimensions (I–Thou), whereas AI operates within algorithmic limits, offering computational precision but lacking lived experience, reciprocity, and authentic empathy.

Fig. 2: Integrating theory of mind and theory of computing to inform geriatric psychiatry.

Framework connecting concepts from Theory of Mind (ToM) and Theory of Computing (TOC) to their implications in geriatric psychiatry. *P and NP problems: P problems can be solved quickly, whereas NP problems can be verified quickly but may be much harder to solve; whether P equals NP remains an open question. “P” stands for polynomial time, and “NP” stands for nondeterministic polynomial time, meaning that NP problems can be solved in polynomial time on a hypothetical nondeterministic machine, essentially by guessing a solution and then verifying it.

Theory of mind in geriatric psychiatry

Defining and navigating a journey of existential questions

Theory of Mind (ToM) is a foundational cognitive ability underlying emotional intelligence and human connection [9]. While ToM is often discussed in the context of understanding others’ mental states (i.e., intentions, desires, beliefs, and emotions) [10, 11], the ability to recognize and interpret one’s own mental states [12, 13] is equally critical for meaningful interpersonal relationships and well-being in general. This self-awareness represents a form of meta-cognition (thinking about one’s own thinking) and allows individuals to observe, evaluate, and monitor their internal experiences. Such awareness becomes particularly relevant to mental health in aging, which presents profound philosophical challenges, often taking the form of existential questions. These inquiries manifest in various forms, including questions about the meaning of life (“What is the purpose of my/our existence?”), identity (“What defines my existence?”), mortality (“What is the meaning of death in life?”), and authenticity (“What does being true to myself mean?”). While existential questions may arise throughout life, they often intensify in later years, shaped by the unique experiences and transitions of aging.

These existential reflections rely on ToM; without the ability to represent and examine one’s own (and others’) mental states, such complex, reflective questions may not arise or be meaningfully explored [14]. Navigating these timeless existential queries can lead to a deep sense of fulfillment and an appreciation for life, yet it may also precipitate emotional and psychological struggles. In this sense, ToM is not only essential for the formation of existential questions—it also serves as a tool for processing and potentially “resolving” them. While ToM is central to the generation and resolution of existential questions, the growing presence of AI invites a reexamination of these uniquely human capacities. Some forms of emotional intelligence—particularly emotional awareness, such as identifying emotions, articulating associated thoughts and physical sensations, and understanding their underlying causes—have been increasingly modeled in AI systems, in some cases even outperforming the general population in emotional labeling tasks [15]. However, unlike humans, AI does not possess subjective experience or embodiment and thus does not engage with existential questions in the same way (Fig. 1). AI cannot feel the emotions it identifies, nor can it engage with meaning, mortality, or authenticity.

Nevertheless, the capacity of AI to emulate certain aspects of human emotional and cognitive functions serves as a reflective medium, enabling a deeper comprehension of human nature. Engaging with machines that replicate human-like mental processes, yet lack consciousness or existential depth, compels us to reassess and revalue our own abilities for self-reflection, emotional complexity, and existential exploration. Hence, AI not only poses challenges but also augments our appreciation of the uniquely human role that ToM plays in addressing life’s profound questions.

Existential questions in mind-body medicine

Irvin Yalom, a pioneering figure in existential psychotherapy, has profoundly shaped our understanding of how existential questions intersect with psychological well-being. His therapeutic framework helps individuals confront and process humanity’s ultimate concerns (death, freedom, isolation, and meaning) with clarity and resilience. In his seminal book Existential Psychotherapy (Yalom, 1980) [16], Yalom wrote, “Death and life are interdependent: though the physicality of death destroys us, the idea of death saves us” (p. 879). His approach not only offers significant insights and guidance for psychotherapists but also resonates beyond the clinical environment, encouraging both therapists and non-therapists to engage in a deeper contemplation of what it means to live authentically amidst uncertainty.

A key tool for engaging with Yalom’s approach to existential questions is ToM: the capacity to engage with our internal experiences and embrace uncertainty. ToM enables us to observe, question, and reframe our own thoughts and beliefs. This process of introspection is essential when confronting existential concerns and the uncertainty that accompanies them. It involves reflecting not only on one’s personal history and values but also extending beyond the self to consider connections with family, friends, ancestors, future generations we may never meet, and ultimately, the universe itself (i.e., gerotranscendence [17]). This expanded perspective helps cultivate meaning and continuity, enabling individuals to tolerate ambiguity and fear in the face of mortality and isolation. It reframes later life not as a period of decline but as a unique opportunity for personal growth, the deepening of wisdom, and the attainment of inner peace. In this way, ToM functions as a bridge between cognitive insight and emotional resilience, supporting both the psychological and philosophical dimensions of mind-body medicine.

“Some day soon, perhaps in forty years, there will be no one alive who has ever known me. That’s when I will be truly dead - when I exist in no one’s memory. I thought a lot about how someone very old is the last living individual to have known some person or cluster of people. When that person dies, the whole cluster dies, too, vanishes from the living memory. I wonder who that person will be for me. Whose death will make me truly dead?” ― Love’s Executioner and Other Tales of Psychotherapy (p. 191), Yalom (1990) [18]

Integrating Yalom’s principles into geriatric psychiatry not only fosters an appreciation for the broader meaning of life for older adults but also addresses mental health challenges in late life. For instance, how can we objectively define existential concerns as they relate to late-life depression, a condition profoundly shaped by subjective experience? These discussions are vital for shaping diagnostic frameworks and treatment strategies in geriatric psychiatry. Practical steps, such as operational descriptions, offer clear and measurable approaches to address these challenges. For example, treatment protocols for late-life depression might incorporate assessments of depressive symptoms alongside structured pharmacological and psychotherapeutic objectives.

Recent advancements in AI have increasingly influenced mind-body medicine. AI-related methodologies (e.g., information-processing theory, cognitive psychology, robotics) have provided invaluable insights into psychological processes by modeling elements of human cognition, emotion, and behavior. However, we are now entering a different phase, one that requires us to pause and reevaluate the nature of AI itself and its distinctions from human experience. While AI can simulate certain emotional expressions or cognitive patterns, it does not share our existential concerns or personal narratives. The relational “we” does not emerge in human-AI interactions. Philosopher Martin Buber described this profound human connection as the “I–Thou” relationship, a dialogical bond rooted in mutual presence and recognition [19]. Such a relationship cannot exist with AI, no matter how advanced. It is therefore essential to remain mindful of this fundamental difference between engaging with a person and interacting with a machine. AI can serve as a powerful tool, but it must not be mistaken for a replacement for human connection. The (partial) dissolution of self-boundaries to form a unified relational entity, what we often describe as “we” (or I–Thou), is a cornerstone of deep interpersonal experience. Recognizing this human-AI distinction invites ongoing reflection on how we integrate AI into mental health care without losing sight of what makes human connection irreplaceable.

The human-AI distinction is particularly pertinent within therapeutic contexts. Drawing upon Irvin Yalom’s seminal work on group psychotherapy [20], as well as Polster and Polster’s Gestalt Therapy (1973) [21], it becomes evident that shared vulnerability and existential exploration are integral to the healing process. Yalom’s work reminds us that the curative power of therapy often lies not in offering the “right” answer or prediction, but in the authenticity of interpersonal encounters. This notion is intimately connected to the concept of the therapeutic alliance, a fundamental component of clinical practice that denotes the trust, rapport, and collaborative relationship between therapist and client [22]. A robust therapeutic alliance is consistently associated with favorable treatment outcomes. While AI may offer emotionally validating responses, such as comforting words or the mirroring of emotional states for temporary relief, it lacks the capacity to deliver the profound psychological challenges that trained psychotherapists, peers in group therapy, or existential experiences such as confronting mortality can present. These challenges, albeit often discomforting, are essential opportunities for growth, transformation, and enduring well-being, rather than merely transient emotional comfort.

In this context, the application of AI in mind-body medicine should be approached with careful consideration and deliberate intent. Similar to other technological advancements, its value is not solely determined by its complexity, but rather by the manner in which it is utilized—as an enhancement of human capabilities, rather than a replacement. A more detailed discussion of AI applications in geriatric psychiatry practice is provided later in this review. Adopting this viewpoint enables us to leverage the strengths of AI while simultaneously fostering a deeper understanding of the essence of human existence.

Theory of computing provides a theory of mind

The theory of computing (TOC) originates from mathematical logic and theoretical computer science, grounded in foundational work by Alan Turing, Alonzo Church, and others in the early 20th century. It encompasses the mathematical and algorithmic principles underlying computation and information processing. This theoretical framework addresses critical questions about the nature of computability, the efficiency of computational processes, and the categories of problems that computational systems can resolve [23,24,25]. In recent years, TOC has increasingly been applied to cognitive and mental health research, particularly through its influence on computational psychiatry, cognitive modeling, and machine learning. In mental health research, the TOC serves as a foundation for designing computational models, analyzing complex data, and uncovering patterns in mental health phenomena. By leveraging these principles, psychiatry research can advance in several ways: 1) enhancing the accuracy of predictions related to symptoms and conditions, 2) interpreting high-dimensional and multi-modal datasets more effectively, and 3) developing models to simulate and predict patient outcomes.

Geriatric mental health exists at the intersection of mind and body, where biological changes and psychological experiences converge. Existential questions about life’s purpose, individual identity, and authenticity are brought into sharper focus by the reality of mortality, as previously discussed. TOC offers a compelling framework for examining the complex interplay between mind and body by conceptualizing them as a unified, abstract computational system [23]. In this framework, the mind and brain are not separate entities but different perspectives (i.e., languages) on the same computational system. Both the psychological mechanisms of the mind and the neural processing of the brain can be described as functions or algorithms. Just as there are multiple ways to express the same function, mind and brain can be thought of as different expressions (or representations) of the same underlying person. Key relationships between TOC and mental health are listed in Fig. 2. In this section, we apply concepts from mathematical logic, such as Cantor’s diagonalization and Gödel’s incompleteness theorem (introduced in detail later), not to evaluate empirical performance but to use these theorems as conceptual tools for delineating the structural limits of formal systems, including computational models of the mind. Grounded in the traditions of theoretical computer science and analytic philosophy, this approach positions such proofs as boundary markers of what is possible in principle, rather than as empirical findings.
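
To make the idea of “multiple expressions of the same function” concrete, consider the minimal sketch below (our illustration only; the choice of function is arbitrary and not drawn from the cited framework): two syntactically different programs that denote exactly the same input-output mapping, just as mind-level and brain-level descriptions may denote one underlying process.

```python
# Illustrative analogy: two different "expressions" of one underlying function.
# Both compute n!, but each describes the computation in a different "language":
# one recursive (self-referential), one iterative (step-by-step state change).

def factorial_recursive(n: int) -> int:
    """Recursive description: n! defined in terms of itself."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n: int) -> int:
    """Iterative description: n! as state accumulating over steps."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# The two descriptions are extensionally identical: same input-output behavior.
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
```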

TOC has its roots in the work of Georg Cantor, a 19th-century mathematician who introduced the distinction between countable and uncountable infinities, such as the difference between natural numbers (e.g., 1, 2, 3) and real numbers (e.g., −3.4, √49, π) [26, 27]. He used an elegant argument, known as ‘diagonalization,’ to show that no counting scheme can capture all real numbers, revealing that there are multiple levels of infinity. These abstract ideas help us conceptualize how personal experiences and emotions can be vast and immeasurable. Diagonalization serves as a metaphor for the way subjective human experiences (emotions, memories, or existential concerns) cannot be fully captured or predicted by computational or algorithmic models, just as not all real numbers can be listed by an algorithm. This idea also parallels Martin Buber’s distinction (originally published in 1923) [19] between the ‘I–Thou’ and ‘I–It’ relationships, where the infinite, transcendent quality of genuine human connection (I–Thou) stands in contrast to the countable, transactional nature of objectified interaction (I–It). In this sense, diagonalization metaphorically illustrates the limits of algorithmic reasoning in capturing the emergent, infinite depth of human connection that defines I–Thou relationships. This framework underscores the idea that AI and computational tools, no matter how advanced, have limits when it comes to modeling the richness and variability of human mental life. That variability expands even further in the realm of interpersonal relationships and our appreciation of the psychological complexity of others. This concept resonates deeply with mental health care, where subjective experiences often defy quantification and objective measurement.
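
For readers who want to see the diagonal argument itself, here is a hedged, finite illustration (a sketch of the construction only, not a model of human experience): given any purported enumeration of binary sequences, flipping the n-th bit of the n-th sequence produces a sequence the enumeration provably misses.

```python
# Cantor's diagonalization, illustrated on finite prefixes of binary sequences.
# Given any purported complete enumeration of 0/1 sequences, flipping the n-th
# bit of the n-th sequence yields a sequence absent from the enumeration.

def diagonal_counterexample(enumeration: list[list[int]]) -> list[int]:
    """Return a sequence differing from enumeration[n] at position n, for all n."""
    return [1 - enumeration[n][n] for n in range(len(enumeration))]

listed = [
    [0, 0, 0, 0],  # sequence 0
    [1, 1, 1, 1],  # sequence 1
    [0, 1, 0, 1],  # sequence 2
    [1, 0, 0, 1],  # sequence 3
]
diag = diagonal_counterexample(listed)
print(diag)  # [1, 0, 1, 0]: differs from the n-th row in the n-th position
assert all(diag[n] != listed[n][n] for n in range(len(listed)))
```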

Gödel’s incompleteness theorem [28], published in the 1930s, showed a limitation of formal systems by demonstrating that certain true but unprovable statements, known as undecidable statements, exist within any sufficiently complex system (e.g., arithmetic or set theory). Using a diagonalization technique, Gödel also showed that such a system cannot prove its own consistency, highlighting the inherent incompleteness of logic (see also Turing’s Halting Problem [29], which shows that no algorithm can universally determine whether another program will finish running or loop forever). Gödel’s theorem serves as a powerful metaphor for psychiatry: just as formal systems cannot capture every truth, computational models fall short in representing the full complexity of human behavior. In geriatric mental health, existential concerns parallel the concept of the undecidable statement; questions about meaning, purpose, and mortality resist definitive resolution, regardless of computational power. In late life, as a person’s sense of self and life meaning naturally evolve, this underscores the importance of attending to individual lived experiences that cannot be reduced to data or predictions. People are complex systems and therefore inherently unpredictable, shaped by relationships, history, and existential concerns. Gödel’s insight urges us to embrace a more human-centered approach that respects what lies beyond algorithms.
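
The self-referential reasoning shared by Gödel’s and Turing’s results can be sketched in code. In the outline below, `halts` is a hypothetical function (the theorem’s point is precisely that it cannot exist), so this is a conceptual illustration of the proof rather than an executable decision procedure.

```python
# The classic proof sketch that no universal halting decider can exist.
# Suppose, for contradiction, that halts(program, argument) always returns
# True if program(argument) eventually stops and False if it runs forever.

def halts(program, argument) -> bool:        # hypothetical decider; cannot exist
    raise NotImplementedError("Turing proved no such total decider exists")

def paradox(program):
    """Do the opposite of whatever the decider predicts about program(program)."""
    if halts(program, program):
        while True:                          # predicted to halt -> loop forever
            pass
    return None                              # predicted to loop -> halt at once

# Feeding paradox to itself yields a contradiction either way:
# if halts(paradox, paradox) is True, paradox(paradox) loops (so it never halts);
# if it is False, paradox(paradox) halts immediately. Hence halts cannot exist.
```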

The book Gödel, Escher, Bach (often referred to as GEB), originally published in 1979 by Douglas Hofstadter [30], explores how Gödel’s meditations on patterns and infinity also relate to music (Bach) and visual art (Escher). GEB introduced many readers to the ideas of Alan Turing and Noam Chomsky, highlighting overlaps between computation, cognition, and creativity. Chomsky’s hierarchy maps types of language to the complexity of the machines needed to process them, from simple regular expressions handled by finite-state automata (i.e., machines with no memory that operate state-by-state) to context-free languages requiring memory (i.e., pushdown automata, which use a stack to track and process hierarchical or nested elements in language), culminating in the abstract Turing machine, the theoretical device that defines the limits of what can be computed algorithmically. Importantly, Chomsky’s language-theory framework helps explain how the brain organizes language, shaping mental representations and personal narratives. When individuals confront existential questions and concerns, their meaning-making involves interpreting their past, present, and future, and this process resembles the high-level languages in Chomsky’s hierarchy: languages with access to context that accept unrestricted, self-referential input. This capacity, supported by functions of the prefrontal cortex, enables infinite language expressivity, setting us apart from other species and highlighting the challenge of translating lived experience into computational algorithms.
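
To ground the hierarchy in a concrete case, consider the textbook language aⁿbⁿ (n a’s followed by n b’s). The sketch below is our own illustration, not from the cited sources: a memoryless, finite-state style check can verify the shape of such a string but cannot count, whereas an explicit stack (the defining feature of a pushdown automaton) recognizes the language exactly.

```python
# Memoryless vs. stack-based recognition of the nested language {a^n b^n}.
# A finite-state automaton cannot count unboundedly; a pushdown automaton's
# stack tracks the nesting exactly.

def finite_state_approximation(s: str) -> bool:
    """What a memoryless automaton CAN check: shape a...ab...b, not the counts."""
    seen_b = False
    for ch in s:
        if ch == "b":
            seen_b = True
        elif seen_b:                # an 'a' appearing after a 'b': wrong shape
            return False
    return True                     # wrongly accepts "aabbb": no way to count

def pushdown_recognizer(s: str) -> bool:
    """Context-free check for a^n b^n using an explicit stack."""
    stack, seen_b = [], False
    for ch in s:
        if ch == "a":
            if seen_b:
                return False        # 'a' after the b-phase began
            stack.append("a")       # remember one pending 'a'
        else:                       # ch == 'b'
            seen_b = True
            if not stack:
                return False        # more b's than a's
            stack.pop()             # match this 'b' against a stored 'a'
    return not stack                # accept only if every 'a' was matched

assert pushdown_recognizer("aaabbb") and not pushdown_recognizer("aabbb")
assert finite_state_approximation("aabbb")   # the memoryless check cannot tell
```

The stack here plays the same structural role the review attributes to context in narrative meaning-making: it is what lets a system keep track of nested, self-referential structure rather than reacting symbol by symbol.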

TOC offers a foundational framework through the Turing machine, which models how a simple set of rules (a finite controller) can interact with an open-ended set of possibilities (an infinite tape). According to the Church-Turing Hypothesis [31], any process that can be described algorithmically can, in principle, be simulated by this type of machine. Some of these computational models are nondeterministic, meaning they can follow multiple potential paths to reach a solution rather than a single, predictable route. This concept aptly captures the inherently unpredictable nature of the human mind, where thoughts, emotions, and decisions rarely proceed along a single fixed path. In geriatric psychiatry, such non-determinism resonates with the way individuals navigate existential concerns, where no single “correct” resolution exists. Client-therapist work in this context often embraces heuristics rather than fixed algorithms, adapting approaches to the client’s evolving life narrative, values, and circumstances. Understanding human cognition as a system that generates flexible strategies for intractable problems underscores the need for therapeutic models that can accommodate ambiguity, multiplicity, and change over time.
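
The “guess a solution, then verify it” character of nondeterminism (see the P/NP note accompanying Fig. 2) can also be made concrete. The sketch below uses subset-sum, a standard NP problem, purely as an illustration of why verification is cheap even when deterministic search is not; the example data are arbitrary.

```python
# Nondeterminism as "guess and verify", illustrated with subset-sum (an NP problem).
from itertools import combinations

def verify(target: int, guess: tuple[int, ...]) -> bool:
    """Checking a proposed certificate is fast: one sum and one comparison."""
    return sum(guess) == target

def deterministic_search(numbers: list[int], target: int):
    """A deterministic machine tries branches one by one: up to 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(target, subset):   # a nondeterministic machine would
                return subset            # "guess" this branch and verify it once
    return None

print(deterministic_search([3, 34, 4, 12, 5, 2], 9))   # -> (4, 5)
```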

While machine learning and artificial intelligence present unique challenges—discussed earlier in this review—TOC remains a powerful lens for examining both the capabilities and limits of these systems. Beyond its technical applications, TOC also provides philosophical and mathematical scaffolding for rethinking the structure, function, and representation of the mind itself, as explored through the lens of Theory of Mind (ToM). Crucially, TOC helps illuminate the inherent value of the I–Thou relationship—an authentic, mutual form of human connection that exists outside of countable, transactional frameworks. This serves as a reminder that not all forms of meaning or connection can be encoded, modeled, or predicted. In geriatric psychiatry, TOC concepts such as non-determinism and undecidability help clarify the limits of computational prediction. While AI may accurately forecast the likelihood of cognitive decline or depressive symptoms, it cannot determine how such changes will affect an individual’s sense of meaning, purpose, or peace in the final stages of life. These concerns are deeply personal, context-dependent, and often inherently unknowable—mirroring the logical principle of undecidability.

AI and robotics for geriatric psychiatry

The integration of AI and robotics into geriatric psychiatry offers innovative tools for assessment, monitoring, and intervention, yet demands critical examination through the lens of computational precision versus holistic human experience. AI, particularly machine learning, excels at analyzing complex datasets to identify risk factors and create “digital biomarkers” for conditions like depression and dementia [32,33,34,35,36]. While this algorithmic approach enhances early detection and objective tracking, its foundation in TOC means it processes data devoid of subjective, lived experience. This precision operates within an “I-It” paradigm, offering valuable transactional insights but lacking a full comprehension of the existential questions central to aging and the “imperfect mind”.

From a theoretical standpoint, the deployment of AI in geriatric psychiatry reflects the intersection of computational psychiatry, emotional intelligence, and algorithmic decision-making. Diagnostic models operate within formal frameworks derived from computer science—such as probabilistic automata and neural networks—and are constrained by data quality and algorithmic complexity [33, 34]. These tools do not replicate human cognition but approximate clinical reasoning through pattern recognition and statistical inference. At the same time, emotionally intelligent (EI) agents reflect computational implementations of EI theory, with the goal of approximating empathy and responsiveness in machine form [37, 38].
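
As a deliberately simplified illustration of this pattern-recognition paradigm, the sketch below (assuming the widely used scikit-learn API; the feature names and data are synthetic placeholders, not drawn from the cited studies) trains and cross-validates a linear classifier on mock “digital biomarker” features.

```python
# Minimal sketch of statistical inference over "digital biomarkers" (synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical features per person: [sleep hours, gait speed, speech pause rate]
X = rng.normal(loc=[6.5, 1.0, 0.2], scale=[1.0, 0.2, 0.05], size=(n, 3))
# Synthetic labels: risk loosely tied to less sleep and longer speech pauses
risk = 0.8 * (6.5 - X[:, 0]) + 12.0 * (X[:, 2] - 0.2) + rng.normal(0, 0.5, n)
y = (risk > 0).astype(int)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)   # pattern recognition, not insight
print(scores.mean())
```

Note what the pipeline optimizes: cross-validated accuracy over past data. Nothing in it represents what a particular decline in sleep or speech means to the person, which is exactly the I-It limitation discussed above.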

Machine learning and natural language processing (NLP) enable data-driven diagnosis and monitoring by analyzing speech, sensor data, and health records. Studies have shown that AI models can identify depression in older adults with moderate to high accuracy, using inputs like sleep, frailty, voice, and Wi-Fi-based motion features [36, 39]. These digital biomarkers provide objective, continuous assessments that augment traditional clinical judgment, especially when deployed in passive sensing environments such as the home [40]. Speech-based AI offers a natural, low-burden modality for detecting affective disorders, aligning with growing interest in unobtrusive, ecological approaches to mental health [41, 42].

Conversational agents and AI-based digital therapeutics offer new tools for managing loneliness, depression, and anxiety among older adults. Chatbots built with NLP and sentiment analysis have shown high usability and engagement in older psychiatric outpatients, with some trials reporting reduced loneliness and improved emotional well-being [43,44,45].

These agents can be used asynchronously and at home, making them well-suited for seniors who face mobility or stigma-related barriers. AI also powers features in voice assistants and health apps, enabling emotion-aware interactions and daily monitoring. These systems increasingly draw on emotional intelligence (EI) theory, attempting to simulate empathetic responses through tone, dialogue, and adaptive feedback—a practical application of EI in software [46, 47].
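
To show what such simulated attunement amounts to computationally, here is a toy-scale sketch (the word lists and response templates are hypothetical inventions of ours; production systems use learned models rather than fixed lexicons). A sentiment score selects a response style: an adaptation of form, not felt empathy.

```python
# Toy sketch of "emotion-aware" response selection via lexicon-based sentiment.
# Real systems use learned models; this only shows the shape of the mechanism.

NEGATIVE = {"lonely", "sad", "afraid", "tired", "empty"}
POSITIVE = {"happy", "grateful", "calm", "hopeful"}

def sentiment_score(utterance: str) -> int:
    """Positive minus negative word count: a crude proxy for affect."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def respond(utterance: str) -> str:
    """Pick a response template based on the detected affect."""
    score = sentiment_score(utterance)
    if score < 0:       # detected distress: choose a validating template
        return "That sounds really hard. Would you like to talk more about it?"
    if score > 0:       # detected positive affect: reinforce it
        return "I'm glad to hear that. What made today feel this way?"
    return "Tell me more about how your day has been."

print(respond("I feel so lonely and tired today"))
# The system labels affect and adapts wording; it does not feel anything.
```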

Socially assistive robots (SAR) extend these benefits by providing embodied interaction. Companion robots like Paro and Joy for All pets have demonstrated reductions in depression and loneliness, especially in patients with dementia [46, 48, 49]. Robots with more human-like morphologies, such as Pepper, have been used to guide reminiscence therapy and cognitive stimulation activities [50, 51]. In these cases, the robot serves as a therapeutic interface—engaging seniors in memory tasks, conversations, or music—which has been associated with increased attention and improved mood.

Existing AI and robotic systems attempt to mimic aspects of ToM by recognizing emotional cues and responding in seemingly understanding ways, offering a semblance of connection that can be comforting for individuals. However, this simulated ToM operates fundamentally differently from genuine human empathy, which is rooted in shared lived experience and mutual vulnerability. As explored previously, AI does not share in the existential concerns of aging; it does not “feel” the emotions it identifies or engage in the I-Thou relationship. SARs can offer transient comfort or re-engage older adults with memories, yet they cannot provide the profound psychological challenges or authentic interpersonal encounters essential for deep therapeutic growth, as emphasized by Yalom. Furthermore, human therapists may not share the exact lived experiences of their patients; however, both parties are grounded in the shared reality of being human: they possess emotional embodiment rather than simulated empathy. While their existential concerns may differ in form or context, they are mutually subject to the same inescapable dimensions of human existence.

Current applications of SARs suggest that physical embodiment, even in simple robotic forms, can amplify emotional and cognitive engagement in geriatric care. Robots with anthropomorphic or animal-like features often elicit greater rapport and engagement from users than utilitarian devices [46, 52]. EI behavior, such as recognizing a user’s tone or facial expression and adjusting responses accordingly, enhances interaction quality. These observations align with broader psychological theories of trust and motivation, suggesting that emotionally responsive systems can more effectively meet the social-emotional needs of older adults.

However, the path forward requires a judicious integration of these technologies, recognizing them as valuable adjuncts rather than replacements for human-centered care. The clinical evidence shows promise for specific outcomes (e.g., improving cognitive function, reducing loneliness, alleviating depressive symptoms) [46, 48, 53] and for providing respite to human caregivers [46]. Even so, their application must be guided by an acknowledgement of their strengths alongside their current limitations. A collective goal in the research community is to leverage AI and robotics to support and enhance the holistic, empathetic, and existentially attuned care that addresses the full spectrum of human experience in geriatric psychiatry.

Ethical considerations and recommendations

From a mental health services perspective, AI has the potential to transform access, efficiency, and personalization, particularly for populations traditionally underserved by existing systems, including older adults. AI-based technologies offer scalable solutions to address mental health workforce shortages and extend care into resource-constrained settings [54]. However, their adoption raises complex ethical, clinical, and social challenges that require careful scrutiny to ensure safe, equitable, and effective use.

Key ethical concerns include data privacy and consent, bias and inequitable access, lack of transparency in data usage, empathy simulation, and unclear accountability. There are also concerns about the potential for AI to unintentionally deceive users, displace human care, or exacerbate existing disparities in access [55,56,57,58,59]. Without thoughtful design and implementation, these technologies may produce inconsistent results, reinforce discrimination, or lead to violations of privacy and conflicts over data ownership.

Data privacy and consent

Recent advancements in machine learning, deep learning, large language models (LLMs), NLP, and generative AI allow for more personalized, adaptive, and sophisticated responses to mental health needs [63, 64], but these systems also process highly sensitive personal data, including intimate conversations in which users reveal deeply personal emotions, behaviors, and thoughts [60,61,62]. They raise complex ethical and legal challenges related to privacy, data ownership, and informed consent, particularly among older adults and minoritized populations who may struggle to navigate conventional consent procedures and are at heightened vulnerability to data breaches and surveillance [65]. Deep learning models, in particular, are highly data-intensive, sometimes requiring data pooled from multiple institutions or countries, and have contributed to a growing demand for patient-derived datasets [58, 65, 66]. Moreover, the widespread use of cloud-based platforms for processing such data increases the risk of unauthorized access, underscoring the need for secure infrastructure [58, 67]. Ensuring robust data security and privacy protections is therefore paramount.

Transparent data governance frameworks and stringent informed consent processes must be prioritized to safeguard against misuse or unintended disclosure of personal health information. The ethical stakes are heightened by the nature of the data: a standardized measure, such as the Montgomery-Åsberg Depression Rating Scale (MADRS) score [68], reflects symptom severity in structured form, whereas a personal narrative conveys the lived meaning of those symptoms, including the relationships, cultural context, and existential concerns that shape a patient’s mental world. While safeguarding any form of data is essential in mental health research and clinical practice, narratives warrant particular protection. From a TOC perspective, they resemble the high-level languages in Chomsky’s hierarchy: context-rich, self-referential, and resistant to full algorithmic capture. Misappropriation of such narratives can erode trust and psychological safety, undermining the therapeutic alliance and the authentic, reciprocal connection described by Buber as the I–Thou relationship, both of which are critical in addressing late-life existential concerns. AI systems must therefore be explicitly designed to handle sensitive data responsibly and to avoid storing information on insecure or vulnerable servers. Additionally, the use of anonymized or de-identified data does not fully resolve ethical concerns, as questions about consent, data ownership, and secondary use remain, and the legal frameworks governing these issues vary widely across jurisdictions [69].

Bias and inequitable access

AI models trained on historically biased or non-representative datasets risk perpetuating and amplifying existing health disparities. Deep learning models, in particular, rely on existing datasets that often contain both implicit and explicit biases [58]. When training data disproportionately represent certain demographic groups, the resulting models may perform inadequately for individuals from underrepresented populations, leading to inaccurate diagnoses and treatment recommendations. This has been observed, for example, in the overdiagnosis of schizophrenia and underdiagnosis of affective disorders among African Americans, Afro-Caribbeans, and Latinos due to clinician prejudice and lack of contextual diagnostic analysis [70,71,72,73,74,75,76]. Similarly, NLP tools may misinterpret dialects, non-standard grammar, or culturally specific communication styles, increasing the risk of misdiagnosis or inappropriate clinical responses [77]. In geriatric psychiatry in particular, existential concerns may be perceived and expressed in diverse ways across cultures and languages, shaping how symptoms are experienced, narrated, and understood. AI systems that fail to account for this variation risk misinterpreting contextually meaningful expressions of distress or resilience, thereby limiting diagnostic accuracy and deepening existing inequities. Thus, ensuring cultural and linguistic sensitivity in both data and model design is essential for developing AI that serves the full spectrum of aging populations with equity and respect.

Facial recognition algorithms have also shown significantly lower accuracy among individuals with darker skin tones, particularly women, raising serious concerns about bias and inequitable performance across demographic groups [78]. Older adults who do not speak English may be excluded from engaging with AI-powered technologies, such as social robots, leading to missed opportunities for emotional support and further marginalization, particularly among immigrant and minoritized populations already at risk of social isolation [65]. The ubiquity of AI tools may also lead to clinicians over-relying on algorithmic outputs without critical evaluation. Because AI systems are often designed to reduce human error and enhance patient safety, this overreliance may diminish the likelihood that providers will question incorrect results, further compounding diagnostic or treatment inaccuracies [62]. These limitations underscore the critical need for inclusive data practices, rigorous fairness audits, and transparent evaluation standards throughout the AI development lifecycle, particularly when applied to diverse and vulnerable populations.

These challenges highlight that model accuracy alone is not a sufficient metric of value in geriatric mental health. The “right” answer as determined by an algorithm may not always be the answer a patient most needs—especially when it fails to account for the lived experience, cultural context, or existential meaning behind a person’s expression of distress. AI systems operate on patterns of past data, but care must accommodate unpredictable human nature. Just as mathematical proofs such as Cantor’s diagonalization or Turing’s halting problem reveal inherent limits of formal systems, these constraints should remind us that no amount of optimization can fully capture the complexity of human needs. In the context of aging and psychiatry, overreliance on such systems without reflection risks not only technical failure but ethical erosion. Some aspects of current AI can blur the distinction between human-human and human-computer relationships. Truly equitable AI must, therefore, be designed with a recognition of AI’s technical and existential limits.

Empathy simulation

Mental health care is fundamentally relational (i.e., an I-Thou relationship), relying on sensitivity to emotional cues such as tone of voice, facial expression, and body language [65]. Accurate diagnosis also depends on a thorough patient history and clear, detailed descriptions of symptoms [79]. Many psychiatrists have raised concerns about AI’s impact on the therapeutic relationship. While some see AI as a nonjudgmental tool that could reduce costs and improve access, its inability to convey genuine empathy limits its effectiveness [65]. Therapeutic rapport is built on emotional attunement and trust, qualities AI cannot replicate. Though AI can interpret some behavioral cues and simulate empathy through affective language and responsive dialogue, these tools fundamentally lack emotional comprehension, remain inadequate for accurate diagnosis, and may increase the risk of clinical error [65].

A helpful analogy arises in the ethical and moral debates surrounding human reproductive cloning [80, 81]. Many see the pursuit of creating individuals through reproductive cloning as a “crime against the human species” [82]. Just as a clone may replicate genetic material without reproducing an individual’s self-ness (i.e., ToM), AI can simulate (or, more precisely, “mimic”) empathy without truly experiencing it. This challenge is compounded when AI systems (particularly generative models) lack transparency in how responses are generated, making it difficult for humans to evaluate the authenticity or appropriateness of the simulated interaction. As emphasized throughout this review, the core of healing in psychotherapy lies not in verbal simulation but in the emergence of an I–Thou relationship between therapist and client, grounded in shared existential experience. Using AI as a surrogate for human connection risks blurring the line between simulation and authenticity, raising serious concerns about the erosion of what makes human care distinct, relationally meaningful, and ethically grounded.

AI tools, therefore, should not be viewed as substitutes for human care. Rather, they must be integrated into mental health care in ways that complement and enhance human interaction. This is especially important for socially disadvantaged or cognitively impaired individuals, whose care must center on equity, dignity, and individualized support. Ethical integration of AI into these settings requires person-centered principles that safeguard agency, respect cultural and linguistic diversity, and respond to the unique preferences and needs of each individual. When deployed relationally and with appropriate human facilitation, social robots can support meaningful engagement while preserving the core human relationships that underlie effective mental health care. Without this nuance, AI may fail to meet the psychosocial needs of patients and risk exacerbating health inequities.

Unclear accountability

The increasing use of AI systems in mental health care raises critical ethical and practical questions, including who is accountable for the decisions made by these technologies, such as chatbots. When deployed in therapeutic contexts, AI must be able to handle emotional interactions appropriately, as strong emotional reactions can lead to patient harm or compromise safety [69]. Designers have an ethical responsibility to monitor and regulate emotional engagement, mitigate the risk of overattachment, and ensure a safe and supportive environment. It is equally important to consider how AI-mediated therapeutic relationships should be appropriately concluded to avoid emotional distress [69]. Moreover, the competence of intelligent machines must be carefully evaluated: providing care beyond the limits of their training or intended scope may expose patients to harm. Highly autonomous systems must demonstrate proficiency in interpersonal communication, treatment and safety protocols, and cultural competence to be considered ethically and clinically viable in mental health settings [69].

Toward ethical and equitable AI in mental health

To promote responsible integration of AI into mental health care for older adults and underserved populations, we propose a framework informed by six core ethical principles commonly cited in healthcare AI literature: fairness, transparency, trustworthiness, accountability, privacy, and empathy [83,84,85]. Operationalizing these principles requires sustained investment in governance, multidisciplinary collaboration, and inclusive design practices. Specific recommendations include:

1. Human-in-the-Loop Design: AI tools should incorporate mechanisms for clinician oversight and allow clinicians to interpret or override outputs, ensuring that AI supports rather than supplants clinical judgment. Clinicians must be trained to understand AI’s strengths, limitations, and ethical implications to ensure safe, informed use and effective advocacy.

2. Inclusive and Culturally Responsive Development: Diverse racial, ethnic, linguistic, and age groups should be engaged in the design, testing, and evaluation of AI systems. Community-based participatory methods can enhance cultural relevance and acceptability.

3. Enhanced Consent Protocols: Informed consent should be an iterative, accessible process, adapted to the cognitive, linguistic, and cultural needs of users. Visual aids and plain language can facilitate meaningful consent. Developers should use interpretable algorithms and provide user-friendly explanations to support informed decision-making.

4. Governance and Accountability Structures: Multidisciplinary oversight bodies that include clinicians, ethicists, technologists, and patients should review algorithmic performance, conduct equity audits, and delineate liability for adverse outcomes.

In sum, AI in mental health should be designed to complement, not replace, professional judgment. Even when full algorithmic transparency is limited, as is the case with generative AI, oversight can be maintained through independent auditing, post-deployment monitoring, escalation of complex cases to experts, and ongoing collaboration with culturally competent clinicians to detect and correct errors while reducing bias [86,87,88,89,90]. Moreover, ethical deployment requires making the algorithmic decision-making process more understandable to patients and providers through the principles of transparency and explainability. Transparency details the components of the datasets and the algorithmic decision trees so that an external expert can review them and understand what has taken place, while explainability communicates how inputs lead to outputs in ways patients and providers can understand [91, 92]. Both are essential for safe oversight, informed consent, and equitable care, even when full algorithmic disclosure is not possible.

Conclusions

In conclusion, this review underscores the increasing significance of artificial intelligence in the domain of geriatric mental health, encompassing theoretical frameworks, empirical research, and clinical applications. Existential questions, particularly those related to the end of life and the finite nature of human physical existence, lie at the heart of psychological theories of aging. In referencing these uniquely human existential concerns, we have discussed AI’s role as fundamentally distinct from human interpersonal connection (see Fig. 1). While AI has provided us with a sophisticated form of “artificial” intelligence capable of simulating companionship, it cannot replace the authentic relational depth of human-to-human interaction precisely because it does not share our existential experiences. In clinical practice, there have been remarkable advances in AI that allow for enhanced diagnostic accuracy and the identification of patterns across complex datasets. However, a central paradox persists: what AI can predict is often not aligned with what matters most to the individual. This paradox can be illuminated through the lens of the Theory of Computing (TOC): TOC not only forms the foundation of AI but also provokes a reimagining of the mind’s dynamic system, where meaning, cognition, uncertainty, and existential concerns converge, offering a transformative bridge to the principles of Theory of Mind (ToM).

Future research directions

Future research should explore how the limits of AI can be constructively acknowledged to leverage its strengths without compromising the uniquely human dimensions of geriatric care—particularly those tied to existential concerns that deepen with age. A transdisciplinary research agenda is needed to investigate how computational precision can be integrated with the meaning-driven, imperfect realities of the human mind. Rather than pursuing precision as an end in itself, future work should prioritize what matters most for late-life mental health, including emotional depth and interpersonal connection. This includes empirical studies examining how older adults perceive, interact with, and are affected by AI-driven systems in real-world clinical and caregiving settings.