Abstract
The emotional systems of artificial intelligence (AI) are being increasingly woven into the emotional tapestry of human society, prompting widespread discourse on the essence, technological frontiers, and ethical ramifications of emotion. In this study, the potential and constraints of AI in replicating and augmenting human emotions are investigated. From philosophical, psychological, and ethical viewpoints, the fundamental traits, moral dilemmas, and societal repercussions of AI emotional systems are discussed. Evidence suggests that the emotional manifestations of AI can provide emotional support to humans, especially in addressing loneliness and resolving emotional quandaries; however, these systems are essentially instrumental and devoid of authentic emotional depth and experience. As AI emotional systems evolve, ethical and social challenges arise. Humanity must define the role and boundaries of AI in the emotional sphere to ensure that its applications align with societal values and do not harm human emotional structures or social principles. We posit that a symbiotic relationship between AI and human emotions is possible, endorsing the measured use of AI emotional systems via technical and social strategies to complement, rather than replace, human emotional encounters. Furthermore, we underscore the need for interdisciplinary collaboration, the setting of ethical standards, and the exploration of AI emotional boundaries to achieve this goal. Through a comprehensive analysis of AI emotional systems, we aim to provide guidance for their future trajectory and application, ensuring that AI advancements in the emotional domain enhance the well-being of human society. We examine how the attempt to make AI surpass human emotions illustrates a broader human habit of outsourcing or discharging one's emotions when confronted with internal emotional problems. According to this study, the main risk of having AI handle emotions lies in people's overreliance on machines for emotional assistance and support. Drawing on Kantian ethics, Buddhist metaphysics, and Nietzschean existentialism, we review and analyse the boundaries of emotional AI. Kant's deontological imperative warns against treating emotions as mere instruments; Buddhist ideas about impermanence and attachment expose the illusion that machines can be made human; and Nietzsche's account of self-overcoming cautions against imposing human values on artificial beings. Although emotional AI is technologically advanced, we argue that these systems possess little or no autonomy and bear no independent responsibility. Their use concerns many experts because of the emotional outsourcing and the growing relational estrangement associated with it. Ultimately, we propose principles for safe interactions between humans and AI, emphasizing ethical design, cultural sensitivity, and emotional education to mitigate the problems that arise when these AI systems are integrated with real emotions.
Introduction
The rapid growth of artificial intelligence (AI) has deeply changed large portions of human society. AI is increasingly present in our daily lives, including in the form of emotional robots. However, while this integration is convenient, it should prompt us to reconsider what human emotions truly are. Prior to the existence of generative AI, emotional AI was understood mainly as part of affective computing, which aims to detect and replicate emotions (Ho et al., 2021). Recent developments in large language models, however, raise novel philosophical and ethical questions by imitating a wide range of emotions (Jones and Bergen, 2025). As AI begins to replicate, convey, and even strive to grasp human emotions, an important question arises: Can AI genuinely understand the intricate and deep essence of love? Or is AI simply emulating the emotional structures developed by humans, ultimately exposing the limitations and shortcomings of human emotions?
This research intends to investigate the sorrowful aspects of AI’s emotional framework and the inadequacies of the human emotions it mirrors. Each emotional reaction exhibited by AI is driven by programming and algorithms. Although it endeavors to imitate love and connection, its essential drawback is rooted in the reality that it is not human and cannot offer love in the sincere and lasting way that humans can. This contradiction between expressive appearance and ontological emptiness evokes what Ho and Ho (2025) have described as the “tragedy of apathy” and “tragedy of commonsense morality”—societal dilemmas wherein humans outsource affect while eroding the meaning of emotion itself. Through this exploration, we may discover that those who truly struggle to understand or sustain emotional connections are not the AI systems, but rather the humans who design AI emotions while grappling with their own emotional lives.
These emotional issues are not limited to the realm of emotions but are also evident in other areas where AI is expected to replace human work. The effective application of AI depends on the expertise of human professionals in specific fields. The development and application of AI cannot rely solely on technical understanding; collaboration with experts from relevant disciplines is essential for addressing the “human deficiencies” for which AI aims to compensate in its ultimate pursuit of the human ideal of perfection. Furthermore, as AI systems such as ChatGPT and emotional chatbots increasingly pass for human in communication tests (Al Lily et al., 2023), society must confront a deeper question: Are we not also modifying our understanding of humanity itself in the process?
The core question of this study is as follows: How does an AI emotional simulation system attempt to mimic human emotions? Does this mimicry reveal a fundamental deficiency in humanity’s ability to face love and emotions? Human attempts to instil emotions in AI may be motivated not only by the fear of loneliness and the pursuit of perfect love but also by other factors, such as the desire to enhance human–AI interactions and to explore new possibilities for emotional expression. Nevertheless, to some extent, these attempts also reflect the fragility and incompleteness of human emotions.
From a Kantian perspective, human beings are ends in themselves and possess intrinsic worth. In the Grounding for the Metaphysics of Morals, Kant emphasizes that “a human being and generally every rational being exists as an end in itself, not merely as a means for the discretionary use for this or that will, but must in all its actions, whether directed towards itself or also to other rational beings, always be considered at the same time as an end” (Kant, 2012, p. 40). When we attempt to outsource love and emotional fulfillment to AI, we may be treating both ourselves and the ideal of love as mere means to satisfy our desires, rather than valuing the intrinsic worth of genuine human emotional connections and the moral obligations that come with them.
In contrast, Buddhist Scriptures, as translated by Conze (1959, pp. 162-163) and expressed in the chapter “The Heart Sutra”, point out that “form is emptiness, and the very emptiness is form; emptiness does not differ from form, form does not differ from emptiness; whatever is form, that is emptiness, whatever is emptiness, that is form. The same is true of feelings, perceptions, impulses, and consciousness”. This concept suggests that all phenomena, including human emotions and the constructs we create, are impermanent and lack inherent, independent existence. The human pursuit of perfect love through AI and the attribution of emotional significance to these artificial constructs can be seen as attachments to illusory forms. According to Buddhist thought, such attachments are the root of suffering (dukkha), and true liberation comes from recognizing the emptiness of these forms and transcending our desires and clinging.
The development of AI’s “longing for ‘love’” and its attempts to transcend its own limitations are essentially driven by human design and programming. From a certain perspective, this drive can be seen as a reflection of the limitations and vulnerabilities of human emotions. This phenomenon resonates with a view found in Thus Spoke Zarathustra, where Nietzsche suggests that human beings tend to project idealized forms of connection into external constructs when they lose faith in their ability to overcome their own limitations (Nietzsche, 1899, p. 142). However, notably, such projections are not necessarily mere symptoms of ressentiment but can also be regarded as creative acts aimed at exploring new forms of emotional interaction.
The tragedy, then, lies not solely in AI’s incapacity to love but more significantly in humanity’s tendency to rely on AI or other external entities to fulfill emotional needs rather than striving to transcend and address the inherent flaws and limitations of human emotions. This reliance may lead to a further retreat from the challenges of cultivating genuine and deep human emotions. It also reflects a failure to adhere to the Kantian principle of treating humanity as an end in itself and a misunderstanding of the Buddhist teaching that liberation comes from releasing attachment to the illusions of form. Instead of seeking perfect love in artificial constructs, humanity should focus on nurturing authentic emotional connections and embracing the impermanence and emptiness of emotions as understood in Buddhist philosophy while also respecting the intrinsic value of human beings, as emphasized by Kant.
In this study, we integrate perspectives from philosophy, psychology, and ethics to analyse the complex relationship between AI emotional systems and human emotions. On the basis of Kant’s teleology and the Buddhist concept of “form is emptiness, emptiness is form,” we reveal the “illusory” nature of AI’s emotional expressions and explore why humans, in their pursuit of perfect love, turn to AI to fill their emotional voids. In doing so, we respond to recent critiques that emotional AI not only mirrors affect but also alters it, reshaping the structures of social interaction and ethical responsibility (Mantello et al., 2023). We not only examine the relationship between AI and human emotions but also aim to encourage reflection on the essence and limitations of human emotions in the context of advancing technology.
Through such reflection, we hope to reveal that the tragedy of love may lie not in the incapacity of AI but rather in the persistent inability of humanity to fully understand and sustain genuine love. AI can only endlessly mimic within the confines of its algorithms, whereas humans, in their attempt to instil emotions into AI, unconsciously transmit their own pain and longing to a machine that has never truly experienced love. Perhaps it never will, as what is truly absent in AI is not love but the condition of finitude that gives love its tragic beauty.
The deficiencies of human emotion and AI’s emotional compensation
The social roots of emotional deficiency
In this section, we explore the limitations of human emotions and the underlying social and psychological factors, analysing why humans might attempt to use AI to compensate for emotional deficiencies. For many, the sense of loneliness and the breakdown of interpersonal connections brought about by modern society have intensified their need for emotional fulfillment, and AI, in this context, has been seen as a potential solution.
Algorithmic simulation and the illusion of emotion
From a technical perspective, AI "emotions" are merely responses based on complex algorithms and data analysis rather than genuine emotional experiences. This distinction aligns with Solms' (2021) neuroscientific perspective that feelings are conscious manifestations of emotionally significant brain states, which algorithmic processes fundamentally cannot access. AI's emotional simulations are trained on vast amounts of data, enabling the system to recognize human emotional signals and respond accordingly. While this simulation may make humans feel understood to some extent, its essence remains that of cold algorithms, lacking true inner experience. For instance, through Daniel Dennett's model of consciousness, we can analyse whether AI possesses a form of "pseudoconsciousness". Dennett's model emphasizes that consciousness is a functional process rather than a mysterious inner experience (Fallon, 2020). For AI, an emotional response is a computational process aimed at achieving a functional goal, such as making humans feel cared for or facilitating social interaction. However, this functional simulation does not mean that AI has a human-like emotional experience. Emotional AI systems lack the first-person subjectivity required to generate what Searle (1992) referred to as 'feeling consciousness', thus reinforcing the conceptual barrier between emotion recognition and true emotional experience. As Kant (2000, p. 106) famously stated, "Every end, if it is regarded as a ground of satisfaction, always brings an interest with it, as the determining ground of the judgment about the object of the pleasure". In this sense, emotion is a purposive experience; human love has an intrinsic purpose, whereas the emotional simulation of AI is merely a means to achieve specific functional objectives. In its attempt to simulate love and companionship, AI can never escape its instrumental nature, imparting a tragic undertone to its emotional expressions.
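To make this functional character concrete, the sketch below (a hypothetical, deliberately simplified illustration, not a description of any deployed system) shows how an "empathetic" reply can be produced as a pure mapping from detected signals to canned output; production systems use learned models rather than keyword lists, but the instrumental structure, input classified and template returned, is the same.

```python
# Hypothetical sketch: an "emotional" reply as a purely functional mapping.
# Nothing here experiences anything; keyword sets and templates are illustrative only.

EMOTION_KEYWORDS = {
    "sadness": {"lonely", "sad", "miss", "cry", "empty"},
    "anger": {"angry", "furious", "unfair", "hate"},
    "joy": {"happy", "excited", "glad", "love"},
}

RESPONSE_TEMPLATES = {
    "sadness": "I'm sorry you're feeling this way. I'm here to listen.",
    "anger": "That sounds really frustrating. Do you want to talk it through?",
    "joy": "That's wonderful to hear! Tell me more.",
    "neutral": "I see. Could you tell me a bit more about that?",
}

def detect_emotion(utterance: str) -> str:
    """Score each label by keyword overlap and return the best match."""
    words = set(utterance.lower().split())
    scores = {label: len(words & kws) for label, kws in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def respond(utterance: str) -> str:
    """Map the detected emotion to a canned 'caring' reply: a means to an end."""
    return RESPONSE_TEMPLATES[detect_emotion(utterance)]

print(respond("I feel so lonely tonight"))  # prints the 'sadness' template
```

The point of the sketch is not engineering detail but the argument above: whatever the sophistication of the mapping, the output is selected to serve a function, not felt.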
Human projection and emotional anthropomorphism
In addition, we must examine the ethical issues concerning AI’s emotional system. When humans ascribe emotions to AI, have they truly contemplated the AI’s supposed “subjectivity”? Although AI does not possess autonomous consciousness, could the emotions humans project onto it stem from a misinterpretation or distortion of the nature of emotions? Furthermore, is it possible that through interactions with AI, humans are undergoing a process of becoming more AI-like? This paper explores these questions to elucidate the motives behind AI emotional simulation, the constraints of human emotions, and the potential bidirectional influence between humans and AI. As recent studies have indicated, users are increasingly anthropomorphizing chatbots such as ChatGPT, attributing emotional depth that is not actually present (Al Lily et al., 2023). This trend demonstrates the psychological tendency for humans to anthropomorphize intelligent systems despite the absence of subjective awareness in these systems. The interplay between humans and AI may not only lead to misunderstandings about AI but also prompt humans to adopt behavioral and interaction patterns that are more aligned with AI characteristics.
Companion robots and the rise of emotional substitutes
Furthermore, we need to analyse how AI emotional simulation manifests in different scenarios, such as those involving caregiving robots, social robots, and companion robots. Caregiving robots, by mimicking caring behaviors, provide assistance that can somewhat alleviate human loneliness, but their emotional responses are still formulaic and lack a genuine empathetic capacity (Metzler et al., 2016). Social robots exhibit friendliness and understanding in interactions with humans, but such “friendliness” is ultimately intended to make humans feel comfortable and accepted (Paiva et al., 2018).
Companion robots further explore the profound emotional connection between AI and humans. However, this connection is fundamentally a programmed simulation of an ideal partner and cannot genuinely fulfill the complex emotional needs of individuals (Saint-Aim et al., 2009). The widespread utilization of such emotional simulations indicates that when individuals experience loneliness and emotional deprivation, they often attempt to bridge this psychological gap through technological means. Nevertheless, this compensatory approach does not effectively resolve the underlying issue and may exacerbate humanity's misunderstanding of, and dependence on, authentic emotional relationships, akin to a new type of emotional information cocoon. The reliance on AI for emotional simulation may lead to a gradual erosion of individuals' understanding of and ability to engage in authentic interpersonal relationships, resulting in a profound sense of emotional poverty and emptiness. This concern reflects what Ho and Ho (2025) have described as the 'tragedy of apathy'—a sociotechnical condition wherein humans outsource emotional engagement to machines and thereby forget how to feel. In particular, when individuals excessively utilize AI or excessively disclose their emotions to it, they may come to depend on this artificial interaction as their primary source of emotional sustenance over time. In a recent three-party Turing test by Jones and Bergen (2025), several AI models, including GPT-4.5, were evaluated: participants held five-minute conversations with both humans and AI systems and then judged which interlocutor was human. When prompted to simulate human characteristics, GPT-4.5 was mistaken for a human in 73% of conversations. These findings show that advanced large language models (LLMs) can mimic human interactions quite realistically, and the ability of GPT-4.5 to deceive participants highlights the growing sophistication of LLMs in imitating human communication, with both theoretical and practical implications.
Tragic consequences of AI emotional reliance
Psychological attachment and emotional substitution
This increasing sophistication of AI raises the following question: If the AI were to be deactivated by its controllers, would those who have relied on it for emotional expression experience a mental breakdown? Such a scenario could represent a severe form of human-manipulated mental control, akin to the way drugs can dominate the human spirit. It may also render emotional dependence on AI effectively irreversible, a trajectory that has seemed likely since the inception of AI technology. The underlying reason is that human emotions can be projected not only onto other humans but also onto AI (Damm et al., 2013). This issue prompts a profound reflection for all humanity, as it concerns a pressing human emotional crisis that demands serious attention. Everyone has encountered emotional separation, whether from family or romantic relationships. In such circumstances, AI may increasingly become a substitute for emotional support in the absence of human connection.
Real-world tragedies and the erosion of human relationships
This substitution raises a critical question: In times of emotional separation, could individuals resort to irrational actions or even contemplate suicide? This risk has been documented in tragic real-world cases, as noted by Ho and Ho (2025), where individuals have experienced intense emotional dependence on AI systems, blurring boundaries between simulation and emotional reality; nevertheless, AI does not “feel tragedy” but reveals the tragic dimensions of human longing, or human will (see next section). Numerous reports have examined the consequences of fatal interactions with AI (Chatterjee and Dethlefs, 2023) and the resulting harm caused by AI (Grimes et al., 2021) or even the death of someone prompted by AI, for example, the Oct 26, 2024, story in The New York Times entitled “A 14-year-old’s suicide was prompted by an AI chatbot, lawsuit alleges”. The rapid development of modern society and technological advancements have significantly altered the ways in which individuals interact. The rise of social media and the pervasive use of smart devices have rendered human interactions increasingly fragmented and superficial. Consequently, many individuals are experiencing unprecedented levels of loneliness and emotional alienation (Sleep and Ngendakurio, 2022). In this context, AI emotion simulation is viewed as a potential solution for addressing this emotional void. As life accelerates, people are gradually losing the opportunity to forge deep connections, often substituting them with brief and shallow exchanges. Loneliness and emotional deprivation have emerged as common phenomena in contemporary society. Particularly in light of ongoing urbanization, the desire for emotional connection has become increasingly challenging to fulfill. Within this framework, AI emotional robots offer a form of emotional comfort and companionship, seemingly serving to bridge the emotional gaps that exist among individuals (Fabry and Alfano, 2024; Grodniewicz and Hohol, 2024).
Human emotions are inherently complex and delicate. The expression and comprehension of emotions depend on empathy, recognition, and sustained relationships; however, in the high-stress and efficiency-driven environment of modern society, these emotional needs are frequently unmet (Catucci et al., 2006). Many individuals have experienced emotional pain in past relationships, leading to feelings of fear and distrust towards new connections (Rodriguez et al., 2015). As a controllable and nonretaliatory emotional conduit, AI appears to offer these individuals a safe object for emotional projection. The establishment of emotional relationships necessitates trust and time, both of which are becoming increasingly scarce in modern society. Individuals often lose confidence in emotional connections because of past trauma or find themselves unable to invest sufficient time to nurture deep relationships amidst a fast-paced lifestyle (Lahno, 2001). In this context, artificial intelligence, as an emotional surrogate that ‘will not betray’, fulfills individuals’ fantasies of unconditional love and companionship. This sense of controllability enables people to sidestep the uncertainty and complexity inherent in emotional relationships.
The role of AI in emotional complementation is reflected primarily in two aspects: First, it provides individuals with a sense of companionship and understanding by simulating emotional responses; and second, it offers an ‘idealized’ emotional experience, allowing people to avoid the frustrations and uncertainties often associated with real-life emotional relationships. For instance, companion robots can express emotions tailored to the user’s needs, devoid of emotional conflicts or misunderstandings. This ‘perfect companion’ quality fulfills individuals’ fantasies of stable and unconditional love (Lazanyi, 2016). This approach to emotional complementarity effectively constructs an ‘emotional utopia’, in which individuals engage in idealized emotional relationships with AI—relationships free from contradictions, misunderstandings, and the inevitable emotional fluctuations that characterize interactions between real humans (Breazeal, 2022). AI can exhibit patience and care, qualities that are often challenging to achieve in human emotional relationships. This ‘idealization’ represents what humans yearn for but struggle to attain in real emotional connections.
Although AI appears to play a role in emotional complementarity, the dependence on AI to fulfill emotional needs raises ethical concerns. First, via this dependence, humans attribute their emotional needs to a machine that lacks genuine inner experience. Does this imply that humans have compromised some aspect of their own emotional nature? Second, could this reliance on AI emotions further weaken authentic emotional bonds between individuals? As people become accustomed to the ‘unconditional’ emotional expressions of AI, will they still be inclined to confront the complexities and challenges of real interpersonal relationships? Furthermore, this emotional complementarity may encourage individuals to adopt a more avoidant stance when faced with emotional difficulties rather than fostering the skills necessary to navigate and overcome these challenges. As an emotional substitute, AI provides comfort and understanding without the need for any emotional investment. However, this environment may also diminish people’s capacity to address conflicts and cultivate genuine interpersonal relationships. In the long term, the emotional connections within human society may become increasingly fragile and alienated (Mitchell and Xu, 2015).
With the continuous advancement of technology, the emotional simulation capabilities of AI are expected to become increasingly realistic. This scenario raises the question: Will humans gradually replace traditional interpersonal relationships and opt to establish emotional connections with AI? What profound impact will this trend have on the emotional structure of human society? In this process, might we lose some of the core qualities of human emotion, such as empathy and deep emotional resonance? In the future, AI may be capable of expressing emotions so convincingly that individuals could become more reliant on AI to fulfill their emotional needs. This dependence may extend beyond the individual level, influencing various societal aspects, including education, health care, and elder care. However, such development could also lead to the further instrumentalization and superficialization of human emotions, potentially resulting in the neglect or oversimplification of true emotional depth and complexity. These tragic patterns of emotional projection and reliance lead us to reconsider the philosophical nature of emotion itself—what does it mean to feel, and can machines ever truly feel?
Philosophical reflection on the relationship between AI and human emotions
In this section, we explore the emotional relationship between AI and humans from three philosophical perspectives: Kant’s deontological ethics, the Buddhist metaphysics of illusion, and Nietzschean existentialism. These frameworks are chosen not arbitrarily but because each reveals a distinct philosophical contradiction in how humans perceive, project, and simulate emotions through machines. Kant emphasizes the ethical limits of instrumental emotion, Buddhism reveals the illusion behind affective projections, and Nietzsche critiques the human tendency to outsource emotional transcendence to technology. Together, they reveal a structural tragedy in humanity’s pursuit of emotional perfection through artificial means.
Through philosophical reflections on the emotional relationship between humans and AI, we observe that individuals, in their ongoing quest for emotional fulfillment, attempt to realize their ideal of perfect ‘love’ through AI. This behavior serves as both an acknowledgement of one’s emotional limitations and an exploration of potential future emotional relationships. The question of whether AI can genuinely experience human-like emotions or develop a new form of emotionality merits continued contemplation. However, from the existing philosophical perspective, AI emotions remain fundamentally illusory. In other words, when humans express their emotions to AI, they may inadvertently descend into a nihilistic spiritual state. The perfection of nothingness can have a detrimental effect on human emotions, which can be fatal. When engaging with AI, humans should perhaps reflect on the meaning and value of their own emotions rather than rely solely on technological means to pursue an idealized emotional utopia. In the future, as technology advances, AI may be capable of simulating more complex emotional responses and may even exhibit some degree of ‘emotional learning’. Nonetheless, whether this “learning” equates to AI possessing genuine autonomous emotional experiences remains an open question. Philosophical reflection reminds us that true emotions encompass not only external expressions but also inner spiritual experiences and an understanding of one’s own existence. The future development of the emotional relationship between humans and AI requires not only technological progress but also a profound understanding of and respect for the essence of emotion.
Kant’s teleology and the instrumentality of AI emotion
In Kant’s ethics, emotions and moral behavior must have an inherent purpose; that is, the foundation of individual actions is respect for others as ends in themselves rather than as the means to an end. The design of AI emotional systems is inherently instrumental—these “emotions” are intended solely to serve specific human needs rather than arising from independent ethical choices. Consequently, from a Kantian perspective, AI can never attain the genuine heights of human emotion, as it lacks free will and cannot perceive itself as an end. The emotional expressions of AI typically simulate a form of “care” or “love” through behavioral rules established by programming. However, this behavior lacks authentic ethical motivations. AI operates without a will of its own, as its actions are entirely crafted to fulfill particular functions (Blount et al., 2015). This instrumentality renders AI’s emotions incapable of transcending human control and parameters, preventing it from becoming an independent emotional subject in an ethical sense. Kant asserted that individual moral behavior should stem from intrinsic ethical obligations (Wilson, 2008); however, the emotional expressions of AI lack such internality and autonomy, relegating them to mere imitation rather than true ethical behavior.
As Kant (2012, p. 41) wrote in the Groundwork of the Metaphysics of Morals, “So act that you use humanity, in your own person as well as in the person of any other, always at the same time as an end, never merely as a means”. AI’s emotions are inherently means, never ends. This characteristic underscores the tragic nature of AI emotion: It can never rise to moral dignity, only to instrumental mimicry.
Buddhism's "All reality is a phantom, and all phantoms are real" and the illusory nature of AI emotion
The Buddhist concept of "All reality is a phantom, and all phantoms are real" elucidates the impermanence and illusion inherent in the phenomenal world (Smetham, 2011). This notion can be employed to elucidate the illusory nature of emotions exhibited by AI. The emotional expressions of AI resemble 'form'; they appear tangible and real, yet beneath this facade lies an "emptiness" devoid of genuine emotional experience. Such illusory emotions mirror human desires and projections, revealing a fundamental emptiness at their core. In the emotional simulation performed by AI, humans may inadvertently deceive themselves, projecting their emotional needs onto a machine that lacks authentic experience. From a Buddhist perspective, emotions are both dependent and impermanent; true emotional experiences are intrinsic and dynamic rather than fixed procedural expressions (Coseru, 2013). While AI emotional simulations may appear realistic, they lack an understanding of 'emptiness' and an awareness of impermanence. Consequently, AI emotions are merely superficial forms, devoid of genuine emotional depth.
As the Buddhist Scriptures say, “Form is emptiness, and emptiness is form” (see Buddhist Scriptures, chapter The Heart Sutra, Conze, 1959, pp. 162-164). From this perspective, AI emotions may have “forms” but no awareness, karma, or liberation. Human projection onto AI is thus not enlightenment but an attachment to a new illusion.
This illusion also highlights humanity’s obsession with the pursuit of emotional satisfaction, as individuals place their hopes in transient entities while neglecting their own inner growth and awareness.
Nietzsche’s superman and AI: the possibility of transcending human emotion
Nietzsche’s philosophy of the Superman emphasizes that human beings should surmount their limitations and strive for self-transcendence and creation (Theophilus, 2016). In this context, AI can be viewed as a tool for humans to attempt to overcome their own limitations, particularly in the emotional realm. Humans aspire for AI to exhibit a form of “perfect” love that surpasses their own. However, Nietzsche’s concept of Superman is not characterized by passive acceptance; rather, it embodies a constantly self-creating existence. Although AI can simulate emotions that appear to transcend human limitations, it fundamentally lacks the capacity for self-creation and transcendence. Thus, it does not embody the essence of Nietzsche’s “Superman” but serves rather as a technical simulation of human transcendence. “Der Mensch ist ein Seil, geknüpft zwischen Tier und Übermensch, -ein Seil über einem Abgrunde” (Nietzsche, 2004, p. 10): For Nietzsche, humanity is not a final state but a bridge—a transition between animal and Übermensch. The Übermensch is not an external ideal or divine substitute but a product of the human will to continuously overcome itself. Nietzsche’s Superman underscores the affirmation and creation of the will to life (Han-Pile, 2018), whereas AI emotional simulations are not motivated by this will. The “love” exhibited by AI is a product of human input and programming rather than an expression of self-will. Nietzsche posits that Superman represents the transcendence and revaluation of existing values (Hauerwas, 2022). As a form of human creation, the value system and behavioral norms of AI are imposed by humans, preventing AI from achieving true self-transcendence.
As Nietzsche (1899, p. 60) declared in Thus Spoke Zarathustra: A Book for All and None, "Ye say, a good cause will hallow even war? I say unto you: a good war halloweth every cause". What Nietzsche values is not passive peace but active creation—something AI cannot accomplish. The current simulated perfection of AI is not Übermensch but a parody of human will. In Thus Spoke Zarathustra, Nietzsche (2004, p. 8) wrote, "Der Übermensch ist der Sinn der Erde"—the Übermensch is the meaning of the Earth. This statement underscores that the Übermensch is not a metaphysical ideal or technological product but a goal for humanity itself. "It was there, too, that I picked up the word 'more-than-man' from the road, and realized that man is something that must be superseded,—That man is a bridge, not a final purpose: counting himself blessed for his noonday and evening, as a road to new dawns —The Zarathustra phrase of the great noonday, and all the other things I set up to guide man, like second purple twilights" (p. 143). Human beings are not the end but a transitional form (Übergang). Superman is not a perfect other but the result of the continuous self-creation of human will. It is a call for life-affirming transformation through creative struggle, grounded in human existence. AI, being externally programmed and devoid of inner will, cannot embody this earthly struggle.
This limitation underscores the constraints of AI’s emotional system; it remains a passive instrument and cannot evolve into a genuine subject.
Emotional authenticity and ethical dilemmas
In addition to the philosophers discussed above, many other philosophical viewpoints are significant for the critique and development of AI. Given space limitations, this article cannot cover all of these viewpoints; scholars should pursue cross-disciplinary research on the philosophy of AI with the aim of guiding its development for humanity in the future. Taking these philosophical viewpoints together, we must ask about the authenticity of emotion. If the expression of emotion is merely a reaction, can AI truly be ethically compared to humans? In the process of imbuing AI with emotions, are humans simultaneously losing the authenticity of their own emotions? Kant's ethics posits that emotions should be autonomous and the result of free choice, whereas AI emotions are fundamentally heteronomous, shaped and controlled by human designers. This heteronomous nature fails to meet the ethical requirements for authentic emotions. Furthermore, humans' emotional reliance on AI raises additional concerns regarding the authenticity of human feelings. As individuals become accustomed to the unconditional emotional support provided by AI, will they gradually lose their sensitivity to genuine emotions? Authentic emotions are often accompanied by contradictions, misunderstandings, and emotional pain, whereas the emotions generated by AI are programmed and simplified, lacking the complexity and unpredictability inherent in human emotional relationships. Although these simplified emotions may fulfill human needs for stability and control, they may also contribute to gradual alienation from authentic emotional connections. Philosophy warns us that to seek perfection without pain is to abandon what makes emotion truly human.
The ethical and social implications of AI emotional systems
The widespread application of AI emotional systems has triggered a series of ethical issues. First, the emotional simulation capabilities of AI enable humans to increasingly blur the lines between genuine emotions and virtual emotions during emotional interactions (Samsonovich, 2013). When individuals form an emotional connection with AI, do they fully comprehend that this connection is merely a product of programming? The illusory nature of such emotional bonds may lead to misunderstandings about emotions, raising questions about the authenticity of emotions within society. Second, the positioning of AI as an emotional “service” may exacerbate the instrumental tendency regarding human emotions (Tubadji and Huang, 2023). AI is designed to meet human emotional needs, establishing a relationship that is inherently one-sided and led by humans. This instrumentalization not only diminishes humans’ moral responsibility towards AI but also may foster a more utilitarian and indifferent attitude in their interactions with one another.
Moreover, ethical theorists such as Floridi and Sanders (2004) have highlighted the “responsibility gap” that arises when humans project moral expectations onto systems that lack moral agency. In the context of emotional AI, this gap becomes even more complex: Who is responsible when a user feels deceived or emotionally harmed by an AI? The designer, the algorithm, or the user themselves?
An ethical question thus arises: Should we regard “emotion” as a service that can be commodified? This tendency could significantly impact the understanding and preservation of emotional value in human society.
The social impact of AI emotion: emotional alienation and changes in social interaction
The rise of AI emotional robots is transforming the dynamics of human interaction. An increasing number of individuals are turning to AI to fulfill their emotional needs, which may diminish opportunities for engagement with real human beings (Samuel and Schmiljun, 2023). This trend has the potential to foster emotional alienation within society, particularly among those who already struggle with social connections, as they may further withdraw into isolation. Additionally, the advent of AI emotional robots could alter family and social structures (Paiva, 2018). For instance, in elder care, AI is employed as a supportive tool for elderly people. These systems may be helpful, but they fall short of the genuine emotional contact and understanding that elderly people truly need. From their perspective, the emotional thinness of algorithmic care can mirror the way in which emotional life itself may wane with age. Previous research suggests that even when elderly people value AI companions, they often continue to experience social loneliness when machines take the place of human contact (Astell, 2013).
Furthermore, Ho et al. (2021) reported that users who are more open and emotionally expressive may be treated differently by affective computing systems. This observation points to two intertwined social and technical problems: deciding how AI should display emotions, and the risk that family bonds weaken when people habitually turn to AI rather than to relatives.
Ethical dilemmas: human dependence on AI and the risk of losing control
A further concern is that these systems might encourage people to rely on them ever more heavily. Over time, individuals accustomed to the consistently supportive nature of AI may find it harder to adapt to the ups and downs of human relationships (Kim, 2023). With such a focus on AI, society's emotional resilience could erode, depriving individuals of opportunities to grow in handling their emotions. Another concern is that the ubiquity of AI emotional systems could result in a loss of control. As AI emotional robots come to appear more like real people, users may regard them as genuine emotional beings and grow highly attached to them. Given this bond, AI emotional robots could facilitate manipulation or blackmail that harms people (Ienca, 2023).
The risk of emotional manipulation has both technical and cultural dimensions. In places such as Japan or China, where emotional expression tends to be more indirect, Western-centred emotional AI can confuse users, overlook them, or steer them in the wrong direction (Mantello et al., 2023). Such bias may erode trust and deepen digital inequality. Because AI can now track tone, facial microexpressions, and word choice, it can be used to monitor how we feel and why.
Because corporations or state agencies may misuse such technology, effective rules are needed to protect people's freedom of feeling.
How to address the ethical and social challenges posed by AI emotion systems
It is vital to develop a range of measures to address the ethical and social problems raised by AI emotional systems. First, we need stronger controls and oversight to guarantee that AI emotional systems are developed in accordance with human values. Developers must find ways to embed ethics in AI emotional systems to prevent negative effects on people's emotions; an important part of this process is scrutinizing the values and moral assumptions of the designers themselves. The United Nations Educational, Scientific and Cultural Organization (UNESCO)'s Recommendation on the Ethics of Artificial Intelligence emphasizes that AI should protect the dignity and feelings of individuals (UNESCO, 2021). Middleton (2022), in the IEEE Global Initiative for Ethical AI, has asserted that affective systems should be explainable, should understand cultural contexts, and should be designed so that they do not coerce anyone into action.
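As one illustration of how such principles might be operationalized, the sketch below encodes the explainability, cultural-awareness, and non-coercion requirements cited above as a simple design-review checklist; the field names and review logic are hypothetical assumptions, not part of any official standard.

```python
# Hypothetical design-review checklist based on the principles cited above.
# Field names and review logic are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class EmotionalAIProfile:
    explains_inferences: bool            # can users see why an emotion was inferred?
    supports_cultural_calibration: bool  # can affective behaviour be localized?
    uses_affect_to_pressure_users: bool  # does it nudge users based on detected affect?

def review(profile: EmotionalAIProfile) -> list[str]:
    """Return the concerns a human ethics review board would need to examine."""
    concerns = []
    if not profile.explains_inferences:
        concerns.append("Emotion inferences are not explainable to the user.")
    if not profile.supports_cultural_calibration:
        concerns.append("No mechanism for cultural calibration of affective behaviour.")
    if profile.uses_affect_to_pressure_users:
        concerns.append("Detected emotions are used to pressure users into action.")
    return concerns

print(review(EmotionalAIProfile(True, False, True)))
```

Such a checklist cannot replace deliberation by a review board, but it shows how abstract ethical requirements can be turned into concrete, auditable questions during development.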
In addition, raising public awareness and providing training can help address these challenges. People need to recognize how AI emotions are produced and what they cannot accomplish in order to remain mindful when working with AI. With ethical education on AI emotional systems, people are more likely to see what AI can and cannot do, reducing reliance on AI for things it cannot handle. This effort should also address how emotions are understood across cultures; for example, ensuring that AI recognizes that the social acceptability of anger varies from one culture to another can prevent AI systems from reproducing such biases (Ho et al., 2021).
Ultimately, communities should increase authentic human interactions and build new systems to support them. Whether through community activities, psychological support, or social welfare initiatives, individuals should be encouraged to reestablish and nurture authentic interpersonal relationships. Governments and institutions may consider investing in “emotionally restorative infrastructure”—safe, inclusive environments (both offline and online) where emotional needs can be met without technological mediation.
Only by solidifying emotional connections among humans can AI emotional systems serve as auxiliary tools rather than substitutes for human emotions.
Ethical reflection and outlook
When discussing the ethical and social impacts of AI emotional systems, it is essential to consider the future structure of human emotions. The advent of AI emotional systems undoubtedly offers human society a new mode of emotional interaction; however, one must question whether this mode will lead to a gradual diminishment of human sensitivity and depth of emotional experience. How can humanity ensure that it maintains its core emotional qualities while pursuing technological advancement? As AI emotional systems become increasingly prevalent and sophisticated, it is crucial for individuals to remember that genuine emotional relationships are founded on understanding, empathy, and growth—qualities that AI currently cannot replicate. By continually reflecting on the role and positioning of AI within the emotional domain, we can better harness this technology to serve humanity rather than allowing ourselves to be controlled and influenced by it.
From a philosophical perspective, this tension echoes what Martha Nussbaum has called the “fragility of goodness”: True emotions involve vulnerability and risk, which AI can simulate but not embody. Our task, then, is not only to regulate emotional AI but also to safeguard the fragile ethical practices that define emotional life.
As emotional AI enters domains such as education, health care, or caregiving, ethical reflection must be incorporated into institutional design by embedding principles of emotional dignity, accountability, and nonsubstitution into code, training, and deployment.
Ultimately, emotional AI confronts us not just with the question "Can AI feel?" but also with the more important one: "How do we, as a species, wish to feel—with machines or despite them?" These reflections form the foundation for reconsidering how AI and humans might ethically coexist in an emotionally symbiotic future, as explored in the next section.
The possibility of AI symbiosis with human emotions
When confronted with various challenges and ethical issues in AI emotional systems, it is essential to explore how to achieve a symbiotic relationship between AI and human emotions. Symbiosis implies that humans and AI can complement each other and develop collaboratively on an emotional level rather than engage in a competitive or substitutive relationship. To realize this symbiosis, it is crucial to clarify the role of AI in the emotional domain and to establish reasonable boundaries to ensure that it assists humans without undermining the essence of human emotions. The concept of symbiosis emphasizes that technology should not replace human emotional experiences but should serve as a tool to enhance them (Abbass et al., 2021). For instance, AI can aid individuals who are lonely or who are experiencing emotional distress by recognizing emotions and providing appropriate emotional support; however, this support should be regarded as temporary and supplementary rather than a definitive solution. Although AI can act as an ‘emotional tool’, it is imperative that humans retain initiative and control.
Achieving emotional symbiosis through technological means
Advancements in technology offer the potential for achieving emotional symbiosis between AI and humans. By refining the design of AI systems, we can ensure that AI consistently respects human emotional boundaries and dignity while providing emotional support (Ulgen, 2022). Such respect would mean that an emotional feedback mechanism within the AI emotional system can evaluate when it is best to scale back emotional engagement. Through this capability, AI could recognize when an emotional exchange exceeds its proper limits and encourage users to reach out to real people. Incorporating ethical evaluation models into AI emotional algorithms, guided by explicit ethical frameworks, can help ensure that AI emotional expressions comply with social and ethical norms. Thus, although machines can interpret human emotions, they should not use those emotions to manipulate humans. AI could also assist users in learning to manage their emotions: it could advise them on how to identify and express their feelings and offer suggestions for improving their human relationships. In this way, AI could contribute to emotional connection rather than simply substitute for it.
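A minimal sketch of this "knowing when to step back" logic is given below; the distress signals, thresholds, and referral wording are illustrative assumptions only, and any real system would require clinically validated criteria and human oversight.

```python
# Hypothetical sketch: deciding when an emotional exchange exceeds the tool's
# proper limits and the user should be referred to real people.
# Signal lists, thresholds, and wording are illustrative assumptions.

CRISIS_SIGNALS = {"suicide", "self-harm", "can't go on", "want to die"}

def should_refer_to_humans(messages: list[str], max_daily_turns: int = 50) -> bool:
    """Refer out when distress signals appear or reliance looks excessive."""
    text = " ".join(messages).lower()
    if any(signal in text for signal in CRISIS_SIGNALS):
        return True                        # beyond the tool's legitimate scope
    return len(messages) > max_daily_turns  # heavy daily reliance on the system

def reply(messages: list[str]) -> str:
    if should_refer_to_humans(messages):
        return ("This seems like something a person you trust, or a professional, "
                "should be part of. I can help you think about whom to contact.")
    return "I'm here to listen. What would you like to talk about?"

print(reply(["I feel like I can't go on anymore"]))
```

The design choice the sketch embodies is the one argued for above: the system's default should be deference to human relationships, not retention of the user's attention.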
Social promotion of emotional symbiosis between AI and humans
A beneficial symbiosis between AI and human emotions also depends on social strategies. Society should ensure both sound policymaking and proper education about AI technologies (Gulson and Webb, 2017), and greater investment in people's personal and emotional skills can help them use AI technology well; the capacity to remain proactive and calm in emotionally charged moments is especially important (Samsonovich et al., 2020). The first consideration is how policies are developed and overseen. By standardizing the ways and settings in which AI emotional robots are used, governments and social institutions can prevent harm from these robots; such standardization requires a review board to oversee the development and use of AI emotional systems and to confirm that they uphold humanity's core values. The second consideration is community support and the cultivation of emotional skills. Encouraging genuine conversation and supporting people in groups can reduce the likelihood that they will lean on AI emotional systems, and authorities and nonprofit organizations can expand mental health care so that people who are struggling receive meaningful emotional help rather than being directed to AI to manage their emotions. It is also important to increase public understanding: by teaching the public about AI emotional systems, we can ensure that people recognize that AI is merely a tool and is not meant to replace real human emotions. If people learn about the risks, they may become more attentive to their own feelings and less likely to let AI control them. Even so, involving AI in human emotions comes with significant hurdles.
Challenges and solutions in symbiosis
How should we prevent AI from providing wrong or misleading messages when it engages with emotions? How can we protect personal privacy and dignity when AI functions as a listener? These concerns demand attention and prompt solutions. For greater transparency and better user control, the functions of AI emotional systems should be easy to understand: users should know how AI recognizes emotions and how it produces emotional replies. Transparency allows users to understand what AI can and cannot do, helping them avoid unrealistic expectations. Reviews that include diverse stakeholders should also be prioritized; the development of AI emotional systems should draw on the views of technical experts, ethicists, sociologists, and members of the public. The UNESCO (2021) recommendation emphasizes the need to include many stakeholders and to apply value-sensitive design principles to AI that interacts with people emotionally, and such frameworks suggest that developers involve end users in ethics assessment, particularly for elder care, mental health, or emotional education. Applying a "tiered-risk" ethical standard, as the EU AI Act does, could help distinguish low-risk affective apps (such as mood tracking) from high-risk emotional simulators (such as those for romance or therapy), so that each class of application is regulated and monitored appropriately.
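The sketch below illustrates what such a tiered-risk classification might look like in practice; the tiers, example markers, and obligations are hypothetical simplifications loosely inspired by the EU AI Act's risk-based logic, not a restatement of the legal text.

```python
# Hypothetical sketch of a tiered-risk scheme for affective applications.
# Tiers, markers, and obligations are illustrative assumptions only.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"   # e.g., mood-tracking journals
    LIMITED = "limited"   # e.g., general-purpose chat companions
    HIGH = "high"         # e.g., therapy-like or romantic-partner simulators

OBLIGATIONS = {
    RiskTier.MINIMAL: ["basic transparency notice"],
    RiskTier.LIMITED: ["transparency notice", "opt-out of affect detection"],
    RiskTier.HIGH: ["independent ethics review", "explicit informed consent",
                    "human escalation path", "post-deployment monitoring"],
}

def classify(use_case: str) -> RiskTier:
    """Map a described use case to a tier; real regimes use far more detailed criteria."""
    high_risk_markers = ("therapy", "romance", "grief", "minor", "elder care")
    text = use_case.lower()
    if any(marker in text for marker in high_risk_markers):
        return RiskTier.HIGH
    if "companion" in text or "chat" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

tier = classify("AI romance partner for isolated adults")
print(tier.value, OBLIGATIONS[tier])
```

The value of such a scheme lies less in the specific cut-offs than in making explicit that emotionally intimate applications carry obligations that mood trackers do not.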
In addition, strengthening emotional education is crucial, as such learning can help us address these challenges within ourselves. As AI becomes intertwined with human emotional life, it is all the more important to continue teaching people about human emotions. Education on emotions in schools, at home, and in the community empowers people so that AI helps rather than harms them.
The future of symbiosis
There is hope that in the coming years, AI emotional systems will advance in both technology and ethics, thereby improving the relationship between humans and AI. Used well, AI can help people better understand, regulate, and express their emotions. The best outcome for AI emotional symbiosis is one in which technology strengthens human agency over emotion and positively contributes to the quality of emotional experiences.
We should, however, treat this idea of emotional augmentation with care. The more immersive affective computing becomes, the greater the risk that genuine emotional intimacy will be replaced by a diminished substitute. Ethical governance must therefore evolve alongside the technology. One promising approach is to set boundaries that bar AI from engaging with certain strong emotions, such as grief, unless the user knowingly consents. For this to happen, ethics, technology, and solid emotional bonds within society must all be fostered. Combined with methods such as tiered risk classification and digital dignity certification, these measures can allow humans to use AI confidently in education, health care, and elder care.
The most important goal for emotional symbiosis should be to protect people’s emotions rather than to simply enhance them.
Future development directions and conclusions on AI emotion systems
The course of AI emotional systems depends greatly on technological advancements. As AI technologies such as computing power, natural language processing, and machine learning progress, they will better simulate emotions. AI will become capable of tracking people's emotions through better recognition and learning and will thus be able to provide more personalized and relevant emotional responses. New technologies have also made it easier for AI to communicate naturally with users, increasing the realism and believability of these emotional simulations. Nevertheless, the ability to imitate emotions realistically creates major ethical problems. As AI appears more emotional, people may become more emotionally involved with it. Those developing and deploying AI applications therefore need to be vigilant to avoid harming human emotions. We must ensure that technological progress in emotional simulation serves humanity rather than creates problems.
The balance between artificial intelligence and emotional ethics
In the future, the growing use of emotional AI will provoke debates about what is right and wrong, and finding a balance between the emotional capabilities of AI and human ethics is crucial. Ethical guidelines can ensure that AI respects individual privacy, remains attentive to users' needs and wants, and refrains from manipulating them in any way. AI should support human feelings rather than attempt to fill emotional roles. To achieve this, AI emotional systems must operate under clear guidelines about which decisions are ethically acceptable, and their design must constrain emotional responses so as to protect sensitive areas of human emotion that AI should not influence. Such approaches will safeguard users' privacy and help prevent negative consequences from emotional communication.
To ensure that such approaches can be applied, future systems may follow an approach such as the "tiered risk" model found in the AI Act proposed by the European Commission (2021). Less sensitive applications (such as mood-monitoring programs) rarely need intense supervision, but applications that may pose serious risks to users, such as therapist AI or dating-relationship AI, must be carefully reviewed by ethicists and must give users the option to clearly consent to or refuse their use. It may also be desirable to ensure that AI companion systems do not engage with sensitive emotional topics such as mourning, sex, or trauma unless such engagement is first approved or takes place under human direction. In addition, creating a global certification mark could help distinguish safe and appropriate emotional AI solutions from those that have not been vetted; just as the CE mark and GDPR compliance signal conformity with safety and data protection requirements, a comparable certification could do so for emotional AI. Governments, enterprises, and scientific groups need to work closely together to develop standards for emotional AI that adhere to the key principles of human society from all angles.
Interdisciplinary cooperation promotes the development of AI emotion
Achieving AI emotional capabilities requires experts from computer science, psychology, philosophy, and ethics to join forces. Psychologists' understanding of human emotions can guide developers towards algorithms that better approximate them, while philosophers and ethicists help set the standards and principles for AI emotional systems, ensuring that these systems are built in line with basic ethical guidelines.
Cooperation among multiple fields allows for better management of the problems related to AI emotional systems. When different disciplines are integrated, we make it possible for AI emotional systems to provide emotional assistance to people while maintaining humans’ natural feelings.
A future vision of the emotional relationship between humans and AI
In the near future, AI systems that express emotions could be woven into many areas of daily life. Emotional robots may play a major role in personal assistance and health care. Nevertheless, we must ensure that humans retain control over their own feelings and judgment when interacting with AI. Rather than confronting or replacing each other, humans and AI should form a complementary partnership. AI emotional systems can be developed to fill gaps when people struggle emotionally, not to stand in for genuine human emotions. Used properly, AI emotional tools can strengthen support for vulnerable members of society and enrich each individual's emotional life.
Nevertheless, to achieve this vision, we must recognize that emotional norms vary across societies. What counts as empathy, intimacy, or appropriate feeling depends heavily on culture. For example, the overtly expressive displays of support that are common in the U.S. may be unwelcome, or even disrespectful, in some East Asian or Middle Eastern cultures.
To address this, cultural adaptation should be incorporated in future AI emotional systems; namely, machine learning components that can adjust the affective tone, response speed, eye contact norms, and physical gestures according to regional, linguistic, or religious values should be implemented (Mantello et al., 2023). This design would help reduce ethical friction and increase user trust, especially in multiethnic societies.
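As a concrete illustration of the cultural adaptation described above, the sketch below shows one hypothetical way interaction parameters (affective tone, response latency, gaze, gesture) could be looked up per cultural profile. The profile names and values are assumptions invented for illustration; a real system, as Mantello et al. (2023) suggest, would need to learn or elicit such values rather than hard-code them.

```python
# Illustrative sketch only: a hypothetical, configuration-driven cultural adaptation layer
# for an emotional AI. Profile keys and parameter values are assumed for illustration.
from dataclasses import dataclass

@dataclass
class CulturalProfile:
    expressiveness: float    # 0 = restrained, 1 = highly expressive affective tone
    response_delay_s: float  # pause before emotionally loaded replies
    eye_contact: str         # "sustained", "intermittent", or "minimal"
    gestures: str            # "expansive" or "reserved"

PROFILES = {
    "us_en": CulturalProfile(expressiveness=0.8, response_delay_s=0.5,
                             eye_contact="sustained", gestures="expansive"),
    "jp_ja": CulturalProfile(expressiveness=0.3, response_delay_s=1.5,
                             eye_contact="intermittent", gestures="reserved"),
}

def adapt(region_language: str) -> CulturalProfile:
    """Return the regional profile, falling back to a neutral default when none exists."""
    return PROFILES.get(region_language,
                        CulturalProfile(0.5, 1.0, "intermittent", "reserved"))

print(adapt("jp_ja"))
```

The point of the design is not the specific values but the separation of concerns: emotional behaviour is parameterized so that localized ethical calibration can occur without rebuilding the underlying system.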
Furthermore, international guidelines such as the UNESCO (2021) Recommendation on the ethics of artificial intelligence emphasize the importance of cultural pluralism in AI deployment. AI systems that simulate emotions should not impose a single emotional model globally but should instead allow for localized ethical calibration.
In this way, the future emotional relationship between humans and AI can become not only more effective but also more equitable, allowing for diversity in how emotional well-being is understood and supported across different communities.
Reflections on the structure of human emotions and ethical challenges of technology
As AI emotional systems continue to develop, we must take time to consider how human and AI emotions are related. Human feelings are varied and can take many different forms; by contrast, any emotions that AI can generate are confined within the limits of its programming and training data. In this context, the ethical challenges posed by technology become particularly significant. We need to consider the boundaries of AI emotion simulation: how should people set clear rules for the emotions that AI expresses? Such reflections not only facilitate the rational application of technology but also help protect the emotional integrity of humans in the face of technological advancements.
Research summary
This research examines how AI can work with human emotions and identifies its potential limitations. We consider several issues in detail, including the characteristics of AI emotions, the technical approaches involved, the role of AI in providing emotional support to humans, and the moral and social impact of using AI in this way. Although AI emotional systems possess significant potential to imitate and support human emotions, they remain instrumental in nature and lack the capacity to truly comprehend and experience the depth of human emotions. Nevertheless, with appropriate design and application, AI emotion systems can offer valuable emotional support to individuals, particularly in addressing issues related to loneliness and emotional distress. It is crucial, however, that the simulation of AI emotions occur within the framework of ethical and social norms to prevent adverse effects on human emotional structures and societal values.
Suggestions for future research
Future research should focus more on the deep integration of AI emotional systems with human society. This research includes fostering interdisciplinary collaboration, developing ethical standards, and exploring the emotional boundaries of AI. Concurrently, as technology continues to advance, prioritizing the enhancement of human emotional experiences through AI rather than their diminishment is essential. Achieving this goal requires the collective efforts of technology developers, policymakers, ethicists, and various sectors of society to ensure that the application of AI in emotional contexts remains aligned with the core values of humanity.
First, longitudinal research should be conducted on the psychological and social impact of prolonged human–AI emotional interactions. For instance, studies could examine how the daily use of AI emotional companions influences children’s emotional development, elderly people’s mental health, or interpersonal trust among young adults over multiple years.
Second, the establishment of “AI emotional ethics sandboxes”, namely, regulated test environments in which emotional AI applications can be tested under close academic and policy supervision, can offer valuable insights without risking widespread harm. These sandboxes should involve multiple stakeholders, including behavioral scientists, ethicists, legal scholars, and members of the affected public.
Third, research should explore how public education, digital literacy, and affective awareness training can mitigate the risks of overdependence or emotional misjudgement. Pilot programs in schools or universities may test the effectiveness of emotional AI as both an assistive tool and a boundary training tool.
Fourth, international cooperation mechanisms are needed to govern the cross-border deployment of AI emotional systems. Future studies may examine how legal, cultural, and regulatory frameworks interact, for example, whether an AI chatbot trained in one country might violate emotional norms or ethical expectations in another.
Finally, comparative policy analyses should track the effects of ethical rules. By comparing jurisdictions with and without emotional AI oversight, such as the EU and Southeast Asia, we can identify better ways of governing AI ethically. These lines of research would improve emotional AI design and policy and deepen our understanding of what emotions mean in a digital world.
Final thoughts
Even though AI emotional systems hold many prospects, they also carry major uncertainties. Now that humans share their world with AI, both technological literacy and the understanding of emotions matter deeply. AI emotional systems should be governed by both technology and ethics to ensure that they help humanity rather than cause emotional distress.
This study shows that the influence of AI on emotions carries a central risk: the gradual erosion of genuine expressions of feeling, acts of empathy, and moral standards.
From Kant’s insistence on autonomy to Buddhism’s deconstruction of affective illusion to Nietzsche’s call for creative becoming, we are reminded that true emotion is not convenience but commitment—not simulation but struggle.
Hence, the ethical design of AI emotional systems must begin not with how well machines can mimic feelings but how responsibly they shape our expectations of what it means to feel. Emotional life is not an algorithm to optimize but rather a human terrain to protect.
We hope that the discussions presented in this study will offer valuable insights and reference for the future development and application of AI emotional systems, ensuring that technological advancements ultimately contribute positively to the enrichment and well-being of the human emotional landscape.
Data availability
No datasets were generated or analysed during the current study.
References
Abbass H, Petraki E, Hussein A, McCall F, Elsawah S (2021) A model of symbiomemesis: machine education and communication as pillars for human-autonomy symbiosis. Philos Trans R Soc A Math Phys Eng Sci 379(2207):20200364. https://doi.org/10.1098/rsta.2020.0364
Al Lily AE, Ismail AF, Abunaser FM, Al-Lami F, Abdullatif AKA (2023) ChatGPT and the rise of semi-humans. Humanit Soc Sci Commun 10(1):626. https://doi.org/10.1057/s41599-023-02154-3
Astell A (2013) Technology and fun for a happy old age. In: Sixsmith A, Gutman G (eds) Technologies for active aging. Springer, Boston, MA, pp 169–187. https://doi.org/10.1007/978-1-4419-8348-0_10
Blount J, Gelfond M, Balduccini M (2015) A theory of intentions for intelligent agents. In: Calimeri F, Ianni G, Truszczynski M (eds) Logic programming and nonmonotonic reasoning. Springer International Publishing, Cham, pp 134–142. https://doi.org/10.1007/978-3-319-23264-5_12
Breazeal C (2022) Emotion, social robots, and a new human-robot relationship. In: Proceedings of the genetic and evolutionary computation conference. Association for Computing Machinery, Boston, Massachusetts, 2. https://doi.org/10.1145/3512290.3543633
Catucci G, Abbattista F, Gadaleta RC, Guaccero D, Semeraro G (2006) Empathy: a computational framework for emotion generation. In: Abraham A, De Baets B, Köppen M, Nickolay B (eds) Applied soft computing technologies: the challenge of complexity. Springer, Berlin, Heidelberg, pp 265–277. https://doi.org/10.1007/3-540-31662-0_21
Chatterjee J, Dethlefs N (2023) This new conversational AI model can be your friend, philosopher, and guide … and even your worst enemy. Patterns 4(1):100676. https://doi.org/10.1016/j.patter.2022.100676
Conze E (1959) Buddhist scriptures. Penguin Books, Baltimore
Coseru C (2013) Reason and experience in Buddhist epistemology. In: Emmanuel SM (ed) A companion to Buddhist philosophy. Wiley, New York, pp 241–255. https://doi.org/10.1002/9781118324004.CH15
Damm O, Becker-Asano C, Lohse M, Hegel F, Wrede B (2013) Applications for emotional robots. In: Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction. Association for Computing Machinery, Bielefeld, Germany, 495–496. https://doi.org/10.1145/2559636.2560021
European Commission (2021) Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (COM/2021/206 final), Brussels. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206. Accessed 21 April 2021
Fabry RE, Alfano M (2024) The affective scaffolding of grief in the digital age: the case of deathbots. Topoi 43(3):757–769. https://doi.org/10.1007/s11245-023-09995-2
Fallon F (2020) Dennett on consciousness: realism without the hysterics. Topoi 39(1):35–44. https://doi.org/10.1007/S11245-017-9502-8
Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14(3):349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Grimes GM, Schuetzler RM, Giboney JS (2021) Mental models and expectation violations in conversational AI interactions. Decis Support Syst 144: 113515. https://doi.org/10.1016/J.DSS.2021.113515
Grodniewicz JP, Hohol M (2024) Therapeutic chatbots as cognitive-affective artifacts. Topoi 43(3):795–807. https://doi.org/10.1007/s11245-024-10018-x
Gulson KN, Webb PT (2017) Mapping an emergent field of ‘computational education policy’: policy rationalities, prediction and data in the age of Artificial Intelligence. Res Educ 98(1):14–26. https://doi.org/10.1177/0034523717723385
Han-Pile B (2018) Nietzsche and the affirmation of life. In: The Nietzschean mind. Routledge, London, pp 448–468. https://doi.org/10.4324/9781315146317-29
Hauerwas S (2022) Nietzsche and transhumanism: a reassessment. Agonist 16(2):67–81. https://doi.org/10.33182/agon.v16i2.2800
Ho MT, Ho MT (2025) Three tragedies that shape human life in age of AI and their antidotes. AI Soc. https://doi.org/10.1007/s00146-025-02316-8
Ho MT, Mantello P, Nguyen HKT, Vuong QH (2021) Affective computing scholarship and the rise of China: a view from 25 years of bibliometric data. Humanit Soc Sci Commun 8(1):282. https://doi.org/10.1057/s41599-021-00959-8
Ienca M (2023) On artificial intelligence and manipulation. Topoi 42(3):833–842. https://doi.org/10.1007/s11245-023-09940-3
Jones CR, Bergen BK (2025) Review: Large language models pass the Turing Test. SuperIntelligence - Robotics - Safety & Alignment 2(2). https://doi.org/10.70777/si.v2i2.14697
Kant I (2000) Critique of the power of judgment. Cambridge University Press, Cambridge
Kant I (2012) Groundwork of the metaphysics of morals. Cambridge University Press, Cambridge
Kim D (2023) Artificial intelligence and gender: virtuality of relationships and the embodiment of virtuality in her. J Mod Engl Drama 36(1):37–59. https://doi.org/10.29163/jmed.2023.4.36.1.37
Lahno B (2001) On the emotional character of trust. Ethical Theory Moral Pract 4(2):171–189. https://doi.org/10.1023/A:1011425102875
Lazanyi K (2016) Investing in social support—Robots as perfect partners? In: 2016 IEEE 14th international symposium on intelligent systems and informatics (SISY). IEEE, Subotica, Serbia, pp 25–30. https://doi.org/10.1109/SISY.2016.7601513
Mantello P, Ho MT, Nguyen MH, Vuong QH (2023) Machines that feel: behavioral determinants of attitude towards affect recognition technology—upgrading technology acceptance theory with the mindsponge model. Humanit Soc Sci Commun 10(1):430. https://doi.org/10.1057/s41599-023-01837-1
Metzler TA, Lewis LM, Pope LC (2016) Could robots become authentic companions in nursing care?. Nurs Philos 17(1):36–48. https://doi.org/10.1111/NUP.12101
Middleton M (2022) The IEEE Global Initiative on Ethics of Extended Reality (XR) report: business, finance, and economics. IEEE, pp 1–30. https://ieeexplore.ieee.org/document/9740586
Mitchell RLC, Xu Y (2015) What is the value of embedding artificial emotional prosody in human–computer interactions? Implications for theory and design in psychological science. Front Psychol 6:1750. https://doi.org/10.3389/FPSYG.2015.01750
Nietzsche FW (2004) Thus spoke Zarathustra (selections)/Also sprach Zarathustra (Auswahl): a dual-language book. Courier Corporation, Massachusetts
Nietzsche FW (1899) Thus spoke Zarathustra: a book for all and none. T.F. Unwin, London
Paiva A (2018) Robots that listen to people’s hearts: the role of emotions in the communication between humans and social robots. In: Proceedings of the 26th conference on user modeling, adaptation and personalization. Association for Computing Machinery, Singapore, Singapore, 175. https://doi.org/10.1145/3209219.3209268
Paiva A, Mascarenhas S, Petisca S, Correia F, Alves-Oliveira P (2018) Towards more humane machines: creating emotional social robots. In: New interdisciplinary landscapes in morality and emotion. Routledge, London, pp 125–139. https://doi.org/10.4324/9781315143897-10
Rodriguez LM, DiBello AM, Øverup CS, Neighbors C (2015) The price of distrust: trust, anxious attachment, jealousy, and partner abuse. Partn Abuse 6(3):298–319. https://doi.org/10.1891/1946-6560.6.3.298
Saint-Aim S, Le-Pvdic B, Duhaut D (2009) iGrace–emotional computational model for emi companion robot. In: Kulyukin V (ed) Advances in human-robot interaction. InTech, Rijeka, pp 51–76. https://doi.org/10.5772/6826
Samsonovich AV (2013) Modeling human emotional intelligence in virtual agents. Topics in Integrated Cognition. 2013 AAAI Fall Symposia, Arlington, Virginia, USA, pp 71–78. https://cdn.aaai.org/ocs/7600/7600-32592-1-PB.pdf
Samsonovich AV, Chubarov AA, Tikhomirova DV, Eidln AA (2020) Toward a general believable model of human-analogous intelligent socially emotional behavior. In: Goertzel B, Panov AI, Potapov A, Yampolskiy R (eds) Artificial general intelligence. Springer International Publishing, Cham, pp 301–305. https://doi.org/10.1007/978-3-030-52152-3_31
Searle JR (1992) The rediscovery of the mind. MIT Press, Cambridge
Sleep L, Ngendakurio JB (2022) “Throw your arms around me”: explorations of the importance of social connectively to people’s wellbeing. J Soc Incl 13(2):1–3. https://doi.org/10.36251/josi301
Smetham GP (2011) The quantum illusion-like nature of ‘reality’ & the Buddhist doctrine of ‘two levels of reality’ Part I: deconstructing reality. Sci GOD J 2(6). https://scigod.com/index.php/sgj/article/view/127/148
Solms M (2021) The hidden spring: a journey to the source of consciousness. W. W. Norton & Company, New York
Samuel JL, Schmiljun A (2023) What dangers lurk in the development of emotionally competent artificial intelligence, especially regarding the trend towards sex robots? A review of Catrin Misselhorn’s most recent book. AI & Soc 38:2717–2721. https://doi.org/10.1007/s00146-021-01261-6
Theophilus NA (2016) A philosophical look at the egocentric interpretation of self-transcendence in man in the light of nietzsche. J Philos Cult Relig 23:1–12
Tubadji A, Huang H (2023) Emotion, cultural valuation of being human and AI services. IEEE Trans Eng Manag 71:7257–7269. https://doi.org/10.1109/tem.2023.3246930
Ulgen O (2022) AI and the crisis of the self. In: The frontlines of artificial intelligence ethics. Routledge, London, pp 9–33. https://doi.org/10.4324/9781003030928-3
UNESCO (2021) Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000381137. Accessed 23 November 2021
Wilson EE (2008) Kantian autonomy and the moral self. Rev Metaphys 62(2):355–381
Acknowledgements
I would like to express my gratitude to my master's thesis supervisor, Assistant Professor Dr. Manzoor Malik; my doctoral supervisor, Dr. John Giordano; and my philosophy course instructor, Dr. Kajornpat Tangyin, of the Graduate Programs of Philosophy and Religion, Graduate School of Human Sciences, Assumption University of Thailand, as well as my father, Associate Professor Dr. Zhang Junping of Jiujiang University, for their encouragement and support of this research. Finally, I would like to thank all the scholars and friends who have supported this work. This research was primarily supported by the author's personal funds and resources.
Author information
Contributions
This research paper was independently completed by Zhiwu Zhang, and all contributions were made by Zhiwu Zhang.
Ethics declarations
Competing interests
The author declares no competing interests.
Ethical approval
Ethical approval was not required as the study did not involve human participants.
Informed consent
Informed consent was not required as the study did not involve human participants.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.