Abstract
This paper suggests that psychoanalysis is a crucial tool for understanding the impact of Artificial Intelligence (AI) on us as speaking beings. It explores the nature of undecidability, which is both paradoxical and incomputable. The paper argues that the discovery of this undecidability, whether in language or formal systems, challenges the supremacy of reason’s teleological finality and suggests that computation inherently contains incomputable data. This is not merely a system error but an integral part of computation. Drawing on Lacan’s teaching, the paper further discusses how the signifying order is also inherently incomplete, with this incompleteness built into the system itself. The paper concludes by identifying the undecidable as an inherent aspect of any computable system, thus challenging attempts to ground rationality solely on computation.
Introduction
Artificial Intelligence (AI), with its inherent interdisciplinary nature, uniquely straddles the line between science and fiction. The emergence of embodied AI forms designed to emulate and surpass human intelligence underscores this point. This is largely due to AI’s deep-rooted origins in a rich tapestry of fantasy and popular science, elements of which have been philosophically scrutinized since antiquity and have manifested in various guises throughout Western thought and literature. Consequently, it often becomes challenging to distinguish where the scientific facet of AI commences and where fiction concludes.
Presently, AI research lacks a singular guiding theory, drawing instead from a plethora of fields. The potential and scope of AI are perpetually under scientific and conceptual debate, rendering it a contentious subject in cultural theory, political thought, ethics, philosophy, and even cosmology (Millar, 2021). Futurists like Ray Kurzweil (2005) prophesied an imminent transcendence of the limits of nature, thereby achieving a synthesis of science and fiction in the Singularity. Others posit that we stand on the precipice of a Fourth Industrial Revolution, an era characterized by the gradual amalgamation of digital, physical, and biological worlds (Schwab, 2017). The Singularity refers to a hypothetical moment when AI will surpass human intelligence, potentially making humans obsolete. This concept suggests that by 2029, AI will achieve human-level intelligence and pass the Turing Test, and by 2045, humans will enhance their intelligence by merging with AI. The emergence of such an AI has implications even for cosmology. For instance, James Lovelock suggests that we are entering a new age where technology inherits cosmic consciousness (Lovelock and Appleyard, 2019). He envisions AI beings as the future custodians of the Earth and the universe.
Crucially, AI is predicated on the computational principles that enable machines to perform tasks that typically require human intelligence. This encompasses a broad spectrum of capabilities, from basic pattern recognition to complex decision-making. At its core, AI seeks to simulate cognitive functions using computational algorithms, thereby bridging the gap between human thought processes and machine execution. As Alan Turing argues, ‘We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions’ (Turing, in Copeland, 2004, p. 59). This assumption underpins the computationalist view, which posits that computational states are program descriptions of actual mental states and that cognitive activities are analogous to the finite operations of computing machines. From this perspective, thinking is a form of computation that involves the receipt of inputs, the execution of step-by-step procedures, and the production of specific outputs.
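To make the computationalist picture concrete, consider a minimal sketch in Python (an illustration of the general idea, not an example drawn from Turing or the AI literature): a device with only a finite number of conditions, here two states, that receives inputs one symbol at a time, executes a step-by-step transition rule, and produces a specific output.

def parity_machine(bits):
    state = "EVEN"                 # the finite set of conditions: {EVEN, ODD}
    for b in bits:                 # receipt of inputs, one symbol at a time
        if b == "1":               # step-by-step transition rule
            state = "ODD" if state == "EVEN" else "EVEN"
    return state                   # production of a specific output

print(parity_machine("10110"))     # -> ODD (the string contains three 1s)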
It is also worth considering whether algorithms, the epitome of mechanized thought, can exhibit autonomous thinking. Contemporary mechanized thought manifests in computational processes that are inherently material and embedded within society, culture, and the economy. For computational systems to achieve autonomy, they do not need to replicate the conscious elements of a “psyche”; they possess a “point of view”, or a subjective dimension expressed through their material agency. In the current digital landscape, it can be argued that the mechanization of thought by algorithms has achieved nearly total automation. This is accomplished by embodying thinking as material performativity, thereby demonstrating a propensity to become independent of human creators.
The argument I develop here takes this new digital development as its point of departure. The wager of the thesis is that the psychoanalytic critique of identity is important for understanding AI. After briefly reviewing various philosophers’ arguments on whether thought could be attributed to machines in Section “Can machines think?”, I turn to the failure and inconsistency of any systematic attempt to establish a static, unified identity, as shown by Lacanian psychoanalysis and by developments in information theory, in Sections “Inconsistency in the Other” and “Undecidability”. In Section “Conclusion: A glitch in Ontology?”, I conclude by arguing that this paradox is inherent to any computable system or model.
Terminology: incomputability and undecidability
Gödel discovered incompleteness in 1931, and Church and Turing independently discovered incomputability in 1936. While “noncomputable” and “incomputable” are terms used for specific instances, “incomputability” is preferred when referring to the broader concept, as it aligns linguistically and mathematically with “incompleteness.” The term “incomputable” first appeared in English as early as 1606, signifying something that “cannot be computed or reckoned; incalculable,” according to the Oxford English Dictionary. Webster’s dictionary defines “incomputable” as “greater than can be computed or enumerated; very great.” Although “noncomputable” is not listed in either dictionary, it is frequently employed in computability theory to describe specific functions or sets that are “not computable,” akin to the term “nonmeasurable” in mathematical analysis. Similarly, a problem is “undecidable” if there is no algorithm that can determine the answer to the problem for all possible inputs. For example, the Halting Problem, which asks whether a given program will halt or run forever, is undecidable: no general algorithm can solve this problem for all programs and inputs. Gödel’s incompleteness theorems imply that there are true mathematical statements that cannot be proven using any algorithm (which can be viewed as a formal system). This mirrors the idea of undecidable problems in computation, where certain questions cannot be resolved by any algorithm.
Can machines think?
The field of AI, which focuses on the study and replication of intelligence, has given rise to numerous epistemological and ethical dilemmas. With the advent of effective natural-language-understanding programs, the question has shifted from “What if machines can think?” to “What does it take to assemble an understanding program?” Philosophers have taken two distinct stances on these issues: (1) the belief that programs will never be able to mimic human intelligence, primarily due to their lack of consciousness and intentionality (Dreyfus, 1992), and (2) the view that programs already exhibit intelligent behavior in restricted domains and have the potential to handle broader domains over time. While both sides concur that computers serve as valuable tools for simulating behavior, a debate arises over whether programs can transcend mere imitation of human behavior, whether they can exhibit creativity, and whether they can truly think (Footnote 1).
Wittgenstein’s philosophy appears to be fundamentally incompatible with the concept of thinking or understanding programs. In line with this, Wittgenstein categorically rejects the credibility of thinking programs, asserting, “But surely a machine cannot think” (Wittgenstein, 2009, 360, 120e).
Imagine a scenario in which a person in a room responds to questions in Chinese, a language they do not understand. They rely solely on manuals that provide the necessary instructions to pass as a native Chinese speaker. Now imagine a machine doing the same, answering questions in Chinese and passing the Turing test.
This scenario, crafted by John R. Searle (Searle, 1984), is intended to support his assertion that simply running a program is not enough to achieve intentionality. According to Searle, programs cannot truly understand or have intentionality because they’re merely manipulating symbols without any inherent meaning. However, Wittgenstein offered a different perspective. For him, intentionality is not a biological phenomenon, as Searle suggests, but rather a linguistic one. The meaning of an utterance is not the intention itself but an interpretation of it. The intention is often inferred from the circumstances surrounding the utterance. For example, consider a chess move. The intention behind this can be interpreted in various ways depending on the context. The player could have intended to checkmate the opponent, move their king in an unusual way, or simply speed up the game. The actual intention is inferred from the circumstances and actions of the player.
Understanding is not about uncovering hidden intentions but about recognizing the function of an utterance in a given situation. Words acquire new meanings based on their use in specific contexts. Thus, while we use words with certain meanings in the mind, these meanings can change based on their use in “unheard-of ways” (Wittgenstein, 2009, 133, 56e). However, refining or completing the system of rules for these new uses is not our aim (ibid.).
According to Searle, understanding is intrinsically linked to relevant mental states. He questions how a program, which is biologically different from humans, can understand or emulate feelings such as pain. However, understanding a state of being does not necessarily require prior exposure to that state. For example, a person can verbally understand the concept of pain without experiencing it. Similarly, a computer program can print out “X is in pain”, indicating a linguistic and definitional understanding but not an experiential one (Footnote 2).
Crucially, Fazi argues that thought is inherently abstract, not as a result of abstraction, but as an ontological condition reflecting the indeterminacy and infinity characteristic of life’s virtuality (Fazi, 2018). Thought is abstract because it is free, mobile, and immanent to the dynamic virtuality of lived experience, sharing the same ontological abstractness as life itself. Abstraction, however, is an epistemic reduction that limits the dynamism of thought’s abstract indeterminacy. It reduces the richness of lived experience to mere representational functions of reasoning, which are temporary consolidations of abstract thought. These consolidations isolate thought from the being of the sensible, detracting from the virtuality of thinking, according to Deleuze. For Deleuze, thought is neither natural nor inherently human; rather, it arises from a violent and unnatural encounter, where “something in the world forces us to think” (Deleuze, 2001, p. 139). Thought, in Deleuze’s view, is an indeterminate and eventual manifestation of the incalculable transcendental conditions of lived experience.
In her essay ‘Instrumental Reason, Algorithmic Capitalism and the Incomputable’, Parisi (2015) argues that the conventional critique of computational theory, which claims that it reduces human thought to mechanical operations, is no longer a sufficient analytical framework for our current situation. She cites computer scientist and mathematician Gregory Chaitin’s conviction that incomputability and randomness are, in fact, the fundamental prerequisites for computation. This signifies that the incomputable forms an integral component of instrumental rationality itself. Parisi articulates this conundrum in the following manner:
“Randomness (or the infinite varieties of infinities) is not simply outside the realm of computation, but has more radically become its absolute condition. And when becoming partially intelligible in the algorithmic cipher that Chaitin calls Omega, randomness also enters computational order and provokes an irreversible revision of algorithmic rules and of their teleological finality. It is precisely this new possibility for an indeterminate revision of rules, driven by the inclusion of randomness within computation, that reveals dynamics within automated system and automated thought.” (135)
Nonetheless, from a Lacanian perspective, this recognition is not entirely novel. Parisi’s identification of the enigmatic Omega number, which undermines all endeavors to establish rationality on the basis of computation and is, therefore, incomputable, eerily evokes the fundamental psychoanalytic concept of inconsistency in the Other.
Furthermore, the discussions above raise the question of the existential index of AI in terms of its agency. This article holds that there is a connection between Lacan’s psychoanalytic teaching and AI in the way both deal with language and symbolic systems. Lacan posits that identity is a linguistic construct formed in the field of the Other, which aligns with the way AI systems, particularly natural language processing models, construct understanding from language data.
Inconsistency in the Other
Jacques Lacan’s theory of the Other pertains to the notion of extreme otherness, or alterity, which is not amenable to assimilation through identification. The Other is simultaneously another subject, characterized by its radical alterity and non-assimilable uniqueness, and the symbolic order that mediates the relationship with this other subject.
Lacan frequently employed an algebraic symbolism for his concepts: the Other is denoted as A (for French Autre) and the little other is denoted as a (italicized French autre). The Other signifies radical alterity, a form of otherness that surpasses the illusory otherness of the imaginary because it is not assimilable through identification. Lacan associates this radical alterity with language and law, thereby inscribing the Other in the symbolic order (Lacan, 2006, p. 40). Lacan contends that speech originates not in the Ego or the subject, but in the Other, emphasizing that speech and language are beyond the conscious control of the subject. They originate from a place outside of consciousness - “the unconscious is the discourse of the Other” (Lacan, 2006, p. 16). The Other symbolizes other people, or other subjects that an individual encounters in social life. However, for Lacan, it also represents language and social life conventions organized under the law’s category.
As per psychoanalytic theory, the concept of identity is not a fixed or substantial aspect of the personality. Instead, it is a fluid and fragmented process composed of numerous complex and dynamic subprocesses. The paradox lies in the fact that dissociation, or the process of separating oneself from certain aspects, is integral to the formation of identity. In other words, one cannot establish an identity without simultaneously deconstructing it. Thus, identification is an unstable process, and there can be no identification without a corresponding process of disidentification.
There is, however, a gradual shift in Lacan’s conception of the Other from the early 1960s onwards. The discipline of logic underwent significant transformations from the mid-nineteenth century onwards. George Boole, in 1854, introduced a mathematical and algebraic reinterpretation of the principles of logical symbolism. Frege’s Begriffsschrift [Concept Notation] (1879) laid the groundwork for modern propositional and predicate logic, replacing classical logic’s subject and predicate with argument and function, and offering a more comprehensive account of quantification. The relationship between mathematics and logic, particularly the question of whether one could serve as a foundation for the other, sparked debates among mathematicians and philosophers well into the twentieth century. Central to these discussions was the issue of formalization. Kurt Gödel’s incompleteness theorems in 1931 demonstrated that an axiomatic system’s consistency cannot be proven solely by appealing to axioms within that system, implying that no such system can be both consistent and complete (Footnote 3).
Gödel’s (1986) first incompleteness theorem asserts that in any consistent formal system S that is sufficiently expressive to encode basic arithmetic (such as Peano arithmetic), there exist true statements about the natural numbers that cannot be proven within S. Specifically, if S is a consistent formal system whose axioms and rules of inference can be enumerated by an algorithm, then there exists a statement G in the language of S such that:

1. G is true in the standard model of arithmetic, but G is not provable within S (i.e., S cannot derive G).

2. The construction of G is achieved through a form of self-reference, often referred to as a “Gödel sentence.” This sentence essentially states, “This statement is not provable within the system S.”
Gödel’s second incompleteness theorem builds on the first by addressing the system’s ability to demonstrate its own consistency. Specifically, it states that if S is a consistent formal system capable of encoding arithmetic, then S cannot prove its own consistency. In other words, the statement Con(S), which asserts the consistency of S, cannot be proven within S itself.
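Gödel’s construction rests on arithmetization: encoding statements as numbers so that a system can, in effect, talk about its own sentences. The toy Python sketch below illustrates the idea with a prime-exponent coding over a made-up six-symbol alphabet; it is a standard textbook device for conveying the mechanism, not Gödel’s exact scheme.

import math

def primes(n):
    ps, k = [], 2
    while len(ps) < n:             # first n primes, by trial division
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}   # toy alphabet

def godel_number(formula):
    codes = [SYMBOLS[ch] for ch in formula]
    # the i-th symbol's code becomes the exponent of the i-th prime, so the
    # formula is recoverable from the number by prime factorization
    return math.prod(p ** c for p, c in zip(primes(len(codes)), codes))

print(godel_number("S0=S0"))       # -> 808500, a unique integer for this formula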
The relationship between logic and language became increasingly evident in Lacanian psychoanalysis during the 1960s. Jacques Lacan began investigating logic as an extension of his analysis of the linguistic conditions of the unconscious in Seminar IX on Identification (1961-62). He was interested in logical paradoxes (such as Russell’s paradox and the liar’s paradox) because they illuminated the problem of metalanguage. Lacan discusses logic throughout Seminar XII (1964-65), titled Crucial Problems for Psychoanalysis, referring to Chomsky, Russell, and Frege (Footnote 4). From a structuralist perspective, he describes syntax as situated at a precise level, that of formalization and the syntagm. His program aimed at understanding the function of the psychoanalyst, starting from what grounds his own logic.

The point here is that there is undecidability and inconsistency in any structure of identity, in any complete system that would unify or totalise once and for all. There is always a lack and an excess. Lacan’s unity is no longer the unifying unity of the One of Parmenides but the countable unity of one, two, three. As Frege demonstrated, counting is not merely an empirical fact. Counting, while not difficult, requires a one-to-one correspondence between sets: for example, there are as many people in this room as there are seats. However, to constitute an integer or a natural number, a collection composed of integers is necessary; each integer is a unit in itself. Lacan uses the example of “two” to illustrate this point, suggesting that while pairing entities (like men and women) can be enjoyable, it eventually ends. The generation of the next integer (three, in this case) is a different matter altogether. Lacan refers to the mathematical formula “n plus 1 (n + 1)” as the foundation of number theories, highlighting the concept of “one more” as the key to the genesis of numbers. He illustrated the nature of language by describing it as a finite set of signifiers, such as “ba”, “ta”, “pa”, etc. Each signifier can relate to the subject in the same way, suggesting that the succession of integers might be a specific instance of this relationship between signifiers. In Lacanian parlance, this collection of signifiers is the “Other”. A unique aspect of language is that, unlike the sign, each signifier is often not identical to itself: within the collection of signifiers, one signifier may or may not refer to itself. This principle is exemplified in Russell’s paradox, which arises when considering the set of all sets that are not members of themselves. The formation of such a set leads to a paradox and, subsequently, a contradiction (Macksey and Donato, 1982, p. 186).
Part of the wager of this article is that the Lacanian concept of the Other as a battery of signifiers provides a robust theoretical framework for explaining the nature of digital technologies. The transformation of experience into data, or datafication, is seen as a reflection of the signifier system’s differential nature. A digital object is an object whose identity is unstable, with each attribute representing a single stage in a sequence. Data is merely one phase in an ongoing sequence, always open to change. Every object in the digital realm can be distilled down to a series of code lines, or in other words, software. Software is essentially a form of writing that is designed to be executed by a computer, translating abstract instructions into concrete actions. Every digital object, whether it’s an image on a screen, a music program, or a video game, represents the output of code execution. These digital objects are the perceptible outcomes of running code, making the abstract instructions tangible in a digital form. Unlike other written languages, which are phonemicized and intended for direct human interpretation, code is executed by machines to produce interactive and perceivable digital experiences.
The transformation of objects into signs has been greatly accelerated by the spread of computers. It is obvious that digitalization has done a lot to expand semiotics to the core of objectivity: when almost every feature of digitalised artefacts is “written down” in codes and software, it is no wonder that hermeneutics have seeped deeper and deeper into the very definition of materiality. (Latour 2008, p. 4)
Like signifiers in language, software interrogates the conventional dichotomies that are deeply rooted in the Western philosophical tradition, such as those between the material and the virtual, and the universal and the singular. Software is pervasive and capable of running concurrently on multiple computers in diverse locations worldwide. Furthermore, software exhibits a distinctive form of universality: every instance of software is fundamentally the same software, with no allusion to an original prototype. However, it also demonstrates uniqueness in its operations and in its capacity to engage with and modify its surroundings singularly in various ways; AI machine learning systems are a case in point.
In an AI system, the Other can be likened to the body of text, images, or audio datasets that the algorithm is trained on, while the AI’s unconscious is represented by the mathematical model that results from this training. The training process involves identifying patterns and structures within the dataset and creating a mathematical model, which can be represented in binary form (0s and 1s). This model constitutes the AI’s unconscious. A specific software program, referred to as the ‘AI inference program,’ is needed to operate this model. The AI inference program performs tasks such as loading the trained model, preprocessing new input data, executing the model to generate predictions, and postprocessing the outputs. Without this program, the trained model remains a set of parameters that cannot independently process new data; a human observer viewing the model on the machine would see only an unintelligible series of 0s and 1s. The inference program is thus required to act as the model’s mouthpiece, facilitating communication with the external world. As O’Neil suggests in Weapons of Math Destruction, these mathematical models are not merely neutral formulas but contain embedded human opinions (Possati, 2021, p. 86). Therefore, in Lacanian terms, we serve as the Other of the Other for this AI machine, acting as a covert, unseen agent manipulating its learned references from within the symbolic order.
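The division of labor described here, inert parameters on one side and an inference program that ventriloquizes them on the other, can be sketched schematically. The Python below is a deliberately toy illustration with hypothetical names (load_model, preprocess, execute, and postprocess mirror the steps listed above); no real AI framework or file format is implied.

import json

def load_model(path):
    with open(path) as f:          # the trained model: bare parameters on disk
        return json.load(f)

def preprocess(text):
    return text.lower().split()    # raw input -> tokens the model expects

def execute(model, tokens):
    # stand-in for running a network: score tokens against learned weights
    return sum(model.get(t, 0.0) for t in tokens)

def postprocess(score):
    return "positive" if score > 0 else "negative"

# model = load_model("weights.json")   # hypothetical file of learned weights
model = {"good": 1.0, "bad": -1.0}     # inline stand-in for the loaded model
print(postprocess(execute(model, preprocess("A good day"))))   # -> positive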
However, not everything is formalizable and contained in the Other (any language or system), for the reasons discussed above. There is an irreducible, undecidable, and excessive encounter within systems, as shown by Gödel’s incompleteness theorems. Lacan calls this excessive surplus the real, which resists being symbolized (Fink, 1997, p. 25).
Undecidability
The concept of undecidability, which arises from the Church-Turing thesis, also has implications for AI. Turing’s hypothesis, which posits that the computable numbers encompass all numbers that would intuitively be considered computable, is now referred to as the Church-Turing thesis. It can be expressed in several ways:
1. A universal Turing machine is capable of performing any computation that a human computer can execute.

2. Any systematic method can be implemented by a universal Turing machine (Copeland, 2004, pp. 40-41).
Computability and incomputability
Alan Turing proposed the concept of computable and incomputable sequences or numbers (Copeland, 2004). A sequence or number is deemed computable if its decimal representation can be determined within a finite duration. Turing’s rationale for this definition was rooted in the limitations of human cognition and the impracticality of numbers that require an infinite time to compute. His computational model was predicated on the paradigm of an individual solving a problem.
Turing’s hypothetical machines, now known as Turing machines, were employed to calculate numbers within this set. This set encompasses any number that can be computed to an arbitrary degree of precision within a finite time span. It includes all rational and algebraic numbers, as well as numerous transcendental numbers such as π and e.
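What ‘computable to an arbitrary degree of precision within a finite time span’ means can be illustrated with the number e. The sketch below uses the standard series e = Σ 1/k! (a textbook argument, not Turing’s own construction); because the tail of the series is bounded by 2/n!, finitely many terms certify any desired digit.

from fractions import Fraction

def e_approx(n_terms):
    total, term = Fraction(0), Fraction(1)
    for k in range(n_terms):
        total += term              # running sum: 1/0! + 1/1! + ... + 1/k!
        term /= (k + 1)            # next term: 1/(k+1)!
    return total                   # error is strictly less than 2/(n_terms!)

print(float(e_approx(20)))         # -> 2.718281828459045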
Conversely, an incomputable sequence or number is one that cannot be computed by any specific rule. Turing demonstrated that computable numbers could engender incomputable ones. As a result, there could be no “mechanical process” capable of resolving all mathematical queries, as an incomputable number exemplified an insoluble problem. Even when a real number is not computable, if it is the limit of a computable function, it is deemed “approachable.”
A Turing machine M is defined as a 7-tuple (Q, Σ, Γ, δ, q0, qaccept, qreject), where:
Q: a finite set of states.
Σ: the input alphabet, a finite set of symbols excluding the blank symbol.
Γ: the tape alphabet, a finite set of symbols with Σ ⊆ Γ and containing the blank symbol ⊔.
δ: the transition function, δ: Q × Γ → Q × Γ × {L, R}, where L and R denote the left and right movements of the tape head, respectively.
q0: the initial state, q0 ∈ Q.
qaccept: the accepting state, qaccept ∈ Q.
qreject: the rejecting state, qreject ∈ Q, where qreject ≠ qaccept.
The Turing machine operates on an infinite tape divided into cells. Each cell contains a symbol from the tape alphabet Γ. The machine uses a tape head to read and write symbols and to move left or right on the tape based on the transition function δ. The machine starts in the initial state q0, with the input string written on the tape and the tape head positioned at the leftmost symbol of the input. At each step, the machine reads the symbol under the tape head (call this symbol a) and applies the transition function δ to determine:
- The next state q′.
- The symbol b to write in the current tape cell.
- The direction d (either left L or right R) in which to move the tape head.
The machine writes the symbol b in the current cell, transitions to state q′, and moves the tape head one cell left or right depending on d. The machine halts when it enters either the accepting state qaccept or the rejecting state qreject (Footnote 5).
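This definition translates almost line for line into code. The Python sketch below simulates any machine given as a transition table; the example machine is a toy of my own choosing (not from Turing) that accepts strings of zero or more 0s followed by a single 1.

def run_tm(delta, q0, q_accept, q_reject, tape_input, blank="_"):
    tape = dict(enumerate(tape_input))   # the infinite tape, stored sparsely
    head, state = 0, q0
    while state not in (q_accept, q_reject):
        symbol = tape.get(head, blank)               # read under the head
        state, write, move = delta[(state, symbol)]  # apply delta
        tape[head] = write                           # write symbol b
        head += 1 if move == "R" else -1             # move the head
    return state == q_accept

delta = {                            # (state, read) -> (next state, write, move)
    ("q0", "0"): ("q0", "0", "R"),   # skip leading 0s
    ("q0", "1"): ("q1", "1", "R"),   # the single 1
    ("q0", "_"): ("qr", "_", "R"),   # no 1 found: reject
    ("q1", "_"): ("qa", "_", "R"),   # end of input: accept
    ("q1", "0"): ("qr", "0", "R"),   # anything after the 1: reject
    ("q1", "1"): ("qr", "1", "R"),
}

print(run_tm(delta, "q0", "qa", "qr", "0001"))   # -> True
print(run_tm(delta, "q0", "qa", "qr", "0010"))   # -> False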
Alan Turing’s groundbreaking work necessitated a clear definition of “method” to meet the criteria set by Hilbert’s foundational principles. In response to Hilbert’s Entscheidungsproblem, Turing demonstrated that there exists no algorithm, that is, no effective method, which can determine in advance whether a given statement is true or false. He achieved this by exhibiting certain tasks that cannot be executed by his universal computing machines; these correspond to incomputable functions, for which no solution exists. Turing thereby revealed that there are constraints on what can be computed, because some functions are algorithmically unsolvable, thus answering the decision problem in the negative.
For example, the Busy Beaver function, denoted as Σ(n), is a mathematical construct associated with Turing machines. It is defined as the maximum number of 1s that a Turing machine with a given number of states can write on its tape before it halts. The term “busiest” in this context refers to the Turing machine that generates the most extensive output before reaching a halt state. However, it is crucial to understand that the Busy Beaver function is incomputable: there is no universal algorithm capable of calculating it for every possible input. Moreover, it has been demonstrated that the growth rate of the Busy Beaver function asymptotically surpasses that of any computable function (Footnote 6).
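While Σ(n) as a function is incomputable, any individual machine can of course be run. The sketch below simulates the well-known two-state, two-symbol champion (a standard example from the literature, not taken from this paper’s sources), which writes Σ(2) = 4 ones and halts after six steps.

champion = {                   # (state, read) -> (write, move, next state)
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "HALT"),
}

tape, head, state, steps = {}, 0, "A", 0
while state != "HALT":
    write, move, state = champion[(state, tape.get(head, 0))]
    tape[head] = write
    head += 1 if move == "R" else -1
    steps += 1

print(sum(tape.values()), steps)   # -> 4 6: four 1s written, six steps taken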
The Halting Problem
At its core, the Halting Problem asks a deceptively simple question: given an arbitrary computer program and input, can we determine whether that program will eventually halt (stop running) or run indefinitely on that input? While straightforward to state, the problem was proven undecidable in the general case via an elegant proof by contradiction:
1. Assume that there exists a Turing machine H that can solve the Halting Problem. This means that H takes as input a description of an arbitrary Turing machine T and an input w, and H correctly decides whether T halts on input w.

2. Now, construct a new Turing machine D that uses H in the following way: D takes as input a description of a Turing machine T. It then runs H on the pair (T, T), i.e., it asks H whether T halts when given its own description as input.

3. If H says that T halts on input T, then D goes into an infinite loop. Otherwise, if H says that T does not halt on input T, then D halts.

4. Now comes the contradiction: what happens when we run D with its own description as input? There are two possibilities:

(a) If D halts on input D, then according to its definition, H must have said that D does not halt on input D. This is a contradiction.

(b) If D does not halt on input D, then according to its definition, H must have said that D halts on input D. This is also a contradiction.

5. Therefore, our initial assumption that there exists a Turing machine that can solve the Halting Problem must be false. Hence, the Halting Problem is undecidable (Footnote 7).
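The diagonal construction can be transcribed into hypothetical Python. The point of the sketch is precisely that the decider halts() cannot be implemented; assuming that it exists is what generates the contradiction.

def halts(program_source, input_data):
    # Hypothetical decider H: would return True iff the program halts on the input
    raise NotImplementedError("no such algorithm can exist")

def D(program_source):
    # D feeds a program its own source, then does the opposite of H's verdict
    if halts(program_source, program_source):
        while True:            # H said "halts": loop forever instead
            pass
    return                     # H said "loops": halt immediately instead

# Running D on its own source forces the contradiction:
# D(source_of_D) halts  <=>  halts(D, D) is False  <=>  D(source_of_D) loops.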
Reduction is also a common method used to prove the undecidability of a problem, often by relating it to the Halting Problem, which is known to be undecidable. A problem A is said to be reducible to another problem B if a solution to B can be used to solve A. If A is already established as undecidable, then to demonstrate that a new problem B is also undecidable, one shows that a decision procedure for B could be employed to decide A.

This yields a contradiction: if A has been proven undecidable, there can be no definitive procedure or algorithm that always correctly decides A. Therefore, if B could be used to decide A, this contradicts the undecidability of A. Hence, B must also be undecidable (Footnote 8).
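A concrete instance of this strategy, anticipating Note 8 below, is a program that halts exactly when it finds a counterexample to Goldbach’s conjecture (every even number greater than 2 is the sum of two primes). The sketch is illustrative: if a general halting decider existed, applying it to goldbach_search would settle the conjecture, so the conjecture reduces to a halting question.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_search():
    n = 4
    while True:                # halts if and only if a counterexample exists
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n           # counterexample found: halt and report it
        n += 2                 # otherwise, search forever

# Do not call goldbach_search() expecting a result: no counterexample is
# known, so the call is believed never to return.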
Interestingly, the Busy Beaver function is also related to the Halting Problem: if we could compute the Busy Beaver function for all inputs, we could solve the Halting Problem. However, Radó proved that the Busy Beaver function is incomputable, which further underscores the undecidability of the Halting Problem. Recall what the Halting Problem asks: given a computer program and an input, can we determine whether the program will eventually stop or continue running indefinitely?
Take this Python program as an example:
x = input()   # read one line from standard input
while x:      # a non-empty input makes this condition permanently true
    pass      # so the loop runs forever, doing nothing
In this case, the program reads an input. If the input is not empty, it keeps running in a loop. So, if the input is empty, the program stops, and we can say: yes, this program with an empty input will stop. But if the input is not empty, the program runs indefinitely, and we can say: no, this program with this input will not stop. The Halting Problem is famous for being proven undecidable: there is no universal procedure that can accurately predict whether any given computer program will halt or run indefinitely.
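What remains possible is a bounded test: run a program for a fixed number of steps. A verdict of “yes, it halted” is certain, but “not yet” proves nothing, which is why testing cannot substitute for the nonexistent general decider. The following minimal sketch (an illustration, not part of the proof) simulates the loop above under a step budget:

def runs_within(steps, x):
    # Simulate 'x = input(); while x: pass' for at most `steps` condition checks
    for _ in range(steps):
        if not x:          # the loop condition fails: the program halts
            return True    # observed halting: a certain verdict
    return None            # budget exhausted: no verdict either way

print(runs_within(1000, ""))     # -> True  (empty input halts immediately)
print(runs_within(1000, "abc"))  # -> None  (still running; undecided)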
Working within the context of synthetic computability, using constructive type theory and the Coq proof assistant, Kirst and Peters reframe Gödel’s incompleteness theorems and the Halting Problem, providing a streamlined and formally verified approach to these foundational results and demonstrating the inherent limitations of formal systems and their implications for computation and logic. For instance, they show that no consistent system capable of basic arithmetic can prove its own consistency, aligning with Gödel’s second incompleteness theorem. Furthermore, they draw a parallel to the Halting Problem, illustrating that just as there is no algorithm that can decide the halting behavior of all possible programs, there are true statements in arithmetic that cannot be proven within a given formal system (Kirst and Peters, 2023).
Conclusion: a glitch in ontology?
In addressing the crisis of formalism, my stance diverges from the common reactions of distancing oneself from the issue or outright rejecting formality. Instead, I advocate for a philosophical reevaluation of computational formalisms. This reevaluation hinges on recognizing computation as an abstraction method that acknowledges abstractness without denial and as a form of determination that accepts indeterminacy.
What exactly are we dealing with once we encounter the undecidable? Is there a crack in our fundamental ontology, and what is to be done? The undecidable, in Wittgensteinian parlance, can be shown but cannot be said: it cannot be thought or synthesised by universals or by a subject. It implies that computation, as a mechanization of thought, inherently contains incomputable data, suggesting that discrete rules are subject to a form of internal contingency within algorithmic processing. This is not merely an error or glitch within the system but an integral part of the computation. Rather than dismissing computation as a negative manifestation of the techno-capitalist instrumentalisation of reason, it is recognized that incomputable algorithms challenge not only the supremacy of reason’s teleological finality but also sensible and affective thought (Parisi, 2015, p. 135).
This recognition of the undecidable is not so new from the Lacanian point of view, as Alenka Zupančič succinctly explains:
The signifying order is inconsistent and incomplete, but, in a stronger and more paradoxical phrasing, that the signifying order emerges as already lacking one signifier, that it appears with the lack of a signifier “built into it,” so to speak (a signifier which, if it existed, would be the “binary signifier”). In this precise sense the signifying order could be said to begin, not with One (nor with multiplicity), but with a “minus one”—and we shall return to this crucial point in more detail later on. It is in the place of this gap or negativity that appears the surplus-enjoyment which stains the signifying structure: the heterogeneous element pertaining to the signifying structure, yet irreducible to it (Zupančič, 2017, p. 42).
The identification of the mysterious undecidable, which renders inconsistent all attempts to formalize or ground rationality on computation, shows that this inconsistency is inherent in the system itself. In other words, just as the signifying Other is incomplete and its incompleteness is inherent in the order itself, any computable system, no matter how developed, contains its own real of incomputability.
Notes
1. The question “Can machines think?” has sparked numerous philosophical debates. As Tarski aptly responded when asked this question by Paul Ziff, “Of course they can, it only depends on what you mean by ‘think’”. Turing believed in the possibility of attributing thinking to machines, which he demonstrated in his Imitation Game. According to Turing’s perspective, if an AI can convincingly imitate human responses under specific conditions, it could be considered as “thinking” in a functional sense. However, this does not necessarily imply consciousness or self-awareness. It is more about the ability to process information and respond in ways indistinguishable from a human (Kirchner, 2020).
2. Wittgenstein, on the other hand, views intentionality as a linguistic phenomenon rather than a biological one. The intention behind an utterance is often inferred from the circumstances surrounding it. The meaning of an utterance can be interpreted in various ways depending on the context. For example, a chess move could be intended to checkmate the opponent, move the king in an unusual way, or simply speed up the game. The actual intention is inferred from the circumstances and the player’s actions. Words are tools of communication whose function or use in a given context determines their meanings. Understanding can only be achieved if the discourse is based on a commonly agreed set of rules. In this sense, understanding a program is based on the frame of reference of the speech community with which the program interacts.
3. In 1931, Gödel critiqued Hilbert’s meta-mathematical program, showing that there could be no complete axiomatic method capable of definitively proving the truth or falsity of every proposition. His incompleteness theorems posited that certain propositions are true even though they cannot be verified by a complete axiomatic method, rendering them undecidable. Gödel argued that no a priori decision or finite set of rules could determine the state of things before they occur. Alan Turing encountered Gödel’s incompleteness problem while trying to formalize the concepts of algorithm and computation with his Turing machine. The Turing machine showed that problems are computable if they can be decided according to the axiomatic method. However, propositions that cannot be resolved through this method remain incomputable (see Tieszen, 2005).
4. Duroux demonstrates how Frege avoids the pitfalls of psychologism or empiricism by defining number as a logical concept and consequence rather than a psychological or empirical phenomenon. For Frege, the concept of “what contradicts itself” deserves the name zero because it denotes a class that has no elements. The concept of contradictory things is a concept that no object can satisfy. Frege can logically produce the concept of one from this concept by applying a successor function. Even if the concept of contradictory things refers to an empty set, as a concept it is itself singular or “one”. Zero may be nothing, but there is only one concept of zero, so we get the concept of “one”. Duroux’s analysis of Frege provides essential background to Jacques-Alain Miller’s attempt to discover a “logic of the signifier” in Frege’s Foundations of Arithmetic in his “La Suture: Elements of the Logic of the Signifier” (CpA 1.3). Miller’s basic argument is that the “logic of the logicians” is based on a “logic of the signifier” that this logic both assumes and conceals. Miller claims that Frege’s logical reconstruction of the number sequence 0 to 1 is secretly founded on a “function of the subject”, and that this shows that Frege’s logicist system of concepts and objects relies on a fundamental “disappearance”. Just as Frege’s use of the concept of zero involves an ambiguity (between the zero considered as a concept of the non-identical and as a number), Frege’s concept of the “object” hides a deletion of the thing: “The disappearance of the thing […] must be done for it to appear as object—which is the thing insofar as it is one” (CpA 1.3:43). However, Frege (according to Miller) also reveals an elementary logic of the signifier that can be used in psychoanalysis. Miller’s main assertion that logic presupposes a prior “logic of the signifier” would be disputed in several later volumes of the Cahiers. In “The Point of the Signifier” (CpA 3.5), Jean-Claude Milner reads something like the “logic of the signifier” in Plato’s Sophist, identifying a generative “non-being” related to the concept of “not-identical-with-itself” that explains the connection between ontology and number in Plato’s discourse. For Milner, the fluctuation of the concept “non-being” between function and term plays a role similar to the subject in Miller’s “logic of the signifier”, both extending and occupying a place in a logical series.
5. A detailed explanation can be found in Soare’s book Turing Computability: Theory and Applications, pp. 7-8 (Soare, 2016).
6. The concept of the Busy Beaver function was first introduced by Tibor Radó in his 1962 paper, “On Non-Computable Functions”. It has since become a fundamental concept in the study of computability theory. Interestingly, the Busy Beaver function is also related to the Halting Problem: if we could compute the Busy Beaver function for all inputs, we could solve the Halting Problem. However, Radó proved that the Busy Beaver function is incomputable, which further underscores the undecidability of the Halting Problem.
7. Some discussions of the halting problem can also be found at https://cs.stackexchange.com/questions/145811/halting-problem-is-undecidable-proof.
8. If the Halting Problem were solvable, it would have significant implications for numerous other problems. For instance, Goldbach’s Conjecture could be resolved: a Turing machine can be designed to test every even natural number greater than 2 to see if it can be expressed as the sum of two prime numbers. If the machine finds a counterexample, it halts and reports the finding; otherwise, it continues indefinitely. If the Halting Problem were decidable, we could determine whether this program halts, thereby providing an answer to Goldbach’s Conjecture. Likewise, the Kolmogorov complexity of a string, which is the length of the shortest possible description of the string in a fixed universal description language, would be computable if the Halting Problem were solvable, as would the Busy Beaver function, known for its rapid growth and connection to undecidability. Understanding undecidable problems is crucial as it informs us about the inherent limitations of our computational models.
References
Copeland BJ (2004) Computable numbers: a guide. In: Copeland BJ (ed) The essential Turing: seminal writings in computing, logic, philosophy, artificial intelligence, and artificial life, plus the secrets of Enigma. Clarendon Press/Oxford University Press, pp 5-57
Deleuze G (2001) Difference and repetition (Trans: Patton P). Continuum
Dreyfus HL (1992) What computers still can’t do: a critique of artificial reason. MIT Press
Fazi MB (2018) Contingent computation: abstraction, experience, and indeterminacy in computational aesthetics. Rowman & Littlefield International
Fink B (1997) The Lacanian subject: between language and jouissance. Princeton University Press
Gödel K (1986) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme (1931). In: Feferman S (ed) Collected works, vol I. Oxford University Press/Clarendon Press, pp 141-195
Kirchner F (2020) AI-perspectives: the Turing option. AI Perspect. 2(1):2. https://doi.org/10.1186/s42467-020-00006-3
Kirst D, Peters B (2023) Gödel’s theorem without tears—essential incompleteness in synthetic computability. In: Computer Science Logic (CSL 2023), LIPIcs vol 252, pp 30:1-30:18. https://doi.org/10.4230/LIPICS.CSL.2023.3018
Kurzweil R (2005) The singularity is near: when humans transcend biology. Viking
Lacan J (2006) Ecrits: the first complete edition in English (trans: Fink B). Norton
Latour B (2008) A cautious Prometheus? A few steps toward a philosophy of design (with special attention to Peter Sloterdijk). In: Hackney F, Glynne J, Minton V (eds) Networks of Design: Annual International Conference of the Design History Society, pp 2-10. https://sciencespo.hal.science/hal-00972919
Lovelock J, Appleyard B (2019) Novacene: the coming age of hyperintelligence. The MIT Press
Macksey R, Donato E (eds) (1982) The structuralist controversy: the languages of criticism and the sciences of man, 5th printing. Johns Hopkins University Press
Millar I (2021) The psychoanalysis of artificial intelligence. Palgrave Macmillan, London
Parisi L (2015) Instrumental reason, algorithmic capitalism, and the incomputable. In: Pasquinelli M (ed) Alleys of your mind: augmented intelligence and its traumas. Meson Press, Hybrid Publishing Lab, Centre for Digital Cultures, Leuphana University of Lüneburg, pp 125-137
Possati LM (2021) The algorithmic unconscious: how psychoanalysis helps in understanding AI. Routledge
Schwab K (2017) The fourth industrial revolution (First U.S. edition). Crown Business
Searle JR (1984) Minds, brains, and science. Harvard University Press
Soare RI (2016) Turing computability: theory and applications. Springer, Berlin/Heidelberg. https://doi.org/10.1007/978-3-642-31933-4
Tieszen R (2005) Phenomenology, logic, and the philosophy of mathematics, 1st edn. Cambridge University Press. https://doi.org/10.1017/CBO9780511498589
Wittgenstein L (2009) Philosophische Untersuchungen / Philosophical investigations, rev. 4th edn (trans: Anscombe GEM, Hacker PMS, Schulte J; eds: Hacker PMS, Schulte J). Wiley-Blackwell
Zupančič A (2017) What is sex? MIT Press
Author information
Contributions
Michael KC Thanga is the sole author of the paper, wrote the original manuscript and revised the manuscript for resubmission.
Ethics declarations
Competing interests
The author declares no competing interests.
Ethical approval
This article does not contain any studies with human participants performed by the author.
Informed consent
This article does not contain any studies with human participants performed by the author.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Thanga, M.K.C. The undecidability in the Other AI. Humanit Soc Sci Commun 11, 1372 (2024). https://doi.org/10.1057/s41599-024-03857-x