Uncertainty is a ubiquitous yet often misunderstood condition studied across academia. In the humanities, it is wielded as a tool, serving as a source of inspiration and entertainment. In the social sciences, it is treated as a weapon that, when controllable, is leveraged for competitive advantage and, when not, is defended against to minimize harm. We describe the dangers of misunderstanding it—specifically, those arising from mistaking untreatable uncertainties for treatable ones and vice versa. We draw on an analysis of these uses and misuses of uncertainty to issue a call to action to improve how we study and address this unique condition, one that makes strategic decisions so much harder to optimize.
Current academic work on uncertainty has produced unnecessarily unrealized benefits and significant net harms. Extant research frames uncertainty both negatively and positively, and as both treatable and not. That confusion is perhaps forgivable, given that some uncertainties are difficult to model and even impossible to address. Such inconsistencies aside, uncertainty is a concept wholly worthy of study, given that it affects every living thing, generating effects from the individual to the planetary scale. What concerns us here is that major decision problems involving uncertainty have too often been mislabeled or misunderstood by social scientists as non-existent, manageable, or even optimizable when they are not, resulting in significant costs. We argue that such academic performativity has now caught up with us (Graeber, 2012). At the extreme, critical scholarship (e.g., Beck, 1992, 2009) sees uncertainties, especially human-made ones, not only as uninsurable and unpredictable (in timing and consequences) but also as having the potential to cause irreparable damage at a global scale. Ironically, such existential-level threats often arise as side-effects of well-intended technological progress that did not sufficiently account for uncertain effects (e.g., most recently, in artificial intelligence [AI]; see Note 1). This comment serves as a call to action because, in theory, we can do much to improve society’s understanding of uncertainty and to reduce the harms our research currently does. But if we do not come together to help humanity better understand the uncertainties it faces, that will stand as one of science’s greatest failures.
The uncertainty we speak to here is neither risk nor anything reducible to risk (Knight, 1921). Instead, when real uncertainty vexes a decision, that decision becomes unoptimizable because critical information is missing for the computation and comparison of the expected values of possible choices and actions. The result of such uncertainty is often surprise, shock, and the revelation of previously unknown unknowns. Such uncertainty is ubiquitous across life on Earth and can result in both bad and good outcomes, as Darwin (1965/1872) noted early on in his studies of nature. As such, our perspective is that uncertainty is not only multi- and cross-disciplinary—spanning the social sciences, the physical sciences (e.g., quantum theory in physics), religion (e.g., the mysteries of God), the humanities (e.g., the improvisational arts), and the applied sciences (e.g., stealth technologies)—but also that it is a concept robust enough not to be easily reducible (e.g., to mathematics), because it often arises from complex, non-linear, real systems. Ontologically, this places the conversation about uncertainty somewhere between realism and constructivism (Sørensen, 2018). And, ethically, although we speak to net harms, utilitarianism is not the only important orientation: individual rights and duties matter, as do justice and professional standards, when we act to address the challenges posed by an array of different uncertainties.
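To make concrete the claim that such decisions are unoptimizable, consider the standard expected-utility criterion that optimization would require. This is a generic textbook formulation offered only for illustration; it is not a model drawn from the cited sources.

```latex
% Optimal choice a* over actions A, given states S, probabilities p, payoffs u:
\[
  a^{*} \;=\; \arg\max_{a \in A} \sum_{s \in S} p(s)\, u(a, s)
\]
% Under real (Knightian) uncertainty, at least one ingredient -- the action set A,
% the state space S, the probabilities p(s), or the payoffs u(a,s) -- cannot be
% specified before the decision must be made, so the expected values cannot be
% computed or compared, and no a* can be identified.
```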
There are several types of uncertainty (see Arend [2024a] for one current extensive typology). For example, there are unknown unknowns—these are important factors that only become known after a decision is made (e.g., the dangers to the ozone layer when chlorofluorocarbons were considered safe decades prior) or that are understood as being currently unknown but at play in some focal phenomenon (e.g., the elusive explanation for quantum entanglement). There are also known unknowns that involve unknowable values—these are identified factors involved in a decision that have no estimable value prior to when the decision has to be made (e.g., in an Ellsbergian [1961] urn lab experiment, participants would like to know the probability distribution involved but cannot determine it before they must choose). And there are known unknowns that involve knowable values—these are identified factors whose values can be determined in practice before the focal decision has to be made, through learning, asking experts, running experiments, searching, and so on. Note that this last type is not actually uncertainty as we have defined it; while, as given, there is an unknown, that unknown can be eliminated, and so this last type of decision problem is reducible to at least risk. We include it here because it is the kind of unknown that most academics have recently confused with the irreducible unknowns that are at the heart of our concerns. Table 1 describes these uncertainty types, whether they can be addressed to optimize a decision problem that they vex, and how they have been treated by scholars. In general, the humanities (e.g., art, literature, philosophy) have considered uncertainties a tool for creative acts, while the social sciences (e.g., economics, sociology) have considered them a weapon for gaining advantage or a threat to defend against.
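As a minimal, purely illustrative sketch (our own encoding, not Table 1 itself nor Arend's [2024a] formal typology), the three types described above, and whether each leaves a decision optimizable as given, could be represented as follows:

```python
from dataclasses import dataclass
from enum import Enum, auto


class UncertaintyType(Enum):
    """The three broad types discussed in the text."""
    UNKNOWN_UNKNOWN = auto()           # factor only revealed after the decision
    KNOWN_UNKNOWN_UNKNOWABLE = auto()  # factor identified, value not estimable in time
    KNOWN_UNKNOWN_KNOWABLE = auto()    # factor identified, value discoverable in time


@dataclass
class DecisionFactor:
    """A factor vexing a focal decision (fields are illustrative only)."""
    name: str
    utype: UncertaintyType


def optimizable_as_given(factor: DecisionFactor) -> bool:
    """Only the knowable known-unknown reduces to (at least) risk,
    so only it leaves the decision optimizable as given."""
    return factor.utype is UncertaintyType.KNOWN_UNKNOWN_KNOWABLE


# Example: the Ellsberg urn's probability distribution is identified but
# cannot be determined before the choice must be made.
urn = DecisionFactor("Ellsberg urn distribution", UncertaintyType.KNOWN_UNKNOWN_UNKNOWABLE)
print(optimizable_as_given(urn))  # False
```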
Uncertainty in the humanities and social sciences has most often been associated with economics because of its roots in probabilistic decision-making and the major business tools and institutions that grew from those roots (e.g., insurance, bonds, and stock markets). That noted, uncertainty has been of interest in other major fields: when it is manufactured for entertainment (e.g., in suspense-building music, art, film, electronic games, magic, and design) or for military advantage (e.g., counterintelligence); when it involves trying to get into the heads of others (e.g., as social uncertainty); or when it vexes phenomena that lie at the core of physics or math (see Note 2). Uncertainty remains a fundamental challenge to social science disciplines like strategic management, entrepreneurship, psychology, anthropology, and sociology (see Arend [2024b] for one recent review of such literature). In the social sciences, uncertainty has been central to the work of scholars like Beck (1992, 2009), Bloom (2013), Graeber (2012), and Stehr (2003), who: warn of growing man-made existential uncertainties (e.g., in climate change); run the gamut of modeling uncertainty as risk, as variance in a random walk, as infinitely diffuse prior beliefs, and as irreducible unknownness; see its upside in necessitating the building of institutions to mitigate and insure against some of its forms; and see its proliferation as a product of knowledge creation. To economic historians, like North (2005) and Perelman (2012), uncertainty is often described as more manageable, addressed through learning and institution-building (i.e., to help order a chaotic environment) or by applying standard risk-management tools like insurance and diversification. However, this historic downplaying of uncertainty’s actual unmanageability is dangerous, however disappointingly predictable it may be. No top economics journal publishes papers that say “I don’t know how to solve this problem”, even if one can explain why that is so; they publish mathematical solutions. So, it is not surprising that, from its beginning, the concept of uncertainty has been almost exclusively modeled as some form of risk to which probability and Bayesian updating can be applied (Bernstein, 1996), so that a mathematical solution can be determined and a paper published. But that is now, more than ever, an insufficient approach, because of the growing realization that there are highly significant untreatable uncertainties we need to address (see Note 3).
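To illustrate what "modeling uncertainty as risk" amounts to, here is a minimal Beta–Binomial updating sketch (a generic textbook example of our own, not drawn from the cited works). It works only because the model structure, the prior, and the likelihood are all specified in advance, which is precisely what the untreatable uncertainties discussed here deny us.

```python
# Bayesian updating treats the unknown as a parameter with a known prior
# and a known likelihood: risk, in Knight's (1921) sense, not uncertainty.

def beta_binomial_update(alpha: float, beta: float, successes: int, failures: int):
    """Conjugate update of a Beta(alpha, beta) prior after observing
    Binomial data; returns the posterior (alpha', beta')."""
    return alpha + successes, beta + failures


# Start with a uniform prior over an unknown success probability...
alpha, beta = 1.0, 1.0
# ...observe 7 successes and 3 failures...
alpha, beta = beta_binomial_update(alpha, beta, successes=7, failures=3)
# ...and the posterior mean is now computable.
print(alpha / (alpha + beta))  # ~0.667: tractable only because the model is given

# Under true (Knightian) uncertainty there is no agreed prior, likelihood,
# or even outcome space to update over, so this machinery has nothing to grip.
```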
The approach to uncertainty to this point has resulted in significant realized net harms, in addition to often-associated major missed benefits. First, there are harms arising from the continued confusion over, and the consequent absence of, proper and universally stipulated definitions and types of uncertainty. This is an important delineation because there are separable types of uncertainties, each requiring a different means to address them. Such harms include damages arising from the resulting miscommunications about uncertainties—by using the same label for different types or using the wrong label—that lead to misaddressing them. Second, there are harms arising from direct mis-prescriptions about specific uncertainties, such as recommending treatments for untreatable uncertainties or suggesting the application of scarce resources to uncover unknowns that cannot be uncovered. These harms include wasted resources, time, and effort, in addition to a loss of legitimacy when such prescriptions ultimately fail. Third, there are harms that arise from not learning the right lessons from failures (or from successes) when the outcome is misattributed to either bad luck (when it was bad actions) or good actions (when it was good luck), because the scholarship has been unclear on which problems involve the types of uncertainties that cannot be predicted or managed. Fourth, there are harms that arise from a lack of awareness of the uncertainties in a decision problem and/or of the available correct treatments, if any. Fifth, there are harms that arise from overconfidence (or a lack of confidence) in acting (or not acting) in the face of uncertainty, especially when the stakes are high, uninsurable, and involve significant externalities (e.g., as seen in past violations of the precautionary principle by free-market entities with chlorofluorocarbons, forever chemicals, online privacy, algorithmic bias, and so on). And, related to those scholarly and practical failures that lead to net harms are the missed opportunities to generate substantive benefits. Crucially, without agreement on proper definitions of types, it remains impossible to generate and leverage databases that would capture common uncertainty types, the common decisions they crop up in, and their relevant possible treatments, listing which treatments work and which do not, including the subsequent costs, benefits, and surprises. Such databases could be used to uncover further uncertainties (as most new knowledge does create new unknowns), their causes, their impacts, and their connections to other uncertainties, both simultaneous and sequential. And they could be used to identify those phenomena that are not uncertain but may appear to be, and so help reduce the number of decisions that can be optimized but mistakenly are not.
To strengthen the argument that current uncertainty scholarship is harmful, we offer further details. Regarding the first cause, that of unnecessary confusion, uncertainty is currently a broad label in social science research covering too many: radically different investigative embodiments (e.g., from mathematically modelable risk to mathematically unmodelable unknown unknowns); loosely defined variants and labels (e.g., aleatory, epistemic, ontological, internal, external, subjective, objective, social, non-social, perception, impact, outcome, option); real-world phenomena (e.g., from the well-structured to the chaotic); and dimensions (e.g., regarding what is uncertain, what the source of the uncertainty is, when it will be resolved, and so on). Without explicit clarification—over the goal and the context of a situation, let alone over what is unknown and how—we end up prescribing a conflicting list of treatments, from not acting to acting, from gathering more information to using only what is available, and from committing to a unique path to remaining flexible. This situation is not helpful to practice; it is harmful.
Regarding the second cause, that of mis-prescription, too much current research purports to treat the untreatable uncertainties. Recall that non-optimizable decisions involve uncertainties caused by missing key pieces of information (e.g., the full set of options, outcomes, probabilities, or utilities); they are untreatable as given. That said, these decision problems do provide a wedge, in the form of informational market failures, that allows invention, surprise, and entrepreneurial entry to occur. Unfortunately, there remain two severe issues in academia regarding such uncertainty: one is that many peers do not believe these decision problems actually exist, and the other is that many peers are writing that they can optimize such decisions. On the former issue, even though it may be uncomfortable, it is a scientific fact that there are things (e.g., factors, factor values, causes) that will never be known, provable, or modelable; there have been, are, and will always be important unknown unknowns, surprises, and decisions that cannot be forever delayed to gather more needed information (e.g., Beck, 2009; Stehr, 2003). On the latter issue, this is simply unacceptable; no matter how rewarding it may be to the authors and to the journal to publish a ‘solution’ to a problem that one has first defined as unsolvable, such performativity (or hopeful delirium) wastes resources and erodes our legitimacy as social scientists (Graeber, 2012).
Regarding the issue of separating uncertainty from not-uncertainty—separating out the predictable, preventable surprises from those that are not—some scholars studying high-reliability organizations and conducting post-mortems on catastrophes estimate that a large portion of negative surprises (up to 70%) was preventable with the information and technology available at the time (e.g., Morell, 2010; Weick & Sutcliffe, 2015). Essentially, there is sizable potential for harm reduction when such knowable unknowns can be separated from true unknowns, and then identified and acted upon in a timely manner, with resources correctly invested in the relevant awareness, motivation, capability, and execution.
Given this situation, we call on our peers and stakeholders to act. We offer both a broad theoretical direction and specific practical means to do so. We call for efforts to be directed at quickly gaining consensus over a typology and the related definitions of uncertainties, and then at building a database upon that typology—one that describes each type’s embodiment, lists the attempted treatments that work and those that do not, connects each entry to other uncertainties, and can be easily updated and queried. The primary focus would be on big problems that are vexed by these uncertainties—e.g., those with wide possible outcome ranges spanning both the significantly bad and the significantly good. By agreeing on and verifying specific types of uncertainties and treatments in isolation, we could then study what occurs when two or more interact in different ways, building up a more realistic set of important decision problems to study (e.g., O’Connor & Rice, 2013). In theory, none of these directions is beyond our current capabilities. Each would benefit almost every field in the sciences; in fact, each step would likely provide more value than the collective insights on non-risk uncertainty produced by all of science so far. Gaining such consensus and motivating such collective action across diverse academic fields, let alone across their practitioner stakeholders, will be quite difficult, but it is not impossible.
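As one hedged illustration of what an entry in such a database might look like, the sketch below merely paraphrases the description above; the field names and structure are our own assumptions, not an existing or proposed schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TreatmentRecord:
    """One attempted treatment of an uncertainty type and what it yielded."""
    treatment: str
    worked: bool
    costs: str = ""
    benefits: str = ""
    surprises: str = ""


@dataclass
class UncertaintyEntry:
    """One entry in the proposed shared database (illustrative fields only)."""
    type_label: str                      # agreed typology label
    embodiment: str                      # how the uncertainty shows up in practice
    common_decisions: List[str] = field(default_factory=list)
    treatments: List[TreatmentRecord] = field(default_factory=list)
    related_entries: List[str] = field(default_factory=list)  # links to other uncertainties


# Example query: which treatments have ever worked for a given type?
def working_treatments(entries: List[UncertaintyEntry], type_label: str) -> List[str]:
    return [t.treatment
            for e in entries if e.type_label == type_label
            for t in e.treatments if t.worked]
```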
There are fields, even collections of fields, that have come to agreement on terminologies and built databases. While doing so is more common in fields that deal with mathematical terms and physical phenomena (e.g., the definition of velocity is universally accepted and differs from that of momentum), many applied fields have wide agreement on word-based definitions (e.g., on what a patent requires) and have built useful databases on them (e.g., historic patent libraries); as such, there is hope for the social sciences dealing with applied decision problems as well. There are at least five levers of influence we can use to build the needed consensus. At the grassroots level, we can get together with interested peers and discuss the issues to a point of agreement, train our doctoral students in the typology, and push the agenda at conferences. At the publication level, we can strongly encourage or otherwise influence publishers and editors to agree to a typology and make it explicit in the guidelines for publishing and reviewing in respected outlets. At the association level, we can press for conference sessions, discussion groups, interest groups, and even online votes to reach consensus on an acceptable typology. At the organizational level (e.g., an association, institution, or private entity), we can develop and commercialize the database, supporting it so that it becomes a standard in the wider marketplace (e.g., for consultants, practitioners, and students to use). And, at the wider policy level, we can lobby major funding agencies to use the typology as a standard in evaluating project submissions.
If we can agree on definitions and a typology in order to start building databases of uncertainties, related decision problems, and possible treatments, then the future is bright. The associated harms will be greatly reduced, and substantial benefits will be realized, as a concept that has been mostly ignored, misunderstood, or misaddressed finally gets its rightful and sorely needed attention. For example, real untreatable uncertainties will be acknowledged and taken seriously, and naïve suggested treatments for them will be rightly ignored. Aided by AI, digital access, and multi- and cross-disciplinary academic interest, such new knowledge bases are likely to generate great value as humanity faces greater uncertainties. At present, however, uncertainty remains an all-too-common and often irreducibly complex factor in nature, and arguably more so in the fields studying human behavior—i.e., the humanities and social sciences. We, as academics, are doing a grave and unnecessary disservice to our sciences and to practice by effectively choosing to perpetuate the confusion over uncertainty. We cannot, and should not, wait to act. Ironically, there is little uncertainty about how we could and should move forward, and about what benefits that could bring; so, let us go forth, despite and because of any uncertainties and surprises that await.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Notes
1. See https://www.safe.ai/work/statement-on-ai-risk for the statement.
2. For example, in mathematics, Gödel (1931) proved an impossibility theorem—i.e., that any consistent formal system rich enough to express arithmetic contains true statements that cannot be proved within it; this naturally leaves specific foundations uncertain, being unproven and unknowable.
3. Of course, there have been economists in the past who understood uncertainty to be ‘not risk’ (e.g., Knight, 1921), and several more recent scholars do as well (e.g., Beck, 1992). Some, like Knight (1921), believe that the only way to deal with untreatable uncertainty is to bear it, as his entrepreneurs do. Others (e.g., Arend, 2024a) consider possible alternatives—like changing the given decision problem (where that given problem remains untreatable but the altered one is treatable by more standard means and can be, in specific instances, the better problem for the parties involved to consider). Current editorial and reviewer attitudes towards would-be contributing authors who admit that specific problems are non-solvable also need to change. Given the reality of such problems, and the importance of identifying them, their impacts, and our collective current limitations and vulnerabilities to them, papers acknowledging such non-solvable problems need to be published more often.
References
Arend RJ (2024a) Uncertainty in strategic decision making—analysis, categorization, causation and resolution. Palgrave Macmillan, Switzerland
Arend RJ (2024b) Uncertainty and entrepreneurship: a critical review of the research, with implications for the field. Found Trends Entrep 20(2):109–244
Beck U (1992) Risk society: towards a new modernity. Sage, London
Beck U (2009) World at risk. Polity Press, Cambridge
Bernstein PL (1996) Against the gods: the remarkable story of risk. Wiley, New York
Bloom N (2013) The macroeconomics of time-varying uncertainty—IMF Lectures. https://pages.stern.nyu.edu/~dbackus/BFZ/Literature/Bloom_slides_Jan_13.pdf. Accessed 1 Aug 2024
Darwin C (1965/1872) The expression of the emotions in man and animals. University of Chicago Press, Chicago (1872 Appleton, New York)
Ellsberg D (1961) Risk, ambiguity, and the Savage axioms. Quart J Econ 75:643–669
Gödel K (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatsh Math Phys 38:173–198
Graeber D (2012) The sword, the sponge, and the paradox of performativity: some observations on fate, luck, financial chicanery, and the limits of human knowledge. Soc Anal 56(1):25–42
Knight FH (1921) Risk, uncertainty, and profit. Houghton Mifflin, New York
Morell JA (2010) Evaluation in the face of uncertainty: anticipating surprise and responding to the inevitable. Guilford Press, New York
North DC (2005) Understanding the process of economic change. Princeton University Press, Princeton
O’Connor GC, Rice MP (2013) A comprehensive model of uncertainty associated with radical innovation. J Prod Innov Manag 30:2–18
Perelman M (2012) What went wrong: an idiosyncratic perspective on the economy and economics. Rev Radic Pol Econ 44(4):494–503
Sørensen MP (2018) Ulrich Beck: exploring and contesting risk. J Risk Res 21(1):6–16
Stehr N (2003) Modern societies as knowledge societies. In: Ritzer G, Smart B (eds) Handbook of social theory. Sage Publications, New York, pp. 494–508
Weick KE, Sutcliffe KM (2015) Managing the unexpected: resilient performance in an age of uncertainty. John Wiley & Sons, Hoboken
Author information
Contributions
All contributions to this paper were made by its sole author.
Ethics declarations
Competing interests
The author declares no competing interests.
Ethical approval
Ethical approval was not required as the study did not involve human participants.
Informed consent
Informed consent was not required as the study did not involve human participants.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Arend, R.J. Necessary and unnecessary uncertainty in academic sciences. Humanit Soc Sci Commun 11, 1621 (2024). https://doi.org/10.1057/s41599-024-04152-5