Abstract
The eighth season of the American medical drama series Chicago Med (2015–) portrayed the application of artificial intelligence (AI) in a hospital environment across multiple storylines. Born in the 1950s, medical dramas are among the most popular forms of serial television. Traditionally, the genre aims for a certain amount of accuracy and has educational goals. Previous studies investigated the entertainment education and cultivation effects of these series, concluding that these dramas have the potential to convey information and shape viewers’ opinions on various health-related topics. Chicago Med is a long-running broadcast production with a worldwide audience and considerable viewership. This paper analyzes the series’ representation of medical AI and discusses how this portrayal potentially shapes the audience’s opinion. The research started by identifying artificial intelligence-related storylines in the 22 episodes of the season. The analysis focused on the reasons for and outcomes of AI applications, the characters’ attitudes, and the ethical issues, including transparency, selective adherence, automation bias, the responsibility gap, hallucination, unequal access, and political dimensions. The storyline analysis concluded that Chicago Med provided thought-provoking positive and negative scenarios about applying different types of AI in the surgical and emergency departments. The complex portrayal included groundbreaking opportunities, challenges, dangers, and ethical considerations. The main characters’ attitudes varied, from strong support or opposition to more nuanced, shifting opinions. The educative and engaging content has the potential for knowledge transfer and encourages critical thinking about medical AI.
Artificial intelligence invasion in the Gaffney Medical Center
Television series regularly present the potential advantages and dangers of artificial intelligence (AI) and how its widespread application can change the world. Science fiction series like Black Mirror (2011–present) and Westworld (2016–22) profoundly engage with AI representation, while the ongoing American medical drama series Chicago Med (2015–present) recently elaborated on the application of AI in a hospital environment.
Chicago Med is a popular American medical series airing on NBC. It is part of the One Chicago franchise, which contains two other ongoing series (Chicago Fire, 2012–; Chicago P.D., 2014–): all three productions ranked among the top 10 primetime entertainment shows in the 2023–2024 television season. According to data from NBC (Salamone, 2024), Chicago Med has drawn an average of 10.5 million weekly viewers. This broadcast series is also available on the network’s streaming channel and on-demand platforms. It has worldwide distribution and viewership across different continents, including Europe, Australia, and Asia. It is a long-running production: in 2024, it had nine complete seasons and was renewed for its 10th season.
The core location of the story is the Gaffney Medical Center, a metropolitan teaching hospital—the focus is on the emergency and surgical departments. In Season 8 (2022–23), an investment corporation donates a high-tech surgical suite called OR 2.0, described as a “platform that integrates robotics, immersive computing, advanced sensory detection, and artificial intelligence to make the most challenging and complex surgeries viable” (S8 E9). The system has a voice-based communication interface that helps with decision-making, provides cutting-edge visualization, and guides the manual work of the surgeon. In parallel, the Emergency Department (ED) gets a “technological facelift,” meaning that the unit starts to use several AI-based technologies.
Medical television series and representations of innovative technologies have had a close connection since the appearance of the medical drama genre in the 1950s. According to Lee and Taylor (2014: 14), these productions are “pre-scripted, fictional entertainment television shows in which the main events occur in hospitals, [and] the main topics are the diagnosis and treatment of disease or injury.” Adding to this definition, the protagonists are mostly doctors and nurses. The show creators tend to focus on cases that make for good drama because of the excitement, dynamism, and unexpected turns. Emergency and surgical cases have this potential; thus, these departments appear regularly in this genre.
The genre emerged in a golden age of science, familiarizing the audience with a significant product of the era: the technologized hospital (Turow, 2010). Real-life scientific discoveries, new medical tools, and bioethical issues are regularly featured. The doctor-innovator is a typical medical series character: fictional doctors are usually among the first to apply groundbreaking technologies, raise concerns, or further develop existing instruments. The research and development process often lasts more than one episode, thus providing fertile ground for different storylines. Integrating scientific and technological developments is an opportunity to create fresh plots. Product placement agreements between shows and companies are also potential reasons for putting medical technology on display (Cati and Toschi, 2023). For instance, Grey’s Anatomy (2005–present) portrayed the Da Vinci Surgical System, holographic imaging, and 3D printing (Nádasi, 2016).
Despite being fictional entertainment productions, these dramas have had educational intent since the genre’s birth. The creators of these shows have consistently aimed for accuracy and realism in their portrayal of medical content (Rocchi, 2019; Turow, 2010). The sets and props are realistic enough to create an atmospheric hospital environment, and healthcare experts are involved in the writing and shooting process to ensure that each episode features state-of-the-art plots, proper jargon, and correct implementation of diagnostic and therapeutic interventions.
Aiming for accuracy is beneficial because, as the recently published meta-analysis of Hoffman et al. (2023a) concluded, health storylines of fictional TV shows influence viewers. Previous research on medical series has also shown that audiences tend to use these shows as sources of healthcare information (Murphy et al. 2008; Jain and Slater, 2013; Bodoh-Creed, 2017). This genre informs them about diseases, injuries, treatment options, and innovations while influencing their understanding of, attitudes toward, and actions regarding them. Medical dramas affect expectations relating to health issues, procedure outcomes, and healthcare professionals. According to Pescatore (2023), the genre has been one of the most popular products of free-to-air TV, and it has an enduring and widespread cultural influence. It is a powerful mediator between healthcare and public understanding; thus, it deserves examination not only for its entertainment value “but for its interweaving of elements that reflect, amplify, and sometimes question our understanding of medicine, social structures, and human relationships” (Pescatore, 2023: 7).
Researchers have examined the genre’s entertainment education (edutainment) potential: this concept means that viewers learn by watching entertainment productions, engaging with their—often emotional—storylines, and identifying with the characters. Kato et al.’s (2017) meta-analysis suggests that the popular format, the engaged viewership, and the well-known characters enhance edutainment. According to Ismail and Salama (2023), who performed a quantitative and qualitative content analysis of eighteen seasons of Grey’s Anatomy, the depiction of neurological and neurosurgical diseases is good-quality, educative content. Hoffman et al. (2023b) concluded that the depiction of e-cigarettes and the associated lung injury in medical dramas can be beneficial in awareness-raising and tobacco prevention education. Cardiopulmonary resuscitation (CPR), delivered by laypeople before the patient receives proper treatment, can save lives: that is why the implementation of CPR receives persistent attention (Diem et al. 1996; Eisenman et al. 2005; Hinkelbein et al. 2014; Colwill et al. 2018). Show creators and healthcare associations have collaborated several times to build educative storylines. For example, the writers of ER (1994–2009) worked together with the Kaiser Family Foundation (KFF) to display emergency contraception and sexually transmitted diseases (Brodie et al. 2001). The KFF collaborated with Grey’s Anatomy to promote the preventability of fetal HIV infection (Rideout, 2008). Further, Hollywood Health & Society (HH&S) advised ER writers on teenage obesity, high blood pressure, and heart disease (Valente et al. 2007). HH&S also helped to display the BRCA gene mutation in ER and Grey’s Anatomy (Hether et al. 2008).
A common theoretical framework of medical drama research is the cultivation theory of George Gerbner. The original version from the 1960s states that television has a long-term effect on reality perception and does not differentiate between genres. The newer, genre-specific version clarifies that the perceived realism of the content depends on the genre, so the program’s type influences the strength of the cultivation effect (Grabe and Drew, 2007; Morgan and Shanahan, 2010). Perceived realism concerns the viewers’ perception of media content—whether they see it as authentic or not. The empirical study of Tian and Yoo (2020) concluded that exposure to medical dramas is positively associated with the perceived realism of the genre, which increases trust toward fictional medical professionals. This trust, in turn, positively affects trust in real-life doctors. Chung (2014) also empirically measured genre-specific cultivation on a large sample. The results confirmed that medical dramas affect the audience: heavy viewers tend to underestimate chronic illnesses such as cardiovascular disease, but they have more fatalistic beliefs about cancer due to the exaggerated death outcomes presented in these series. Quick et al. (2023) investigated the portrayal of organ donation in the first fifteen Grey’s Anatomy seasons, concluding that the benefits of donation got more attention than the barriers, as did the refutation of commonly cited myths. We suggest that the standard genre elements of medical dramas—like the state-of-the-art medical content, the spectacular display of diseases, injuries, and treatments, and the proper environment, equipment, jargon, costumes, and props—have the potential to enhance the cultivation effect.
Several studies have investigated Chicago Med. Eilmus and Clayton (2024) evaluated the depiction of genetic screening and eugenics in medical dramas, including Chicago Med. Studies have analyzed the COVID-19 representations of the genre (Cambra-Badii et al. 2023; Nádasi, 2022). Five American productions portrayed the pandemic, among which Chicago Med focused on the crisis period in the overloaded hospital. Cambra-Badii et al. (2023) measured the effectiveness of Chicago Med, Grey’s Anatomy, and The Good Doctor (2017–present) in teaching COVID-19-related bioethical issues to health science and humanities students, with positive outcomes. The usefulness of medical dramas in bioethics education is a recurrent research topic (Cambra-Badii et al. 2021; Czarny et al. 2010; Hirt et al. 2013; Kendal and Diug, 2017). Entertainment programs are unique opportunities for cancer education, according to Kim (2022): Chicago Med exhibited numerous and varied tumor depictions. Bitter et al. (2021) examined resuscitation outcomes in Chicago Med, Code Black (2015–2018), and Grey’s Anatomy: according to the findings, these shows portray favorable outcomes unrealistically frequently. As the researchers warn, this representation misinforms viewers and encourages them to opt for aggressive care in case of serious illness. The representation of medical errors, like diagnostic and operative mistakes, may lead to unnecessary anxiety and fear; furthermore, it can create mistrust toward real-life physicians and the healthcare system, according to Carney et al. (2020). The study included Chicago Med and argued that negativity bias makes people more likely to recall negative depictions than positive ones.
Studies regularly identify both beneficial representations and shortcomings. Despite the creators’ efforts to write medically trustworthy content, accuracy has limits; for instance, treatments are faster than in real life or are overly successful. Medical series are criticized for these misrepresentations because they potentially entail misunderstandings and mistrust. This sensitivity to misrepresentation underscores the content’s potentially significant real-life effects.
Considering the outcomes of former research, this paper suggests that this genre can be a platform for knowledge transfer about medical applications of artificial intelligence and that the series’ representation can shape viewers’ opinions about these technologies. Chicago Med is not the only medical drama that features AI; for instance, Grey’s Anatomy displayed self-driving cars in its 20th season. However, to the best of our knowledge, the AI representation of Chicago Med is so far uniquely extensive within the genre. The diverse medical applications of AI were a core plot element across several episodes, so the audience got an opportunity to dive into the topic. It is worth examining the information and interpretation distributed by the series.
The qualitative content analysis is divided into two chapters: the first focuses on OR 2.0, while the second discusses the AI systems that appeared in the emergency department. The research questions are similar in both sections. The first concerns the reasons for and outcomes of AI usage. The second focuses on the portrayal of seven medical-AI-related ethical issues that are well established in the academic literature on artificial intelligence, such as transparency, automation bias, and the responsibility gap. The next section of our study introduces the history of medical AI applications and elaborates on these issues. The third research question relates to the characters’ opinions of and attitudes toward artificial intelligence. As viewers identify with the well-known characters, examining their standpoints is beneficial because they can influence how laypeople think about medical AI—which may eventually appear in their own healthcare.
A review of real-world ethical issues of AI
Medicine has long been a prominent field for artificial intelligence applications. For a comprehensive introduction to AI technology and its applications, see Russell and Norvig (2016). Initially, AI was used in medical decision-support systems that advised doctors on diagnosis and treatment. The earliest application was the MYCIN blood-infection diagnostic system developed at Stanford University in the 1970s (Shortliffe, 1974). The next milestone was the release of DXplain, a fully-fledged medical decision-support system, in 1984. After the symptoms are entered, the rule-based DXplain offers a differential diagnosis. In 2024, the system’s database contains 2600 diseases. It is a successful product, but today, it has many competitors—Kwan et al. (2020) offer a meta-analysis of clinical decision-support systems.
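To make the rule-based approach tangible, consider the following minimal sketch; the diseases, symptoms, and weights are invented for illustration, and the code does not reproduce DXplain’s actual knowledge base or scoring method. It merely shows how symptom-to-disease rules can yield a ranked differential diagnosis once symptoms are entered.

```python
# Minimal, hypothetical sketch of a rule-based differential-diagnosis engine.
# The diseases, symptoms, and weights below are invented for illustration and
# do not reflect DXplain's actual knowledge base, which covers thousands of
# diseases.

RULES = {
    "appendicitis": {"abdominal pain": 3, "fever": 2, "nausea": 1},
    "influenza": {"fever": 3, "cough": 3, "fatigue": 2},
    "migraine": {"headache": 3, "nausea": 2, "photophobia": 3},
}

def differential_diagnosis(symptoms: set[str]) -> list[tuple[str, int]]:
    """Score each disease by the summed weights of its matching symptoms
    and return the candidates ranked from most to least supported."""
    scores = {
        disease: sum(weight for symptom, weight in rule.items()
                     if symptom in symptoms)
        for disease, rule in RULES.items()
    }
    return sorted(((d, s) for d, s in scores.items() if s > 0),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # A patient presenting with fever and nausea yields a ranked differential.
    print(differential_diagnosis({"fever", "nausea"}))
    # e.g. [('appendicitis', 3), ('influenza', 3), ('migraine', 2)]
```

Real systems of this kind maintain thousands of such associations and weight them with clinically validated importance measures rather than hand-picked numbers.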
Sutton and colleagues (2020) provide a detailed overview of the limitations and risks of using these applications. One significant risk is the systems’ negative impact on the user’s skills: for instance, if the user relies on the system to a large extent, their skills might diminish. This phenomenon is connected to the ethical issue called automation bias. The authors also identified issues concerning the system’s transparency and the collection of training data.
The History of Artificial Intelligence in Medicine by Kaul et al. (2020) is an in-depth guide to the process of AI adoption in medicine. The last two decades added two critical new resources to the landscape. One is the vastly increased computational power that allowed high-resolution image processing. The other is the new machine learning technique of deep learning. These innovations made it possible to process large amounts of data for diagnosis.
Several monographs, overview papers, and volumes thematize the ethical issues involved with AI. The Oxford Handbook of Ethics of AI (Dubber et al. 2020) attempts to provide complete coverage in its forty-four chapters. Another comprehensive overview is “The ethics of AI ethics: An evaluation of guidelines” (Hagendorff, 2020), which systematically reviews twenty-two ethics guidelines, identifies twenty-two issues, and ranks them by coverage. These comprehensive works and Sutton’s overview provide a general framework from which we narrow our focus to seven ethical challenges that are especially relevant in the medical domain.
Transparency
The human need for intellectual oversight of machines appears to be universal. Intellectual oversight requires the system to be transparent: it must be in a state where people understand its inner workings and their access is not obstructed. In an administrative framing, transparency means access to the data that the system provider also has access to. Scholars often link transparency and trustworthiness by claiming that the former is necessary for the latter (HLEG, 2020). However, transparency has a more epistemological dimension: there is the potential for AI to become a black box (Héder, 2023), whereby not even its creators will be able to fully grasp its inner workings. Usually, this is not because the creators do not have access to any arbitrary detail of the system; instead, there are so many details and such high complexity that humans cannot maintain meaningful intellectual oversight and control over the system. Both approaches are crucial in the medical field. The handling of medical data has to be transparent and privacy-aware. Having explainable AI is essential because of the high stakes of medical decisions. Therefore, significant research effort is spent on medical decision-support systems that not only provide accurate results but can also show the primary literature on which their answers are based.
Selective adherence to AI advice
This ethical issue relates to the propensity to adopt algorithmic advice selectively when it matches preexisting stereotypes or other biases about decision subjects (e.g., when predicting high risk for members of negatively stereotyped minority groups) (Alon-Barkat and Busuioc, 2023). In the medical field, selective adherence means the uncritical approval of the output of a medical decision-support system when it is in line with the user’s bias, and its critical scrutiny and more frequent rejection when it is not. For example, suppose a doctor believed that one racial group is more likely to be addicted than another, and they had an assessment system to flag potential substance abusers. In that case, they would override the system’s flag more often for the latter group.
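The asymmetry described above can be illustrated with a small simulation; the override probabilities are assumptions chosen for demonstration and are not taken from Alon-Barkat and Busuioc (2023) or any empirical study. The sketch shows how the same flag ends up being followed at very different rates depending on whether it confirms the user’s prior belief.

```python
# Hypothetical simulation of selective adherence: the same algorithmic flag
# is followed at different rates depending on whether it matches the user's
# preexisting bias. The override probabilities are invented for illustration.

import random

random.seed(0)

OVERRIDE_IF_MATCHES_BIAS = 0.05      # flag confirms the bias: rarely questioned
OVERRIDE_IF_CONTRADICTS_BIAS = 0.60  # flag contradicts the bias: often rejected

def follows_flag(flag_matches_bias: bool) -> bool:
    """Return True if the user accepts the system's flag in this case."""
    p_override = (OVERRIDE_IF_MATCHES_BIAS if flag_matches_bias
                  else OVERRIDE_IF_CONTRADICTS_BIAS)
    return random.random() >= p_override

for matches, label in [(True, "flag matches the user's bias"),
                       (False, "flag contradicts the user's bias")]:
    accepted = sum(follows_flag(matches) for _ in range(10_000))
    print(f"{label}: followed in {accepted / 10_000:.0%} of cases")
```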
Automation bias
Lyell and Coiera (2017, 423) define automation bias (AB) as a state in which “users become overreliant on decision-support, which reduces vigilance in information seeking and processing.” There are several real-life disasters associated with this issue, like the incident in which Uber’s autonomously driven test car, supervised by a human, fatally hit a pedestrian in 2018 (He, 2021). In this case, the system had functioned perfectly for a long time before making a mistake, rendering the supervisor’s attention seemingly redundant most of the time. The concept of AB concerns both decision-making and the erosion of user capability. In medicine, AB can appear, for example, in the case of decision-support systems: if the system provides good advice most of the time, it reduces the vigilance of the doctor, who follows it unthinkingly even when the system needs to be overridden.
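The mechanism, in which a long run of reliable advice erodes vigilance until a rare error slips through, can be sketched in a toy simulation; the accuracy and decay parameters below are illustrative assumptions, not empirical estimates from the automation-bias literature.

```python
# Toy simulation of automation bias: a decision aid that is almost always
# right slowly erodes the supervising user's vigilance, so its rare errors
# increasingly go uncaught. All parameters are illustrative assumptions.

import random

random.seed(42)

SYSTEM_ACCURACY = 0.98    # the aid gives correct advice 98% of the time
VIGILANCE_DECAY = 0.9999  # vigilance shrinks a little after each correct case

vigilance = 1.0           # probability that the user actually double-checks
uncaught_errors = 0

for case in range(10_000):
    advice_correct = random.random() < SYSTEM_ACCURACY
    user_checks = random.random() < vigilance
    if advice_correct:
        # a long run of good advice quietly erodes the habit of checking
        vigilance *= VIGILANCE_DECAY
    elif not user_checks:
        # rare wrong advice is followed without scrutiny
        uncaught_errors += 1

print(f"final vigilance: {vigilance:.2f}")
print(f"wrong recommendations that went uncaught: {uncaught_errors}")
```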
Responsibility gap
This ethical issue concerns the ownership of merits and harms (Raji et al. 2020). Traditionally, responsibility is assigned to the person who makes a decision or conducts an act. A person can be held responsible and punished in case of a mistake. The concept of a legal person extends this capability to corporations. However, while AI can also make decisions and perform actions, it is commonly not categorized as a person; thus, it cannot be held responsible or punished (Matthias, 2004). Therefore, there is an issue of assigning responsibility: decisions are made by an agent who cannot bear responsibility for them, creating a “gap” (Barnes, 2020) in the chain of accountability.
The responsibility gap in the medical domain is a crucial issue; medical decisions happen in highly regulated legal and moral contexts. The medical field has long featured highly explicit accountability chains, which include provable adherence to procedure, informed and recorded consent, and a chain of trust. The inclusion of an unaccountable artificial agent might require the reengineering of these processes.
The common framing, described above, considers the lack of control and comprehensibility of AI to be sufficient for the formation of a responsibility gap. Others, like Champagne and Tonkens (2015), believe that responsibility can still be attributed even without holding the responsible party at fault. In their argument focusing on military AI, they propose that a commander using AI preemptively accepts responsibility for the outcomes, even if the specific operation is fully delegated to the AI, thereby not becoming a direct cause of any negative outcome. They call this ‘noncausal imputation’.
A third kind of approach proposes a compromise: it acknowledges the formation of the responsibility gap (Nyholm, 2020) but proposes workarounds. One proposal offers a creative solution by replacing prospective liability with retrospective answerability, combined with a commitment to increased transparency and even the obligation to explain and apologize (Kiener, 2022), provided this is acceptable to all concerned parties—thus adopting a contractarian position. Another proposal, specifically for the context of medical AI, by Lang, Nyholm, and Blumenthal-Barby (2023), takes the path of reification: it treats the ineliminable responsibility gap itself as an object of shared responsibility, essentially calling for responsible management of the risk arising from the gap.
Hallucination
Regarding AI, “hallucination” refers to events where systems generate or interpret data in unexpected, often incorrect ways. This phenomenon is particularly relevant in deep learning, image and speech recognition, and generative models (Salvagno et al. 2023). Since image recognition is a staple of medical diagnostic AI and is already used to interpret X-rays, scans, and other visual measurements, hallucination could become an important issue in medical AI.
Unequal access
Using AI as a tool could enhance the opportunities of those who are wealthy and/or privileged enough to have access to it, while the rest of the population loses out. Hence, the economic gap widens, and a vicious cycle is created. This situation might arise in medicine through unequal access to advanced medical AI. Biomedical inequality, evidenced by the low representation of diverse populations in biomedical data, means that there is a significant health risk for those groups. Developing AI models based on such biased datasets could perpetuate existing inequalities, highlighting the need for diverse, high-quality data for equitable healthcare outcomes (Gao et al. 2023). Another potential disparity is related to gender: AI is notably widening the gender gap in information technology, with women experiencing invisibility as contributors to the industry. Addressing implicit biases and enhancing women’s abilities and willingness to engage in AI-related careers are crucial steps toward closing this gap (Idemudia and Onoshakpor, 2023).
Political dimensions
Technologies perceived as powerful continuously acquire political interest, and AI can be construed as a significant force: the general public, the government, and big corporations are competing for control of it. Because of the fear of losing employment to this advanced technology, there is a public appetite for legislative control. Regulation of AI is essential for harnessing its potential while keeping humans in control and ensuring that AI’s power is used for good; addressing the relevant ethical considerations is vital for AI’s integration into society (Taddeo and Floridi, 2018). Regarding medicine, both the regulation and the labor dimensions are crucial. Medicine has a long track record of developing protocols, strict rules, and high standards for medical technology; therefore, medical AI became highly regulated from the outset. Further, healthcare relies on highly trained, highly compensated personnel. Artificially replicating some of their skills, knowledge, and expertise with automation might have benefits on a societal level: broader access to healthcare, cost savings, and faster care, ultimately resulting in a healthier society. Unless delicately balanced, though, this process might go against healthcare professionals’ interests; they could lose their status and employment opportunities.
OR 2.0: from groundbreaking success to devastating complications
The eighth season of Chicago Med consists of twenty-two episodes, and OR 2.0 first appears in Episode 9; it is a donation from the Dayton Corporation. Previously, Crockett Marcel (Dominic Rains), the ambitious general surgeon of Gaffney, saved the CEO and an employee of this tech company after an accident (S8 E4). Jack Dayton (Sasha Roiz) is a programmer and businessman, depicted in the media as a brilliant innovator hero who wants to change the world; however, he admits that his motivations include money and ego. Dayton sets up OR 2.0 and brings the representatives of his company, Doctors Petra Dupre (Mishael Morgan) and Grace Song (T. V. Carpio), to the Gaffney. Introducing two female experts is a progressive representation of the IT sector.
Robert Evans’ case frames the OR 2.0-related storylines: this patient has already been diagnosed with terminal pancreatic cancer when he arrives at the ED with a minor injury. His doctor, Will Halstead (Nick Gehlfuss), suggests exploring whether OR 2.0 can provide him with a last chance. Marcel hesitates because he is still discovering how the system works, but Dayton sees a promotional opportunity in the case. The science-fanatic Evans convinces Marcel to operate on him, as he believes that even his death would provide valuable information. The operation succeeds, but a few months later, the cancer is back. Evans dies after his second operation because of a complication, and from that point on, some of the doctors try to suspend the use of OR 2.0.
All in all, fifteen episodes were OR 2.0-related. Twelve patients received treatment in the advanced suite for different health issues. The analysis includes scenes in OR 2.0 (surgeries, simulations) and the characters’ discussions about the system.
The research questions regarding the OR 2.0 storylines are:
R1: What were the reasons for and the outcomes of OR 2.0’s usage?
R2: What kinds of ethical issues did OR 2.0 raise?
R3: What attitudes did the professionals and patients have toward OR 2.0?
Reasons for and outcomes of OR 2.0 usage
OR 2.0 is for innovative surgeries, not routine procedures like laparoscopic appendectomy, as resident Kai Tanaka-Reed (Devin Kawaoka) notes. Using OR 2.0 had four types of indications, three of which were medical:
(1) To solve unsolvable cases. Groundbreaking surgeries happen in this ward: the surgeons of Gaffney Medical Center overcome the limitations of medicine and save patients who are inoperable without OR 2.0. It is the ultimate chance to save Robert Evans and the only option to provide two patients with a better quality of life. For a Crohn’s disease patient, OR 2.0 makes esophageal reconstruction possible (S8 E10). Surgeons also cure a young man with a rare, extreme bodily malformation caused by ankylosing spondylitis (S8 E17).

(2) To provide the best outcome for patients. Marcel reattaches the hand of a construction worker (S8 E14) and saves a traumatically injured teenager from amputation (S8 E15). Doctors also save the life of a critically injured stabbing victim (S8 E13), and OR 2.0 provides resident Tanaka-Reed with a shorter recovery time after his hernia repair (S8 E16). A hospital board member has a noninvasive brain tumor operation (S8 E19).

(3) To operate on visually challenging cases. Surgeons relocate the operation of a patient with a broken pelvis and heavy bleeding from a traditional ward to OR 2.0 to exploit the advantages of intraoperative magnetic resonance imaging (MRI) and spare the patient the risks of transfer and contagion (S8 E12). In another episode, a man has open appendectomy surgery because a previous sigmoid colon resection complicated his case (S8 E16).

(4) Promotion of OR 2.0. Dayton regularly invites audiences, including media and business representatives, to surgeries in OR 2.0 for advertising purposes. These operations resemble the surgical demonstrations of the 1800s and early 1900s, when surgery was being established as a prestigious specialization: in the surgical amphitheater, performative, show-like operations demonstrated new procedures and tools. In Episode 22, Marcel performs a hernia surgery on Dayton in front of an audience; with this, the developer demonstrates the potential of the system and his trust in it.
Ethical issues
Thirteen operations take place in OR 2.0, and eleven are successful. Despite this high success rate, several AI-related challenges and diverse ethical issues appear, as summarized in Table 1.
During the first operation, OR 2.0 questions Marcel’s suture technique because Petra Dupre calibrated the system for a less professional surgeon. Seeing the doctor’s expertise, Dupre reconfigures the system to stop the alert (S8 E9). Marcel did not know about this setup—which is a transparency issue. The company guards the system’s configuration from the surgeons. While Dupre is there to provide support and explanations, she also acts as a gatekeeper, making the AI-based technology less transparent for its ultimate users, the doctors.
Surgeons’ and OR 2.0’s judgments differed in several situations, which represented the issue of selective adherence. The willingness to follow the advice of AI generally depends on professional attitudes. The AI-opposing trauma surgeon, Dean Archer (Steven Weber), is not open to the AI’s guidance, while the more optimistic Marcel tends to be overreliant. Nevertheless, there are exceptions to his attitude. The system suggested the termination of one surgery because the patient’s survival chance was under 50 percent, but Marcel disregarded this advice (S8 E10). In the case of a traumatically injured leg, OR 2.0 prognosticated only a 7 percent chance of saving the limb. However, during the surgery, Marcel detected the return of circulation with his eyes and touch (S8 E15) and therefore halted the amputation. As these storylines suggest, keeping control in the surgeon’s hands is beneficial; the medical expert should have the authority to override the system. Relying solely on OR 2.0’s warnings and suggestions is a mistake because the system is overly cautious due to its setup.
There is an exception when doctors cannot override the system. During surgery, OR 2.0 evaluates the physical condition of the doctors according to Dayton’s safety setups. It raises an alarm because Archer has high blood pressure (S8 E12) and locks out Marcel because of his fatigue (S8 E13). In these cases, surgeons have no control over OR 2.0, potentially benefiting the patients’ safety and quality of care. The selective adherence issue also appears among patients: those who get favorable suggestions from OR 2.0 agree to use this advanced technology despite the risks.
The hypermodern surgical suite can both advance and endanger the surgeon’s expertise. With his limited experience, first-year surgical resident Tanaka-Reed performed a life-saving surgery in OR 2.0 in Marcel’s presence, which Dayton saw as an ultimate success (S8 E13). However, after a spell of successful cases, during an innovative hand reattachment surgery, OR 2.0 stopped instructing Marcel because it did not have enough data. Dupre had to remind the lost surgeon to use his clinical judgment (S8 E14). This case is a typical automation bias issue: the system cannot guide due to insufficient input, revealing a temporary deskilling of the user. At the beginning of the episode, Marcel and Dayton declared in an infotainment program: “The OR 2.0 is to surgery what GPS was to travel.” At that point, the slogan sounded like a favorable comparison. However, after the nearly failed surgery, Marcel admitted that he regularly practices driving without GPS to keep his skills fresh. He realized that he needed to do the same in the case of surgery. Finally, he problematized his heavy reliance on OR 2.0 and rescheduled his upcoming operation to a traditional ward to practice and regain his confidence.
Surgical merit becomes controversial because it is unclear who deserves the credit for the operations in OR 2.0: the surgeon, the system, or the system’s developer. This dilemma is a display of the accountability gap. Marcel assigns the merit to the system for the esophageal reconstruction (S8 E10). In Episode 17, Marcel and leading neurosurgeon Sam Abrams (Brennan Brown) operate on an ankylosing spondylitis patient: without OR 2.0, their expertise would not be enough to help the young man. The two surgeons have a heated debate about the success factors. Abrams agrees to participate in the surgery because he thinks the patient will be paralyzed without his neurosurgical expertise. He believes that OR 2.0 and a general surgeon like Marcel are insufficient to succeed. Marcel reminds him that there would be no operation without OR 2.0. The patient and his mother call Marcel and Jack Dayton their guardian angels. Later, Robert Evans’ wife praises God for OR 2.0, which brings a religious dimension to the discussion (S8 E20).
As an advanced tool, OR 2.0 raises questions about accountability for mistakes, responsibility for malpractice, and the acceptability of complications. Despite the advanced simulation and pre-calculation, complications happen. During Robert Evans’ second surgery, after a long calculation, OR 2.0 shows a lesion to Marcel, who removes it. Evans does not wake up after the surgery; he has had a stroke. Marcel feels responsible for not being able to prevent this complication, which is a well-known risk of the procedure. Dayton wants to protect the surgeon’s reputation and prevent accountability by deleting the operation’s documentation. It is another example of the transparency issue: the owner can erase data from the system. He hands the documentation to Marcel on a USB stick to let him learn from his mistake. While comparing the visual data, Grace Song realizes that Marcel is innocent: OR 2.0 hallucinated the lesion and misled Marcel. As Halstead interprets it, the unreliable, unmarketable OR 2.0 killed the patient. Dayton refuses to pull the system back from usage and sale; thus, Halstead and Song hack it during Dayton’s promotional surgery to demonstrate its fragility.
Before the sabotage, Dayton decides that OR 2.0 is available only for paying patients, demonstrating unequal access. He started as a benefactor of the Gaffney Medical Center but became a majority investor in the institution; thus, he can deny access to the AI-based surgical unit. This situation represents the political dimension of controlling access to technology.
Attitudes toward OR 2.0
Doctors, nurses, and developers display four attitude types, of which two are stable and two shift as the characters realize the opportunities and challenges of OR 2.0.
1. Optimism, trust. Dayton, the creator of OR 2.0, fully trusts his system, as does Dupre, the head of the beta-testing team. Marcel, whom many call the “face of OR 2.0,” is the first surgeon to operate on an actual patient in the suite. He is not uncritical of the system and faces its challenges but remains optimistic.

2. Growing trust. Leading neurosurgeon Abrams is critical of OR 2.0 and the hype around it. However, during a complicated surgery, he starts to appreciate the technological potential of the system.

3. Growing cautiousness. ED doctor Halstead encouraged the application of OR 2.0 on his patient; later, he intentionally ruined its reputation. He makes Dayton’s employee, Grace Song, more cautious. The ED charge nurse, Maggie Lockwood (Marlyne Barrett), collaborates with the surgical department and realizes the dangers of the surgeon’s deep investment in OR 2.0.

4. Antagonism. Archer makes a 2001: A Space Odyssey (1968) reference to OR 2.0. According to him, operating there is like performing surgery in a sports bar, and OR 2.0 is a backseat driver with a scalpel. He criticizes Marcel for handing over his judgment to a machine. Once, he even tried to push OR 2.0 aside.
Not every patient expressed an opinion about OR 2.0 (some were unconscious and in critical condition); however, among those who voiced a standpoint, no one opposed the treatment. Some patients’ primary reaction is optimism and trust: instead of prosthetics, the construction worker chooses the innovative but risky hand reattachment surgery. Gratefulness for OR 2.0 characterizes the ankylosing spondylitis patient and his mother. Proactivity describes patients who encourage doctors to seek advanced treatment options or to treat them in OR 2.0. A man with Crohn’s disease begs Marcel to find a surgical solution instead of artificial nutrition (S8 E10). Tanaka-Reed, a doctor himself, asks his colleagues to operate on him in OR 2.0; he sees this as the best option.
Emergency department: optimization or dehumanization of care?
As an influential developer and utilitarian leader, Dayton initiates an optimization reform in the emergency department. In parallel with the technologization, he intends to turn the Gaffney, a public hospital, into a private institution to make it financially viable to introduce further technologies—even though many people have access to healthcare only through public EDs. These two intentions interconnect in the storylines and influence the characters’ attitudes toward the technological reforms—thus, this chapter discusses both objectives. The reorganization starts in the eleventh episode of the season; all in all, five episodes contained ED-related AI applications. Regarding these, the chapter answers the following research questions:
R1: What were the reasons for and the outcomes of AI usage in the ED?
R2: What kinds of ethical issues did the optimization cause?
R3: What attitudes did the professionals have toward the technological reforms?
Reasons for and outcomes of AI usage in the ED
Multiple technologies appeared in the ED with different indications.
1. To diminish the administrative burden. The ED gets an electronic medical record (EMR) system that speeds up the paperwork.

2. To advance data analysis and decision-making. Five different AI-based systems are used in diagnostic procedures and triage. Only one is unproblematic; the others raise considerable practical and ethical issues.

3. Automation. Two advanced technologies appear: one manages the logistics in the department, and the other controls the noise level. Their adoption proves unsatisfactory.
Ethical issues
Two of the reforms were effective and ethically unproblematic. One of these is the electronic medical record (EMR) system, which diminishes the administrative duties that divert time and energy from patient care (S8 E12). This speech-to-text dictation system is faster than written documentation. Initially, Halstead resists because he does not want to bury his face in a monitor. Writing helps him think, which is in the patient’s interest as he sees it. Song has a different opinion about interest: EMR saves time; thus, doctors can treat more patients with shorter waiting times.
The second effective tool is the neural network-based, AI-supported extensive data analysis system that helps to diagnose a child with a mysterious disease (S8 E12). Similarly to the EMR, Halstead initially refuses to collaborate with the AI, as he believes the family needs a human being, not technology, in this challenging situation. He accepts Song’s help only when he runs out of options, and the AI provides fast and effective data processing, thus helping to reach a diagnosis.
However, other decision-support systems and simple automations raise challenges and moral dilemmas, as presented in Table 2. OpioHealth is an AI-based system that red-flags patients suspected of drug abuse. A young female patient is the first to get a mark, as many painkillers appear in her medical history (S8 E11). Thus, her ED doctor, Hannah Asher (Jessy Schram), can only prescribe her medicine with the department head’s permission. However, the patient is not a drug user. Asher learns that OpioHealth gave false markings in other hospitals and that its producers do not make its database-building method transparent. Asher is critical of the labeling method, the lack of background information, and the earlier mistakes. The hospital’s lawyer argues that the AI does not make a decision; it just provides an alarm for the doctors, who are the ultimate decision-makers. Asher thinks this is already problematic because OpioHealth can raise negative prejudices. Finally, Gaffney Medical Center terminates its usage. Beyond transparency, the issue of selective adherence is also present with OpioHealth. As an ex-addict, Asher overrides the system to protect patients from unfair labeling. Her anti-AI attitude and opposition to the optimization reform also fuel her decision.
In the case of a blood shortage, Song uses machine-assisted decision-making for triage, a complex bioethical challenge when a hospital lacks resources (S8 E13). The system predicts patients’ probability of survival and prioritizes those with the best chance. Only medical parameters matter: patients’ private lives and emotional factors are irrelevant. However, Halstead convinces Song to assign blood to the caretaker of an orphan, who has only a 7 percent chance of survival. Later, she regrets her decision because it disadvantaged other patients. The emotionless AI prioritizes the majority social interest instead of individual patients—this case reinforced Song’s belief in the moral superiority of the system.
In the fifteenth episode, an AI-based diagnostic search engine for irregular symptoms gives a potential plague diagnosis to a patient. Several ED workers are alerted and start to prepare for a healthcare crisis. However, Halstead finds bed-bug eggs in the patient’s hospital bed and realizes that the man has insect bites. The doctor disregards the system’s suggestion and uses his vision to complete the diagnostic procedure; his selective adherence saves the hospital from unnecessary actions and potential panic. This case is similar to the one in which OR 2.0 suggested amputation, but by using his vision and touch, Marcel realized that the limb could be saved.
Another AI-based diagnostic system confuses a patient because it lets her get acquainted with every potential diagnosis. The young woman’s chart displays several rare diseases; thus, she requests tests to exclude every finding. Halstead cannot convince her to undergo only the necessary exams. The patient nearly dies during a biopsy, although her illness is, in fact, neither severe nor extraordinary (S8 E18). She asks Halstead whether the AI is like a fellow doctor who double-checks the ED specialist. Halstead explains that the AI is more like a colleague who studies every database and journal. This is a selective adherence issue in two ways: Halstead disregards the unlikely diagnoses, but the patient adheres to the algorithmic advice instead of the opinion of her human physician. An AI-based diagnostic tool created an information overload for the patient, and the fears and insecurity that ensued put her life in danger.
The show delves into the problems of paid healthcare. If Dayton manages to turn the hospital into a private institution, many will lose access to advanced AI technologies and healthcare. The political dimension of the ED-based storylines is significant. The fear that automation and AI will take people’s jobs appears in multiple cases. Nurse Lockwood used to manage the department’s space distribution and inform her colleagues about logistics, but Song installed a monitor that replaced her work. According to Song, habits do not matter; people get used to new systems. Song also installed sensors that give a red warning signal when the noise level is too high in the ED. A schizophrenic patient had a panic attack because of it; thus, Lockwood wanted to remove the sensors and argued that she always warns ED workers to keep their voices down. Song’s standpoint is that adults should self-monitor with the help of visual warnings, and she offers to change the warning color. Lockwood concluded that Dayton wants to “automate people out of their jobs.”
Character attitudes
The technological facelift has strong opponents in the ED: no doctor or nurse straightforwardly supports the reforms.
1. Optimism, trust. Dayton and Song maintain their enthusiasm for the ED reforms. Beyond machines, Dayton brings expectations of “machine-like” functioning. He wants ED specialists not only to use his tools but also to follow the logic of the machines: to rely on facts instead of emotions and to be faster and more effective. Overall, he wants to transform the hospital and its workers, too.

2. Growing trust. Halstead is cautious initially, but his good experiences with innovations like the neural network and his growing sympathy for Song make him more open.

3. Growing cautiousness. Asher becomes increasingly concerned about the patients.

4. Antagonism. Department leader Archer is critical of the reforms and ironically wonders whether the Dayton Corporation wants to change the doctors into robots. He does not consider himself a Luddite but thinks the company is trying to solve nonexistent problems. Charge nurse Lockwood also finds the technological upgrades unnecessary: she sees the innovations as a waste of time and energy, compromising the ED’s functioning.
Preconceptions about AI fuel opinions and attitudes toward the innovation reforms. Characters’ reactions to the ED technologization and OR 2.0 are not always separable: Archer and Lockwood have bad experiences with the surgical system. Dayton’s ambition to turn the Gaffney into a private institution also enhances opposition, as several ED workers find profit and curing incompatible.
Patient reactions are less prominent. Some patients are unaware that doctors involve AI in their care, for instance, in making prognoses or decisions; the patient who asks for unnecessary tests because of the AI’s diagnostic predictions is an exception.
Discussion and conclusion
Science fiction represents artificial intelligence in fantasy universes and encourages viewers to think about AI in this fictional setting. In contrast, by following the traditions of the medical drama genre, Chicago Med recreates a sense of reality, aims for accuracy, and embeds AI-related storylines in an atmospheric hospital environment. The implementation of AI in the surgical and emergency departments was a core plot element in the eighth season of the show.
The series provides a multifaceted and educative portrayal of medical AI. As Section 3 explains, the application of the OR 2.0 surgical suite made groundbreaking surgeries possible, helped to solve otherwise unsolvable or visually challenging cases, and provided the best potential outcomes for several patients. However, the system confronted the surgeons with new challenges. The series reflected on medical-AI-related ethical issues in a complex manner. Transparency, selective adherence, and automation bias caused problems during surgeries; the hallucination of OR 2.0 led to the death of a patient. The responsibility gap created conflict between surgeons. The developer of OR 2.0 maintained control over the system. After a couple of successful surgeries, he decided to treat only paying patients in the suite, which raised the issue of unequal access. The attitudes toward OR 2.0 varied, including optimism, growing trust, increasing cautiousness, and antagonism.
Section 4 introduced the comprehensive technological reforms of the emergency department, including AI-based administration, data analysis, and decision support. Most had ambiguous portrayals; selective adherence and transparency issues complicated patient care. ED doctors and nurses struggled with the reorganized work environment in several cases, and concern arose that technology would take over their jobs.
Contrary to popular sci-fi narratives, AI never comes to life or harms patients intentionally in Chicago Med. When OR 2.0 locks surgeons out of operations, it follows an algorithm written by its developer. Inappropriate advice results from configuration settings, missing data, or hacking. However, except in the case of hallucination, trustworthy doctors always correct the mistakes of the developing AI in time.
So, what is artificial intelligence according to the series’ representation? Is it the future of medicine or a threat? The title of this paper is a provocative binary opposition that mirrors two extreme approaches. However, medical AI entails complex bioethical issues, so the answer is not straightforward. Chicago Med reflects on the two extremes, but the virtue of the series’ AI representation is that it does not take a stand for either. Instead, it provides nuanced standpoints, problematizations, and deliberations. The thought-provoking positive and negative scenarios encourage critical thinking. For instance, OR 2.0 was like a super tool for planning and implementing surgeries, an essential system for surgery’s development. However, the last two episodes present a darker picture in a medical and social sense. The final storylines were primarily negative, and because of negativity bias, these might have had a more significant effect on the audience. Nevertheless, the optimistic attitude of well-known characters like Marcel and some patients can be a positive, balancing influence. As the Executive Director of Patient and Medical Services concluded, OR 2.0 is a promising innovation; however, it is not ready, and its current state leads to tragedies. Introducing AI into a hospital is a complex task: Chicago Med emphasizes the importance of medical considerations within the process—economic and political pressure toward rapid implementation can be dangerous.
The question of whether OR 2.0 is fiction or reality received media attention. Shoaff (2023) reported that different parts of the system already exist; thus, implementing an OR 2.0-like suite is close. Friedman’s article (2023) introduced Oren Gottfried, a neuro- and spine surgeon at Duke Health and a permanent consultant on Chicago Med, who gave his voice to OR 2.0. The media tends to discuss innovation-related storylines: we suggest these reflections potentially strengthen the audience’s sense of the content’s realism and increase their trust, enhancing the effectiveness of entertainment education and the probability of cultivation.
Medical dramas intend to distribute correct health-related content (Turow, 2010), and health storylines influence viewers (e.g., Hoffman et al., 2023a). The academic literature regularly discusses these productions’ entertainment education and cultivation effects (e.g., Chung, 2014; Tian and Yoo, 2020). These shows not only inform viewers about medical innovations but, by their framing, also help to form opinions. Studies have examined Chicago Med from various perspectives (Bitter et al. 2021; Cambra-Badii et al. 2021; Cambra-Badii et al. 2023; Carney et al. 2020; Eilmus and Clayton, 2024; Kim, 2022; Nádasi, 2022). There are medical series with a so far more considerable academic reception (for instance, Grey’s Anatomy); however, because of its complex content and longstanding success, we argue that the series deserves extended critical attention.
Research on AI ethics should pay more attention to AI representations in popular culture because, for instance, television series inform and affect millions of viewers, many of whom do not consume scientific content. Entertaining, emotionally engaging shows with relatable characters can transfer knowledge and shape opinions. Of course, even heavy viewers of Chicago Med can have multiple other sources of AI-related information, and other representations of AI in popular culture have the potential to affect them. The representation of medical AI is significant because it can become part of the healthcare of viewers who become patients: the portrayal can affect the acceptance of the technologies, and this has a direct, personal effect on patients’ lives.
Because of the above-described characteristics of the medical drama genre, these productions can be effective platforms for raising awareness and helping audiences form opinions about medical AI. To our knowledge, Chicago Med’s artificial intelligence representation is uniquely extensive within the genre: this paper intended to establish a dialog between the academic literature and the analyzed content. In the future, an empirical study could measure the exact effect of the analyzed storylines. If other medical series provide AI portrayals, they would be worth analyzing to build a more extensive research corpus.
Data availability
The paper presents a media analysis, focusing on a very popular television series that is available worldwide for streaming.
References
2001: A Space Odyssey (1968) Stanley Kubrick Productions. Metro-Goldwyn-Mayer
Alon-Barkat S, Busuioc M (2023) Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. J Public Adm Res Theory 33(1):153–169
Barnes P (2020) Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In: Proceedings of the 2020 conference on fairness, accountability, and transparency. 33–44
Bitter CC, Patel N, Hinyard L (2021) Depiction of resuscitation on medical dramas: proposed effect on patient expectations. Cureus 13(4):e14419. https://doi.org/10.7759/cureus.14419
Black Mirror (2011–present) Channel 4/Netflix. Zeppotron et al
Bodoh-Creed J (2017) The ER effect: How medical television creates knowledge for American audiences. In Kendal E and Diug B (eds) Teaching medicine and medical ethics using popular culture. Palgrave Macmillan, London, p 37–54
Brodie M, Foehr U, Rideout V, Baer N, Miller C, Flournoy R, Altman D (2001) Communicating health information through the entertainment media. Health Aff 20(1):192–199
Cambra-Badii I, Moyano-Claramunt E, Mir-Garcia J, Baños JE (2023) Teaching bioethical issues of COVID-19 pandemic through cinemeducation: A pilot study. Int J Educ Pract 11(3):339–350
Cambra-Badii I, Pinar A, Baños JE (2021) The good doctor and bioethical principles: a content analysis. Educación Médica 22(2):84–88. https://doi.org/10.1016/j.edumed.2019.12.006
Carney M, King TS, Yumen A, Harnish-Cruz C, Scales R, Olympia RP (2020) The depiction of medical errors in a sample of medical television shows. Cureus 12(12):e11994. https://doi.org/10.7759/cureus.11994
Cati A and Toschi D (2023) Biomedical imaging and rhetoric of diagnosis in medical dramas and docuseries. In Antonini S and Rocchi M (eds) Investigating Medical Drama TV Series: Approaches and Perspectives. 14th Media Mutations International Conference, Media Mutations Publishing. https://doi.org/10.21428/93b7ef64.65fb3b27
Champagne M, Tonkens R (2015) Bridging the responsibility gap in automated warfare. Philos Technol 28(1):125–137
Chicago Fire (2012–present) NBC. Wolf Entertainment
Chicago Med (2015–present) NBC. Wolf Entertainment
Chicago P.D. (2014–present) NBC. Wolf Entertainment
Chung JE (2014) Medical dramas and viewer perception of health: testing cultivation effects. Hum Commun Res 40(3):333–349. https://doi.org/10.1111/hcre.12026
Code Black (2015–2018) CBS. CBS Television Studios et al
Colwill M, Somerville C, Lindberg E, Williams C, Bryan J, Welman T (2018) Cardiopulmonary resuscitation on television: Are we miseducating the public? Postgrad Med J 94(1108):71–75. https://doi.org/10.1136/postgradmedj-2017-135122
Czarny MJ, Faden RR, Sugarman J (2010) Bioethics and professionalism in popular television medical dramas. J Med Ethics 36(4):203–206
Diem SJ, Lantos JD, Tulsky JA (1996) Cardiopulmonary resuscitation on television—miracles and misinformation. N Engl J Med 334(24):1578–1582
Dubber MD, Pasquale F, Das S (eds) (2020) The Oxford Handbook of Ethics of AI. Oxford University Press, Oxford
Eilmus A, Clayton J (2024) Eugenics and genetic screening in television medical dramas. Med Humanit 0:408–416. https://doi.org/10.1136/medhum-2023-012882
Eisenman A, Rusetski V, Zohar Z, Avital D, Stolero J (2005) Can popular TV medical dramas save real life? Med Hypotheses 64(4):885
ER (1994–2009) NBC. Warner Bros Television et al
Friedman H (2023) Duke neurosurgeon voices AI operating room on Chicago Med
Gao Y, Sharma T, Cui Y (2023) Addressing the challenge of biomedical data inequality: an artificial intelligence perspective. Annu Rev Biomed Data Sci 6:153–171
Grabe ME, Drew DG (2007) Crime cultivation: comparisons across media genres and channels. J Broadcast Electron Media 51(1):147–171
Grey’s Anatomy (2005–present) ABC. The Mark Gordon Company et al
Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Minds Mach 30(1):99–120
He S (2021) Who is liable for the UBER self-driving crash? Analysis of the liability allocation and the regulatory model for autonomous vehicles. In: Van Uytsel S, Vargas DV (eds) Autonomous vehicles: business, technology and law. Springer, Singapore, pp 93–111
Héder M (2023) The epistemic opacity of autonomous systems and the ethical consequences. AI Soc 38(5):1819–1827
Hether HJ, Huang GC, Beck V, Murphy ST, Valente TW (2008) Entertainment education in a media-saturated environment: examining the impact of single and multiple exposures to breast cancer storylines on two popular medical dramas. J Health Commun 13(8):808–823
High-Level Expert Group on AI (HLEG) (2020) Assessment list for trustworthy artificial intelligence. https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment
Hinkelbein J, Spelten O, Marks J, Hellmich M, Böttiger BW, Wetsch WA (2014) An assessment of resuscitation quality in the television drama Emergency Room: Guideline non-compliance and low-quality cardiopulmonary resuscitation lead to a favorable outcome? Resuscitation 85:1106–1110
Hirt C, Wong K, Erichsen S, White JS (2013) Medical dramas on television: a brief guide for educators. Med Teach 35(3):237–242
Hoffman BL, Hoffman R, VonVille HM, Sidani JE, Manganello JA, Chu KH, Felter EM, Miller E, Burke JG (2023a) Characterizing the influence of television health entertainment narratives in lay populations: a scoping review. Am J Health Promot 37(5):685–697. https://doi.org/10.1177/08901171221141080
Hoffman BL, Sidani JE, Miller E, Manganello JA, Chu KH, Felter EM, Burke JG (2023b) Better than any DARE program: qualitative analysis of adolescent reactions to EVALI television storylines. Health Promot Pract. https://doi.org/10.1177/15248399231177049
Idemudia and Onoshakpor (2023) Gender, workforce and artificial intelligence. In: 2023 IEEE AFRICON, pp 1–3
Ismail II, Salama S (2023) Depiction of nervous system disorders in television medical drama: a content analysis of 18 seasons of Grey’s Anatomy. Clin Neurol Neurosurg 224:107569. https://doi.org/10.1016/j.clineuro.2022.107569
Jain P, Slater MD (2013) Provider portrayals and patient–provider communication in drama and reality medical entertainment television shows. J Health Commun 18(6):703–722
Kato M, Ishikawa H, Okuhara T, Okada M, Kiuchi T (2017) Mapping research on health topics presented in prime-time TV dramas in “developed” countries: a literature review. Cogent Soc Sci 3. https://doi.org/10.1080/23311886.2017.1318477
Kaul V, Enslin S, Gross SA (2020) History of artificial intelligence in medicine. Gastrointest Endosc 92(4):807–812
Kendal E, Diug B (2017) Teaching Medicine and Medical Ethics Using Popular Culture. Palgrave Macmillan, London
Kiener M (2022) Can we bridge AI’s responsibility gap at will? Ethical Theory Moral Pract 25(4):575–593
Kim G (2022) Examining diversity: a content analysis of cancer depictions on primetime scripted television. J Cancer Educ 37(6):1842–1848
Kwan JL, Lo L, Ferguson J, Goldberg H, Diaz-Martinez JP, Tomlinson G (2020) Computerised clinical decision support systems and absolute improvements in care: meta-analysis of controlled clinical trials. BMJ 370:m3216. https://doi.org/10.1136/bmj.m3216
Lang BH, Nyholm S, Blumenthal-Barby J (2023) Responsibility gaps and black box healthcare AI: shared responsibilization as a solution. Digit Soc 2(3):52
Lee TK, Taylor LD (2014) The motives for and consequences of viewing television medical dramas. Health Commun 29:13–22. https://doi.org/10.1080/10410236.2012.714346
Lyell D, Coiera E (2017) Automation bias and verification complexity: a systematic review. J Am Med Inform Assoc 24(2):423–431
Matthias A (2004) The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6:175–183
Morgan M, Shanahan J (2010) The state of cultivation. J Broadcast Electron Media 54(2):337–355
Murphy ST, Hether HJ, Rideout V (2008) How healthy is prime-time? An analysis of health content in popular prime time television programs. Kaiser Family Foundation, Menlo Park, CA
Nádasi E (2016) Changing the face of medicine, alternating the meaning of human: Medical innovations in Grey’s Anatomy. Crit Stud Telev 11(2):230–243
Nádasi E (2022) A koronavírus-járvány ábrázolása az amerikai kórházsorozatokban [The portrayal of the coronavirus pandemic in American hospital series]. Információs Társadalom 22(3):9–23. https://doi.org/10.22503/inftars.XXII.2022.3.1
Nyholm S (2020) Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield Publishers
Pescatore G (2023) Why medical drama? An interdisciplinary study of narrative layers and societal impact. In: Antonioni S, Rocchi M (eds) Investigating Medical Drama TV Series: Approaches and Perspectives. 14th Media Mutations International Conference, Media Mutations Publishing. https://doi.org/10.21428/93b7ef64.c9d8cd00
Quick BL, Kriss LA, Rains SA, Sherlock-Jones M, Jang M (2023) An investigation into the portrayal of organ donation on Grey’s Anatomy Seasons 1 through 15. Health Commun. https://doi.org/10.1080/10410236.2022.2163051
Raji ID, Smart A, White RN, Mitchell M, Gebru T, Hutchinson B, Smith-Loud J, Theron D, Barnes P (2020) Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. In: Proceedings of the 2020 conference on fairness, accountability, and transparency, pp 33–44
Rideout V (2008) Television as a health educator: a case study of Grey’s Anatomy. Kaiser Family Foundation, Menlo Park, CA. https://www.kff.org/wp-content/uploads/2013/01/7803.pdf
Rocchi M (2019) History, analysis and anthropology of medical dramas: a literature review. Cinergie 15:69–84
Russell SJ, Norvig P (2016) Artificial intelligence: a modern approach. Pearson
Salamone G (2024) Here’s When Chicago Med Season 10 Premieres. https://www.nbc.com/nbc-insider/chicago-med-has-been-renewed-for-season-10-all-to-know
Salvagno M, Taccone FS, Gerli AG (2023) Artificial intelligence hallucinations. Crit Care 27(1):180
Shoaff B (2023) Is There Such Thing As Chicago Med’s OR 2.0? https://www.looper.com/1191338/is-there-such-thing-as-chicago-meds-or-2-0/
Shortliffe EH (1974) MYCIN: A rule-based computer program for advising physicians regarding antimicrobial therapy selection. Doctoral Dissertation. Stanford University, CA
Sutton RT, Pincock D, Baumgart DC (2020) An overview of clinical decision support systems: benefits, risks, and strategies for success. NPJ Digit Med 3:17. https://doi.org/10.1038/s41746-020-0221-y
Taddeo M, Floridi L (2018) How AI can be a force for good. Science 361(6404):751–752
The Good Doctor (2017–present) ABC. Sony Pictures Television
Tian Y, Yoo JH (2020) Medical drama viewing and medical trust: a moderated mediation approach. Health Commun 35(1):46–55. https://doi.org/10.1080/10410236.2018.1536959
Turow J (2010) Playing Doctor: Television, Storytelling, and Medical Power. University of Michigan Press, Ann Arbor
Valente TW, Murphy S, Huang G, Gusek J, Greene J, Beck V (2007) Evaluating a minor storyline on ER about teen obesity, hypertension, and 5 a day. J Health Commun 12(6):551–566
Westworld (2016–2022) HBO. HBO Entertainment
Author information
Contributions
EN contributed the review of the history of medical series and all media-related aspects, found mostly in “Artificial intelligence invasion in the Gaffney Medical Center”. MH contributed the review of AI ethics and all elements on AI technology and AI history, found mostly in “A review of real-world ethical issues of AI”. The plotline analysis and its mapping to ethical issues in “OR 2.0: from groundbreaking success to devastating complications” and “Emergency department: optimization or dehumanization of care?” was joint work by EN and MH, each contributing their respective expertise. “Discussion and conclusion” is also joint work by EN and MH.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethical approval
This article does not contain any studies with human participants performed by any of the authors, nor does it involve any experiments on animals. We used no personal data, medical data, or any other kind of human-related information, and no AI was involved in creating the paper. Therefore, this study is not subject to ethical approval.
Informed consent
This article does not contain any studies with human participants performed by any of the authors. We used no personal data, medical data, or any kind of human-related information. Therefore, informed consent was not applicable.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Nádasi, E., Héder, M. The future of medicine or a threat? Artificial intelligence representation in Chicago Med. Humanit Soc Sci Commun 11, 1346 (2024). https://doi.org/10.1057/s41599-024-03810-y