Introduction

In sociology, agency is generally defined as an individual’s capacity to engage the social world1, or the efficacy of human action2. In this paper, patient agency is defined as the patient’s capacity to engage efficiently with, act on, and assume responsibility for their state of health. The Merriam-Webster dictionary defines equity as “fairness or justice in the way people are treated.”

New artificial intelligence (AI)-based technologies such as large language models (LLMs) may progressively improve the delivery of healthcare by enhancing patient access, personalization, and engagement. With LLMs, we are rapidly reaching a time that is both exciting and concerning for healthcare, in which the AI technologies of today will no longer have solely an augmenting3 or even perhaps a guiding4 role in the clinician’s care arsenal, but will, rather, enable the patient to become her own clinician.

To ensure that LLM-based technologies reach their full potential, however, it is important that interested stakeholders engage in an in-depth discussion of their benefits, risks, and limitations. Therefore, the present manuscript attempts to outline the benefits and risks of a paradigm shift in the practice of medicine that promotes health equity worldwide.

Benefits of the use of LLMs in promoting patient agency

Patient benefits of LLMs

Large language models carry an as-yet-unknown potential to help healthcare stakeholders center their care on patients’ needs by improving access, engagement, and patient agency. However, LLMs, like other AI-based technologies, also present significant challenges associated with patient privacy, security, bias, and accountability that have to be taken into consideration5,6. Because LLMs are able to formulate comprehensible responses to complex inquiries, they offer an opportunity to advance healthcare delivery for all health- and digitally-literate patients worldwide, irrespective of where they live and their social determinants of health.

Enhancing access to care

Large language models have opened a window onto a vast landscape of new possibilities regarding the quality of care that patients can access and how they access it. LLMs can simplify the description of medical conditions, assist in drafting medical documents, create training programs, and streamline research workflows, and they may potentially transform healthcare by enhancing diagnostics, medical writing, education, and project management7,8.

LLMs offer functionality (e.g., text-to-speech) that may enhance access to care for patients with disabilities, and they can also translate output into other languages, thus making healthcare more accessible to individuals with special needs worldwide.

Enhancing precision medicine

For many years, clinicians and industrial stakeholders have investigated novel ways to deliver personalized care; however, factors such as shortages of clinicians, budget constraints, and over-burdened systems have largely prevented these efforts from achieving their goals. LLMs can analyze large volumes of patient data, such as genetics9,10, lifestyle11, electronic health records (EHRs)12,13, and medications14,15; therefore, they may enhance precision medicine by identifying potential risks, suggesting preventive strategies, and developing personalized treatment plans for patients with chronic or rare conditions16.

Promoting patient engagement and outcomes

It has been recognized that increased patient engagement, with patients taking more ownership of their health-related decisions, often leads to better outcomes. Patients who adhere more closely to their treatment plans are also more likely to adopt effective preventive actions, which eventually results in improved short- and long-term outcomes.

LLMs have the transformative potential to be powerful allies in promoting patient engagement17,18. By enhancing personalized patient education, access to understandable medical information, and clinical decision support; by facilitating patient-clinician communication and the understanding of consent forms; by providing personalized health plans and coaching; by extracting key information from patient or clinician notes; and by empowering self-management and supporting shared decision-making, LLMs can help create a more patient-centered, proactive, and equitable healthcare experience. However, responsible development, ethical implementation, and a focus on human connection are crucial to realizing these benefits and ensuring that LLMs truly empower and engage patients in their care.

Promoting patient agency in shared decision-making

Patient agency refers to “the abilities and capabilities of patients to act, contribute, influence, and make decisions about their healthcare”19. It depends on both the readiness and inclination of patients to take part in care decisions, and the barriers sustained by healthcare providers, as well as traditional services and systems, that limit such engagement19. Shared decision-making involves a concerted practice that includes a patient and her healthcare team working together to reach joint decisions, which are based on the patient’s informed preferences and clinical evidence19.

Despite increased appreciation of the significance of health-literacy at the policy and global levels20,21, and even though the practice of medicine today is more patient-centered than in the past, many clinicians believe that it is the patients’ responsibility to improve their health literacy, rather than the clinician’s responsibility to adapt their communication and educational methods to the different health-literacy levels and needs of their patients. A confounding factor in this situation is the confusion between health- and digital-literacy, which, while related, are intrinsically different.

At the dawn of today’s worldwide digital transformation22, the generally accepted assumption has been that digitally-literate people are also health-literate; however, while digital literacy is a valuable asset and a contributing factor to health literacy in today’s digital age, it is not a guarantee of or direct substitute for health literacy. Being digitally literate makes it easier to access health information, but it does not automatically ensure someone can understand, evaluate, and apply that information effectively in a health context23.

While digital technologies have enabled and facilitated increased access to health-information and healthcare applications, not all individuals will have the knowledge or the capacity to access them effectively24. Thus, for health-literacy to be achieved, patients not only need accurate information they can trust, but they also need sufficient skills to identify accurate and reliable sources of information within the enormous available array of resources to which they are now exposed through LLMs.

Thus, one can frame the patient-centered changes empowered by LLMs in terms of: (i) values and preferences--patients may prioritize certain outcomes or aspects of care differently than clinicians (e.g., quality of life vs. longevity, natural remedies vs. aggressive interventions); (ii) understanding of their condition--they have done their research, formed opinions, and have specific needs and questions; and (iii) practical limitations--they might have constraints related to lifestyle, finances, time, social support, or personal beliefs that impact their ability to adhere to certain treatment plans; a patient might intellectually understand and even agree with a plan, but practically, it might be very difficult for them to adopt and implement it in their daily life.

As patients worldwide become increasingly health-literate, it appears that there will be a need for compromise between an LLM’s optimal clinical care (or, from a medical perspective, the best course of action) and what patients feel they need and are able to achieve; this compromise will depend on the level of risk a patient is willing to accept, as well as the severity of the illness. Eventually, improved awareness of a patient’s health should facilitate more effective communication and more fruitful engagement with the patient’s clinicians. At the systems level, relevant infrastructures and processes also need to be in place to support such a system (comprising health-literate physicians, specialists, and other healthcare providers).

Promoting patient agency in individual decision-making

While improving health literacy has been a stated objective20,21, there are aspects of health-literacy, such as the understanding of risk and benefit needed in a discussion of treatment choices and the resulting decision-making, that may not always be feasible or even attainable.

Eventually, improved levels of patient health-literacy result in a better understanding of their disease and the available types of treatment, and render patients better equipped to care for themselves, even if they do not have immediate access to therapies from which they could benefit. Assuming broadband internet access exists, such benefits could be realized in remote areas, for example, in Africa, Central and South-East Asia, and Latin America, where under-resourced medical systems and the remoteness of some areas may force people to prioritize resources and travel in order to see a specialist.

In conclusion, given that LLMs offer a real and growing capacity for health education, knowledge, and competency that could advance patient empowerment and agency, their integration and adoption by healthcare services and structures could minimize patient- or physician-driven medical errors (Box 1). While patients are becoming the drivers of this transformation, the process will require their worldwide participation in leading roles in determining strategy, making policy, establishing governance, and building and sharing practices and evidence, such that patient empowerment and engagement become broader and deeper at all levels (i.e., education, research, care, regulation, governance, etc.).

Risks of the use of LLMs in promoting patient agency

Healthcare providers should also strive to understand the potential benefits, risks, and ramifications of LLMs in order to guide patients appropriately when possible. Similarly, patients and their relatives who may seek “expert” opinion from LLMs should be mindful of the limitations and risks of these technologies, as well as of their own lesser capacity, compared to an expert, to understand and appreciate the information they have been provided (Box 1).

Using LLMs with caution

Despite LLMs’ growing utility in assisting diagnosis and improving patient-physician communication, challenges persist, including limitations in contextual understanding, assessment of risk, and degree of reliance. While an explosion of LLM-related research has focused on improving medical writing, diagnostics, and communication, there remains a need for careful validation of medical knowledge, as well as for addressing ethical concerns with respect to how that knowledge is integrated into traditional medical practice.

It is imperative that all stakeholders not become overly enthusiastic about the potential utility of LLMs in healthcare. We must always keep in mind that the ultimate purpose of any new technology is to facilitate the delivery of medical care in a way that improves patient outcomes while protecting human dignity (i.e., by respecting privacy and safety). Therefore, it is vital that we are transparent about the potential limitations and risks associated with the use of LLMs.

Accuracy, accessibility and completeness

Accurate and accessible medical information is key to successful patient-centered care. Obtaining relevant risk and benefit information on treatment alternatives ensures optimal health outcomes for patients and their relatives seeking reliable information that they can understand and trust about a specific medical condition or treatment.

Owing to the inherent nature of LLMs, which are algorithms that generate output by analyzing vast amounts of text, they run the risk of including biased views and inaccuracies in their outputs. Bias may manifest when, for example, LLMs draw conclusions from a data source in which certain population demographics are underrepresented3,4. Of particular concern are so-called LLM hallucinations, which can potentially harm patients by delivering inaccurate diagnoses or recommending improper treatment options8. To guard against such problems, it is essential that LLMs, like other AI tools, be subject to rigorous pre-market evaluation and post-market monitoring4. One way to accomplish this is by including medical professionals (the “expert human in the loop” concept) throughout the development, evaluation, and deployment of LLMs in clinical practice.

Privacy and security

As with traditional AI-based healthcare technologies, LLM developers and stakeholders must recognize and address patient privacy and security concerns. LLM developers must be as transparent as possible, without compromising model performance, with patients and the industry about the functionality of their algorithms and the potential risks (e.g., compromised patient privacy) they may present3. The collection, processing, storage, and sharing of sensitive patient information raise significant privacy concerns, at least one major concern being the risk of unauthorized access or data security breaches25. As LLMs interact increasingly with patients and healthcare providers, they may increasingly collect and store natural-history health information (i.e., medical history, test results, diagnoses, other sensitive data, etc.) that needs to be safeguarded.

With multimodal patient data collection, another privacy concern involves the non-trivial risk of patient re-identification26. Even if the collected data undergo complete de-identification, it has been shown that individuals can still be re-identified by linking medical data with other available non-medical data27.
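The linkage mechanism behind such re-identification can be illustrated with a minimal sketch (all records and field names below are hypothetical): a “de-identified” medical dataset that still retains quasi-identifiers (ZIP code, date of birth, sex) is joined against a public roster that shares those fields, and any record with a unique match is re-identified.

```python
# Hypothetical "de-identified" medical records: names removed, but
# quasi-identifiers (ZIP code, date of birth, sex) retained.
deidentified_medical = [
    {"zip": "02138", "dob": "1945-07-29", "sex": "F", "diagnosis": "hypertension"},
]

# Hypothetical public roster (e.g., a voter list) sharing the same fields.
public_roster = [
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-29", "sex": "F"},
    {"name": "A. Smith", "zip": "02139", "dob": "1960-01-15", "sex": "M"},
]

def link(medical, roster):
    """Re-identify medical records by joining on shared quasi-identifiers."""
    matches = []
    for rec in medical:
        candidates = [p for p in roster
                      if (p["zip"], p["dob"], p["sex"])
                      == (rec["zip"], rec["dob"], rec["sex"])]
        if len(candidates) == 1:  # a unique match re-identifies the patient
            matches.append((candidates[0]["name"], rec["diagnosis"]))
    return matches

print(link(deidentified_medical, public_roster))  # → [('J. Doe', 'hypertension')]
```

The sketch shows why removing direct identifiers alone is insufficient: whenever the combination of quasi-identifiers is unique in the population, the join yields exactly one candidate, and the sensitive attribute (here, the diagnosis) becomes attributable to a named individual.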

Bias and accountability

While a clinician may be aware of whether an AI algorithm is suitable for use in the particular case of a specific patient, patient-driven use of LLMs for medical decisions may be vulnerable to bias in the model or in its training data3,4. For that matter, OpenAI explicitly states in its terms of use that it assumes no responsibility for the content generated by GPT28, making it clear that the burden of error falls entirely on the user. This inevitably poses the question of who will be held accountable in the case of an inaccurate or inappropriate output leading to harm, blurring the lines of accountability and responsibility3.

Thus, if a patient adopts LLMs for medical advice, it is essential that she investigate the diversity of the LLM’s training data and be appropriately cautious about that advice. It is of paramount importance to ensure high visibility of LLMs’ guidelines and warnings, and to ensure that users sufficiently understand the effectiveness of these tools so that they can assess the reliability of their recommendations.

Clinician responsibility

LLMs may provide significant benefits to patients, as noted previously, but will also require significant oversight from clinicians to monitor for abnormalities, communicate with patients, and update treatment strategies based upon new information. For clinicians and healthcare systems to incorporate these technologies into clinical practice, it will be necessary for the administrative burden to be recognized by payers and compensated appropriately. This will be a major challenge for systems built on legacy infrastructure that lacks interoperability and adaptability5,29, and for smaller organizations that may not have the resources to undertake such transformations and modernizations.

LLM performance “Ups” and “Downs”

While Google Search has been the most frequent outlet for patient-driven medical learning and guidance, ChatGPT performed better than Google Search when questioned on general medical knowledge. Importantly, however, it scored worse when asked for healthcare-related recommendations30.

In a study aimed at evaluating its ability to generate a differential diagnosis (DDx), alone or as an aid to clinicians, a Google LLM optimized for diagnostic reasoning was evaluated on 302 challenging, real-world New England Journal of Medicine case reports and exhibited standalone performance that exceeded that of unassisted clinicians. The DDx score was higher for clinicians assisted by the Google LLM than for clinicians assisted by conventional search engines and standard medical resources. Furthermore, clinicians assisted by the Google LLM arrived at more comprehensive differential lists than those working without its assistance, suggesting that the LLM could broaden clinicians’ and patients’ access to specialist-level expertise31.

Incidentally, the prevailing view that incessant LLM scale-up (i.e., increases in model complexity, in the volume of data on which models are trained, and in computational infrastructure) will render models more potent and flexible has not proven correct, since larger and more instructable LLMs may become less reliable on tasks of low difficulty32. In addition, while early LLMs often avoided user questions, scaled-up models tend to give seemingly logical yet incorrect answers much more often, including errors on difficult questions that human overseers often overlook32. These observations underscore the need for a new conceptual strategy for the design and development of general-purpose AI, particularly in the high-stakes healthcare domain, where a predictable distribution of errors is needed32.

Importantly, the text generated by LLMs may also contain hallucinations arising from a variety of causes, such as unreliable sources33, probabilistic generation34, biased training data35, insufficient context34, and self-contradictions33. Researchers have suggested that these limitations can be overcome by training these algorithms on datasets from healthcare domains, e.g., EHR data, as with GatorTron36,37, or by constraining LLMs with medical datasets, as with Med-PaLM 238 and Flan-PaLM39.

Conclusions

The World Health Organization has defined equity as the absence of avoidable, unfair, or remediable differences among groups of people, whether the groups are defined socially, economically, demographically, or geographically40. In order for health equity to be achieved, every citizen should have a reasonable opportunity to gain access to fully available healthcare.

While the application of AI in medicine has been proposed as a “democratizer” of healthcare, reducing worldwide disparities and improving health equity, LLMs offer an even more radical transformation in the delivery of healthcare by increasing patient agency in the patient-clinician relationship and, for the first time, enabling patients to make medical decisions by and for themselves. Additionally, LLMs may (i) shift the focus to health and prevention, empowering individuals to manage their health proactively and potentially reducing reliance on reactive, expensive healthcare interventions; and (ii) expand access to personalized health support by providing continuous, affordable, and personalized health coaching and monitoring to individuals, regardless of their location or socioeconomic status. However, such tectonic shifts in healthcare do not come without risks that may endanger patient well-being.

The implications of the aforementioned discussion for the clinician-patient relationship are as follows: (i) Healthcare is expected to become more patient-centered--clinicians are expected to move beyond simply presenting “the best” medical option and engage in shared decision-making, understanding and respecting patient values and practical limitations; (ii) Discourse is key--instructive, formal, extended, and effective expression of thought is crucial for bridging the gap between clinical expertise and patient understanding, ensuring patients are informed enough to truly participate in compromise and decision-making; (iii) “Optimal clinical care” is a more nuanced concept--from an LLM point of view, it might be redefined in the future to include not only medical effectiveness but also patient satisfaction, adherence, and quality of life, acknowledging the patient’s perspective as integral to successful healthcare; and (iv) Training for healthcare professionals needs to adapt to this rapidly changing environment--medical education needs to continue to emphasize communication skills, shared decision-making techniques, and understanding of patient perspectives, alongside traditional medical knowledge, but must do so in the context of this new health ecosystem. Thus, a positive and necessary shift in healthcare systems and philosophies towards greater patient involvement and personalized approaches is evolving. It is, therefore, crucial to acknowledge the complexity of balancing clinical expertise with patient autonomy, and to underscore the importance of open communication and shared decision-making in this evolving landscape.

Therefore, LLMs hold further promise for democratizing healthcare by improving access, affordability, and efficiency, and by breaking down language barriers and improving cultural competency. They can empower patients and clinicians alike, eliminate impediments to information and care, and potentially transform healthcare delivery, especially for underserved populations.

The challenges of integrating LLMs into the important patient-physician relationship41, or into exclusive patient decision-making, involve the uneven relationships between patients, clinicians, and other healthcare professionals, services, and systems; the failure to recognize the added value of health-literate patients; and the often-observed conservative and inflexible hierarchical cultures that are resistant to change. Healthcare structures and settings further contribute to the uneven relationships between patients and clinicians during shared decision-making, which often result in a failure to recognize the patient’s experience and to build trust.

It appears that LLMs will progressively raise patients’ health literacy to the level needed to improve patient agency and autonomy, leading to improved mutual acceptance and respect between patients and physicians. To deepen the understanding of these ideas and their associated challenges, subsequent steps should involve close investigation of the events, patterns, and structures identified in this manuscript as barriers to patient agency, recognizing the irreplaceability of human medical professional knowledge and experience while also emphasizing the importance of integrating AI technologies with human expertise.