Introduction

The electronic health record (EHR) encapsulates an entire longitudinal patient record across multiple care delivery settings. In this paper, the term EHR refers to a longitudinal, interoperable digital system designed to integrate patient information across sites of care. Equivalent terms such as Electronic Medical Record or Electronic Patient Record are used in other regions (e.g., Europe and the United Kingdom) to describe similar concepts; however, we adopt the broad EHR terminology consistent with usage in North America. It is intended to streamline clinical workflow, provide decision support, and enable quality management and outcomes reporting1. The potential benefits of EHRs include improved access to complete patient records at the point of care, reliable e-prescribing, greater coordination of care, improved coding and billing, and the creation of consistently legible documents2. However, most clinicians would agree that EHRs have fallen far short of their full potential. This shortcoming is even more troubling considering the near-ubiquitous adoption of these digital tools3,4.

While the gap between the actual and potential benefits of EHRs is multifactorial, the cognitive burden associated with daily EHR use is one of the most important contributors, largely the result of suboptimal software usability5. In their current state, EHRs sub-optimally serve the context-specific needs of end-users. Poor usability negatively impacts clinical efficiency and data quality6. Several studies demonstrate poor usability scores for EHRs; some even document negative impacts on patient outcomes, such as medication safety events or poor documentation resulting in patient harm7,8,9. Other studies have found dose-dependent relationships between worse EHR usability scores and higher odds of burnout. One survey of more than 4000 physicians found that 63% agreed that the EHR added to daily practice-related frustrations7,8. These figures might underestimate the problem because, while some usability aspects can be quantified, others may be more subjective, complex, or difficult to define and measure.

While usability issues account for many of the shortcomings of EHRs, other key issues should be considered. These include policy and regulatory constraints and the overall maturity of healthcare-related technologies. Howe et al. outline several usability challenges that can contribute to patient harm and poor user satisfaction10. Our analysis takes a similar approach with an expanded focus to consider key issues that include: (1) data entry and documentation, (2) interoperability, (3) alerts and clinical decision support (CDS), (4) visual display, (5) system automation and defaults11, (6) user-centered design processes, (7) technology maturity, and finally (8) policy and regulatory considerations.

The implementation of EHR systems in low-resource settings should also be considered. Low-resource settings are shaped by multiple determinants of health, including connectivity, infrastructure, and literacy. A recent scoping review and expert-consensus study published in the Bulletin of the World Health Organization identified 127 determinants across digital, social, commercial, and political domains and highlighted connectivity, device/software availability, and digital literacy as among the most urgent to address. Recognizing and mitigating these determinants is essential to promote equitable implementation of EHR systems in low-resource settings (Fig. 1).

Fig. 1: Key domains contributing to electronic health record (EHR) usability challenges and the corresponding emerging solutions.

Together, these themes illustrate how technical innovation, human-centered design, organizational processes, and supportive policy frameworks can collectively advance EHR functionality, clinician efficiency, and patient safety.

This review aims to enumerate the shortcomings of currently available EHR systems and suggest innovative solutions to improve their use. At present, these systems remain static and do not readily align with clinical workflows. The immaturity of available technologies further limits their ability to communicate, interpret, and act intelligently upon complex healthcare information12. Further, EHR development and design processes and a nascent policy and regulatory environment restrict rapid reform. These EHR systems cannot reach their full potential unless they evolve to reflect user-friendly best practices, seamlessly integrate into existing workflows, and incorporate flexible architectures that facilitate greater user control and customization tailored to diverse workflows.

Data entry and documentation

Understanding the problem

EHR systems must enable users to input structured (i.e., vital signs, medication entry, etc.) and unstructured (i.e., clinical notes) data. Current EHRs do not excel at this. For each hour spent on direct patient care, physicians spend up to two hours on EHR-related tasks, which include data entry, documentation, patient messaging, and other activities13. Outpatient physicians spend approximately one-third of dedicated patient time interacting with the EHR, which can negatively impact the patient-physician relationship, clinician job satisfaction, and quality of the patient encounter14. Additionally, inadequate user interfaces can result in errors that may lead to unintended patient harm.

Creating solutions

The use of scribes to assist with documentation burdens has been shown in some studies to improve physician workflow, decrease burnout, and enhance job satisfaction. Still, scribes require time-intensive training, have high turnover rates, and cost up to $50,000 annually13,15. In other models, physicians record audio during or after patient encounters, which is then transcribed asynchronously by “telescribes,” leaving traditional data entry of structured elements in the hands of the clinical workforce. On the other hand, ambient listening technologies leverage natural language processing (NLP) algorithms to automatically convert conversations and physician instructions into data entry and meaningful notes, though they may require significant training13. Large language models (LLMs) are a type of generative AI system that can autonomously learn from textual data to understand and generate sophisticated text. While LLMs currently have numerous shortcomings that limit immediate use in healthcare, they could theoretically alleviate administrative burdens on the workforce if applied ethically, responsibly, and appropriately. Utilizing LLMs to assist with patient messages is being trialed at many institutions16. While clinical accuracy and acceptance appear to vary across studies, and additional work is needed to improve automated message generation capabilities, such solutions could drastically reduce administrative burdens17. Ambient intelligence technology and LLMs may not fully resolve all the shortcomings of EHRs at this time. Still, if leveraged correctly, they can facilitate better physician–patient interactions, reduce time wasted on some EHR-related activities, and possibly reduce unintended harm caused by data entry errors.

Interoperability

Understanding the problem

The data contained in an EHR must be standardized so that it can be exchanged and used between different sites within the same health system (inter-system), different sites using the same vendor (intra-vendor), and between different vendors (inter-vendor)18. Lack of interoperability can lead to fragmented patient records, delayed communication between care teams, and increased clinician time required to obtain and communicate health information to relevant stakeholders19.

Many of the interoperability challenges can be addressed by adopting standards. HL7 v2, together with vocabulary standards such as RxNorm, LOINC, and SNOMED, provides a foundation for data exchange, integration, sharing, and retrieval of electronic health information20. However, current HL7 standards are limited in handling novel health data types, accommodating the emerging mobile health app ecosystem, and enabling flexible data sharing between health systems, third parties, and patients.
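The role of vocabulary standards can be pictured as a translation step: a site-specific code is mapped to a shared code before exchange. The sketch below is a minimal illustration; the local codes and the mapping function are hypothetical, though the two LOINC codes shown are real published codes.

```python
# Hypothetical local-to-LOINC mapping table. A real terminology service
# would cover thousands of codes and handle versioning and code retirement.
LOCAL_TO_LOINC = {
    "LAB_GLU": "2345-7",  # Glucose [Mass/volume] in Serum or Plasma
    "LAB_K":   "2823-3",  # Potassium [Moles/volume] in Serum or Plasma
}

def to_loinc(local_code):
    """Translate a site-specific lab code into its shared LOINC code."""
    try:
        return LOCAL_TO_LOINC[local_code]
    except KeyError:
        raise ValueError(f"no LOINC mapping for local code {local_code!r}")
```

Once both the sending and receiving systems speak LOINC, a glucose result drawn at one institution can be matched unambiguously to the same analyte at another.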

Application programming interfaces (APIs) address several interoperability limitations. APIs are software intermediaries that allow near real-time data transmission between applications, retrieving information on demand. In contrast to the HL7 model, in which information is constantly “pushed,” data is “pulled” only when specifically requested, enhancing efficiency and minimizing exposure of protected health information.

There are both custom and standard APIs. Within the subset of standard APIs lie the Fast Healthcare Interoperability Resources (FHIR) APIs, a common set of APIs established so that healthcare platforms can better communicate and participate in data sharing21,22. Healthcare information is packaged as “resources” in a standardized way, enabling these resources to be more easily identified and exchanged. The Substitutable Medical Applications and Reusable Technologies (SMART) on FHIR API enables the transport of these resources to and from other systems, facilitating the launch of external applications directly from an EHR’s interface23.
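To illustrate how FHIR packages information as resources, the sketch below builds a minimal Patient resource as plain JSON. The element names follow the FHIR Patient structure, but the values and the helper function are hypothetical, and a real resource would carry identifiers, extensions, and references to other resources.

```python
import json

def make_patient_resource(patient_id, family, given, birth_date):
    """Build a minimal FHIR Patient resource as a Python dict.

    Only a few common elements are shown for illustration; the dict
    serializes directly to the JSON exchanged over a FHIR REST API.
    """
    return {
        "resourceType": "Patient",
        "id": patient_id,
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,  # FHIR date format: YYYY-MM-DD
    }

patient = make_patient_resource("example-001", "Doe", "Jane", "1980-04-02")
print(json.dumps(patient, indent=2))
```

Because every system packages a patient the same way, a receiving application can locate the name or birth date without any site-specific parsing logic.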

While FHIR standards aim to reduce complexity without compromising information integrity, challenges of adoption, implementation, and complexity remain22. Many external applications do not fully integrate with EHR workflows. A proliferation of SMART on FHIR apps could place extra burdens on end-users, who must choose from numerous applications and navigate to external interfaces, resulting in app fatigue and workflow interruptions22,24.

Creating solutions

Emerging standards can address some of these challenges. CDS Hooks is a FHIR-based HL7 standard that can assist in recommending relevant SMART on FHIR apps to end-users24. “Hooks” within workflows can trigger prefetched information to present synchronously within the user’s workflow, or they can trigger the launch of an external application25.
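The shape of a CDS Hooks interaction can be sketched as follows: a service invoked on a hook returns “cards” for the EHR to display. The card fields (`summary`, `indicator`, `source`, `links`) mirror the published CDS Hooks specification, while the service logic, the eGFR threshold, and the app URL are hypothetical placeholders.

```python
def patient_view_service(prefetch):
    """Hypothetical CDS service invoked on the 'patient-view' hook.

    In practice `prefetch` would contain FHIR resources the EHR fetched
    in advance; a single flag is used here to keep the sketch self-contained.
    """
    cards = []
    if prefetch.get("egfr", 100) < 30:
        cards.append({
            "summary": "Severely reduced kidney function",
            "indicator": "warning",  # per spec: info | warning | critical
            "source": {"label": "Renal dosing guidance (example)"},
            "links": [{
                "label": "Open renal dosing app",
                "url": "https://example.org/renal-app",  # hypothetical SMART app
                "type": "smart",
            }],
        })
    return {"cards": cards}

# A low eGFR yields a warning card with a suggested SMART app launch;
# a normal value yields no cards, so the clinician sees nothing extra.
response = patient_view_service({"egfr": 24})
```

The key design point is that the EHR, not the user, decides when a relevant app is worth surfacing, which is what makes the recommendation synchronous with the workflow.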

Ideally, a truly interoperable EHR would enable third-party applications to integrate seamlessly within the EHR instead of simply permitting data to be “pulled” into the application’s external environment. SMART Web Messaging is a companion standard that allows an external application to communicate with the EHR, place orders, and relay actions taken within the application back to the parent EHR25. Even though these standards are still maturing, there is momentum behind them. Further advancement in FHIR-based approaches will help improve workflow integration, user interfaces, and user experience, ultimately increasing adoption across health systems.

Further, existing standards and EHRs focus on clinical entities such as hospitals, clinics, and pharmacies or, in the case of FHIR, software developers26. Innovative EHR developers must incorporate other data sources, including wearable devices, which will enable EHRs to generate more nuanced patient- and population-level insights.

While APIs create a technical pathway for data exchange, the presence of an API alone does not guarantee interoperability. APIs can be designed in inconsistent or proprietary ways that limit their utility and even add new layers of fragmentation27. Without alignment on standards, governance, and implementation practices, APIs risk functioning as narrow point-to-point connections rather than true enablers of system-wide interoperability. In this sense, an API is best understood as a conduit: its value depends on the consistency, structure, and intent of the data it transmits.

The ongoing move toward FHIR-based APIs illustrates how standards transform APIs from simple interfaces into meaningful interoperability tools. By coupling APIs with standardized data models, governance frameworks, and workflow integration strategies, health systems can avoid the pitfalls of “API sprawl” and ensure that external applications contribute to, rather than complicate, clinical care28. Addressing these higher-level design and implementation questions will be essential if APIs are to move beyond being technical functions and become effective instruments for advancing interoperability in healthcare.

Alerts and clinical decision support

Understanding the problem

CDS systems can improve clinical decision-making by facilitating access to scientific knowledge, patient data, and clinical suggestions at the point of care29,30. CDS systems communicate with clinicians by generating computerized alerts, guidelines, order sets, digital decision trees, documentation templates, and patient data reports30. There have certainly been interventions that have improved medication management and safety, but not all modalities are effective. Alerts, for example, often disrupt workflow, are ineffective or irrelevant, and sometimes excessive. They tend to fire indiscriminately based on high-level preset rules31. Similarly, digital decision trees can provide a centralized repository for institution-endorsed management algorithms, but they can also disrupt clinical workflows and require increased cognitive effort30. In addition, some CDS tools, such as patient dashboards, may be too complex and rely on users’ digital literacy30. Further complicating the issue is the heterogeneity of user preferences—what may be disruptive to one end-user may be helpful to another, depending on their particular role, context, and practice patterns.

Machine learning (ML) advances have created new opportunities for CDS because they can uncover previously unknown associations that can assist clinicians in decision-making and solve several of the above-mentioned challenges32. While many CDS algorithms have been developed to help with order placement, point-of-care alerts, targeted information display, and workflow support, most exist as point solutions, and few are effectively incorporated into practice31,33.

Creating solutions

If intelligently designed, EHRs can facilitate rather than hinder the application of ML CDS within practice. As mentioned earlier, ML CDS solutions have not been extensively incorporated into clinical practice. A significant barrier to adopting ML CDS is the difficulty of integrating algorithms effectively into EHRs. Deploying algorithms requires a unique skillset spanning different production environments, networks and protocols, and file formats34. The emergence and practice of machine learning operations (MLOps) will help support CDS algorithms in deployment, monitoring, and governance to facilitate more effective use of an algorithm at the correct time and for the correct patient35.

Once integration and deployment issues are more readily solved, EHRs can facilitate more advanced ML-driven CDS systems. Whereas many CDS solutions are point solutions in that they predict a single outcome, developers need to create CDS systems that mirror a clinician’s decision-making process in real time. As the user navigates through the EHR, such systems synchronize meaningful guidance and recommendations with a workflow.
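One way to picture workflow-synchronized CDS is a dispatcher that runs only the recommendation rules registered for the user’s current location in the EHR. The sketch below is purely illustrative: the context names, clinical rules, and thresholds are invented, and a production system would draw on deployed ML models rather than hand-written conditions.

```python
# Hypothetical registry mapping EHR workflow contexts to CDS callbacks.
CDS_REGISTRY = {}

def cds_for(context):
    """Decorator registering a recommendation function for a workflow context."""
    def register(fn):
        CDS_REGISTRY.setdefault(context, []).append(fn)
        return fn
    return register

@cds_for("order-entry")
def renal_dose_check(patient):
    # Illustrative rule: only fires while the clinician is placing orders.
    if patient.get("egfr", 100) < 30:
        return "Consider renal dose adjustment"

@cds_for("chart-review")
def overdue_screening(patient):
    # Illustrative rule: surfaced only during chart review, not order entry.
    if patient.get("years_since_colonoscopy", 0) > 10:
        return "Colorectal cancer screening may be overdue"

def recommendations(context, patient):
    """Run only the rules registered for the user's current context."""
    results = [fn(patient) for fn in CDS_REGISTRY.get(context, [])]
    return [r for r in results if r]
```

Because guidance is keyed to context, the renal check appears during order entry and the screening reminder during chart review, rather than both firing indiscriminately.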

Visual display and data presentation

Understanding the problem

EHR interfaces frequently present information in confusing, cluttered, and counter-intuitive ways, imposing a substantial cognitive burden on clinicians11. Data is often fragmented across different displays or consolidated into a single cluttered display requiring sequential viewing—designs that mimic obsolete paper-based workflows34. Users must navigate interfaces searching for information while retaining other information in working memory, increasing cognitive load and impairing access to data required for clinical functions34,36. This task fragmentation impedes effective navigation by dividing a single clinical task into disjointed steps34. For instance, if a physician orders an imaging study, the system may inquire about the history of contrast allergies and use of sedation, forcing the clinician to abandon the order screen to retrieve the necessary information elsewhere and then return to the order—sometimes repeatedly—to manually input the requested data.

EHRs are designed to capture and store data, but cannot effectively synthesize and communicate insights from the data16. The EHR data visualizations remain rudimentary, offering limited clinical utility through simple graphs and charts that are cumbersome to manipulate or customize to specific clinical questions. Clinical systems lack sophisticated methods to capture and represent the heterogeneity, vastness, and temporal nature of a longitudinal patient record, thereby placing the burden of interpretation solely on the clinician37.

Beyond their role in clinical workflows, EHRs also hold considerable promise as a source of data for clinical and translational research. The breadth of longitudinal and multimodal patient information could be used to study disease patterns, treatment outcomes, and system performance at scale. However, this potential is often unrealized because of limitations that directly constrain research capacity38. Documentation practices frequently produce incomplete or inconsistent data, while variation in coding across institutions reduces comparability. Extracting analyzable information requires extensive preprocessing, harmonization, and technical expertise, slowing research efforts and limiting who can participate. Privacy protections and governance restrictions, though essential, can further fragment data access and prevent integration across sites. Together, these issues significantly limit the extent to which EHRs can be leveraged as a reliable research resource39.

Standardized data models can improve consistency and facilitate multi-institutional studies, while large language models and NLP tools offer ways to transform unstructured notes into analyzable data. Federated data networks offer an alternative to centralized repositories by allowing analyses across sites without moving sensitive information40. These innovations point toward a future in which EHRs contribute more directly to scientific discovery and evidence generation. Yet realizing that future will require not only technical advances but also sustained investment in infrastructure, stronger interoperability standards, and collaborative governance to ensure that EHRs can support research in a scalable and trustworthy way.

Creating solutions

Today’s EHRs lack the adaptability to learn from and respond to different contexts, specialties, providers, and times of day. For example, a primary care physician may begin their encounter by reviewing information in the chart pertinent to the patient’s documented chief complaint (such as a history of recurrent abdominal pain) and then look through the chart according to their preferred sequence of the problem list, vital signs, and then recent labs. A spine surgeon may have a similar approach but may wish to view the most recent spine imaging. While clinicians vary their practice based on patient and context, most develop consistent, personalized routines that they follow. The EHR should be able to learn these patterns and present relevant information in one centralized location. Many EHRs contain a standard or customizable summary sheet, but these views are often cluttered with extra details and displayed inefficiently. A better system would present relevant, salient, and critical information for each stage of the decision-making process34. Taken a step further, autonomous methods in deep learning could be employed to learn an individual user’s practice pattern to optimize their navigation41. If a particular user learns about a patient by progressing through the chart in a particular order, autonomous deep learning algorithms, meaning ML algorithms that can continuously learn from data without human supervision or intervention, could learn these practice patterns and then use them to inform the display of relevant information via a usable, efficient, and centralized interface. The EHR can be smarter than a static printout, but it often fails to leverage the tools that would elevate it beyond what could be done on paper.
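The idea of learning a clinician’s chart-review routine can be sketched with simple first-order transition counts over past navigation sessions. A deployed system would use far richer models, and the section names and click logs below are invented for illustration.

```python
from collections import Counter, defaultdict

def learn_transitions(sessions):
    """Count which chart section a user opens after each section."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Most frequently observed next section, so its data can be pre-surfaced."""
    if current not in transitions:
        return None
    return transitions[current].most_common(1)[0][0]

# Hypothetical click logs from one clinician's past encounters.
sessions = [
    ["chief-complaint", "problem-list", "vitals", "labs"],
    ["chief-complaint", "problem-list", "vitals", "imaging"],
    ["chief-complaint", "problem-list", "vitals", "labs"],
]
model = learn_transitions(sessions)
```

After “vitals,” this user most often opens labs, so a learning interface could preload recent lab results before the click happens.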

Innovative EHR developers will need to take advantage of the emerging science of data visualization, which explores how computer technology transforms data into interactive, effective, and efficient visual representations that amplify cognition19. These images or graphs should improve a user’s ability to digest and understand complex relationships across data elements. Simple data visualizations, such as trees, charts, tables, 2D diagrams, and graphs, may not adequately represent longitudinal healthcare data’s rich complexity and diversity35. Ideally, visualizations would contain numerical and categorical data and permit some degree of interactivity and flexibility that facilitate a more comprehensive understanding of a patient’s clinical record and a decisional framework to assist in management42.

System automation and defaults

Understanding the problem

EHRs often contain unexpected, unpredictable, or unwanted system defaults11 that can waste time and potentially harm patients. For example, when a clinician orders a medication infusion with a duration set to one “day,” some EHRs consider this increment of time to represent 24 hours, while others may default this to mean “end of the day,” i.e., 11:59 p.m. on the day of order placement. Alternatively, a prescribed medication may default to begin the next day rather than “now,” resulting in a similar scenario. Similarly, if a clinician enters an order for a lab to be drawn for the next five consecutive days, they may place an occurrence of “5” in the default lab order, but the system may flag this as an invalid order. The physician must then cancel the order and locate a separate “daily labs” order to permit the five-day occurrence.
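The infusion-duration ambiguity can be made concrete: the two interpretations of one “day” yield different stop times for the same order. The rule names and behavior below are illustrative only and are not drawn from any specific vendor’s system.

```python
from datetime import datetime, timedelta

def stop_time(start, duration_days, rule):
    """Resolve an order duration under two plausible system defaults.

    rule="elapsed"    -> a "day" means a full 24 hours of elapsed time
    rule="end-of-day" -> the order ends at 23:59 on the final calendar day
    """
    if rule == "elapsed":
        return start + timedelta(days=duration_days)
    if rule == "end-of-day":
        last_day = start + timedelta(days=duration_days - 1)
        return last_day.replace(hour=23, minute=59, second=0, microsecond=0)
    raise ValueError(f"unknown rule: {rule}")

# The same one-day infusion started at 08:00 stops either the next morning
# at 08:00 or the same evening at 23:59, depending solely on the default.
start = datetime(2024, 5, 1, 8, 0)
```

An 8-hour swing in infusion time from an invisible configuration choice is exactly the kind of default that clinicians cannot be expected to anticipate.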

Creating solutions

Mitigating such errors necessitates “smart” EHRs that are continually sensing and learning. Beyond implementing ML solutions, they should learn from users and contexts. Suppose multiple users follow identical click patterns to correct a recurring defaulted error. In that case, the system should recognize such signals and either flag the potentially addressable issue for system administrators or automatically implement the appropriate correction. Self-supervised learning (SSL) is a subset of ML that could accomplish such tasks. Whereas many types of ML require a substantial amount of labeled data, SSL models offer advantages because they can train themselves on unlabeled data and do not require initial data inputs to set parameters43. They automatically generate labels and use them for supervised learning tasks. If implemented continuously, SSL models could monitor the EHR environment in real time, employing user actions to identify problematic defaults and other system limitations. However, the significant cost of re-architecting EHRs to incorporate SSL models at scale presents a substantial implementation barrier.
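A drastically simplified version of that signal can be sketched without any model at all: mine unlabeled event logs for defaults that many distinct users override the same way, and flag those patterns for review. A real self-supervised pipeline would be far more involved; the log structure and field names here are hypothetical.

```python
def flag_problem_defaults(event_log, min_users=3):
    """Flag (field, default, correction) patterns overridden by many users.

    Each event is (user, field, default_value, corrected_value). No labels
    are required: the repeated corrections themselves act as supervision.
    """
    users_by_pattern = {}
    for user, field, default, corrected in event_log:
        users_by_pattern.setdefault((field, default, corrected), set()).add(user)
    return [
        pattern for pattern, users in users_by_pattern.items()
        if len(users) >= min_users
    ]

# Hypothetical override log: three clinicians keep making the same fix.
log = [
    ("dr_a", "infusion_duration", "end-of-day", "24h"),
    ("dr_b", "infusion_duration", "end-of-day", "24h"),
    ("dr_c", "infusion_duration", "end-of-day", "24h"),
    ("dr_a", "start_time", "tomorrow", "now"),
]
flags = flag_problem_defaults(log)
```

The duration default is surfaced to administrators because three distinct users corrected it identically, while the lone start-time override is treated as individual preference.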

User-centered design processes

Understanding the problem

The issues discussed above share fundamental shortcomings: inflexibility and inadequate personalization. Often, these limitations stem from suboptimal design and development processes. The rigidity in customization across clinician types and clinical contexts further exacerbates cognitive burden and increases error risk. For example, different clinician types may evaluate certain conditions differently across various contexts. Suppose a patient presents with anemia in the outpatient primary care setting. In that case, the clinician may find an order set entitled “Anemia Panel,” reasonably expecting it to include all relevant lab values, such as complete blood count (CBC), iron levels, total iron binding capacity, and B12 levels. However, this institution’s given panel may not contain CBC or B12, which would require additional orders in the future. Conversely, a hematologist assessing an inpatient for anemia may prefer a more specialized anemia panel with a reticulocyte count. Depending on the vendor, they may be unable to customize this order set. In either case, the specific workflows are not considered, thus forcing a “one-size-fits-all” solution.

This lack of customization reflects a more fundamental problem: clinical workflows vary substantially across different specialties and settings, making it challenging to build a single solution that adequately meets diverse needs44. The scale and complexity of this design challenge are often underestimated, and clinicians are often not as involved during EHR design, rollout, and follow-up as they should be44. Further, insufficient institutional resources, such as support for clinician participation and IT support, may constrain an institution’s ability to participate effectively in such processes.

Creating solutions

Ideally, user-centered design processes and principles should inform all phases of EHR development, from initial design and development through monitoring and continuous improvement. However, such evidence-based principles are often underutilized or ignored, and development is rarely iterative or user-centric, contrary to usability best-practice guidelines. Lack of systematic clinician engagement throughout this process can result in interfaces that fail to meet the specific needs of different types of clinicians, resulting in practice-related frustrations8. Numerous studies confirm the need for an iterative EHR design and improvement approach to ensure sufficient usability45. EHR vendors should partner with experts in systems engineering and human factors to address usability issues and account for the diverse needs and abilities of the healthcare workforce46,47. Further, improvement processes should be reimagined. Various optimization programs have been trialed and met with early success. The University of Colorado incorporated multidisciplinary teams consisting of a project manager, a clinical informaticist, a physician informaticist, ambulatory-certified trainers, and EHR analysts. Individual clinics participate in periodic “Sprints,” during which group and individual training and iterative EHR optimization occur over a finite period43. Most EHR requests (88%) could be addressed during or immediately after the completion of the Sprint period. Clinicians interfaced with the team directly over this period, and the authors reported positive clinician feedback regarding the process.

Another critical component to alleviating EHR-induced burdens is appropriate resource allocation. Healthcare systems must invest in their clinical and IT workforce by ensuring they have sufficient time to meet the administrative demands of EHRs and maintain competency for these systems. EHRs evolve and change, and users should receive adequate individual and group training and re-training for optimal use of new features. As the key stakeholders involved in EHR use, clinicians should be allocated protected time to provide feedback, and physician clinical informaticists must be engaged in the system’s governance.

Technology maturity

Understanding the problem

Seamless integration of diverse elements within EHRs is often described as a pathway to improvement, but when these components are immature or poorly aligned with clinical needs, they can introduce new problems. Conventional tools such as APIs and data visualization modules, as well as newer innovations like ambient intelligence, generative AI, and ML-based decision support, all hold promise39. However, APIs themselves are not standalone technologies but rather mechanisms that allow different software systems to interact, and their effectiveness depends on being standardized, well-governed, and implemented in ways that meaningfully support interoperability. When deployed prematurely or without careful attention to workflow, these approaches can create fragmented interfaces, inconsistent functionality, and added complexity for end-users. In this way, the maturity and usability of these elements become critical factors that shape the overall usability of EHR systems.

Artificial intelligence illustrates this tension clearly. While AI has the potential to enhance clinical care through improved insights and automation, its integration into EHRs is still evolving. Many AI technologies are gaining traction, particularly within the radiology and ambient listening spaces, and a growing body of literature demonstrates the positive impact of such technologies. However, many barriers to the adoption of AI solutions persist. These barriers are often multifaceted, ranging from workflow integration constraints to algorithm complexity and lack of transparency that can ultimately limit trust and adoption40,48. Safety and ethical concerns may also limit adoption of certain solutions; flaws in training data, suboptimal development practices, or even societal perceptions can lead to systematic errors or bias, which may propagate inequities if embedded into clinical workflows49. Until these maturity and trustworthiness issues are addressed, the promise of AI as a solution may paradoxically add to the usability challenges of EHRs rather than alleviating them.

While many AI algorithms are complex, and their lack of transparency and explainability can limit adoption in clinical care, it is important to recognize that not all AI methods share the same limitations. Conventional ML approaches, such as logistic regression, decision trees, or rule-based systems, are often far more transparent and interpretable than large neural networks or generative AI models. Their reliance on explicit features or predefined rules allows end-users to understand how outputs are derived, which has facilitated earlier adoption in certain clinical contexts. In contrast, newer models such as large language models introduce novel challenges around opacity, bias, and workflow integration. Recent work has shown that GPT-4 perpetuates racial and gender biases across diagnostic reasoning, treatment planning, and patient assessment, highlighting the risk that generative AI may amplify inequities if deployed prematurely50.
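The interpretability contrast can be illustrated with a toy logistic model whose coefficients read directly as named feature effects, something no opaque generative model offers. The features, weights, and outcome below are invented for illustration and carry no clinical validity.

```python
import math

# Hypothetical, hand-set coefficients for a toy readmission-risk model.
COEFFICIENTS = {"prior_admissions": 0.8, "age_over_75": 0.5, "lives_alone": 0.3}
INTERCEPT = -2.0

def risk(features):
    """Logistic model: the score is a transparent sum of named effects."""
    z = INTERCEPT + sum(COEFFICIENTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

def explain(features):
    """Per-feature contributions to the linear score -- the 'explanation'."""
    return {k: COEFFICIENTS[k] * v for k, v in features.items()}

patient = {"prior_admissions": 2, "age_over_75": 1, "lives_alone": 1}
```

For this patient, `explain()` shows exactly how much each feature moved the score, so a clinician can audit, and contest, every unit of predicted risk.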

Recognizing such risks, researchers and clinicians are exploring various frameworks for trustworthiness in generative AI evidence synthesis, emphasizing the need for accountability, fairness, transparency, and governance before such systems can be safely integrated into clinical workflows51,52. Together, these findings underscore that while AI encompasses a spectrum of approaches, from explainable, rule-based systems to opaque generative models, the unique risks posed by the latter demand careful evaluation and tailored safeguards before widespread adoption in health care.

Further, assumptions or sub-optimal societal practices can also result in systematic errors that can potentially skew algorithm performance and result in bias. Such bias can perpetuate or propagate healthcare inequities, which pose serious ethical challenges that must be more comprehensively explored and (when possible) addressed before deployment.

Creating solutions

Fortunately, many clinicians, institutions, and government agencies recognize the urgency of addressing these issues. The Coalition for Health AI (CHAI)—comprised of several of these entities—was launched to establish standards, guidelines, and guardrails for the safe and ethical use of AI in healthcare. Recommendations from CHAI and similar multidisciplinary groups aim to provide guidance in addressing bias, fairness, and equity issues.

Tools are emerging to act on these recommendations as well. For example, AI evaluation systems that assess training data, model performance, and impact on health equity have been proposed53. Such systems would help communicate potential bias more clearly to the end-user. AI validation labs that externally validate algorithms and perform in-depth bias assessments are beginning to form. The results of these analyses should be standardized and communicated clearly to the end-user. In this way, clinicians may better understand the strengths and weaknesses of a given AI model and judge its appropriate applications in clinical practice. It will be important that such validation tools keep pace with more advanced AI algorithms, such as those for imaging, NLP, and large language models.

Policy and regulatory considerations

Understanding the problem

While improved processes and mature technologies will resolve many usability and efficiency challenges posed by EHRs, it is critical to remember that these alone will likely fall short of a complete solution. Due to the nature and complexity of the healthcare system, EHRs must comply with various regulatory, compliance, billing, public health, and reimbursement constraints54. Such business and government regulations have sometimes been prioritized over clinical needs and preferences; early requirements directed the development of EHR systems toward automating business processes, and expanding requirements produced cumbersome workflows whose burden often falls on end-users. As these systems developed, functionality mirrored paper-based methods. Consequently, many systems were not designed with sufficient clinician engagement, workflow analysis, and usability standard adherence, which provided minimal cognitive support to clinicians8,27.

Regulatory and documentation requirements continued to grow in complexity, placing additional demands on clinicians to meet such requirements through specific EHR workflows that addressed process requirements rather than achieving outcomes that lead to tangible benefits. Burdensome regulatory and policy constraints also limited the ability of EHR vendors to innovate54. Vendors were forced to allocate resources to developing processes to meet technical, functional, and workflow requirements, further shifting focus away from the end-user experience54.

Creating solutions

While policy requirements certainly expedited widespread adoption of EHRs, we are seeing that this may have been achieved against a backdrop of increased clinician workloads and a physician burnout epidemic8. For EHRs to reach their full potential to facilitate a secure and efficient flow of information—and to improve clinical practice for end-users through streamlined interfaces—technology, business, and policy needs must all align. Stringent government requirements must refocus on achieving meaningful outcomes, and cumbersome EHR processes must be reimagined to incorporate clinical needs and user-centered design.

Conclusions

Meaningful EHR usability and safety improvement requires a coordinated approach encompassing human-centered processes, workflow optimization, and technological innovation. The ideal EHR will empower its end user to make optimal decisions efficiently by presenting pertinent, salient information while surfacing sophisticated insights in readily interpretable forms. Such systems will deftly adapt to individual users, specialties, contexts, and even temporal workflows. By functioning as a flexible foundation for seamless innovation integration, next-generation EHRs will truly augment, rather than impede, clinicians’ ability to deliver personalized, high-quality patient care.