Despite the near-ubiquitous adoption of electronic health record (EHR) systems to replace paper files in hospitals and doctors’ offices across the country, minimal data exist on the capabilities of different technologies, including the safety of these products. That omission inhibits the ability of EHR developers, health care providers, and government to address deficiencies in technology that contribute to patient harm. More information on EHR functions could help identify solutions to gaps prevalent across many products, encourage technology developers to address deficiencies, and offer comparative data for hospitals and clinician offices that purchase electronic medical record systems.
To foster this type of transparency, Congress—through the 21st Century Cures Act—created a program to collect information from technology developers and clinicians that can be used to assess EHR performance. The federal agency that oversees EHRs, the Office of the National Coordinator for Health Information Technology (ONC), will administer the program by collecting data on the design of products, security, information exchange among systems, and other capabilities of different technologies. The agency will then publish findings on its website to illuminate the strengths and weaknesses of EHR systems, and trends across the industry.
ONC collected public input on factors to prioritize in the EHR reporting program in 2018. In response, health information technology experts, clinicians, and key medical organizations emphasized that the program should address patient safety challenges born out of poor EHR usability—how doctors, nurses, and other staff interact with systems. Usability-related safety problems can result in patients obtaining the wrong drug dose, delays in care, and myriad other potentially deadly events. These usability challenges can occur as a result of EHR design, customizations by facilities, and varying workflows within sites of care. For example, recent data gathered from three hospital systems indicate that approximately a third of the health information technology-related medication safety events occurred in part because of EHR usability.
Given the broad interest in using the EHR reporting program to reduce harm, The Pew Charitable Trusts and the MedStar Health National Center for Human Factors in Healthcare investigated how ONC could incorporate patient safety into the usability aspects of the initiative. To identify and assess the safety-related data to include in the reporting program, Pew and MedStar Health conducted a literature search and interviewed usability experts, EHR vendors, policymakers, and health care providers. That analysis led to the identification of 15 examples of data to collect through the EHR reporting program that could shed light on usability-related safety issues.
By adopting some of these recommendations as criteria in the EHR reporting program, ONC can fill a critical gap in the information available on how medical record systems function—including their contributions to medical errors. Greater transparency on system functions can ensure that better information exists to identify industry-wide gaps, encourage an enhanced focus on safety by product developers, and give clinicians greater insight on the functions of the digital systems that they use. These measures could help make certain that patients entering the hospital are less likely to face harm associated with the computer systems that physicians and nurses use.
Opportunities exist throughout the EHR life cycle to remedy usability challenges with electronic systems. During design, technology developers can adopt best practices to identify and address usability deficiencies, such as by testing new functions. The implementers of EHRs—including executives at hospitals and doctors’ offices—can also apply strategies to detect and resolve poor usability. Given the contribution of site-specific factors such as unique workflows or customizations, health care providers can also unearth problems by monitoring usability and safety issues.
When unaddressed during development or implementation, EHR usability challenges can contribute to two key safety problems.1 First, the usability of systems can directly contribute to medical errors. For example, researchers evaluated 9,000 health information technology-related medication safety events across three pediatric health care facilities. The researchers found that subpar EHR usability contributed to 3,243 of those events, often related to patients obtaining, or being at risk of receiving, an inappropriate drug dose. In one case, inadequate usability contributed to delays in a necessary blood transfusion for a newborn. In another, a transplant patient missed several days of an organ rejection medication. Second, deficient usability can lead to clinician burnout when using EHRs. In turn, clinicians who experience higher rates of burnout are more susceptible to making medical errors.
EHR reporting program offers opportunity to address usability, safety
Recognizing the importance of usability to the effective implementation of EHRs, Congress included this topic as a central aspect of the EHR reporting program, alongside security, interoperability (e.g., the exchange of health data), conformance of the technology to certification criteria outlined in federal regulations, and other factors as deemed appropriate.
Better data through the EHR reporting program would have three main benefits:
In the 21st Century Cures Act, Congress did not specify the type of data that ONC should collect. Instead, Congress instructed ONC to determine the data to obtain from the developers of EHRs. Technology developers that fail to supply data could lose certification for their products. EHR developers seek product certification so that health care providers can use these systems to participate in certain federal payment programs, such as those administered through Medicare.
ONC may also obtain information from other sources, such as health care providers or the accrediting bodies that certify EHRs to ONC criteria. Similarly, ONC may already have some information, including data submitted to the agency for the Certified Health IT Product List (CHPL), a database that contains some information on systems, though it is not intended for comparison across technologies.
Although Congress did not explicitly reference safety, the usability-related criteria developed in the EHR reporting program could focus on ways to reduce medical errors, given the clear association between system design and those errors. Therefore, ONC should embed safety into the criteria developed for the program.2
Pew and MedStar Health collaborated to develop examples of how ONC could embed safety into the usability criteria of the EHR reporting program. The example criteria were designed based on a review of EHR safety and usability journal articles and other literature. In addition, MedStar Health interviewed 18 experts from academia, government, health technology development, and other organizations, including from outside health care, to provide ideas from other industries.
The example criteria fall into four categories: general user-centered design processes and usability testing, alerts, data entry, and visual display.
The first category reflects criteria that address EHR functions broadly. Research has shown that the latter three categories—alerts, data entry, and visual display—are commonly associated with safety and usability problems. Prior research that examined a database of more than 1.7 million patient safety reports identified those three EHR usability-related functions as the ones most often associated with errors.3 More than half of all reported EHR usability and safety issues related to these categories. Consequently, focusing reporting criteria on them would address known patient safety-related usability challenges.
Each category includes an assessment of example criteria with the following information:
Many of the data that could be used for the EHR reporting program are already developed and captured as part of the safety-enhanced design (SED) requirements in ONC’s health technology certification program. SED requirements include reporting on the types of participants used to evaluate systems, the test results of different tasks, narrative assessments of the system, and many other factors that can provide data on the usability and safety of technology.
Though important, SED may lack some data, such as the number of clicks it takes to perform certain tasks or videos of different functions. Through the EHR reporting program, enhancements to SED could generate meaningful comparative data across products.
Standard reporting of existing and expanded SED requirements against a range of safety-focused criteria would meet the goals of the EHR reporting program. However, the approaches EHR developers take to SED differ; for example, technology vendors may not use the same test scenarios. Therefore, the program should require that at least some of the test case scenarios be the same across products to enable accurate comparisons across vendors through the EHR reporting program.
Pew, MedStar Health, and the American Medical Association convened EHR developers, health care providers, usability experts, and other stakeholders to define criteria for rigorous test case scenarios and created 14 such assessments.10 The developed test cases focus on areas of known usability and safety issues. ONC should consider requiring use of these test case scenarios and expanding SED requirements to cover them. Such an approach would provide meaningful data on both general usability processes and the three known risk areas—alerts, data entry, and visual display—previously mentioned.
Criteria on the general user-centered design (UCD) process and usability testing are not tied to specific functionalities; rather, they focus on processes that can improve the overall safety of systems. These criteria provide insight into the rigor of the UCD process and testing used, particularly by system developers. Overall, these criteria mostly rely on data that already exist, though they often are not reported or publicly released.11
EHR alerts can give clinicians critical information to avert medical errors, such as prescribing drugs to which an individual is allergic. However, alerts that are not accurate, trigger at the wrong time, or are ambiguous can have negative patient safety implications. Clinicians may dismiss—or reflexively ignore—alerts, causing health care providers to miss critical information. Alerts that do not trigger at the right time may not guide the clinician appropriately, occurring too early or too late to be effective.12 In one case examined in prior research, a patient had a documented gelatin allergy in the EHR, yet no alert triggered when a medication order that could cause harm was submitted.13 Clinicians may ignore alerts for a range of reasons, including poor alert design or health care facility policies that trigger alerts at inopportune times.
Reporting criteria focused on alerts can provide data on whether they are evidence-based and triggered in high-risk situations in a manner most useful to the end users. Alerts should present information to the user clearly, concisely, and accurately, and should not be interruptive unless the situation warrants it.
The use of test case scenarios—with SED requirements—can provide meaningful data on alert practices. Additional data on the utility of alerts, including for both the designed and implemented product, can provide information on whether institutional practices or the base technology affect the utility of alerts.
EHR developers should ensure that clinicians can enter data intuitively, with users inputting the correct information into the appropriate fields on the interface. Difficult data entry can result in clinicians entering information in the wrong place within the EHR or omitting data because the user cannot determine where to record it. In one case identified in prior research, a physician attempted to place an order for an X-ray of the left elbow, wrist, and forearm, but because of a confusing display, ordered the images for the right arm, exposing the patient to unnecessary radiation.14
The EHR visual display should not be confusing, cluttered, or present inaccurate information to the user. Confusing visual displays can lead to the wrong medication, lab, or diagnostic image being ordered. These displays can also result in the wrong medication being prescribed or medications being administered at the incorrect time. As an example of this challenge identified in previous research, a physician attempted to order 500 mg of a pain medication to be provided orally, but because a confusing visual display listed more than 70 different types of the drug, the clinician selected the wrong product.15
The analysis and development of these example criteria for the EHR reporting program illustrated four key themes to consider as part of data collection.
1. Incorporate safety-enhanced design and standard safety tests. ONC should include safety—as outlined in the tables above—in the usability measures of the EHR reporting program. Many subject matter experts interviewed underscored that the program offers a critical opportunity to enhance patient safety, as also reflected in written feedback many organizations provided ONC in 2018.
SED criteria from ONC’s existing certification program could provide meaningful data. However, ONC should expand SED requirements to areas of known safety risk and standardize the test case scenarios used so that the assessments are comparable across technologies. Similarly, ONC should build on the SED requirements, including by expanding the data submissions (for example, to incorporate the number of clicks it takes to complete tasks and to include video images).
2. Leverage data collected. ONC should ensure that the program not only inform potential purchasers of systems, but also serve as a tool for policymakers and EHR developers to identify nationwide gaps and product-specific flaws. Several experts interviewed indicated that the EHR reporting program represents a promising opportunity to identify common usability and safety challenges that persist across many systems so that researchers, technology developers, and policymakers can identify solutions. In addition, the identification of industry-wide challenges can signal to health care providers the areas on which to focus during implementation and what to monitor once systems are in use. In parallel, EHR developers can use the collected data as a guide on how their products and processes compare to other vendors. Where they lag, developers can make adjustments to adopt best practices and further enhance the safety of their systems.
3. Collect data on implementation. ONC should ensure that measures in the EHR reporting program reflect both the designed products (e.g., pre-implementation) and systems in use, to identify customization and implementation challenges. Testing prior to implementation can identify usability and safety issues during EHR development so that vendors can make necessary adjustments. However, many experts said that assessments of implemented products can provide even greater value, though this would likely require more dedicated resources. In addition, technology developers expressed some concern that variations in product implementation inaccurately reflect the designed product—a factor typically outside their control. However, some technologies may be more susceptible to usability and safety errors once customized than other systems. Data from the EHR reporting program can shed light on whether health care providers should take extra precautions when deciding whether and how to customize certain systems. As a result, collecting data from both the development and implementation phases would yield the most meaningful information.
To obtain data on implemented products, ONC should allow health care providers to submit information. As currently designed, data submission to the EHR reporting program on implemented products would be voluntary from providers. Health care facilities could choose to respond to surveys or submit their own test results given that many organizations already evaluate their products, as evidenced by the thousands of sites that have used a medication-ordering test developed by the Leapfrog Group. Additionally, health care providers could submit data from their audit logs, which likely reflect the best opportunity to obtain real-world data on the performance of implemented systems. ONC should work with physicians and vendors to develop standard approaches to audit logs so that the information can be uniformly and easily submitted to the EHR reporting program and measured.
In the future, vendors could submit data on implemented products, such as by collecting log file data from their systems. In addition, data on implemented products collected by EHR testing and accreditation bodies could also inform the program.
4. Enhance the program over time. Once ONC launches the EHR reporting program, the agency should build on the initial design of the initiative in the future. For example, ONC could focus the first iteration of the program on SED criteria and other recommendations from the tables where data already exist or could be more readily obtained. Future versions of this program should expand on those initial criteria by, for example, collecting log file data and incorporating the recommendations in the tables that ONC elects not to include in the initial iteration of the initiative.
EHRs affect and can improve nearly every aspect of patient care, yet when problems occur, they can be devastating—even deadly. However, little data exist on the performance of EHRs and their critical functions, including the contribution of these systems to medical errors, such as individuals obtaining the wrong dose of a medication.
Congress recognized the gap in data on EHR functions and created a reporting program, which can equip product developers with new information to understand deficiencies in technology, and give health care providers more information when purchasing or implementing new systems.
ONC now has an opportunity to leverage this program to collect better data to improve the usability—and, consequently, safety—of care. The first iteration of the EHR reporting program should incorporate some of these safety-focused usability criteria to begin informing EHR developers and health care providers on opportunities to reduce medical errors. ONC could begin with those criteria that either already have data available or would provide the greatest insights. As the initiative evolves, ONC should build on these criteria to collect even more robust data on the usability of systems.
Through the reporting program, ONC has an opportunity to collect data on how EHRs function to equip clinicians and technology developers with more robust information that can improve system usability and reduce patient harm.