Effective Reporting Could Improve Safe Use of Electronic Health Records

New government effort can collect data to help reduce patient harm


Overview

Despite the near-ubiquitous adoption of electronic health record (EHR) systems to replace paper files in hospitals and doctors’ offices across the country, minimal data exist on the capabilities of different technologies, including the safety of these products. That omission inhibits the ability of EHR developers, health care providers, and government to address deficiencies in technology that contribute to patient harm. More information on the functions of EHRs could help identify solutions to gaps prevalent across many products, encourage technology developers to address deficiencies, and provide comparative data for hospitals and clinicians’ offices that purchase electronic medical record systems.

To foster this type of transparency, Congress—through the 21st Century Cures Act—created a program to collect information from technology developers and clinicians that can be used to assess EHR performance. The federal agency that oversees EHRs, the Office of the National Coordinator for Health Information Technology (ONC), will administer the program by collecting data on the design of products, security, information exchange among systems, and other capabilities of different technologies. The agency will then publish findings on its website to illuminate the strengths and weaknesses of EHR systems, and trends across the industry.

In 2018, ONC collected public input on factors to prioritize in the EHR reporting program. In response, health information technology experts, clinicians, and key medical organizations emphasized that the program should address patient safety challenges born out of poor EHR usability—how doctors, nurses, and other staff interact with systems. Usability-related safety problems can result in patients obtaining the wrong drug dose, delays in care, and myriad other potentially deadly events. These usability challenges can occur as a result of EHR design, customizations by facilities, and varying workflows within sites of care. For example, recent data gathered from three hospital systems indicate that approximately a third of health information technology-related medication safety events occurred in part because of poor EHR usability.

Given the broad interest in using the EHR reporting program to reduce harm, The Pew Charitable Trusts and the MedStar Health National Center for Human Factors in Healthcare investigated how ONC could incorporate patient safety into the usability aspects of the initiative. To identify and assess the safety-related data to include in the reporting program, Pew and MedStar Health conducted a literature search and interviewed usability experts, EHR vendors, policymakers, and health care providers. That analysis led to the identification of 15 examples of data to collect through the EHR reporting program that could shed light on usability-related safety issues.

By adopting some of these recommendations as criteria in the EHR reporting program, ONC can fill a critical gap in the information available on how medical record systems function—including their contributions to medical errors. Greater transparency on system functions can ensure that better information exists to identify industry-wide gaps, encourage an enhanced focus on safety by product developers, and give clinicians greater insight into the functions of the digital systems that they use. These measures could help make certain that patients entering the hospital are less likely to face harm associated with the computer systems that physicians and nurses use.

Usability and patient safety are intertwined

Opportunities exist throughout the EHR life cycle to remedy usability challenges with electronic systems. During design, technology developers can adopt best practices to identify and address usability deficiencies, such as by testing new functions. The implementers of EHRs—including executives at hospitals and doctors’ offices—can also apply strategies to detect and resolve poor usability. And because site-specific factors such as unique workflows and customizations contribute to these challenges, health care providers can unearth problems by monitoring usability and safety issues.

When unaddressed during development or implementation, EHR usability challenges can contribute to two key safety problems.1 First, the usability of systems can directly contribute to medical errors. For example, researchers evaluated 9,000 health information technology-related medication safety events across three pediatric health care facilities and found that subpar EHR usability contributed to 3,243 of those events, often involving patients receiving, or being at risk of receiving, an inappropriate drug dose. In one case, inadequate usability contributed to delays in a necessary blood transfusion for a newborn. In another, a transplant patient missed several days of an organ rejection medication. Second, deficient usability can contribute to clinician burnout when using EHRs, and clinicians who experience higher rates of burnout are more susceptible to making medical errors.

EHR reporting program offers opportunity to address usability, safety

Recognizing the importance of usability to the effective implementation of EHRs, Congress included this topic as a central aspect of the EHR reporting program, alongside security, interoperability (i.e., the exchange of health data among systems), conformance of the technology to certification criteria outlined in federal regulations, and other factors as deemed appropriate.

Better data through the EHR reporting program would have three main benefits:

  1. Identifying industry-wide gaps and opportunities. The aggregation of data on EHR functionality in a single location can help illuminate gaps across multiple products. For example, findings that few technology developers involve a breadth of user types—such as physicians and nurses from different specialties—in the testing of systems can signal that EHRs on the market may not effectively account for their diverse end users. Similarly, data indicating an emerging approach by some vendors for quality improvement—such as aggregating and analyzing data to identify care gaps—may spur more EHR developers to add that capability.
  2. Encouraging developers to address challenges. Transparency on the functions of EHRs may also highlight those technology developers that adopt best practices to improve system performance, and those vendors that may lag. Highlighting that discrepancy can encourage developers with less favorable public data to address their deficiencies and prioritize improvements, particularly those related to safety.
  3. Offering purchasing support to providers. The reporting program can also give the purchasers of systems—such as hospital administrators or clinicians who run their own practices—the data they need to compare the capabilities of different systems. The information can also shed light on the strengths of different products in certain settings—such as for a specific medical subspecialty—so that purchasers can select the EHR system most appropriate for their practice. These data may be particularly meaningful for smaller practices or hospitals in underserved communities that may lack the resources or expertise to conduct robust comparisons across products they intend to purchase.

In the 21st Century Cures Act, Congress did not specify the type of data that ONC should collect. Instead, Congress instructed ONC to determine the data to obtain from the developers of EHRs. Technology developers that fail to supply data could lose certification for their products. EHR developers seek product certification so that health care providers can use these systems to participate in certain federal payment programs, such as those administered through Medicare.

ONC may also obtain information from other sources, such as health care providers or the accrediting bodies that certify EHRs against ONC criteria. Similarly, ONC may already have some of the information, including data submitted to the agency for the Certified Health IT Product List (CHPL), a database that contains some information on systems but is not intended for comparisons across technologies.

Although Congress did not explicitly reference safety, the usability-related criteria developed for the EHR reporting program could focus on ways to reduce medical errors, given the clear association between system design and such errors. Therefore, ONC should embed safety into the program’s usability criteria.2

Proposed criteria for the EHR reporting program

Pew and MedStar Health collaborated to develop examples of how ONC could embed safety into the usability criteria of the EHR reporting program. The example criteria were designed based on a review of EHR safety and usability journal articles and other literature. In addition, MedStar Health interviewed 18 experts from academia, government, health technology development, and other organizations, including from outside health care, to provide ideas from other industries.

The example criteria fall into four categories: 

  1. General processes used to ensure usability and safety.
  2. Effectiveness of alerts to potential safety concerns.
  3. Data entry capabilities, such as entering medications.
  4. Visual display of information, which refers to the ability to retrieve information documented in systems. 

The first category reflects criteria that would address various EHR functions. The latter three categories—alerts, data entry, and visual display—are commonly associated with safety and usability problems: prior research examining a database of more than 1.7 million patient safety reports identified those three EHR usability-related functions as the ones most commonly associated with errors.3 More than half of all the EHR usability and safety issues reported were related to these categories. Consequently, focusing reporting criteria on these issues would address known patient safety-related usability challenges.

Each category includes an assessment of example criteria with the following information:

  • General criteria. Describes the criterion topic.
  • Rationale. Explains background and justification for why ONC should consider each example criterion.
  • Usability assessment method. Identifies which of four common usability assessment methods would be employed to evaluate each recommended criterion. The four methods are:
    • User-centered design (UCD) processes. UCD involves understanding the needs of the intended user population through observation, developing personas (fictional characters used to depict common user roles when testing systems), designing prototypes, and refining the technology based on user feedback.4
    • Objective usability testing. This often involves using test scenarios to objectively evaluate whether clinicians can effectively interact with the technology; the product tested should resemble the actual EHR system that clinicians would use.5
    • Subjective assessments of usability. These assessments capture information on perceptions of usability, as opposed to measures of actual usability, through surveys (such as the System Usability Scale; see the sketch after this list), focus groups, or interviews.6 Developers of EHRs or organizations that test EHRs for conformance to federal criteria could embed these types of subjective evaluations into product development or reviews of different systems, respectively.7
    • EHR data on user behaviors. This approach uses data collected within the EHR, such as audit log information, to understand how clinicians actually use systems.8 These data indicate what happens within an EHR—for example, the buttons pressed or the precise time that clinicians enter orders—and can be used to identify challenges in system design.9
  • Data sources. Outlines whether the data already exist or whether new data will need to be created for analysis.
  • Specific criteria. Describes in depth the specific criteria that ONC could embed in the EHR reporting program and how to measure or assess the data received.
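
The subjective-assessment bullet above cites the System Usability Scale (SUS), a widely used 10-item survey. As a concrete illustration, this minimal Python sketch applies SUS’s published scoring rule: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5 to yield a 0-100 score. The sample responses below are invented.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1, 3, 5, 7, 9) are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and contribute
    (5 - response). The summed contributions are scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example: one clinician's (invented) responses to the ten SUS items.
print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # -> 85.0
```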

Criteria can build on safety-enhanced design

Many of the data elements that could be used for the EHR reporting program are already developed and captured as part of the safety-enhanced design (SED) requirements in ONC’s health technology certification program. SED requirements include reporting on the types of participants used to evaluate systems, the test results of different tasks, narrative assessments of the system, and many other factors that can provide data on the usability and safety of technology.

Though important, SED may lack certain data, such as the number of clicks it takes to perform certain tasks (a metric sketched below) or videos of different functions. Through the EHR reporting program, enhancements to SED could generate meaningful comparative data across products.
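
As an illustration of the click-count metric mentioned above, the sketch below tallies clicks per scripted task from a usability-test event log. The log format, event types, and task identifiers are assumptions for the example, not a prescribed SED reporting format.

```python
from collections import defaultdict

# Hypothetical usability-test event log: (task_id, event_type) pairs
# captured during a scripted test session. Task names and event types
# are invented for this example.
events = [
    ("order_medication", "click"), ("order_medication", "keystroke"),
    ("order_medication", "click"), ("order_medication", "click"),
    ("order_lab", "click"), ("order_lab", "click"),
]

def clicks_per_task(log):
    """Count click events for each scripted task in a test session."""
    counts = defaultdict(int)
    for task_id, event_type in log:
        if event_type == "click":
            counts[task_id] += 1
    return dict(counts)

# Comparable click counts across products would require the same scripted
# tasks, echoing the report's call for shared test case scenarios.
print(clicks_per_task(events))  # {'order_medication': 3, 'order_lab': 2}
```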

Standardized reporting of existing and expanded SED requirements against a range of safety-focused criteria would meet the goals of the EHR reporting program. However, EHR developers take differing approaches to SED; for example, technology vendors may not use the same test scenarios. The program should therefore require that at least some test case scenarios be the same across products so that comparisons across vendors are accurate.

Pew, MedStar Health, and the American Medical Association convened EHR developers, health care providers, usability experts, and other stakeholders to define what constitutes a rigorous test case scenario, and the group created 14 such assessments.10 The test cases focus on areas of known usability and safety issues. ONC should consider requiring use of these test case scenarios and expanding SED requirements to cover them. Such an approach would provide meaningful data on both the general usability processes and the three known risk areas—alerts, data entry, and visual display—previously mentioned.


General user-centered design process and usability testing criteria

Criteria on the general UCD process and usability testing are not related to specific functionalities but rather focus on processes that can improve the overall safety of systems. These criteria provide insight into the rigor of the UCD process and the testing being used, particularly by system developers. Overall, these criteria mostly rely on data that already exist, though the data often are not reported or publicly released.11


Alerting-based criteria

EHR alerts can give clinicians critical information to avert medical errors, such as warnings against prescribing drugs to which an individual is allergic. However, alerts that are not accurate, trigger at the wrong time, or are ambiguous can have negative patient safety implications. Clinicians may dismiss—or reflexively ignore—alerts, causing health care providers to miss critical information. Alerts that do not trigger at the right time may not guide the clinician appropriately and may occur too early or too late to be effective.12 In one case examined in prior research, a patient had an allergy to gelatin that was documented in the EHR, yet no alert triggered when a clinician submitted a medication order that could cause harm.13 Clinicians may ignore alerts for a range of reasons, including poor alert design and health care facility policies that require alerts at inopportune times.

Reporting criteria focused on alerts can provide data on whether they are evidence-based and triggered in high-risk situations in a manner most useful to the end users. Alerts should present information to the user clearly, concisely, and accurately, and should not be interruptive unless the situation warrants it.
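
As a rough sketch of these alerting principles, the example below shows how an order-entry check might decide between an interruptive alert and a passive notice. The record structures, formulary data, and severity thresholds are hypothetical and do not represent any vendor’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Allergy:
    substance: str   # e.g., "gelatin"
    severity: str    # "mild", "moderate", or "severe" (hypothetical scale)

def ingredients_of(medication: str) -> set[str]:
    """Look up a medication's ingredients; stubbed here with toy data."""
    formulary = {"capsule_x": {"gelatin", "drug_a"}}  # hypothetical formulary
    return formulary.get(medication, set())

def allergy_alert(order: str, allergies: list[Allergy]) -> str | None:
    """Return an alert mode if an ordered drug matches a documented allergy.

    A severe allergy warrants an interruptive alert that blocks the order
    until acknowledged; a milder one surfaces a passive, non-blocking notice,
    reflecting the principle that alerts should interrupt only when warranted.
    """
    for allergy in allergies:
        if allergy.substance in ingredients_of(order):
            return "interruptive" if allergy.severity == "severe" else "passive"
    return None  # no match: no alert, avoiding needless interruptions

# The gelatin example from the text: a documented severe gelatin allergy
# should trigger an interruptive alert for a gelatin-containing capsule.
print(allergy_alert("capsule_x", [Allergy("gelatin", "severe")]))  # interruptive
```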

The use of test case scenarios under SED requirements can provide meaningful data on alert practices. Additional data on the utility of alerts, covering both the designed and the implemented product, can indicate whether institutional practices or the base technology affect how useful alerts are.

Data entry-based criteria

EHR developers should ensure that clinicians can enter data intuitively, with users inputting the correct information into the appropriate fields on the interface. Difficult data entry can result in clinicians entering information in the wrong place within the EHR or omitting data because the user cannot determine where to record it. In one case identified in prior research, a physician attempted to place an order for an X-ray of the left elbow, wrist, and forearm but, because of a confusing display, ordered the images for the right arm, exposing the patient to unnecessary radiation.14

Visual display of information

The EHR visual display should not be confusing or cluttered, nor should it present inaccurate information to the user. Confusing visual displays can lead to the wrong medication, lab, or diagnostic image being ordered, or to medications being administered at the incorrect time. As an example of this challenge identified in previous research, a physician attempted to order 500 mg of a pain medication to be provided orally, but because of a confusing visual display that listed more than 70 different types of the drug, the clinician selected the wrong product.15

Emerging themes offer guidance for the EHR reporting program

The analysis and development of these example criteria for the EHR reporting program illustrated four key themes to consider as part of data collection.

1. Incorporate safety-enhanced design and standard safety tests. ONC should include safety—as outlined in the tables above—in the usability measures of the EHR reporting program. Many subject matter experts interviewed underscored that the program offers a critical opportunity to enhance patient safety, as also reflected in written feedback many organizations provided ONC in 2018.

SED criteria from ONC’s existing certification program could provide meaningful data. However, ONC should expand SED requirements to areas of known safety risk and standardize the test case scenarios used so that assessments are comparable across technologies. ONC should also broaden the data submitted, for example by incorporating the number of clicks it takes to complete tasks and video of different functions.

2. Leverage data collected. ONC should ensure that the program not only informs potential purchasers of systems but also serves as a tool for policymakers and EHR developers to identify nationwide gaps and product-specific flaws. Several experts interviewed indicated that the EHR reporting program represents a promising opportunity to identify common usability and safety challenges that persist across many systems so that researchers, technology developers, and policymakers can identify solutions. In addition, the identification of industry-wide challenges can signal to health care providers the areas on which to focus during implementation and what to monitor once systems are in use. In parallel, EHR developers can use the collected data to gauge how their products and processes compare with those of other vendors. Where they lag, developers can make adjustments to adopt best practices and further enhance the safety of their systems.

3. Collect data on implementation. ONC should ensure that measures in the EHR reporting program reflect both the designed products (e.g., pre-implementation) and systems in use, to identify customization and implementation challenges. Testing prior to implementation can identify usability and safety issues during EHR development so that vendors can make necessary adjustments. However, many experts said that assessments of implemented products can provide even greater value, though this would likely require more dedicated resources. In addition, technology developers expressed some concern that variations in product implementation, which are typically outside their control, may not accurately reflect the designed product. However, some technologies may be more susceptible to usability and safety errors once customized than others. Data from the EHR reporting program can shed light on whether health care providers should take extra precautions when deciding whether and how to customize certain systems. As a result, collecting data from both the development and implementation phases would yield the most meaningful information.

To obtain data on implemented products, ONC should allow health care providers to submit information. As currently designed, provider submission of data on implemented products to the EHR reporting program would be voluntary. Health care facilities could choose to respond to surveys or submit their own test results, given that many organizations already evaluate their products, as evidenced by the thousands of sites that have used a medication-ordering test developed by the Leapfrog Group. Additionally, health care providers could submit data from their audit logs, which likely represent the best opportunity to obtain real-world data on the performance of implemented systems. ONC should work with physicians and vendors to develop standard approaches to audit logs so that the information can be uniformly and easily submitted to the EHR reporting program and measured; a hypothetical example of such a standardized record follows.
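
To picture what a standard approach to audit log submissions might look like, the sketch below emits one uniformly structured record. The field names are our assumptions for illustration; an actual standard would be defined by ONC with clinicians and vendors.

```python
import json
from datetime import datetime, timezone

# A hypothetical common audit-log record. Field names are illustrative,
# not a proposed ONC standard; values here are invented.
record = {
    "timestamp": datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc).isoformat(),
    "user_role": "physician",            # role only; no direct identifiers
    "ehr_module": "medication_ordering",
    "event_type": "order_signed",
    "clicks_in_session": 14,             # workflow-efficiency signal
    "alert_shown": True,
    "alert_overridden": False,
}

# A shared schema like this would let providers submit comparable data.
print(json.dumps(record, indent=2))
```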

In the future, vendors could submit data on implemented products such as via the collection of log file data on their systems. In addition, data on implemented products collected by EHR testing and accreditation bodies could also inform the program.

4. Enhance the program over time. Once ONC launches the EHR reporting program, the agency should build on the initiative’s initial design. For example, ONC could focus the first iteration of the program on SED criteria and other recommendations from the tables where data already exist or could be readily obtained. Future versions should expand on those initial criteria by, for example, collecting log file data and incorporating the recommendations in the tables that ONC elects not to include in the initial iteration.

Conclusion

EHRs affect, and can improve, nearly every aspect of patient care, yet when problems occur, they can be devastating—even deadly. However, little data exist on the performance of EHRs and their critical functions, including the contribution of these systems to medical errors, such as individuals receiving the wrong dose of a medication.

Congress recognized the gap in data on EHR functions and created a reporting program, which can equip product developers with new information to understand deficiencies in technology, and give health care providers more information when purchasing or implementing new systems. 

ONC now has an opportunity to leverage this program to collect better data to improve the usability of EHRs and, consequently, the safety of care. The first iteration of the EHR reporting program should incorporate some of these safety-focused usability criteria to begin informing EHR developers and health care providers about opportunities to reduce medical errors. ONC could begin with those criteria that either already have data available or would provide the greatest insights. As the initiative evolves, ONC should build on these criteria to collect even more robust data on the usability of systems.

Through the reporting program, ONC has an opportunity to collect data on how EHRs function to equip clinicians and technology developers with more robust information that can improve system usability and reduce patient harm. 

Endnotes

  1. R.M. Ratwani et al., “Identifying Electronic Health Record Usability and Safety Challenges in Pediatric Settings,” Health Affairs (Millwood) 37, no. 11 (2018): 1752-59; J.L. Howe et al., “Electronic Health Record Usability Issues and Potential Contribution to Patient Harm,” JAMA 319, no. 12 (2018): 1276-78, https://doi.org/10.1001/jama.2018.1171; A. Linsky and S.R. Simon, “Medication Discrepancies in Integrated Electronic Health Records,” BMJ Quality and Safety 22, no. 2 (2013): 103-9; E. Sparnon and W.M. Marella, “The Role of the Electronic Health Record in Patient Safety Events” (Pennsylvania Patient Safety Authority, 2012), https://pdfs.semanticscholar.org/3ffb/d9116fae50af37627c5d4a2a1734b11e5d9c.pdf.
  2. R.M. Schumacher and S.Z. Lowry, “NIST Guide to the Processes Approach for Improving the Usability of Electronic Health Records” (National Institute of Standards and Technology, 2010), https://www.nist.gov/document/guidefinalpublicationversionpdf-0.
  3. Howe et al., “Electronic Health Record Usability Issues.”
  4. C.M. Johnson, T.R. Johnson, and J. Zhang, “A User-Centered Framework for Redesigning Health Care Interfaces,” Journal of Biomedical Informatics 38, no. 1 (2005): 75-87.
  5. A. Baravalle and V. Lanfranchi, “Remote Web Usability Testing,” Behavior Research Methods, Instruments, & Computers 35, no. 3 (2003): 364-68, https://doi.org/10.3758/BF03195512; N. Vicente Oliveros et al., “A Continuous Usability Evaluation of an Electronic Medication Administration Record Application,” Journal of Evaluation in Clinical Practice 23, no. 6 (2017): 1395-400; M. Wiklund, “Usability Testing: Validating User Interface Design,” Medical Device and Diagnostic Industry (2007), https://www.mddionline.com/usability-testing-validating-user-interface-design.
  6. J. Brooke, “SUS: A ‘Quick and Dirty’ Usability Scale,” in Usability Evaluation in Industry, ed. P.W. Jordan, B. Thomas, B.A. Weerdmeester, and I.L. McClelland (London: Taylor & Francis, 1996), https://books.google.com/books?hl=en&lr=&id=IfUsRmzAqvEC&oi=fnd&pg=PA189&dq=sus+a+quick+and+dirty+usability+scale&ots=GapxFcoq0l&sig=C3pgn9C1jmX2AKC8QHJhQfqP_A0#v=onepage&q=sus%20a%20quick%20and%20dirty%20usability%20scale&f=true.
  7. M.F. Walji et al., “Are Three Methods Better Than One? A Comparative Assessment of Usability Evaluation Methods in an EHR,” International Journal of Medical Informatics 83, no. 5 (2014): 361-7.
  8. D.T.Y. Wu et al., “Using EHR Audit Trail Logs to Analyze Clinical Workflow: A Case Study from Community-Based Ambulatory Clinics,” AMIA Annual Symposium Proceedings 2017 (2018): 1820-27, https://www.ncbi.nlm.nih.gov/pubmed/29854253.
  9. J.S. Adelman et al., “Understanding and Preventing Wrong-Patient Electronic Orders: A Randomized Controlled Trial,” Journal of the American Medical Informatics Association 20, no. 2 (2012): 305-10, https://doi.org/10.1136/amiajnl-2012-001055.
  10. The Pew Charitable Trusts, “Ways to Improve Electronic Health Record Safety” (2018), https://www.pewtrusts.org/-/media/assets/2018/08/healthit_safe_use_of_ehrs_report.pdf.
  11. R.M. Ratwani et al., “A Framework for Evaluating Electronic Health Record Vendor User-Centered Design and Usability Testing Processes,” Journal of the American Medical Informatics Association 24, no. e1 (2016): e35-e39, https://doi.org/10.1093/jamia/ocw092.
  12. J. Chan et al., “Usability Evaluation of Order Sets in a Computerised Provider Order Entry System,” BMJ Quality & Safety 20, no. 11 (2011): 932-40; J. Chan et al., “Does User-Centred Design Affect the Efficiency, Usability and Safety of CPOE Order Sets?” Journal of the American Medical Informatics Association 18, no. 3 (2011): 276-81; J. Nielsen, Usability Engineering (Boston: AP Professional, 1993); D.A. Norman and S.W. Draper, eds., User Centered System Design: New Perspectives on Human-Computer Interaction (Hillsdale, NJ: Lawrence Erlbaum Associates, 1986).
  13. Johnson, Johnson, and Zhang, “A User-Centered Framework.”  http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=15694887.
  14. L. Faulkner, “Beyond the Five-User Assumption: Benefits of Increased Sample Sizes in Usability Testing,” Behavior Research Methods, Instruments, & Computers 35, no. 3 (2003): 379-83.
  15. J. Sauro and J. Lewis, Quantifying the User Experience: Practical Statistics for User Research (Elsevier, 2012); A.W. Kushniruk, V.L. Patel, and J.J. Cimino, “Usability Testing in Medical Informatics: Cognitive Approaches to Evaluation of Information Systems and User Interfaces,” Proceedings of the American Medical Informatics Association Symposium (1997): 218-22, https://www.ncbi.nlm.nih.gov/pubmed/9357620.
  16. J. Nielsen, Usability Engineering.
  17. Kushniruk, Patel, and Cimino, “Usability Testing in Medical Informatics: Cognitive Approaches to Evaluation of Information Systems and User Interfaces.”
  18. M. Topaz et al., “Rising Drug Allergy Alert Overrides in Electronic Health Records: An Observational Retrospective Study of a Decade of Experience,” Journal of the American Medical Informatics Association 23, no. 3 (2016): 601-8.
  19. Ratwani et al., “A Framework for Evaluating Electronic Health Record Vendor User-Centered Design.”
  20. Topaz et al., “Rising Drug Allergy Alert.”
  21. K. Miller et al., “Interface, Information, Interaction: A Narrative Review of Design and Functional Requirements for Clinical Decision Support,” Journal of the American Medical Informatics Association 25, no. 5 (2018): 585-92.
  22. R.M. Ratwani et al., “A Usability and Safety Analysis of Electronic Health Records: A Multi-Center Study,” Journal of the American Medical Informatics Association 25, no. 9 (2018): 1197-201, https://www.ncbi.nlm.nih.gov/pubmed/29982549.
  23. R. Koppel et al., “Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors,” JAMA 293, no. 10 (2005): 1197-203.
  24. Adelman et al., “Understanding and Preventing Wrong-Patient Electronic Orders: A Randomized Controlled Trial.”
  25. R. Khajouei and M.W. Jaspers, “The Impact of CPOE Medication Systems’ Design Aspects on Usability, Workflow and Medication Orders: A Systematic Review,” Methods of Information in Medicine 49, no. 1 (2010): 3-19.
  26. Ratwani et al., “A Usability and Safety Analysis of Electronic Health Records: A Multi-Center Study.”
  27. A.J. Quist et al., “Analysis of Variations in the Display of Drug Names in Computerized Prescriber-Order-Entry Systems,” American Journal of Health-System Pharmacy 74, no. 7 (2017): 499-509.
  28. J. Guo et al., “Usability Evaluation of an Electronic Medication Administration Record (EMAR) Application,” Applied Clinical Informatics 2, no. 2 (2011): 202-24.