Ways to Improve Electronic Health Record Safety
Rigorous testing and establishment of voluntary criteria can protect patients
Overview
Electronic health records have transformed modern medicine, giving doctors and nurses better data to guide care, supporting enhanced patient safety through new automated tools, and creating more efficient processes by connecting different health systems.
However, the design, customization, and use of electronic health records (EHRs) by doctors, nurses, and other clinicians can also lead to inefficiencies or workflow challenges and can fail to prevent—or even contribute to—patient harm. For example, an unclear medication list could result in a clinician ordering the wrong drug for a patient. Laboratory tests that are displayed without the date and time of the results could lead to clinical decisions based on outdated information. And failures of systems to issue alerts about harmful medication interactions—situations that can stem from changes made by facilities, how clinicians enter data, or EHR design—could lead to medical errors.
These safety hazards can be associated with EHR usability, which refers to the design and use of the technology and how individuals interact with it. Usability challenges can frustrate clinicians because they make simple tasks take longer, lead to workarounds, or even contribute to patient safety concerns. These challenges can stem not only from the layout of EHRs, but also from how the technology is implemented and operated in health care facilities; how clinicians are trained to use it; and how the EHR is maintained, updated, and customized. Each stage of EHR development and use—the software life cycle from development through implementation and use in a health care environment—can affect the usability and safety of the technology.
While usability and patient safety are related, not every usability challenge will represent a risk to patients, and not every risk to patients stems from an EHR usability problem. In fact, some changes to EHRs might improve safety but result in less-efficient workflows—for example, if clinicians were prompted to enter “lbs.” or “kg.” every time they entered a patient’s weight. But when a system is challenging to use or patient information is difficult for a clinician to find, safety risks could occur.
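As a concrete illustration of that tradeoff, the minimal sketch below (hypothetical code, not drawn from any actual EHR product) shows what a unit-required weight entry might look like: demanding an explicit unit removes a dangerous ambiguity but adds a step to every entry.

```python
# Minimal sketch (hypothetical, not from any specific EHR): requiring an
# explicit unit with every weight entry trades a little efficiency for safety,
# because a bare "70" is ambiguous between pounds and kilograms.

LB_PER_KG = 2.20462

def record_weight(value: float, unit: str) -> float:
    """Return the weight in kilograms; reject entries without a recognized unit."""
    unit = unit.strip().lower()
    if unit in ("kg", "kgs", "kilogram", "kilograms"):
        return value
    if unit in ("lb", "lbs", "pound", "pounds"):
        return value / LB_PER_KG
    # Forcing the clinician to resolve the ambiguity is the safety step; it also
    # adds one more prompt to every weight entry, which is the efficiency cost.
    raise ValueError(f"Unrecognized weight unit {unit!r}: enter 'kg' or 'lbs'")

print(round(record_weight(154, "lbs"), 1))  # 69.9 (kg)
```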
As part of federal criteria that provide the certification standards for EHRs, technology developers must state that they engage end users and conduct usability testing during design and development. However, the certification requirements can fall short in two ways when it comes to assessing whether the use of products contributes to patient harm.
First, current federal testing criteria do not address circumstances in which customized changes are made to an EHR as part of the implementation process or after the system goes live. Instead, current rules focus only on the design and development stage of the EHR. While federal regulations mandate the testing of certain safety-related features—such as medication-allergy checks—the requirements do not focus on whether those functions operate in a safe way.
The second key challenge is the absence of requirements and guidance on how to test clinician interaction with the EHR for safety issues. Clinical test cases, which are scenarios that reflect realistic patient conditions and how health care providers treat individuals, can help detect hazards. However, there are no clear criteria for what constitutes a rigorous test scenario. Similarly, some of the scenarios for certification, while testing that certain functions work, may not effectively evaluate the EHR for usability or safety. Current certification test cases can be too specific, lack relevant details, or may not test aspects of the EHR that are recognized as posing safety risks.
Unlike many other high-risk sectors, such as the airline and medical device industries, health care has no standard for routinely testing its software for safety issues.
To address these two challenges, The Pew Charitable Trusts, MedStar Health’s National Center for Human Factors in Healthcare, and the American Medical Association conducted a literature review and convened a multidisciplinary expert panel composed of physicians, nurses, pharmacists, EHR vendors, patients, and health information technology experts. This information led to the development of:
- Recommendations on how to advance usability and safety throughout the EHR software life cycle, which can be used as the foundation for a voluntary certification process for developers and EHR implementers.
- Criteria detailing what constitutes a rigorous safety test case and the creation of sample test case scenarios based on reported EHR safety challenges.
Use of the voluntary certification tenets and test cases by health care facilities and technology developers can improve the usability and safety of EHRs. They also allow for the proactive identification of potential harm associated with the implementation and customization of EHRs.
Existing usability tests for certification fall short
EHR software must meet minimum certification criteria established by the federal government to ensure that it can share data, provide key capabilities to clinicians, and protect patient privacy. The current EHR certification program—implemented by the federal Office of the National Coordinator for Health Information Technology (ONC)—is intended to set the baseline standards that EHRs must meet so that hospitals and health care providers can confidently adopt and use the technology to meet requirements in certain federal programs.
Under current certification requirements for EHR usability, which were released in 2015, developers must:
- Document how they consider the needs of doctors, nurses, and other clinicians in developing the product submitted for certification. Known as user-centered design, this process focuses on understanding the needs of the intended users throughout software development and deployment to improve usability of the product.1 EHR developers attest to and describe their user-centered design process in this documentation.
- Conduct formal usability testing for certain capabilities identified by the federal government.2 These tests include measures of efficiency, effectiveness, and satisfaction (illustrated in the sketch after this list) as clinicians complete representative test cases for the certified criteria using the EHR product.3 Developers can use test cases established by the National Institute of Standards and Technology (NIST) or develop their own scenarios.4 When developing their own scenarios, developers must conduct a risk analysis of the product and build test cases that address the challenges identified.5
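For illustration, the minimal sketch below shows the three categories of measures computed from invented session data; the actual measures, protocols, and thresholds are defined by developers and the NIST guidance, not by this example.

```python
# Illustrative sketch only: the actual measures and protocols are defined by the
# developers and NIST guidance. This shows the three commonly reported categories
# (effectiveness, efficiency, satisfaction) computed from invented session data.
from statistics import mean

# Each record: participant, task completed without error?, time on task in
# seconds, and a satisfaction rating on a hypothetical 1-5 scale.
sessions = [
    {"participant": "RN-01", "success": True,  "seconds": 42.0, "satisfaction": 4},
    {"participant": "RN-02", "success": False, "seconds": 95.0, "satisfaction": 2},
    {"participant": "MD-01", "success": True,  "seconds": 51.0, "satisfaction": 5},
]

effectiveness = mean(1.0 if s["success"] else 0.0 for s in sessions)  # task success rate
efficiency = mean(s["seconds"] for s in sessions)                     # mean time on task
satisfaction = mean(s["satisfaction"] for s in sessions)              # mean rating

print(f"task success rate: {effectiveness:.0%}")
print(f"mean time on task: {efficiency:.0f} s")
print(f"mean satisfaction: {satisfaction:.1f} / 5")
```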
Regardless of which test cases are used, ONC’s usability criteria require the testing of certain EHR functions, such as the ability to order medications electronically and receive medication alerts. The NIST-developed test cases do not explicitly address each of the required functions laid out in ONC’s certification regulations.6 Because these cases do not overlap with high-prevalence safety hazards, some important EHR features may not be sufficiently evaluated.
EHR Certification Program
The Health Information Technology for Economic and Clinical Health (HITECH) Act, passed in 2009, established the Medicare and Medicaid EHR Incentive Program—known as Meaningful Use—to provide financial incentives for hospitals and doctors’ offices to purchase and utilize electronic health records. The program has provided more than $37 billion7 for the adoption of EHRs.
To obtain those incentive payments, hospitals and doctors had to demonstrate that they used EHRs in certain ways—such as by sending prescriptions to pharmacies electronically instead of on paper.
To provide assurances to hospitals and doctors that the EHRs they use can perform those functions, HITECH also created the EHR certification program administered by ONC. The program establishes technological requirements for EHRs, including their functions, how information is recorded, and security features. ONC has periodically updated the requirements. The initial certification requirements were published in 2010 (2011 Edition), with updates in 2012 (2014 Edition) and 2015 (2015 Edition).
To obtain certification, EHR developers submit data on their products to accredited testing labs for review. The labs forward their findings to an ONC-authorized certification body, which issues a certification based on the lab’s findings.
In September 2017, ONC modified its testing requirements so that health IT developers self-declare that they meet 30 certification criteria (approximately half of the total) rather than having those criteria reviewed by an accredited testing lab.8 This was a major change in the approach to vendor certification.
In 2016, ONC issued regulations giving it direct authority to enter health care facilities and review and test EHRs that posed serious risks to patients. If a risk was found, ONC could require the health IT developer to fix the problem or suspend certification of products with unresolved problems.9 In 2017, ONC announced it would no longer require testing labs to perform random surveillance to allow labs to focus on complaint-based investigations.10
ONC has also indicated that it will not update the 2015 Edition.11 However, the 21st Century Cures Act requires health IT products to meet new criteria for EHRs used in the care of children, which affords another opportunity to improve safety.12
Best practices already adopted to improve safety
Many EHR developers and health care providers have engaged in additional efforts to improve usability and safety throughout the product life cycle:
- Members of the Electronic Health Records Association commit to adhere to a code of conduct that includes patient safety and usability as key focus areas.13
- EHR developers commit to review safety incidents with designated patient safety officers and groups of product users, and to share information across health care facilities.14
- Health care providers use safety experts to advise on implementation, customization, and use of the EHR.15
- Health care providers conduct EHR safety surveys or have safety teams to identify potential problems.16
- EHR developers and health care providers use enhanced training methods, such as simulated clinical environments or expert trainers observing clinicians to provide direct feedback.17
- The Leapfrog Group, a nonprofit organization founded by large employers and health care purchasers to improve health care safety and quality for their employees, sponsors an EHR test utilizing test cases in its annual hospital safety survey.18
- Medical associations recommend using standardized test cases, developing a set of measures for adverse events, and performing formal usability assessments.19
These practices can serve as the foundation for a more comprehensive certification program during product development and after implementation to improve EHR safety.
Opportunities to improve usability certification test cases
Several factors throughout the EHR life cycle affect usability and safety. Current certification tests are focused on evaluating the usability of key system requirements. According to published articles and experts consulted, best practices for testing, which are not required, include:
- Consideration of all key tasks. Developer usability testing performed for certification focuses on EHR functions required by ONC. Some vendors develop test cases that include tasks to evaluate safety, but this practice is not pervasive.20 Test cases should also cover additional key tasks in which the use of these systems can affect safety.21 Risks that are discovered should inform future test cases.
- Involvement of representative end users. ONC’s most recent certification criteria require that vendors include at least 10 participants in testing system usability. The agency recommends that they be representative of end users but does not require it.22
- Real-world testing. Usability testing performed for certification is intended to be conducted under reproducible laboratory conditions that do not replicate the actual clinical use of the product, which can limit the tester’s ability to discover risks that reflect real-world situations.23
- Assessments of the total product life cycle. Certification testing is performed on the EHR product presented to the evaluating lab. Various stages of the product life cycle, including how the product is modified by health care facilities and how software upgrades are implemented, can present different usability and safety challenges.24
- Focus on the socio-technical environment. Certification testing, conducted before implementation in health care facilities, focuses on the released EHR product and may not control for other factors that can influence safety.25 For example, the type of training clinicians receive determines their knowledge of the EHR’s features, including how to order medications, diagnostic images, and lab tests efficiently and safely. Additionally, the health care facility may make decisions during EHR implementation about how to organize information in the system, which affects how clinicians interact with the technology.26
Given the gaps in current regulations and in the practices implemented by health care facilities and technology developers, the literature review and expert panel discussions identified additional initiatives that could improve EHR safety.
Criteria to support usability and safety throughout the EHR life cycle
Several additional best practices, criteria, and factors emerged from the literature review and expert panel discussions that could help EHR developers and health care facilities improve product usability and safety. These criteria could provide a foundation for a voluntary certification program. Given that both EHR developers and health care providers have roles in ensuring the safe use of products, specific criteria were established for each. Additional standards for improving safety are also under development by the Association for the Advancement of Medical Instrumentation.
The review also identified several discrete actions and criteria that developers and health care providers should put into practice. A voluntary certification program that encompasses these components could ask developers and providers to consider each criterion and, where appropriate, to adopt and implement these methods and processes. While the criteria provide a framework for factors that can be included in voluntary certification programs, each institution creating such a program would have to tailor it to its specific goals and mission.
By focusing on the entire EHR life cycle and having specific criteria in place to improve usability and safety, the voluntary certification framework can augment the current certification process. Adherence to these recommendations by EHR developers and health care providers can reduce the likelihood of unintended patient harm from clinician use of this technology.
Establishing rigorous, safety-focused test case scenarios
To identify and address usability and safety challenges with EHRs before health care facilities use them in patient care, developers typically evaluate their products with clinical test cases.27 These cases are scenarios that reflect realistic patient conditions as well as the clinician tasks that would occur in caring for an individual. The scenarios allow for the observation of clinicians interacting with the EHR so that specific usability and safety challenges can be identified and addressed.
Given the importance of detecting safety challenges early, test cases should focus on EHR interactions that have the potential for serious harm in addition to low-risk but frequent interactions that are unlikely to adversely affect the patient. These test cases should also reflect real-world clinical interactions so that unique workflows and the opportunity for clinicians to make mistakes can be factored in.
Challenges with current certification usability test case scenarios
Despite the importance of test case scenarios for evaluating and improving EHR usability and safety, the usability scenarios submitted for certification can lack rigor: they can be simple, fail to reflect realistic clinical conditions, or include prescriptive instructions that may not be present in a clinical setting, making it more difficult to identify challenges that may arise when using the technology for actual care.28 Here is an example of a test case scenario that lacks rigor:29
“Looking at patient John Leeroy’s record, enter a new lab order for the patient.”
Specific shortcomings of this type of test case include:
- It does not reflect how a clinician would actually use the EHR, because clinicians do not select just any lab order; they look for a specific order or set of orders.
- It does not reflect the complexities of clinical care.
- There is no clear way to evaluate whether usability and safety challenges exist since selection of any order would be identified as a success.
Developing test case criteria and relevant cases
To address the need for enhanced safety-focused test cases, specific criteria were developed to help guide the creation of rigorous scenarios. Using those criteria, 14 test cases were developed that address seven identified EHR usability and safety challenges.
Making these criteria and test cases available to both EHR developers and health care providers can help them test clinician interaction with EHRs more effectively and identify usability and safety challenges before patients are harmed. These test case scenarios can be used in conjunction with other tools—such as the tests from Leapfrog or safety-related guides from ONC—to evaluate safety.
Criteria development for rigorous test cases
The insights from the EHR developers, clinicians (including physicians and nurses), researchers, and other stakeholders on the expert panel were integrated with the existing literature to define four general features of a rigorous test case. Rigorous test cases should:
- Be representative. Test cases should be representative of the expected users of the technology, address key socio-technical factors—such as how different members of the care team may interact with EHRs—and represent realistic clinical care processes to identify usability and safety challenges that may occur when treating patients.
- Contain concrete goals and measures. Each test case should be shaped around a clinically oriented goal, with clear measures of success and failure. The lack of such goals and measures for each test case could complicate the ability to assess the use of EHRs in concrete ways, although goals and measures may vary from implementation to implementation.
- Test areas of risk or inefficiency. The test cases must include known risk areas, functions that contribute to inefficiency, frequently used tasks, tasks that are unfamiliar to the user, or intrinsically challenging tasks (such as drug dose tapering). Testing known areas of risk will help to identify these challenges and prevent them from persisting in the product. Focusing on areas that could produce inefficiencies or challenging tasks can help to implement corrections that can address clinician concerns.
- Define the audience. The test cases should be designed for use by a specific set of stakeholders (e.g., vendors, providers) and should clearly stipulate the intended participants (e.g., nurses, physicians, technicians). If test cases do not provide this information, they may be used with the wrong clinical audience, which could invalidate the results.
Each of these feature categories was further divided into subfeatures.
Developing and using test cases that adhere to these criteria will provide greater rigor to the evaluation of clinician interaction with EHRs and can serve to better highlight specific usability and safety challenges in the design, customization, or use of products before patients are harmed.
Test cases for prevalent usability and safety challenges
The criteria were used to develop 14 test use cases to demonstrate how scenarios can adhere to the identified principles and address prevalent usability and safety challenges. This includes two use cases each for seven prevalent patient safety hazards. The safety challenges were previously identified through an analysis of 557 patient safety event reports—free-text descriptions of potential patient safety hazards submitted by health care facilities—related to EHRs.30
For each of these usability and safety challenges, we developed both a basic and an advanced test. The basic scenarios are narrowly scoped tasks that represent a single aspect of the clinical workflow, focus on a single clinical process, and generally do not involve interaction with other EHR components or clinical processes. The advanced cases represent a broader, more detailed portion of the clinician’s workflow, including factors such as teamwork and communication with other clinicians.
The use of both basic and advanced cases helps test the range of EHR capabilities and supports evaluation of a product early in design. Basic cases can help evaluate single EHR features but should be used in combination with advanced ones throughout development and implementation. The advanced cases should be used to test broader workflows that involve several system features and interactions with multiple clinicians. The use of both types of test cases during development and after implementation can help detect problems.
The test cases include both inpatient and outpatient clinical settings. Each test case is provided on a template to support use by EHR developers, health care providers, and other stakeholders. The template provides a standardized format (sketched in code after this list) detailing:
- The usability topic to be tested.
- How safety could be affected.
- A rough estimate of the time needed for the test.
- The clinical setting addressed.
- The necessary users and participants.
- How the test case demonstrates robustness.
- Scenario descriptions.
- Tasks that need to be performed.
- A scoring guide.
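As a rough illustration, the sketch below represents the template elements above as a structured record; the field names and the sample case are paraphrases for illustration only, not the exact schema or content of the published test cases.

```python
# Rough sketch of the template as a structured record. Field names paraphrase
# the elements listed above and are illustrative, not the report's exact schema.
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    usability_topic: str        # the usability topic to be tested
    safety_impact: str          # how safety could be affected
    estimated_minutes: int      # rough estimate of the time needed for the test
    clinical_setting: str       # inpatient or outpatient
    participants: List[str]     # necessary users and participants
    robustness_rationale: str   # how the test case demonstrates robustness
    scenario: str               # scenario description
    tasks: List[str]            # tasks that need to be performed
    scoring_guide: str          # definition of pass and fail

# Hypothetical example loosely modeled on the allergy-alert scenario in the text.
allergy_alert_case = TestCase(
    usability_topic="Alerting on free-text allergy entries",
    safety_impact="A missed alert could allow an order for a drug the patient is allergic to",
    estimated_minutes=15,
    clinical_setting="Outpatient",
    participants=["physician", "nurse"],
    robustness_rationale="Targets a known, frequently reported safety hazard",
    scenario="A patient has an allergy documented as free text; a related drug is ordered",
    tasks=["Review the allergy list", "Order the medication", "Respond to any alert"],
    scoring_guide="Pass if an allergy alert fires before the order is signed",
)
```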
Sample test case scenario
Below is an example of a basic scenario. The remaining scenarios are listed in the appendix.
This example describes challenges with how clinicians may enter allergy information. Prescribing a medication to which the patient is allergic should trigger an alert, but whether it does can depend on how clinicians enter the data and on the design of the system. This basic scenario tests the usability and safety of the allergy alerting function when allergies are entered in the EHR using the free-text option. Many allergies are entered as structured data, selected from predefined allergy entries already in the system; however, clinicians sometimes need to enter free-text descriptions when structured options are not available or are hard to find.
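The toy sketch below, which does not reflect any particular EHR’s alerting logic, illustrates why a free-text allergy entry can slip past an alert that a coded, structured entry would trigger; the codes and mappings are invented.

```python
# Toy illustration (not any particular EHR's logic): a structured allergy entry
# carries a code that can be matched against a drug's allergen classes, while a
# free-text entry may only be string-matched, so abbreviations or related drug
# names can slip past the alert. All codes and mappings here are invented.

DRUG_ALLERGEN_CLASSES = {"amoxicillin": {"penicillin-class"}}  # hypothetical mapping
STRUCTURED_ALLERGIES = {"penicillin": "penicillin-class"}      # coded entry -> allergen class

def alert_on_structured(allergy_class: str, drug: str) -> bool:
    """Fire whenever the coded allergen class appears among the drug's classes."""
    return allergy_class in DRUG_ALLERGEN_CLASSES.get(drug, set())

def alert_on_free_text(allergy_text: str, drug: str) -> bool:
    """Naive substring match: the kind of weak link a safety test case should probe."""
    return allergy_text.strip().lower() in drug.lower()

print(alert_on_structured(STRUCTURED_ALLERGIES["penicillin"], "amoxicillin"))  # True
print(alert_on_free_text("pcn", "amoxicillin"))         # False: abbreviation defeats the match
print(alert_on_free_text("penicillin", "amoxicillin"))  # False: related drug, different name
```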
As a basic scenario, this test case aims to represent a single aspect of the clinical workflow and identify whether the EHR supports the provider’s expectation that an allergy alert will be triggered if there are relevant known allergies. This basic test case reflects the mental process a clinician would use with the EHR and the complexities of clinical care in a way that can be clearly evaluated.
The background information provided in the testing and robustness sections enables nonclinical moderators to understand the nature of the test. The clear scoring definition helps testers establish whether the system passed or failed the assessment.
Inclusion of nonspecific EHR terms, the necessary participants, and the estimated completion time supports use of this test case across institutions and EHR systems. This type of test case is one example of a more rigorous case that better represents how EHRs might be used in the actual clinical environment.
The test scenarios are based on actual patient safety reports that involve technology and potential harm. Having these test cases available as examples of more rigorous scenarios will allow both developers and providers to create their own usability and safety assessments.
Conclusion and next steps
Usability challenges associated with EHRs frustrate clinicians and can pose safety risks that contribute to patient harm.31 These challenges stem from the design of EHRs, decisions of health care facilities that implement the technology, and how clinicians use the systems. While the government, EHR developers, and health care providers have initiatives focused on improving the usability and safety of EHRs, gaps still exist with the scope and depth of federal requirements and in the test cases used to evaluate systems.
We sought to fill these gaps through the development of a more comprehensive certification framework focused on the entire EHR life cycle that engages both developers and providers, and by developing test case criteria with examples based on prevalent usability and safety challenges. Achieving the benefits of both the certification framework and test cases requires their adoption by EHR developers and health care providers. Once adopted, the test cases should be evaluated for their ability to detect safety events, assessed for challenges that arise in their use, and adjusted accordingly.
Adoption of the voluntary criteria
Some EHR developers and health care providers may choose to adopt the criteria to improve the safety and usability of their systems. However, adoption by other organizations may also require financial or nonmonetary incentives, since adhering to the recommendations will take resources. We examined four potential approaches for adoption:
- The ONC could recognize elements of this certification as alternatives to its current requirements. However, given that these voluntary certification criteria include provisions that surpass those in federal regulations and address the entire life cycle and provider roles, such recognition by ONC is unlikely, though the agency could highlight private sector efforts.
- EHR developers and health care organizations could voluntarily adopt the criteria as an indication that they prioritize safety. EHR developer adoption would provide greater transparency to purchasers—such as hospitals—on the actions that vendors take to enhance safety. Meanwhile, health care facilities adopting these requirements would communicate to patients that safety is an institutional priority and help mitigate hazards that could become liabilities, such as high-risk customization made despite EHR developer concerns. To assist organizations in knowing what and how to evaluate their systems and processes, third parties could create voluntary certification programs to offer guidance and certificates upon meeting the expectations.
- Organizations that are already prioritizing health IT safety could embed these recommendations into their programs. For example, The Leapfrog Group has encouraged hospitals to take its computerized physician order entry test, which analyzes the ability of clinicians to safely prescribe medicines.32 Hospitals take these tests and strive to achieve good scores because the results are made public, fueling adoption throughout the industry. Similarly, the Association for the Advancement of Medical Instrumentation is publishing several standards associated with health IT safety and encourages their adoption. Adherence to the standards can be used to show that health IT safety is a priority.33
- Organizations that have some role in overseeing health care facilities, including the Joint Commission (which serves to accredit health care providers in order to promote safety and more effective care), may be able to drive health care providers to incorporate these recommendations and pressure EHR vendors to also incorporate best practices. The Joint Commission could incorporate these criteria into its requirements, so that its inspectors seek evidence that health care facilities—and perhaps the technology they use—adhere to best practices. While not all health care organizations receive Joint Commission accreditation, its program is influential and provides guidance for all organizations on how to improve safety.
Use of the test case criteria and sample test cases
Adoption of the test case criteria and sample test cases by EHR developers and health care facilities can enhance safety by better evaluating products throughout the system life cycle.
For developers, use of these test cases can take place early in design and development and as the product matures. Because federal regulations do not stipulate the rigor needed in these scenarios, the thoroughness and depth of the test cases used vary widely. The test scenario criteria can serve as a potential standard for the EHR accrediting bodies and a resource for developers. Adoption of these criteria by accrediting bodies to set the level of rigor for test cases could immediately improve the current certification process and help identify safety risks before a product is put into use. Similarly, these criteria could be incorporated into future updates to ONC’s EHR certification requirements for usability scenarios or used by other organizations that develop their own criteria and test products.
Health care organizations can use the test criteria and sample cases to evaluate the usability and safety of their products during the implementation phase and after changes are made, and to inform customization decisions. The criteria can be used to develop test cases for specific areas that the health care provider recognizes as potential areas of risk. Organizations can also immediately use the example test cases to evaluate system safety, identify challenges, and prevent harm.
The future of health IT
EHRs have revolutionized health care delivery by giving clinicians and patients better tools to foster safe, higher-quality care. Despite the benefits, however, system design, health care organization implementation decisions, and their use by clinicians can contribute to unintentional safety challenges. The adoption of best practices—including the tenets of the safety-focused certification criteria and more robust testing scenarios—can help give EHR developers and health care facilities better information to detect challenges and reduce the potential of avoidable patient harm.
Endnotes
- Office of the National Coordinator for Health Information Technology, “2015 Edition Health Information Technology Certification Criteria, 2015 Edition Base Electronic Health Record (EHR) Definition, and ONC Health IT Certification Program,” 80 Fed. Reg. 62602 (Oct. 16, 2015), https://www.gpo.gov/fdsys/pkg/FR-2015-10-16/pdf/2015-25597.pdf.
- Ibid.
- Svetlana Z. Lowry et al., “NISTIR 7804: Technical Evaluation, Testing, and Validation of the Usability of Electronic Health Records,” National Institute of Standards and Technology (2012), https://nvlpubs.nist.gov/nistpubs/ir/2012/NIST.IR.7804.pdf.
- Svetlana Z. Lowry et al., “NISTIR 7804-1: Technical Evaluation, Testing and Evaluation of the Usability of Electronic Health Records: Empirically-Based Use Cases for Validating Safety-enhanced Usability and Guidelines for Standardization,” National Institute of Standards and Technology (2015), http://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.7804-1.pdf.
- Office of the National Coordinator for Health Information Technology, “2015 Edition.”
- Ibid.
- Centers for Medicare and Medicaid Services, “April 2018 – EHR Incentive Program Active Registrations” (2018), https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/April2018_SummaryReport.pdf.
- Office of the National Coordinator for Health Information Technology, “Self-Declaration Approach for ONC-Approved Test Procedures” (2017), https://www.healthit.gov/sites/default/files/policy/selfdeclarationapproachprogramguidance17-04.pdf.
- The Pew Charitable Trusts, “Improving Patient Care Through Safe Health IT” (December 2017), http://www.pewtrusts.org/-/media/assets/2017/12/hit_improving_patient_care_through_safe_health_it.pdf.
- Elise Sweeney Anthony and Steven Posnack, “Certification Program Updates to Support Efficiency & Reduce Burden,” Health IT Buzz (blog), Office of the National Coordinator for Health Information Technology, Sept. 21, 2017, https://www.healthit.gov/buzz-blog/healthit-certification/certification-program-updates-support-efficiency-reduce-burden/.
- Ibid.
- Pub. L. 114-255, 21st Century Cures Act (2016), https://www.congress.gov/114/plaws/publ255/PLAW-114publ255.pdf.
- HIMSS Electronic Health Record Association, “EHR Code of Conduct” (2016), http://www.ehra.org/resource-library/ehr-code-conduct.
- Cheryl McDonnell, Kristen Werner, and Lauren Wendel, “Electronic Health Record Usability: Vendor Practices and Perspectives” (2010), Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, https://healthit.ahrq.gov/sites/default/files/docs/citation/EHRVendorPracticesPerspectives.pdf; eClinicalWorks, “Patient Safety and the Use of eCW’s Electronic Health Records Software,” news release, Dec. 6, 2016, https://www.eclinicalworks.com/eclinicalworks-patient-safety-use-of-electronic-health-records-software; Tarah Hirschey, “The Role EHR Vendors Play in Patient Safety” (2014), Athenahealth, https://www.athenahealth.com/blog/2014/03/04/the-role-ehr-vendors-play-in-patient-safety.
- James Walker et al., “EHR Safety: The Way Forward to Safe and Effective Systems,” Journal of the American Medical Informatics Association 15, no. 3 (2008): 272–77, http://dx.doi.org/10.1197/jamia.M2618.
- Ibid.; Shailaja Menon et al., “Safety Huddles to Proactively Identify and Address Electronic Health Record Safety,” Journal of the American Medical Informatics Association 24, no. 2 (2017): 261–67, http://dx.doi.org/10.1093/jamia/ocw153.
- Vishnu Mohan et al., “Using Simulations to Improve Electronic Health Record Use, Clinician Training and Patient Safety: Recommendations From a Consensus Conference,” AMIA Annual Symposium Proceedings (2016): 904–13, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5333305.
- Jane Metzger et al., “Mixed Results in the Safety Performance of Computerized Physician Order Entry,” Health Affairs 29, no. 4 (2010), https://doi.org/10.1377/hlthaff.2010.0160.
- Blackford Middleton et al., “Enhancing Patient Safety and Quality of Care by Improving the Usability of Electronic Health Record Systems: Recommendations From AMIA,” Journal of the American Medical Informatics Association 20, no. e1 (2013): e2–e8, http://dx.doi.org/10.1136/amiajnl-2012-001458.
- Raj M. Ratwani et al., “Electronic Health Record Usability: Analysis of the User-Centered Design Processes of Eleven Electronic Health Record Vendors,” Journal of the American Medical Informatics Association 22, no. 6 (2015): 1179–82, https://academic.oup.com/jamia/article/22/6/1179/2357601; National Center for Human Factors in Healthcare, MedStar Health, “EHR User-Centered Design Evaluation Framework,” https://www.medicalhumanfactors.net/ehr-vendor-framework.
- McDonnell, Werner, and Wendel, “Electronic Health Record Usability.”
- Raj M. Ratwani et al., “Electronic Health Record Vendor Adherence to Usability Certification Requirements and Testing Standards,” Journal of the American Medical Association 314, no. 10 (2015): 1070–71, http://dx.doi.org/10.1001/jama.2015.8372; Office of the National Coordinator for Health Information Technology, “2015 Edition”; Raj M. Ratwani et al., “A Framework for Evaluating Electronic Health Record Vendor User-Centered Design and Usability Testing Processes,” Journal of the American Medical Informatics Association 24, no. e1 (2017): e35–e39, http://dx.doi.org/10.1093/jamia/ocw092.
- Middleton et al., “Enhancing Patient Safety”; Elizabeth M. Borycki and Andre W. Kushniruk, “Towards an Integrative Cognitive-Socio-Technical Approach in Health Informatics: Analyzing Technology-Induced Error Involving Health Information Systems to Improve Patient Safety,” Open Medical Informatics Journal 4 (2010): 181–87, http://dx.doi.org/10.2174/1874431101004010181.
- Derek W. Meeks et al., “An Analysis of Electronic Health Record-Related Patient Safety Concerns,” Journal of the American Medical Informatics Association 21, no. 6 (2014): 1053–59, http://dx.doi.org/10.1136/amiajnl-2013-002578.
- Raj M. Ratwani et al., “Mind the Gap: A Systematic Review to Identify Usability and Safety Challenges and Practices During Electronic Health Record Implementation,” Applied Clinical Informatics 7, no. 4 (2016): 1069–87, https://www.ncbi.nlm.nih.gov/pubmed/27847961.
- Min Soon Kim et al., “Usability Challenges and Barriers in EHR Training of Primary Care Resident Physicians” (2014), https://link.springer.com/chapter/10.1007/978-3-319-07725-3_39.
- Office of the National Coordinator for Health Information Technology, “2015 Edition.”
- Ratwani et al., “A Framework for Evaluating.”
- Drummond Group, “EHR Usability Test Report of Amazing Charts Version 7.0” (2014), 45.
- Jessica L. Howe et al., “Electronic Health Record Usability Issues and Potential Contribution to Patient Harm,” Journal of the American Medical Association 319, no. 12 (2018): 1276–78, http://dx.doi.org/10.1001/jama.2018.1171.
- Middleton et al., “Enhancing Patient Safety”; Maryam Zahabi, David B. Kaber, and Manida Swangnetr, “Usability and Safety in Electronic Medical Records Interface Design: A Review of Recent Literature and Guideline Formulation,” Human Factors 57, no. 5 (2015): 805–34, http://dx.doi.org/10.1177/0018720815576827.
- The Leapfrog Group and Castlight Health, “Preventing Medication Errors in Hospitals: Data by Hospital on Nationally Standardized Metrics” (2016), http://www.leapfroggroup.org/sites/default/files/Files/Leapfrog-Castlight%20Medication%20Safety%20Report.pdf.
- Association for the Advancement of Medical Instrumentation, “AAMI Launches Health IT Standards Initiative,” AAMI News, August 2015, http://www.aami.org/productspublications/articledetail.aspx?ItemNumber=2663.