Artificial Intelligence Has Helped to Guide Pandemic Response, But Requires Adequate Regulation

With its increasing use in health care, FDA should adapt to ensure product safety and effectiveness


Health care providers, researchers, and technology companies have deployed a wide range of artificial intelligence (AI) technologies over the course of the pandemic to prevent, track, diagnose, and treat COVID-19. AI, broadly defined as computer programs that simulate human problem-solving, has proved valuable in the pandemic response by reducing the amount of time needed to sift through complex datasets and helping researchers make important discoveries.

The events of the past year, however, have highlighted several challenges in ensuring AI’s safe and effective use, as well as the important role of the Food and Drug Administration (FDA) in regulating products driven by these emerging technologies to ensure that their benefits outweigh any risks posed to patients. The agency is currently considering how to adapt its processes in response to AI’s growing role and should move forward on updating its approach as quickly as feasible.

AI plays pivotal role in COVID-19 research and mitigation

AI-driven software can be used to curate and analyze data, as well as to identify patterns in large datasets with multiple variables. Such analyses are often too complex or time-consuming for human researchers to perform unaided. This capability has been vital in the fight against COVID-19, helping researchers comb through large amounts of data to learn about the nature of the novel coronavirus.

These data include how the virus is transmitted, how the disease manifests in patients, and how it might be treated. For example, researchers first confirmed anosmia, or loss of smell, as a potential symptom by using an algorithm to sort through patient health records. AI also was used to analyze genetic data from samples of the virus, uncovering a distinct genetic signature that scientists have used to understand how the virus binds to cells. Finally, AI has helped identify drug therapies with the potential to be repurposed to treat COVID-19.
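
The kind of record mining that surfaced anosmia can be illustrated with a minimal sketch: given a set of patient records, measure how strongly a reported symptom is associated with a positive test. All records, field names, and counts below are synthetic, not drawn from any real dataset.

```python
# Toy illustration of mining patient records for a symptom-outcome association.
# All records and field names are synthetic and hypothetical.
from collections import Counter

records = [
    {"anosmia": True,  "covid_positive": True},
    {"anosmia": True,  "covid_positive": True},
    {"anosmia": False, "covid_positive": True},
    {"anosmia": True,  "covid_positive": False},
    {"anosmia": False, "covid_positive": False},
    {"anosmia": False, "covid_positive": False},
]

# Tally the four symptom/outcome combinations into a 2x2 contingency table.
counts = Counter((r["anosmia"], r["covid_positive"]) for r in records)
a, b = counts[(True, True)], counts[(True, False)]    # symptom present
c, d = counts[(False, True)], counts[(False, False)]  # symptom absent

# An odds ratio well above 1 suggests the symptom is associated with infection.
odds_ratio = (a * d) / (b * c)
print(f"Odds ratio for anosmia vs. positive test: {odds_ratio:.2f}")
```

Real record-mining systems work at far larger scale and control for confounders, but the underlying idea of screening many candidate symptoms for statistical association is the same.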

This ability to detect patterns has also contributed to public health efforts to track and contain the disease. For example, a Canadian health monitoring platform used a disease surveillance algorithm to detect the initial outbreak in Wuhan, China, and notified its customers about a week ahead of public warnings from public health organizations. The algorithm also correctly predicted how the virus would spread during the early days of the pandemic using global airline ticketing data.

Predictive and diagnostic AI models face reliability challenges

Some hospital systems have developed their own AI tools to help diagnose COVID-19 and identify patients at higher risk of negative outcomes. Still, large datasets and careful vetting are necessary to avoid inaccurate or biased results.

Researchers have developed at least 75 models to support the diagnosis of COVID-19 or associated pneumonia by analyzing patient images, such as lung scans. Many experts hoped that these programs would help offset the shortage of diagnostic tests in the early days of the pandemic, but questions were raised about their accuracy and usefulness. In the first weeks of the outbreak, there was little data on the disease or its typical progression, and few such scans were available from verified COVID-19 patients.

This meant that the earliest AI algorithms were trained on data that probably did not represent the general U.S. population, raising concerns that the software would misdiagnose people or fail to distinguish COVID-19 from other diseases, such as influenza, that can present similarly in CT scans or X-rays. As more images of COVID-19 patients are collected, the performance of these algorithms is likely to improve. Still, they will require careful evaluation within clinical settings before use among diverse patient populations.
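
One common way to surface this generalization problem is external validation: develop a model on data from one setting and test it on data from another whose distribution differs. The sketch below uses synthetic tabular features as a stand-in for image-derived ones; the cohorts, signal strengths, and model choice are all illustrative assumptions, not any real diagnostic system.

```python
# Sketch of external validation under distribution shift.
# Synthetic tabular features stand in for image-derived ones; everything
# here (cohorts, signal strengths, model choice) is a hypothetical example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, signal):
    """Generate labels and features; `signal` sets how strongly features track the label."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] * signal, scale=1.0, size=(n, 5))
    return X, y

X_dev, y_dev = make_cohort(500, signal=1.0)  # development cohort
X_ext, y_ext = make_cohort(500, signal=0.3)  # external cohort, weaker feature-label link

model = LogisticRegression().fit(X_dev, y_dev)

print("internal AUC:", round(roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1]), 2))
print("external AUC:", round(roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1]), 2))
```

On the shifted cohort the same model’s discrimination drops sharply, mirroring how a tool tuned to one hospital’s patients can degrade when deployed elsewhere.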

Developers have also created AI models, both from scratch and by modifying existing algorithms, that aim to predict individual outcomes, such as identifying which patients are at higher risk of requiring a ventilator. But predictive models face the same data challenges as diagnostic algorithms, leading to similar concerns about accuracy and reliability. For example, a study of prediction models found that nearly all carried a high or uncertain risk of bias, meaning they may perform poorly when used to predict outcomes for certain populations. AI models can be biased for many reasons, including the use of nonrepresentative patient populations in a model’s development.
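
A basic safeguard against this kind of bias is a subgroup audit: measure a model’s discrimination separately for each patient group rather than only in aggregate. The sketch below assumes a hypothetical model whose scores are informative for an overrepresented group and nearly uninformative for an underrepresented one; the scores, labels, and group assignments are all synthetic.

```python
# Sketch of a subgroup audit: compare a model's discrimination (AUC)
# across patient groups. Scores, labels, and groups are all synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])  # group B underrepresented
y = rng.integers(0, 2, n)                             # true outcomes

# Hypothetical model scores: informative for group A, nearly random for
# group B, mimicking a model developed mostly on group A patients.
score = np.where(group == "A",
                 y * 0.8 + rng.normal(0.0, 0.5, n),
                 rng.normal(0.4, 0.5, n))

for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: AUC = {roc_auc_score(y[mask], score[mask]):.2f}")
```

An aggregate score would mask the gap the per-group numbers reveal, which is why evaluation across demographic subgroups is central to identifying and addressing bias.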

Over time, researchers have gained access to larger and more representative datasets and have applied more rigorous methods to predicting who will develop serious complications from COVID-19. However, some of these algorithms have yet to be tested outside their original clinical settings, so how well they perform elsewhere remains to be seen.

FDA must continue to set risk-based standards for AI regulation

The challenges of deploying AI in clinical settings to combat COVID-19 have underscored the need for adequate regulations to ensure that benefits outweigh risks. Although not all health-related software is FDA-regulated, those products that are intended to treat, diagnose, cure, mitigate, or prevent disease and other conditions generally must undergo agency review to assess safety and effectiveness before they can be sold commercially. FDA also can grant emergency use authorizations during a public health emergency and has issued three for AI products that predict certain outcomes for COVID-19 patients, including cardiac complications, ventilator use, and respiratory failure.

FDA has been considering how to adapt its regulatory regime for these unique products. In 2019, the agency released a white paper describing a potential shift in its approach to premarket review, and recently published an action plan that outlines next steps, which include supporting efforts to identify and eliminate bias. Addressing bias in AI is important given its potential to perpetuate underlying and systemic biases if not tested for and monitored carefully.

As use of these AI-enabled products proliferates, FDA should play an important role in setting standards for the industry at large and working with other stakeholders to develop an oversight framework that keeps pace with these rapidly evolving and potentially lifesaving technologies.

Liz Richardson directs The Pew Charitable Trusts’ health care products project.
