4 takeaways from the FDA’s first digital health advisory committee

The Food and Drug Administration grappled with questions about how to regulate generative artificial intelligence in medical devices at its first digital health advisory committee meeting.

To date, the agency has authorized nearly 1,000 AI-enabled medical devices, but none of those devices use adaptive or generative AI. However, the technology is being explored in other healthcare applications not regulated by the FDA, such as generating clinical notes for physicians.

“For us to be most effective in our jobs as protectors of public health, it’s essential that we embrace these groundbreaking technologies, not only to keep pace with the industries we regulate but also to use regulatory channels and oversight to improve the chance that they will be applied effectively, consistently and fairly,” FDA Commissioner Robert Califf said at the November meeting. 

Generative AI can mimic input data to create text, images, video and other content. The technology poses unique challenges: models are often trained on datasets so large that developers may not know everything in them. Generative AI models also may change rapidly over time and can produce false content in response to a user’s prompt, a phenomenon known as “hallucination.”

Experts met on Nov. 20 and 21 to discuss how the FDA can regulate this new technology and set a framework for the safe and effective use of generative AI in healthcare.

Here are four takeaways from their discussion. 

1. Patients want to know when AI is used

When Grace Cordovano, founder of the patient advocacy group Enlightening Results, received her latest mammogram results, they mentioned that “enhanced breast cancer detection software” had been applied.

Although the results were normal, the test picked up a number of benign findings. Cordovano called the imaging center to ask where those findings were coming from and whether she could get a copy of the mammogram without the AI applied, so she could compare the two.

“I got a ‘ma’am, we don’t do that,’” Cordovano told the advisory committee. “A month later, I got a confusing letter saying, based on your mammogram, you now need to go for an MRI. So there’s just discrepancies, and I care, and I’m digging in.”

Cordovano said the majority of patients, about 91% according to a recent survey, want to be informed when AI is used in their care decisions or communications. She said patients should also be recognized as the end users of generative AI-enabled medical devices and must have the opportunity to provide structured feedback when the technology is applied to their care.

“I think it’s wonderful that the patient voice is being included, but we’re at a complete disadvantage here,” Cordovano said. “We have no idea where [AI is] being applied in our care. We don’t know at what point who’s doing it.”

The FDA’s digital health advisory committee agreed it may be important for patients and providers to know when they are using a generative AI-enabled device. The panel also recommended that patients be told how such a device contributed to their care and what information the device used in its decision-making.

[Photo: Michelle Tarver, director of the FDA’s Center for Devices and Radiological Health, speaks at Advamed’s The Medtech Conference on Oct. 17, 2024. Elise Reuter/MedTech Dive]

2. Health equity is at the heart of the debate around generative AI

Michelle Tarver, director of the FDA’s Center for Devices and Radiological Health, discussed the promise of AI in extending care to communities with fewer resources: people who are older, belong to racial and ethnic minorities, or live in small towns farther from healthcare facilities.

However, that promise is matched by concerns the technology could amplify existing health inequities.

“This is really a collective conversation on how do we move things forward in an equitable and ethical way, and that requires including everyone at the table,” Tarver said. 

Jessica Jackson, an advisory committee member and the founder and CEO of Therapy is for Everyone, said equitable device performance should be a metric, not just a nice-to-have. 

“Historically, we have not been equitable in including data from marginalized communities in clinical trials and we have felt that that was still good enough,” Jackson said. “That needs to change for gen AI because we will be training future models on the data that is coming from these.”