FDA, Health Canada give sneak peek into future AI regs

TORONTO — Medical device regulators from the U.S. and Canada addressed one of the industry’s buzziest topics Tuesday at Advamed’s The MedTech Conference: artificial intelligence.

The Food and Drug Administration has authorized nearly 1,000 AI-enabled medical devices, and Health Canada has authorized hundreds. But there are still several unanswered questions about the technology, including how to monitor the performance of algorithms that can adapt over time and how to regulate generative AI tools. 

FDA and Health Canada regulators shared their thoughts about these challenges and provided a glimpse at future regulations during panel discussions at the medical device conference. 

One of the FDA device center’s top priorities for its fiscal year 2025 is a draft guidance on lifecycle management and premarket submission recommendations for AI-enabled device software functions. 

“The problem that we were seeing is that people were very excited about the capabilities, not necessarily what happens once you actually develop and integrate the product itself,” said Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence. 

The agency highlighted a need for this approach in a JAMA article published Tuesday, noting that, “Given the capacity for ‘unlocked’ models to evolve and AI’s sensitivity to contextual changes, it is becoming increasingly evident that AI performance should be monitored in the environment in which it is being used.” 

One challenge is how to monitor an algorithm’s performance and who should be responsible for that task. For example, health systems could take on this role, but their clinical information systems are currently unable to monitor the ongoing and long-term safety and effectiveness of these interventions, the FDA wrote. 

One example of postmarket review by a manufacturer involves the first sepsis detection tool authorized by the FDA. The agency established a special control requiring the manufacturer, Prenosis, to provide a postmarket performance management plan showing how the product continues to work, given the risk of bias for that product type, said Jessica Paulsen, associate director for digital health at the FDA’s Center for Devices and Radiological Health.

Currently, all AI/machine learning devices authorized by the FDA are locked, meaning the algorithms cannot be changed without the FDA’s approval. However, Congress gave the FDA authority in 2022 to authorize certain pre-specified changes to medical devices through a policy called predetermined change control plans (PCCPs): documentation describing what modifications can be made to a device and how those modifications will be assessed. 


From left to right: Cassie Scherer of Medtronic, Marc Lamoureux of Health Canada, Troy Tazbaz of the FDA and Diane Johnson of J&J.

Elise Reuter/MedTech Dive

Canada to implement PCCP for AI

Canada’s medical device regulator plans to issue guidance on machine learning-enabled medical devices after reviewing feedback from a draft shared in 2023. Marc Lamoureux, manager of the Digital Health Division for Canada’s Medical Devices Directorate, said during a conference session that the final guidance will be posted “in the coming months.”

The guidance covers bias, representativeness of training data and transparency, Lamoureux said. It also includes predetermined change control plans, which the regulator is now accepting. 

The regulator can also place conditions on the license for certain medical devices to make sure performance is monitored appropriately and that an AI model generalizes to the Canadian population.

Medical device regulators in the U.S., Canada and the U.K. have collaborated in the past on good machine learning practices, transparency and PCCPs. 

The FDA issued a draft guidance earlier this year on PCCPs and hopes to issue a final guidance soon, said Paulsen.