FDA official sets out approach to AI in medical devices

Dive Brief:

  • Less than a week after the Food and Drug Administration published best practices for transparency in machine learning-enabled medical devices, a leader with the Center for Devices and Radiological Health shared more detail on how the agency is thinking about development and quality assurance for artificial intelligence.
  • In a Monday blog post, Troy Tazbaz, director of CDRH’s Digital Health Center of Excellence, said establishing quality assurance practices to ensure AI models are accurate, reliable, ethical and equitable should be a top priority. 
  • Tazbaz said solutions include continuously monitoring AI models before, during and after deployment, and identifying data quality and other issues before a model’s performance becomes unsatisfactory.
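For illustration, continuous performance monitoring of a deployed model can be as simple as tracking a rolling accuracy metric against labeled live cases and flagging the model for review when it slips. The sketch below is not drawn from the FDA post; the class name, window size and alert threshold are illustrative assumptions.

```python
# Illustrative sketch only: rolling performance monitoring for a deployed
# binary classifier. The window size and alert threshold are hypothetical.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window_size=500, accuracy_threshold=0.90):
        self.window = deque(maxlen=window_size)       # 1 = correct, 0 = incorrect
        self.accuracy_threshold = accuracy_threshold  # alert below this rolling accuracy

    def record(self, prediction, ground_truth):
        self.window.append(int(prediction == ground_truth))

    def rolling_accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def needs_review(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.accuracy_threshold

monitor = PerformanceMonitor()
for pred, label in [(1, 1), (0, 1), (1, 1), (0, 0)]:  # stand-in for live cases
    monitor.record(pred, label)
if monitor.needs_review():
    print("Rolling accuracy below threshold; flag the model for review")
```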

Dive Insight:

The FDA has clarified its thinking through guidance documents and standards as it regulates a growing number of medical devices with an AI or machine learning component. In 2021, the agency collaborated with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency to set out guiding principles for good machine learning practice. Last week, the agencies shared guiding principles on transparency, such as providing users with information on how an AI model came up with a result. 

In 2022, the FDA clarified which clinical decision support tools must be regulated by the agency as medical devices, noting that tools that predict the risk of sepsis or stroke should be under its purview. Last year, the agency issued a draft guidance on predetermined change control plans, which would allow developers to make changes to an AI model after it is marketed, within bounds agreed upon with the FDA ahead of time.

The FDA is also co-leading a working group with the International Medical Device Regulators Forum on AI/ML-enabled medical devices. 

AI has the potential to significantly improve patient care and medical professional satisfaction, advance research in medical device development, and enable personalized treatments, CDRH’s Tazbaz wrote. 

“At the FDA, we know that appropriate integration of AI across the health care ecosystem will be paramount to achieving its potential while reducing risks and challenges,” he added. 

The FDA’s Digital Health Center of Excellence wants to ensure that AI technologies used as medical devices are safe and effective, and to foster a collaborative approach to AI in healthcare.

One way of reducing risk is adopting standards and best practices for the AI development lifecycle, Tazbaz wrote. That approach would involve, for example, ensuring that data suitability, collection and quality match the intent and risk profile of the AI model being trained.
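As a concrete, purely hypothetical illustration, a developer might run automated checks that training data actually fits the model's intended use before training begins. The column names, thresholds and subgroup-coverage rule below are assumptions for the sketch, not FDA requirements.

```python
# Illustrative sketch only: pre-training data-quality checks.
# Thresholds, column names and the subgroup rule are hypothetical.
import pandas as pd

def check_training_data(df: pd.DataFrame,
                        required_columns: list,
                        max_missing_rate: float = 0.05,
                        min_group_fraction: float = 0.10,
                        group_column: str = "sex") -> list:
    """Return human-readable data-quality findings for review."""
    findings = []

    # Required fields must be present for the model's intended use.
    for col in required_columns:
        if col not in df.columns:
            findings.append(f"missing required column: {col}")

    # Missingness should stay within the tolerance set for the model's risk profile.
    for col in df.columns:
        rate = df[col].isna().mean()
        if rate > max_missing_rate:
            findings.append(f"{col}: {rate:.1%} missing exceeds {max_missing_rate:.0%}")

    # Subgroups should be represented well enough to assess equitable performance.
    if group_column in df.columns:
        for group, frac in df[group_column].value_counts(normalize=True).items():
            if frac < min_group_fraction:
                findings.append(f"subgroup '{group}' is only {frac:.1%} of the data")

    return findings
```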

The healthcare community could also agree on common methodologies to provide information to users — including patients — on how a model was trained, deployed and managed. 
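One lightweight way to share that kind of information is a structured summary, sometimes called a model card. The sketch below is a hypothetical example of what such a record could hold; the field names and values are illustrative, not a format the FDA or its partner agencies have adopted.

```python
# Illustrative sketch only: a minimal "model card" record for communicating
# how a model was trained, deployed and managed. All fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data_summary: str
    evaluation_metrics: dict
    known_limitations: list = field(default_factory=list)
    deployment_notes: str = ""
    monitoring_plan: str = ""

card = ModelCard(
    name="example-risk-model",
    intended_use="Flag adult inpatients at elevated risk for clinician review",
    training_data_summary="De-identified records from several hospital systems (hypothetical)",
    evaluation_metrics={"sensitivity": 0.93, "specificity": 0.88},
    known_limitations=["Not validated in pediatric populations"],
    deployment_notes="Runs nightly against the inpatient census",
    monitoring_plan="Rolling sensitivity reviewed quarterly",
)
print(card.name, card.evaluation_metrics)
```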

Tazbaz also outlined how the FDA is thinking about quality assurance for AI in medical devices, adding that the agency plans to issue future publications to add to the discussion. Those papers will address standards and best practices, quality assurance laboratories, transparency and accountability, and risk management.