Health officials outline industry role in AI oversight in JAMA article


Dive Brief:

  • Food and Drug Administration leaders have described their position on the regulation of artificial intelligence in a paper published Tuesday in JAMA.
  • In the paper, FDA Commissioner Robert Califf and two colleagues warn that the “scale of effort” needed to repeatedly evaluate AI models “could be beyond any current regulatory scheme.”
  • The health officials said that, while the FDA will play a key role in assessing AI, the pace of change in the sector means industry will need to “ramp up assessment and quality management of AI.”

Dive Insight:

Since the FDA approved the first partially AI-enabled medical device in 1995, activity involving AI in devices has grown steadily, accelerating in recent years. So far, the agency has authorized around 1,000 AI-enabled devices, with radiology and cardiology the most common fields of use.

With the figure poised to rise, Califf and his FDA colleagues Haider Warraich and Troy Tazbaz have set out the agency’s perspective on the regulation of AI.

The limitations of what the FDA can do on its own are a theme of the paper. In a section about keeping up with the pace of AI, the leaders explain how the agency’s total product life cycle approach and Software Precertification Pilot Program both reflect the need for adaptive regulatory schemes and illustrate the limits of the FDA’s powers.

Successfully developing and implementing pathways such as the software program may require the FDA to be given new statutory authorities, the leaders said, and “the sheer volume of these changes and their impact” suggests a need for industry to ramp up AI assessment. 

One section addresses the responsibilities of regulated industries. Regulation of AI “begins with responsible conduct and quality management by sponsors,” the leaders said. That position is in line with the regulation of all medical devices but poses specific challenges when applied to AI. 

“It is important to evaluate whether health benefits of AI applications accrue to patients when they are being used to inform, manage or treat patients,” Califf and his colleagues wrote. “Currently, however, neither the development community nor the clinical community is fully equipped for the recurrent, local assessment of AI throughout its life cycle.” 

The task of recurrently evaluating AI models may also be beyond any current regulatory scheme, the leaders said. Califf, Tazbaz and Warraich present the evolution of AI models as a “major quality and regulatory dilemma” and call for regulated industries, academia and the FDA to develop and optimize tools for assessing the ongoing safety and effectiveness of AI in healthcare.

Most of the paper refers to AI in general, but one section deals with the problems posed by large language models (LLMs) and generative AI tools such as ChatGPT. The FDA has yet to authorize an LLM. Use of LLMs can have “unforeseen, emergent consequences,” the leaders cautioned, because even models intended only to summarize medical notes “can hallucinate or include diagnoses not discussed in the visit.”

Califf and his colleagues want to avoid unduly burdening clinicians with oversight responsibilities, leading them to favor the use of tools that enable “better assessment of LLMs in the contexts and settings in which they will be used.”