The Food and Drug Administration wants the developers of medical devices that rely on artificial intelligence to disclose much more detail about how their devices were developed and tested, and what must be done to guard against safety risks in medical settings.
In a new draft guidance, the FDA calls on makers of AI devices to describe the sources and demographics of the data used to train and validate their products, and to disclose blind spots and potential biases that might impair performance. The information would be included in applications submitted to the agency for approval.
Although the document is advisory and does not impose new rules on device makers, it aims to set a higher bar for companies that, until now, have gained approvals for AI products without fully describing their training, testing, and limitations. It remains to be seen whether the incoming Trump administration will endorse its recommendations, or how much cooperation the agency will ultimately get from industry.