Regulating artificial intelligence doesn’t have to be complicated, some experts say

Artificial intelligence has the potential to revolutionize how drugs are discovered and change how hospitals deliver care to patients. But AI also risks causing irreparable harm and perpetuating historic inequities.

Would-be health care AI regulators have been spinning in circles trying to figure out how to ensure AI is used safely. Industry bodies, investors, Congress, and federal agencies cannot agree on which voluntary AI validation frameworks will help keep patients safe. These questions have pitted lawmakers against the FDA, and venture capitalists against the Coalition for Health AI (CHAI) and its Big Tech partners.

The National Academies on Tuesday zoomed out, discussing how to manage AI risk across all industries. At the event — one in a series of workshops building on the National Institute of Standards and Technology's (NIST) AI Risk Management Framework — speakers largely rejected the notion that AI is a beast so different from other technologies that it needs entirely new approaches.
