BEVERLY HILLS, Calif. — Artificial intelligence is increasingly infused into many aspects of health care, from transcribing patient visits to detecting cancers and deciphering histology slides. While AI has the potential to improve the drug discovery process and help doctors be more empathetic toward patients, it can also perpetuate bias and be used to deny critical care to those who need it most. Experts have also cautioned against using tools like generative AI for initial diagnosis.
Brian Anderson is the CEO of the recently launched Coalition for Health AI, a nonprofit established to help create what he calls the “guidelines and guardrails for responsible AI in health.” CHAI, which is made up of academic and industry partners, wants to set up quality assurance labs to test the safety of health care AI products. Anderson hopes to build public trust in AI and empower patients and providers to have more informed conversations about algorithms in medicine. On Wednesday, CHAI shared its “Draft Responsible Health AI Framework” for public review.
But lawmakers have concerns over whether CHAI, whose members include AI heavyweights like Microsoft and Google, amounts to the AI industry policing itself, and other experts have supported alternative AI regulatory frameworks that are more localized.