The Food and Drug Administration’s responsibilities don’t end once a drug is approved and reaches the market: the agency continuously monitors products for safety issues after they’re widely available. And a group of researchers, including two from the FDA’s Center for Drug Evaluation and Research, thinks artificial intelligence could uncover more signs of those issues, drawing on electronic health records, social media posts, and clinical databases that reference certain drugs.
In an analysis in JAMA Network Open, the researchers suggest the agency could use large language models, or LLMs, to enhance Sentinel, its surveillance system for the drugs and devices it regulates. Sentinel draws on clinical records and insurance claims, and the agency uses its analyses to adjust drug labels, convene advisory committees, and disseminate drug safety communications, the authors noted.
AI, the authors suggested, could extract reports of drug safety events from a wider set of sources, including free-text data in electronic health records. But the approach carries risks, the biggest of which is hallucination: by generating false information, LLMs could overstate or understate the risks of certain products.
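The paper doesn’t include an implementation, but a minimal sketch of the kind of extraction step the authors describe might look like the following, assuming access to some generic LLM endpoint. The call_llm stub, the prompt wording, and the output schema are all hypothetical illustrations, not anything from the study:

```python
import json


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM call (a hosted API or a
    local model). Not part of the study; plug in a real client."""
    raise NotImplementedError("connect an LLM client here")


# Illustrative prompt: asks for structured (drug, event) pairs and
# requires a verbatim quote from the note as supporting evidence.
EXTRACTION_PROMPT = """\
You are screening clinical notes for possible adverse drug events.
From the note below, list each (drug, suspected adverse event) pair
as a JSON array: [{{"drug": ..., "event": ..., "verbatim": ...}}].
Return [] if none are mentioned. Do not infer events that are not
explicitly stated in the note.

Note:
{note}
"""


def extract_adverse_events(note: str) -> list[dict]:
    """Extract structured drug-safety mentions from free text."""
    raw = call_llm(EXTRACTION_PROMPT.format(note=note))
    try:
        pairs = json.loads(raw)
    except json.JSONDecodeError:
        return []  # discard malformed output rather than guess
    # Keep only pairs whose quoted evidence actually appears in the
    # note -- a simple guard against hallucinated extractions.
    return [p for p in pairs if p.get("verbatim", "") in note]
```

The verbatim-evidence filter at the end is one simple way a pipeline like this could mitigate the hallucination risk the authors flag: any extracted safety signal that can’t be traced back to the source text is dropped before it reaches a human reviewer.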