Why AI may not help decrease radiologists’ burnout

You’re reading the web edition of STAT’s Health Tech newsletter, our guide to how technology is transforming the life sciences. Sign up to get it delivered in your inbox every Tuesday and Thursday.

How FDA categories may keep AI from relieving clinician burnout

In a counterintuitive result, using AI did not reduce radiologists’ burnout, according to a recent JAMA Network Open study of more than 6,700 radiologists in China. The surveys of burnout and AI use, conducted between May and October 2023, in fact found a positive correlation between the two.


The study authors noted that this association was particularly pronounced among radiologists with high workloads and those with low acceptance of AI, so it’s hard to tell how much those factors confounded each other.

But Keith Dreyer, a radiologist and chief data science officer at Mass General Brigham, has a possible explanation for this phenomenon.

Most of the 950-plus AI-enabled medical devices cleared by the U.S. Food and Drug Administration are imaging-related, and most of those imaging tools are classified as “CADt,” or computer-aided triage. Why? According to Dreyer, these are some of the “easiest” products to get through FDA review because they don’t require a comparative effectiveness study; they just need a performance test.


How might these tools contribute to burnout, rather than ease a physician’s workload? Under FDA restrictions, these triage tools can only tell the physician which patient scans to prioritize; they can’t show what they found suspicious on the scan, Dreyer said. If the AI tool were to highlight a particular spot on the image for the radiologist to take a closer look at, the AI would be doing the detecting or diagnosing, which is a different and higher-risk category of device.

It’s like TSA officials at the airport being told to screen a certain passenger first, but not being able to circle what the AI found suspicious, he said.

“You can imagine, one, that doesn’t improve your throughput because you have to read every case still,” he said. “And in fact, it slows you down because now […] you have to scour that image or images to try and rule out this thing, but it won’t tell you where it thinks it is.” Where a circled spot would be easy to confirm as a true finding or dismiss as an AI mistake, the triage AI, because of the odd strictures of its clearance, actually makes more work for radiologists.

For emergency department doctors, radiology AI tools may be better than no specialist consultation at all. But overturning decisions that non-specialists made based on AI suggestions can still impede workflow and communication and create more work, said Dreyer, who noted that Mass General actually removed an AI-powered medical image detection tool because of this issue.

FDA meeting on how to regulate generative AI: It’s very complicated

Speaking of the complicated ways that AI regulation impacts health care, last week, the FDA’s digital health advisory committee met for the first time to discuss how the agency might regulate AI.

The conclusion? Generative AI will stretch the bounds of the FDA’s current regulation structures.

When language models are probability based and give different answers to the same question, how do you evaluate the accuracy of a model? When health systems’ technology wealth levels already vary wildly, how do you institute post-market reporting that captures all adverse events? When generative AI models act like doctors but don’t learn the same way humans do, how do you test their abilities?

What does substantial equivalence look like for generative AI devices? And who will be motivated to pay for all of this?

Read more about our team’s takeaways here.

Government watchdog: OCR hasn’t checked for HIPAA compliance for the last seven years

The Office for Civil Rights hasn’t conducted a HIPAA audit on a health care organization since 2017, the Department of Health and Human Services’ Office of the Inspector General says in a report released this week.

The watchdog office also said that the audits OCR had conducted before then, which are mandated by the 2009 HITECH Act, checked compliance with at most eight of 180 HIPAA Rules requirements and were inadequate for determining whether health care organizations are actually equipped to safeguard patient information.

In the same report, OCR acknowledged that large data breaches increased 35,950% from 2010 to 2023, while its staff, and with it its ability to keep tabs on the industry’s security, decreased. Read more from me here.

Neuralink’s new brain implant trial

Neuralink on Monday announced a new feasibility trial to see whether its brain implant can control a robotic arm. The company plans to cross-enroll participants from its ongoing PRIME study, which is investigating how an implanted chip can help patients control digital devices. Other teams in the brain-computer interface field are testing similar devices.

The company also recently announced its first international trial: it has started recruiting in Canada and plans to implant the device in six people who have difficulty moving their arms and legs.

Earlier in 2024, Neuralink faced questions about the device’s accuracy after officials revealed that some electrodes had moved following implantation. So far, two people in the United States have received the implant. —Timmy Broderick

Other news:

  • Epic Systems and Particle Health have agreed on a schedule for motions to dismiss, and replies to those motions, in their ongoing antitrust lawsuit. The last of those filings is due by January 24.
  • Cradle, a generative AI protein engineering platform based in Amsterdam and Zurich, has closed a $73 million series B.
  • Device maker Philips has expanded its partnership with Amazon Web Services, both to offer its diagnostics tools in the cloud and to use Amazon Bedrock foundation models to power generative AI workflows, such as “conversational reporting”: speaking aloud conversationally to generate a structured report.

What we’re reading

  • Inside Clear’s ambitions to manage your identity beyond the airport, MIT Tech Review
  • Watch: Inside the telehealth-fueled GLP-1 alternative market, STAT
  • Can AI make medicine more human?, Harvard Medicine Magazine
  • UnitedHealth pays its own physician groups considerably more than others, driving up consumer costs and its profits, STAT