Microsoft’s Peter Lee says ChatGPT shouldn’t be used for initial diagnosis

SAN FRANCISCO — Surging interest in generative AI among medical professionals since ChatGPT's launch is perhaps a testament to its potential, but it could also lead clinicians and patients to experiment with it before there is wider consensus on how to navigate its biases and other pitfalls, Microsoft's head of research, Peter Lee, said Thursday at STAT's Breakthrough Summit.

About a month after OpenAI, in which Microsoft has a significant financial stake, launched its chatbot on November 30, 2022 (which happened to be Lee's birthday), he got a note from a doctor friend who said he'd been using it so his receptionist could file after-visit summaries for patients. While Lee and a handful of other doctors at Microsoft had been impressed with GPT-4's capabilities, they knew it had weaknesses.

“The emotions were really mixed. I mean, we were thrilled,” he said, but then “just within minutes there’s a feeling of dread. We had already come to the realization that GPT-4 would be prone to issues — issues of bias, hallucination, mathematical and logical errors. And so, there’s an immediate question: ‘Should we allow this to happen?’”
