In this exclusive video, Harlan Krumholz, MD, SM, of the Yale School of Medicine and Yale New Haven Hospital in Connecticut, discusses uses of large language models like ChatGPT in healthcare settings and how to use new artificial intelligence (AI) technology responsibly.
Krumholz is the director of the Center for Outcomes Research and Evaluation; the Harold H. Hines Jr. Professor of Medicine; and a professor in the Institute for Social and Policy Studies, of investigative medicine, and of public health.
The following is a transcript of his remarks:
I’m always surprised that so many people have heard of ChatGPT and other AI platforms, but there are still a lot of people who haven’t tried them yet, so they really don’t know what the experience is like and may not be clear on what these tools can do.
Hi, I’m Harlan Krumholz from the Yale School of Medicine, and I just wanted to take a few minutes to talk about these new models and to provide a little context in case you’re one of those who hasn’t had a chance to really try them out yet.
These AI models, like ChatGPT, are called large language models. Sometimes you hear people offhandedly say LLMs, large language models; it makes you sound like you’re in the know if you say that. Well, these are foundation models that are trained on massive, diverse datasets and can be applied to numerous downstream tasks.
For medicine, these can be used as chatbots for patients. What are chatbots? Chatbots are automated tools that can interact with patients to elicit information and even give advice without a human being involved. They can be used for interactive note-taking, or they can augment the performance of people doing procedures. They can generate reports, like radiology reports; they’re helping basic scientists identify targets for drug development; and they can be used for bedside decision support.
Sound too good to be true? Well, at least at this stage, it is a bit too good to be true. I mean, it’s a remarkable tool. I think we’re at a juncture in history; what these models can achieve is a step function above what I was seeing AI achieve before this. Now, this is about taking unstructured data, not data that fits into a case report form or a given field, but unstructured data like notes and text, making sense of it, and then feeding it back appropriately when people ask questions.
Now, you may know that there are a lot of people raising warnings about this AI. They think we’re moving too fast. There are even some people who think this could be the end of humanity. Now, lest you think this is only about people who are untethered from the reality of the world, these are actually very smart people, well-steeped in AI, who are expressing concerns, and governments are stepping forward and wondering who should really be able to control these kinds of tools and how they might be applied.
There’s another feature to these — well, let me say, it’s both a feature and a bug — that is, these models are capable of creativity. So you could say, “I want to write a note to someone and I want it to be done in the form of a Shakespearean sonnet.” Well, it can actually produce that, or in the style of anyone that you can think of. That’s creativity.
But every once in a while you’ll ask it a question about reality and, believe it or not, it will produce what are called hallucinations; that is, it can provide an answer that may even seem reasonable but is actually made up. This happened to a lawyer who was using it to develop references for a court case, and it turned out that some of the references were made up, quite to the embarrassment of the individual and perhaps to some professional harm.
So there’s a lot to sort out with these, and there’s a range of possibilities. Like I said, it’s a juncture in history, but I urge you to give it a try. For example, in clinic this weekend (I do a clinic with medical students) we took some of the more perplexing patients and put their symptoms into this. By the way, you just put in, “I have a patient with these symptoms.” You do want to be careful, though: most of this isn’t overseen under HIPAA [Health Insurance Portability and Accountability Act], so we were being vague but still put in the symptoms. It generated a pretty good differential diagnosis.
This can be used for a wide variety of teaching, but you do need some expertise to know whether what it’s giving you is real and whether you can trust and rely on it.
A recent New York Times article quoted me as saying, “You’d be crazy not to try this.” And I’ll say this too: I really urge clinicians and healthcare professionals to become familiar with this kind of technology. Give it a try; take it for a ride. Get familiar with what’s coming out, because I think on the horizon you’re going to see this in medicine, applied in a wide variety of ways.
You’re already seeing Epic make an agreement with OpenAI, which developed this platform, ChatGPT, to integrate it into Epic for many of the tedious tasks that need to be done, such as report generation, which could actually make lives better for physicians and make the chart more complete.
Now, we talk a lot about ChatGPT, but there are a lot of these large language models out there; ChatGPT is just one of them. And many people are suggesting that future generations are only going to get more powerful, and there will be competition among the tech companies and other newly emerging companies trying to leverage the rapid advances. In the last 6 months, people have said, there have been more advances in the field of AI than they’d seen in a decade.
So, I think we’re on the cusp of a different moment in history. These issues are going to be very important in medicine: What needs to be regulated? What can come into medicine that doesn’t need to be regulated? And what can we really trust? As clinicians, we don’t really need to understand exactly how it works, but we do need to know whether we can trust it. That’s going to take additional time and testing. We’re not quite sure how it’s going to fit, but I’m sure in the end this will help transform medicine, hopefully for the betterment of patients. We need to be involved to ensure that no unintended harm occurs as new technology gets introduced.
But I’m optimistic about the possibilities and, again, I urge you to give them a try.
-
Emily Hutto is an Associate Video Producer & Editor for MedPage Today. She is based in Manhattan.