The AMA’s new president on medicine’s ‘AI era’ and the uncertain future of telehealth

Much has been made of Jesse Ehrenfeld’s career of firsts. The current president of the American Medical Association, inaugurated in June, has long advocated for safe and equitable care for sexual and gender minorities, leading to an inaugural National Institutes of Health award for his research in the area. The anesthesiologist was the youngest-ever officer of the Massachusetts Medical Society, and this year became the first openly gay president of the AMA.

Adding to that list: Ehrenfeld is the first board-certified clinical informaticist to take on the AMA’s top role — bringing a set of skills that could be particularly useful as medicine reacts to the rapid evolution of technology.


“It’s clear to everybody on the planet that we are entering into an AI era, and digital technologies offer so much potential to transform health care — not just for how physicians practice, but for how our patients experience it,” said Ehrenfeld. And as an informaticist, he said, he has “a deeper understanding of some of the foundational technologies that a lot of these platforms are built on.”

That may give him a leg up in discussions about policy and regulation as technologies rapidly evolve and are incorporated into medical practice, introducing new concerns about data use, safety, and privacy. As editor of the Journal of Medical Systems, a clinical informatics publication, Ehrenfeld has already issued editorials on everything from AI’s role in combating Covid-19 to the potential for blockchain in medical records.

As Ehrenfeld spends his presidential year tackling America’s health care crisis from the physician’s perspective, his experience as an informaticist could help inform digital solutions, from telehealth to back-end systems.


And, more pressingly, careful leadership could help prevent some of the errors of the past. “Frankly, the experience of electronic health records, how they were incentivized, how they were rolled out, has been a bumpy ride,” said Ehrenfeld, a member of the 2023 STATUS List. “We don’t want to repeat that mistake with other digital technologies, particularly those that are AI-enabled.”

STAT spoke to Ehrenfeld about three major areas of health technology — telehealth, electronic health records, and artificial intelligence — and the potential and progress yet to be made in each of them. “We want to make sure that these technologies work for everybody involved and are an asset, not a burden,” he said. The conversation has been edited for length and clarity.

Artificial intelligence is at the top of lots of health leaders’ minds, but what technology, other than AI, are you most excited about changing the practice of medicine?

So telehealth has exploded. During the pandemic, out of necessity, we made probably more progress in six months than we had in the previous 10 years. We’ve taken a technology that was limping along, hadn’t really been embraced, and found that with the right accelerants, we could really start to use it in some new and exciting ways.

What do you think are the most important reforms necessary in the practice of telemedicine to make physicians’ lives easier? 

One is we still have a lot of barriers to using these technologies. We’ve got these pandemic-related flexibilities through 2024, and we want to make sure that there’s legislation so that those flexibilities are permanent. It is so frustrating when you can’t plan, when you can’t organize a practice or a system because there’s uncertainty in the regulatory space. So we need to make sure that there is a regulatory framework, a pathway to payments, and that there’s not uncertainty that if you invest the time to redeploy your resources, to change your workflows, to set up your systems, in 18 months it’s all going to go away and you’ll be back where you started.

As a corollary, what are the most important reforms that are necessary to improve the quality of care for patients via telemedicine?

One of the challenges that we’re seeing now continues to be what I would call unfortunate incentives from third-party payers shunting patients to telehealth-only contracts. As telehealth has become integrated into physician practices, the perpetuation of separate telehealth networks is just not justified. It’s confusing. It threatens continuity of care and the patient-physician relationship. So cost sharing should not be used by third-party payers to incentivize care from certain providers. Reducing cost sharing for select telehealth providers who do not also provide in-person care inappropriately steers patients away from their current physicians. It’s fragmenting the health care system further and threatening continuity of care. So we really believe that telehealth should be a supplement to, not a replacement for, in-person provider networks.

How have the financial realities of practicing medicine influenced physicians’ choices to work for these telehealth-only networks?

Recently we released the latest in our physician practice benchmark survey, and physicians are less likely to work in a private practice than they were 10 years ago. That is because of economic, administrative, and regulatory burdens. Those pressures are driving physicians to shift away from traditional business models for medical practice and into employed arrangements and other kinds of settings.

The last time we talked, we focused on the invisibility of gender and sexual minorities in the way clinical data is often collected in the U.S. What progress has been made, if any, on the representation of those groups in data?

So, tremendous progress, but there’s more work to be done. In my health system, even though we’ve been collecting preferred name — which is important for lots of patients, not just our transgender patients — we didn’t make it visible. It wasn’t showing up on wristbands that were printed out, it wasn’t on work lists at the care-unit level. It was leading to misgendering and people being called the wrong names. We finally just fixed that last week, even though we’ve been collecting preferred name for a long time at my particular health system.

I think the data systems have gotten better, and the data collection is more consistent. But it’s not where it needs to be. There’s still a lot of progress to be made in making sure that the data we collect is actually brought to life and that we’re actually using these data to inform clinical care, inform clinical decision support, and help patients stay healthy.

Widening the vantage point on EHRs: So much effort has been poured into establishing federal guidelines for interoperability and establishing meaningful information blocking rules. Do you see all that work having a concrete impact on physicians?

You know, there are definitely certain moments where I am shocked that I log into my electronic health record and I can find information from a different system. There is definitely information flowing across information exchanges in ways that it didn’t 10 years ago. But it’s not seamless. And unfortunately, as more information gets pushed into a patient’s record, it becomes harder and harder to find the things that you need.

Let me give you an example. I’m an anesthesiologist, and I was meeting this patient in the pre-op holding area for surgery. I’ve gone into the electronic health record. I like to think I’m a pretty sophisticated user. I look at the notes, I look at their prior anesthesia records, surgery records, labs. I go in, I talk to the patient. We have a great conversation, I obtain informed consent for the anesthesia. I’m about to leave, and the patient says to me, “One more thing, doc. I just don’t want what happened the last time to happen today.” And so I pause and I say, “Well, ma’am, what happened the last time?” She said, “Well, they told me I had a cardiac arrest in the recovery room.”

So I go back and I look and there is no structured information about this arrest. It is buried in a nursing note in the chart in a place that I obviously didn’t look, didn’t think to look, or wouldn’t normally look. One would think that a cardiac arrest would be a relatively important event to be categorized and coded and flagged. But those experiences are the norm. And unfortunately, increasingly, it’s like looking for the proverbial needle in a haystack as you’re trying to understand what’s going on with the patient. It is a place where I think AI will be helpful as we try to sift through and lift up the most important details, particularly if there’s unstructured data that you’re trying to ingest as a clinician.

Of course we had to get back to AI. What forms and applications of AI do you think will change the practice of medicine the most in the next few years?

Well, 20% of practices tell us in our latest survey data that they’re using AI today. Most of them are using it for the unsexy back-office operations stuff. Supply chain, scheduling, billing, which are obvious applications. I think the more interesting places that have not yet come into the fold are things like helping support a clinical decision being made, which is really exciting to think about. I don’t believe that it’s too far off when every radiology film, MRI, CT scan is read by a machine first and then overread by a human — that seems to be pretty straightforward.

But even the most advanced algorithms and AI-enabled tools still can’t diagnose and treat diseases. And I think we’re all waiting to see what the FDA does, but the forthcoming regulatory framework for AI-enabled devices is expected to be much more stringent on AI tools that make diagnoses and recommendations, especially if it’s an algorithm that adapts and learns over time, these so-called continuous learning systems. So I think from a technological and a regulatory standpoint, where AI can excel and should be leveraged is in unburdening physicians, detethering us from our computers, allowing me to spend more time with my patients and getting rid of some of these administrative hassles.

You know, there was a widely reported study that ChatGPT passed the United States Medical Licensing Examination, and those large language models are great for a textbook patient, for solving a very narrow clinical question. But patients aren’t standardized questions. They’re human beings with thoughts, with emotions, with complex medical and social and psychiatric backgrounds. And they rarely follow the textbook. I’m excited about what these tools will do; we’re just now seeing what’s possible in the future. But I think we also have to be realistic about how these things will and won’t come into medical practice.

One of the hopes that came out of a recent resident workshop incorporating GPT-4 at a hospital is that there will be guidelines to help inform the appropriate usage of LLMs. What could the AMA’s role be there?

So there are two things that are really important. First, we have to have a regulatory framework that works and makes sure that we only have safe and effective FDA-regulated products in the marketplace. The second is that we’ve got to make sure that patients and physicians understand what these technologies do. And there are limitations. Large language models are designed to predict what you want them to tell you, but the challenge is that most of these publicly available approaches do not prioritize accuracy.

In my non-AMA life, I happen to edit an informatics journal, and I got a call from a researcher who said, “We’re just so confused, because one of these LLMs online is giving us these references from your journal.” They weren’t real references, they were hallucinations. If you know what the technology is doing, you don’t fall into that trap. But there are a lot of consumers, there are a lot of health care professionals, there are a lot of physicians who right now see this tool, but they don’t really understand what it’s doing or that it has those kinds of limitations.

We have an implementation playbook series, and we’re working on one for AI in practice because we know that there’s demand. We know that our members are trying to figure out how to implement these technologies in appropriate ways. So that is something that the AMA is going to be putting out soon.

What’s something less-known about AI in medicine that you want to be sure that people are aware of? 

We’re seeing a recent federal proposal that seeks to hold physicians solely liable for harms resulting from an algorithm if they rely on its output. We think that will be a market killer. Liability is a potential barrier to the uptake of AI. Liability ought to be placed with the people who are best suited to mitigate it. And that may be the developer, it may be the implementer. It often is not going to be the end user. So we need to figure out what happens when something goes wrong, who is liable when there is a problem with the output of an algorithm. And that is a really, really important open question right now.