Almost as soon as ChatGPT was released to the public, doctors began focusing on how they could harness artificial intelligence to improve patient care. Yet even as AI is providing doctors with increasingly sophisticated data, the information available to patients has stagnated.
The stark reality is that there’s far more information available today to guide you in betting a few dollars on the performance of your local sports team than in betting your life on the performance of your local hospital. Whether you have cancer, need a knee replacement, or face serious heart surgery, reliable comparative information is difficult to find and use. And what is available is often years old.
In contrast, ChatGPT and its generative AI cousin from Google, Bard, together offer a wealth of information unavailable elsewhere. You can identify the Chicago surgeon who does the most knee replacements and their infection rate, find the survival figures for breast cancer patients at a renowned Los Angeles medical center, or get recommendations for cardiac surgeons in New York City.
The difficulty, of course, lies in that word “reliable.” Generative AI has an unfortunate tendency to hallucinate, at times fabricating both facts and seemingly solid sources for them. As a result, some replies are accurate, some aren’t, and many are impossible to verify.
Yet this new and easy access to quality-of-care information that is at once tremendously helpful and frustratingly unreliable may, paradoxically, be good news. It shines a spotlight on the tantalizing possibility of using AI to give patients immense new informational power precisely at a time when AI regulation and legislation have become a policy priority.
By now, doctors and patients alike are accustomed to discussing treatment advice found on the internet; a ChatGPT or Bard suggestion is in many ways simply an authoritative-sounding summary of a Google search. But “What’s the best way to heal me?” is a very different conversation than “I found data about your past work, and I’m worried. Will you heal me or hurt me?”
If a patient can suddenly cite data about the doctor’s proverbial batting average — such as their infection rate or the hospital’s surgical mortality rate for “patients like me” — the doctor faces three very uncomfortable choices: plead ignorance of the actual figures, decline to disclose, or reveal and discuss accurate information.
Only the last choice, to be as open as AI but more reliable, will maintain the vital element of doctor-patient trust. In effect, AI is set to act as a transparency-forcing function, stripping away information control.
However, while a more equitable relationship with patients is something to celebrate, individuals and institutions that have benefited financially from controlling information, whether to protect market share or avoid bad publicity, will not easily relinquish their power. They, and others who mean well, may argue that the public needs to be “protected” from possibly inaccurate information about which doctors and hospitals provide good care and which do not. As long-time proponents of patient-centered innovation, we strongly urge a different course.
Rather than seeking to repress information, the response should be to ensure that any AI tool offering responses to clinical performance prompts is as accurate and understandable as possible. Patients must be full information partners and included in a collaboration among all stakeholders to define information roles, rules, and relationships more clearly in the digital age. Despite years of slogans about “consumerism” and “patient-centered care,” that hasn’t happened.
For example, while hospitals are required to post prices for 300 common procedures, that’s pretty much where informing the public about “value” has ended. Data about the other half of the value equation, the quality of care you’re getting for your money, are sparse at best. Medicare’s Compare website provides hospital death rates for just six specific conditions and complication or infection rates for only a handful more, and it’s unclear how recent the data are. Yet surely no patient being wheeled into surgery ever thought, “I may not make it out of here alive, but I got a great price!”
The risk that information may mislead or be misused is always present: Witness the controversy over the U.S. News & World Report hospital rankings. But fears of chaos or confusion cannot be allowed to justify delaying tactics that deny patients the extraordinary potential of AI. A revolution is underway.
Government, the private sector, and patients should all be part of a focused collaboration to ensure a policy and practice environment where this extraordinary new technology can be harnessed to improve the life of every American. Radical information transparency, uncomfortable as it may be, must be one of the highest priorities for medicine’s dawning AI information age.
Michael L. Millenson, a long-time health policy activist, is the author of “Demanding Medical Excellence: Doctors and Accountability in the Information Age.” Jennifer Goldsack is the founder and chief executive officer of the Digital Medicine Society (DiMe).