I am both excited and terrified by the entrance of artificial intelligence into my primary care practice.
AI’s enormous potential to help clinicians become more focused on patients, available, diagnostically accurate, and efficient feels like a dream. Yet memories of the nightmarish introduction of the electronic health record into clinicians’ work lives loom like a dark cloud. Since EHRs swept into clinical life with passage of the American Recovery and Reinvestment Act of 2009, we have toiled away trying to make them as useful for clinical care as they are for billing, legibility, and data storage. That goal still seems like the wet pavement mirage on the highway: never getting closer as we speed along.
Leadership from both the clinical and business worlds is essential in keeping history from repeating itself as AI moves into patient care: there is enormous potential benefit, but some clinical fundamentals are at risk.
From the clinician’s-eye view, AI is already extremely helpful as a partner in diagnosis, having demonstrated impressive accuracy in a virtual care setting. In complex cases, the ability of AI to quickly review vast amounts of information may help add possible diagnoses that aren’t being considered. Clinicians are adept at discovering the unusual hiding beneath the familiar, but have human limitations. An indefatigable AI chatbot can help fight some impediments, like fatigue, cognitive biases, and fund of knowledge, and provide an asset that will only improve with unrestricted access to facts and published studies. For example, an AI platform in development at Penn Medicine, where I work, helps connect patients with rare diseases to FDA-approved treatments.
AI is also helping lead primary care toward urgently needed reduction in EHR documentation and clerical work, such as refilling prescriptions, reviewing voluminous test results, and typing responses to patient portal messages, tedious tasks that steal clinical presence and disrupt patient care, and are major sources of clinician burnout. Mining information within the recesses of patient electronic charts and presenting it in a useful form will be Shangri-La compared to the current state of scrolling through multiple entries — an especially daunting chore when treating someone with a long and complex medical history.
But there are also areas in which clinical partnership with AI may be problematic. During a recent collegial conversation, the provocative point was raised that AI chatbots may eventually be well suited to supplant clinicians in some circumstances, assessing patients’ more straightforward problems and offering standard, evidence-based treatment plans. Offloading these “simple” issues to a digital assistant could limit investment in a larger clinician workforce.
This scenario has quickly moved from speculative science fiction to an imaginable possibility. Sharing the care of straightforward patient problems with AI is appealing on many levels, but it raises an essential question: What, exactly, is a simple, straightforward medical problem?
One of the most important lessons I have learned in practice over the years is that simple complaints can often be portals to deeper worries: the patient with a urinary tract infection who also worries about a family history of kidney failure; the middle-aged woman with abdominal bloating who lies awake at night wondering if the fertility drugs she took in her 30s may predispose her to ovarian cancer.
The elemental skills that primary care clinicians use to reveal and address the full breadth of the stories our patients bring to us are uniquely human, and it’s hard to imagine delegating them to a machine.
Patients often do not share their underlying concerns with clinicians right away. Sometimes darting eyes, furrowed brows, and fidgeting or defensive posture alert us to other undisclosed issues. Clinicians must use well-honed communication skills, body language, and compassionate curiosity to help create a safe environment to draw them out. Probing questions like, “What are you most worried could be wrong?” when asked with sensitivity can be liberating for someone who feels nervous or uncertain.
It is within the richness of these interactions that trusting relationships form, and where the wonder and reward of primary care lies for many patients and physicians alike. Moreover, observing this interplay is revelatory for medical students who rotate through primary care offices on their clinical clerkships. As I often say during teaching sessions, signs and symptoms can become repetitive and routine, but patient stories are always unique and interesting. I’m not sure I want to share this aspect of care with AI, even if it becomes possible.
The way AI will be deployed in clinical interactions will depend on balancing business considerations with acknowledgment (or not) of the relationship-building and educational value of nuanced clinical conversations and whether AI is eventually able to duplicate that. Unchecked financial interests could favor workforce-sparing automation, time efficiency, and wage savings over the humanistic and diagnostic gain of discovering what lies beneath many “simple” patient complaints.
I am ready to embrace the astounding potential of AI in many aspects of primary care. But patients and clinicians should be active participants in how this partnership plays out, unlike our experience with the EHR. Better to iterate gradually and collaboratively with business and technology interests, rather than creating another winter of our discontent.
Jeffrey Millstein, M.D., is a primary care physician and regional medical director of Penn Primary Care.