

Imagine a physician and a patient sitting quietly together in an examination room. The physician's eyes are fixed on a computer screen as she speaks in brief sentences about elevated A1C levels and the challenges of managing blood sugar through lifestyle changes and medication. The patient nods along in anxious silence, holding an incomprehensible sheet of lab results and struggling to process her Type 2 diabetes diagnosis.
The entire interaction lasts five minutes. The physician, already 17 minutes behind schedule, moves on to her next patient, frustrated that she couldn't explain the diagnosis more clearly. The patient leaves with a new prescription and instructions for monitoring her blood sugar, then spends the afternoon trying to understand the implications of her condition and the details of her treatment plan.
Encounters like these form the foundation of a deepening crisis in American healthcare. One study found that trust in physicians and hospitals has plummeted from 71.5% in April 2020 to 40.1% in January 2024 — an erosion partly tied to the COVID-19 pandemic. Meanwhile, 60% of Americans grade the healthcare system C or worse, and 70% express a desire for stronger relationships with their healthcare providers (HCPs).
This erosion of trust occurs as advancements in artificial intelligence (AI) are changing how we view healthcare and look for information about our conditions and treatment options. Despite concerns surrounding data biases and potential errors, generative AI tools can help rebuild trust in medical establishments and strengthen the patient-provider relationship — if providers are committed to using these tools ethically and responsibly.
Building better clinical relationships
Clinicians are in a tough situation: stretched thin, they find it increasingly challenging, both logistically and psychologically, to maintain a high quality of care.
Many are turning to AI to help. A recent survey found that 76% of physicians have started incorporating large language models (LLMs) into their clinical decision-making.
The benefits of using AI in clinical settings are considerable. AI tools can handle documentation and treatment planning so clinicians can focus on patient care. Additionally, AI-powered ambient clinical intelligence can transcribe patient encounters in real time, allowing physicians who use these services to have more meaningful patient conversations.
Increasing the patient’s understanding
The moments after a medical appointment often bring more questions than answers. Patients struggle to recall their physician’s explanation, understand their diagnosis, or make sense of their treatment instructions.
Clear communication is vital to strengthening the patient-provider relationship. AI can rewrite medical information from an eleventh-grade reading level down to a sixth-grade reading level (the widely accepted standard for health literacy), giving patients a clearer understanding of their diagnosis and treatment.
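To make that concrete, here is a minimal sketch of how a readability gate for patient-facing text might work. It uses the open-source textstat package to estimate a Flesch-Kincaid grade level; simplify_with_llm is a hypothetical placeholder for whatever vetted, HIPAA-compliant model a provider chooses, not a real service.

```python
# Minimal sketch of a readability gate for patient-facing text.
# Assumes the open-source `textstat` package (pip install textstat);
# `simplify_with_llm` is a hypothetical placeholder, not a real API.

import textstat

TARGET_GRADE = 6.0  # sixth-grade reading level, the common health-literacy target


def needs_simplification(text: str) -> bool:
    """Return True when the text scores above the sixth-grade target."""
    return textstat.flesch_kincaid_grade(text) > TARGET_GRADE


def simplify_with_llm(text: str) -> str:
    """Placeholder for a provider-vetted LLM that rewrites text at
    roughly a sixth-grade level; wire up an approved service here."""
    raise NotImplementedError("Connect your organization's approved LLM.")


def prepare_patient_text(text: str) -> str:
    """Simplify only when the readability check says we must."""
    return simplify_with_llm(text) if needs_simplification(text) else text


if __name__ == "__main__":
    note = ("Your hemoglobin A1C of 8.2% indicates suboptimal glycemic "
            "control; we will initiate metformin and reassess in three months.")
    print(textstat.flesch_kincaid_grade(note))  # typically well above grade 6
```

In practice, the interesting work lives in the prompt and the review step: a clinician still checks the simplified text before it reaches the patient.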
One emergency room doctor tried unsuccessfully to explain to an elderly patient’s children why their treatment suggestions would worsen their mother’s condition, so he turned to ChatGPT. “As I recited the AI’s words, their agitated expressions immediately melted into calm agreeability,” he wrote.
Confusion and frustration are magnified when physicians and patients don't speak the same language. Language barriers have been shown to result in more frequent adverse events, reduced access to health information, and diminished care satisfaction. Beyond basic translation, AI-powered services can be trained to recognize cultural nuances and medical terminology across different dialects, and these capabilities are only improving.
AI can also help overcome fundamental barriers to access. Specialized medical chatbots, including one designed for cancer patients, may offer on-demand, cost-effective preliminary diagnostic guidance and health information to patients who lack immediate access to care. They can also alert patients when their condition requires in-person medical attention.
AI can therefore put knowledge directly in patients' hands, delivering customized content about conditions, treatments, and preventive care. Patients can arrive at appointments with a greater understanding of their illnesses, and physicians can verify diagnoses or find common ground with patients.
Detailed treatment explanations enable more informed healthcare decisions — and a feeling that your doctor is there for you.
Ensuring safety and privacy is crucial
Make no mistake: AI needs considerable human oversight and rigorous safeguards to be effective in healthcare settings. Clinicians must address privacy concerns and ensure the quality of any output, as well as the quality of the underlying data sources, if they wish to use AI to rebuild and maintain patient trust.
AI implementation must be systematic and thoughtful. More than 200 guidelines exist globally to direct appropriate AI use in healthcare settings, including some laid out by the U.S. Food and Drug Administration (FDA). Providers recognize that AI, and LLMs in particular, still require human oversight: 97% report consistently vetting LLM outputs before clinical application.
Any clinical AI tool must comply with stringent patient data protection and encryption requirements, including those mandated by HIPAA. Clinicians may also wish to obtain patient consent before using AI in order to maintain transparency; Deloitte found that 80% of patients want to know how their providers use AI in delivering care.
Once a physician begins using AI, its outputs must be reviewed continually to verify their accuracy. Errors must be tracked to improve the models. All staff members on a clinical team must undergo training to understand AI’s capabilities and limitations.
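As a rough illustration of what that review loop could look like, the sketch below logs each AI output with a clinician's sign-off and any correction so error rates can be tracked over time. The record fields are illustrative assumptions, not a description of any particular vendor's tooling.

```python
# Minimal sketch of an AI-output review log; all field names are
# illustrative assumptions, not any specific product's schema.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class AIOutputReview:
    """One clinician review of one AI-generated output."""
    output_id: str
    model_name: str
    ai_text: str
    reviewer: str
    approved: bool
    correction: Optional[str] = None  # what the clinician changed, if anything
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def error_rate(reviews: List[AIOutputReview]) -> float:
    """Share of outputs that were rejected or required correction."""
    if not reviews:
        return 0.0
    flagged = sum(1 for r in reviews if not r.approved or r.correction)
    return flagged / len(reviews)
```

A log like this gives a clinical team something concrete to audit, and a rising error rate is an early signal that a model needs retraining or retirement.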
Most importantly, the focus must remain on augmenting, rather than replacing, human medical expertise. Like any other tool, AI should help HCPs be more efficient, leaving them more time for meaningful, empathetic patient interactions. Providers must maintain the essential human elements of medical care to give patients what they need and want, and to preserve the heart of the patient-provider relationship.
Embracing a future with AI
Consider again the physician and diabetic patient in that examination room. AI now offers tools to transcribe their conversation, explain complex lab results in clear terms, and provide the patient with understandable information about diabetes management. The physician spends less time documenting and more time answering questions. The patient leaves with confidence in her treatment plan and renewed trust in her provider's care.
As healthcare systems implement AI tools thoughtfully and securely, they create opportunities for stronger connections between clinicians and patients, restoring trust in medical care and improving health outcomes. Using models built on trustworthy, diverse data sets, supported by constant validation and improvement, will be critical to ensuring the best AI outcomes.
About Maria Vassileva, PhD
Maria Vassileva is the Chief Science and Regulatory Officer for DIA. Dr. Vassileva has decades of experience managing complex, multi-stakeholder biomedical research programs. She spent most of her career in the nonprofit sector, leading the Science Team at the Arthritis Foundation and working at the Foundation for NIH and the American Association for the Advancement of Science. She was also on the leadership teams of two health research organizations, serving as project director on multiple government contracts. Her areas of expertise include musculoskeletal, metabolic, and immunity and inflammation disorders, as well as patient engagement. She received her PhD in Biochemistry and Cell Biology from Johns Hopkins.
About Stephanie Rosner
Stephanie Rosner is the Scientific Program Manager of Artificial Intelligence for DIA, where she is dedicated to fostering ethical AI design and advancing technology with a human-centric approach. Rosner has held project management and business development roles at Mathematica Policy Research and Optum, working with stakeholders to ensure ethical and equitable outcomes and policies related to advancements in health projects.