As the buzz around ChatGPT and generative AI continues, health leaders need to look past the excitement around these new tools and keep patient trust a top priority. Six out of ten patients are reportedly uncomfortable with providers relying solely on AI for specific healthcare needs, and while ChatGPT's responses can appear impressive, they remain prone to errors and misinformation that can lead to patient dissatisfaction and worse health outcomes.
Using patient information properly and securely calls for a cautious approach to implementing generative AI. Though useful, these tools will not solve all of healthcare's problems. A healthy mix of human touch and precautionary measures, such as stronger AI regulations around patient data, is a safer way to create meaningful improvement across the healthcare ecosystem.
Acknowledging the existing limitations of generative AI tools like ChatGPT
Currently, AI models like ChatGPT can primarily reference the data they were trained on, and they lack much of the cognition needed to understand meaning. One survey found that among ChatGPT-generated responses used to develop medical content, 47% were fabricated, 46% were authentic but contained inaccuracies, and only 7% were completely authentic and accurate.
AI models like ChatGPT also face a host of other complications around language and meaning that must be addressed to avoid negative consequences. When these models are asked a question using complex, specific wording, the response may lack both real reasoning and accuracy, which can be detrimental to a patient's health. Only 38% of U.S. adults believe that using AI like ChatGPT to diagnose diseases and recommend treatments would lead to better health outcomes, while 33% believe it would make outcomes worse. If someone asks about a rare condition that falls outside ChatGPT's training data, for example, its responses could contribute to a misdiagnosis and harm the patient's health.
Adopting caution to improve overall patient outcomes
Patient trust is paramount, and right now that trust is lacking. Reportedly, 50% of patients are not fully sold on medical advice delivered through AI, but they are open to combining the tool with guided human input, striking a balance between utility and safety in handling their personal health information. Providers can apply their medical training and background, as well as their innate understanding of people, to weed out inaccuracies in ChatGPT's responses. The right combination of AI and human interaction can meaningfully improve a patient's overall healthcare journey.
Another way to improve a patient's experience and outcomes when working with these tools is to tailor generative AI models, such as chatbots, to a specific health system's needs. Done well, this is a win-win for patients and providers: it reduces administrative burden by streamlining simple tasks like appointment scheduling, pre- and post-visit intake forms, and billing and statements, and it gives patients a seamless, timely way to get answers to non-urgent healthcare questions. A simple sketch of that kind of scoping follows.
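As an illustration only, the sketch below shows one way a health system might keep a generative AI assistant scoped to routine administrative requests while routing anything urgent or clinical to a person. The keyword lists and the generate_reply() helper are hypothetical placeholders for whatever approved service a health system actually uses, not a real product or API.

```python
# Minimal sketch: scope a generative-AI assistant to non-urgent,
# administrative requests. Keyword lists and generate_reply() are
# hypothetical placeholders, not a real health-system API.

ADMIN_TOPICS = {"appointment", "reschedule", "intake form", "billing", "statement"}
URGENT_FLAGS = {"chest pain", "bleeding", "overdose", "suicidal", "can't breathe"}

def route_message(message: str) -> str:
    """Send only routine administrative questions to the AI assistant;
    escalate anything that looks urgent or clinical to a human."""
    text = message.lower()
    if any(flag in text for flag in URGENT_FLAGS):
        return "escalate_to_clinician"       # never let the model answer urgent messages
    if any(topic in text for topic in ADMIN_TOPICS):
        return generate_reply(text)          # hypothetical wrapper around the approved model
    return "hand_off_to_front_desk"          # default to a person when unsure

def generate_reply(text: str) -> str:
    # Placeholder: in practice this would call the health system's
    # approved generative-AI service.
    return f"[assistant drafts a response about: {text}]"

print(route_message("Can I reschedule my appointment for Friday?"))
print(route_message("I have chest pain and feel dizzy."))
```

The point of the design is not the keywords themselves but the default: when the assistant is unsure, the request goes to a person rather than to the model.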
The current regulatory landscape
There are still gaps across the industry in regulating generative AI, like ChatGPT, in healthcare. Those gaps can harm patients and risk data breaches and public exposure of private, sensitive health information protected under HIPAA. There are, however, existing ways to use these tools in a HIPAA-compliant manner that secure patients' personal data and add a layer of protection and peace of mind; one basic safeguard is illustrated below.
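As a minimal, illustrative sketch of one such safeguard: mask obvious identifiers before any text leaves the health system's environment. The patterns below are assumptions chosen for illustration; genuine HIPAA compliance also depends on business associate agreements and full de-identification, not just pattern matching.

```python
import re

# Illustrative only: mask common identifiers before a prompt is sent to an
# external generative-AI service. Real HIPAA compliance requires far more
# than pattern matching (e.g., BAAs and proper de-identification).

PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN":   re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def mask_identifiers(prompt: str) -> str:
    """Replace common identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(mask_identifiers("Patient MRN: 84213, call 404-555-0199 about billing."))
# -> "Patient [MRN], call [PHONE] about billing."
```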
While there is still much to learn about generative AI models, these tools can help healthcare workers if they are introduced and used properly. If providers exercise caution with these tools, checking responses for data errors and misinformation, we can begin to see improvement in overall patient outcomes and satisfaction.
About Matt Cohen
Matt Cohen, Director of AI at Loyal, is passionate about improving the healthcare experience through intelligent software. Before joining Loyal, he spent several years performing research in areas including machine learning and speech and audio signal processing at MIT Lincoln Laboratory and the University of Maryland, College Park. He worked as a software engineer and application support engineer at MathWorks, focusing on machine learning, and was initially hired at Loyal as a Software Engineer, Applied Machine Learning. As Director of AI, Matt oversees the company's machine learning strategy and the AI team, finding new ways to "…provide technology that guides individualized healthcare actions at scale, and creates efficiency within operations."