Artificial intelligence (AI) is earning recognition for its potential benefits to the healthcare sector at large, from improving care and increasing efficiency to lowering costs throughout the industry. Applications of AI already include detecting distress in hospital workers' conversations, improving providers' revenue cycle operations, transforming healthcare supply chains, and mitigating existing racial biases in medicine.
Much of the attention surrounding AI, of course, concerns generative AI programs such as ChatGPT, which use deep learning algorithms to learn patterns and features from data in order to write text, compose music, and create art, among other applications. For example, Claude, Anthropic's generative AI engine, can process the equivalent of about 75,000 words, roughly the length of an average novel, in a minute, according to a report from McKinsey. Harnessing this power to improve the patient experience and the overall operations of healthcare organizations seems like a logical step.
However, the public has concerns about AI use in healthcare, and about AI more broadly. One consumer study found that three out of four Americans do not trust AI in a healthcare setting, according to Carta Healthcare. Similarly, a Pew Research Center survey found that many Americans continue to harbor reservations about AI in general, with about 45% saying they are equally concerned and excited about its increased use in daily life. With appropriate planning and oversight, however, AI can be used safely in the healthcare environment.
Three Use Cases for AI in Healthcare
AI undoubtedly has a broad range of applications across healthcare, but here are just three ways it could bring value to the industry:
- Addressing staffing challenges: Burnout is a significant issue across the healthcare industry, affecting about half of all healthcare workers according to recent studies. Hospitals can implement AI tools to optimize staffing and patient allocation for patient navigators, who provide education, advocacy, and support to patients along their care journeys. Leveraging AI to solve complex resource-planning problems can help ease demand on healthcare workers while ensuring timely and empathetic care for patients (see the sketch after this list).
- Assisting with diagnosis and treatment: Healthcare data is often scattered across multiple sources, preventing clinicians from obtaining a full, 360-degree understanding of patient health. AI can alleviate this problem by aggregating data from personal and family medical history, lifestyle and habits, and even smart devices to better identify potential issues and recommend next steps for treatment. In fact, AI has a long history in medical research, dating back to the 1970s, when the MYCIN expert system helped identify treatments for blood infections. The careful use of AI in the diagnosis and treatment of conditions has significant potential to advance patient care.
- Communicating with patients: Some providers have begun to use AI to answer patients' simple questions. In one recent study, evaluators preferred AI-generated answers over physicians' responses roughly 78% of the time, finding them more empathetic and detailed. (The use of AI in this context should, of course, be appropriately disclosed to patients.) Such tools can also help reduce the volume of patient inquiries that healthcare workers must answer directly, again easing some of the burnout pressures identified above.
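To make the staffing idea above more concrete, here is a minimal, hypothetical sketch (not drawn from any vendor's product) of the kind of resource-planning logic an AI-assisted scheduling tool might automate: assigning each new patient to the least-loaded patient navigator with spare capacity. The Navigator class, field names, and greedy rule are illustrative assumptions only; a real system would also weigh acuity, language, specialty, and schedules.

```python
# Hypothetical sketch: balancing new patients across patient navigators
# by current caseload. Data and logic are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Navigator:
    name: str
    capacity: int                      # maximum concurrent patients
    assigned: List[str] = field(default_factory=list)

def assign_patients(patients: List[str], navigators: List[Navigator]) -> None:
    """Greedy baseline: each patient goes to the least-loaded navigator
    that still has spare capacity."""
    for patient in patients:
        open_navs = [n for n in navigators if len(n.assigned) < n.capacity]
        if not open_navs:
            raise RuntimeError("All navigators are at capacity; escalate staffing.")
        target = min(open_navs, key=lambda n: len(n.assigned))
        target.assigned.append(patient)

navigators = [Navigator("Alice", capacity=3), Navigator("Ben", capacity=2)]
assign_patients(["P1", "P2", "P3", "P4"], navigators)
for n in navigators:
    print(n.name, n.assigned)
```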
What to Consider Before Adopting AI
Of course, AI is not a panacea for healthcare. Applied correctly, it offers numerous benefits, but providers should keep the following three considerations in mind before adopting it:
- Staff education and training: Organizations should invest in comprehensive training programs that teach staff the responsible use of AI within healthcare applications. It is also important to form interdisciplinary teams that collaborate on identifying the most helpful applications and act as "AI ambassadors" to others in the organization. Above all, transparency and open communication with employees are key to making any new initiative a success and to mitigating the risks that come with misuse.
- Protecting patient privacy and security: Any new AI (or AI-enabled) application used in healthcare must deliver the utmost security to ensure the privacy of sensitive information. While AI in healthcare depends on patient data to improve its algorithms, personally identifiable information is protected under law and must remain protected even as AI systems are trained on broader datasets. One way to safeguard patient information is to de-identify data before it is entered into any AI-run system; a simplified sketch of this idea follows this list. The de-identification process is uniquely challenging, however, because as AI gets smarter it can connect seemingly anonymous data points and re-identify patients. De-identification and encryption methods will therefore need to become increasingly sophisticated to protect sensitive information.
- Weighing benefits against risks: As more cybercriminals adopt AI, healthcare data breaches are likely to increase, according to many data security experts. For example, bad actors can use AI tools to comb through large amounts of personal data stolen from multiple sources, enabling them to craft increasingly sophisticated and targeted phishing attacks on healthcare workers and potentially compromise highly sensitive patient and organizational information. Conducting a risk assessment, as with the adoption of any new technology, will give the organization a realistic view of the balance of risk and reward in adopting AI.
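As a simplified illustration of the de-identification step described above, the following hypothetical Python sketch strips direct identifiers from a patient record and scrubs obvious patterns from free-text notes before anything is sent to an AI-enabled system. The field names, record structure, salt, and regular expressions are assumptions for illustration; a production deployment would rely on a vetted de-identification service and formal privacy review under HIPAA.

```python
# Hypothetical sketch (not production code): removing direct identifiers
# from a patient record before it reaches any AI-enabled system.
import hashlib
import re

# Fields that directly identify a patient and should never reach the AI system.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn", "mrn"}

def deidentify(record: dict, salt: str) -> dict:
    """Return a copy of the record with direct identifiers removed and the
    medical record number replaced by a salted one-way hash, so records can
    still be linked internally without exposing the real MRN."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["patient_key"] = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()[:16]
    # Scrub obvious identifiers that may appear inside free-text notes.
    notes = cleaned.get("notes", "")
    notes = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED-SSN]", notes)
    notes = re.sub(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[REDACTED-PHONE]", notes)
    cleaned["notes"] = notes
    return cleaned

record = {
    "name": "Jane Doe", "mrn": "A123456", "ssn": "123-45-6789",
    "phone": "555-867-5309", "age": 54,
    "notes": "Patient reports fatigue. Callback 555-867-5309.",
}
print(deidentify(record, salt="org-secret-salt"))
```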
At the end of the day, AI tools that are thoughtfully implemented and monitored have the potential to benefit the healthcare industry in many ways. From easing staff burdens to assisting with diagnosis and treatment, AI can improve the overall patient experience. It is important, however, to assess and mitigate the risks, including safeguarding patient information and educating employees so the technology is used responsibly. With the proper strategy, AI can be a valuable tool that improves both the patient and staff experience.
About Heather Randall, PhD, CCEP
Heather Randall, PhD, CCEP brings two decades of payments industry experience to managing regulatory compliance and information security at TrustCommerce, a Sphere company. As CCO, she is responsible for ensuring that the company's products, platforms, and processes enable and support compliance with regulations and industry standards. Prior to joining TrustCommerce, she held compliance positions, most recently at TSYS.