Ensuring Healthcare AI/ML Models Aren’t “Failed Science Fair Projects”

Andrew Eye, CEO & Co-Founder of ClosedLoop.ai

Artificial intelligence (AI) has become a board-level priority, dominating healthcare technology conversations. Yet despite widespread interest in the technology, the healthcare industry still lags when it comes to deploying and maintaining successful AI and machine learning (ML) models.

According to a survey by the Society of Actuaries, 93% of healthcare organizations (HCOs) believe leveraging predictive analytics is key to the future of their business. Yet a staggering 87% of data science projects never actually make it into production. HCOs implementing AI must also address healthcare-specific challenges, from defining episodes of care with dynamic healthcare terminologies to ensuring clinical adoption of predictions. AI models that fail to address these challenges never reach production and are no different from failed science fair projects: they present an idea and a potential solution but fall flat when it comes to actually achieving results.

There’s no room for failed science fair projects in healthcare. The stakes are too high, and resources are too scarce. Healthcare organizations need effective AI/ML models that enable them to target proactive interventions with greater efficiency, ultimately driving improved health outcomes and lowering costs. The right AI/ML applications should help HCOs anticipate adverse events or complications before they ever occur. This is no easy task, however: building effective predictive analytics capabilities requires a thoughtful process. Below are the core pillars for ensuring predictive models drive positive change.

Hone Your AI Focus
It’s important to start by sharpening the focus of a predictive analytics project; AI initiatives that try to boil the ocean from the outset are bound to fail. Unrealistic expectations about the project’s scope and unclear criteria for success will set the initiative up for failure. From the start, data scientists need to work closely with clinicians and healthcare leaders to identify specific, actionable use cases and ensure predictive models are developed with this context in mind. That means establishing a shared understanding among all parties of the problem to be solved, how models will be used in production, how results will impact outcomes, and how improvement will be measured.

The well-known SMART acronym used in goal-setting – which stands for specific, measurable, achievable, realistic, and timely – provides an excellent framework for achieving a more focused AI project from the start. Stakeholders and data scientists must set clear goals and expectations rather than attempting to solve broad issues with vague criteria for success. For example, consider the following two statements that both attempt to frame the same AI initiative:

“We need to use AI to reduce our inpatient admissions for asthma.”

vs.

“Compared to the national average of 5.5 per 10,000, our inpatient admission rate associated with poorly managed asthma is 8.4. To reduce our rate, we will implement the asthma management program recently published in Annals of Allergy, Asthma & Immunology, and we’ll identify patients for program enrollment with an AI/ML model that predicts the risk of an unplanned admission with asthma as a primary or secondary diagnosis code. Using the model to precisely target our intervention, we expect a 10% reduction in asthma-related admissions within six months of program implementation.”
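
To make the second framing concrete, here is a minimal sketch of how such an admission-risk model might be trained and used to target program enrollment. Everything in it is an illustrative assumption: the features, the synthetic data, and the model choice stand in for a real claims-based pipeline, not any particular vendor’s implementation.

# Minimal sketch of an asthma admission-risk model used to target
# program enrollment. Features, data, and model choice are synthetic
# illustrations, not a production pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical per-patient features derived from claims history.
X = np.column_stack([
    rng.integers(0, 6, n),    # ED visits in the past 12 months
    rng.integers(0, 12, n),   # rescue-inhaler fills in the past 12 months
    rng.integers(0, 2, n),    # controller-medication adherence flag
    rng.integers(18, 90, n),  # age
])

# Synthetic label: unplanned admission with asthma as a primary or
# secondary diagnosis within the next six months.
logit = 0.4 * X[:, 0] + 0.25 * X[:, 1] - 1.2 * X[:, 2] - 4.0
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank patients by predicted risk so the program enrolls those most
# likely to have an asthma-related admission.
risk = model.predict_proba(X_test)[:, 1]
print(f"AUROC: {roc_auc_score(y_test, risk):.2f}")
print("Highest-risk patients:", np.argsort(risk)[::-1][:5])

In practice, the labels would come from historical claims, and the ranked list would feed the enrollment workflow described in the goal statement above.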

AI/ML Models Need to Be Explainable
The purpose of predictive models in healthcare is to drive positive health outcomes. That can’t happen if clinicians, who are often the end users, don’t understand or trust the predictions and consequently don’t use them. It’s no longer enough for software vendors to offer “black box” AI solutions and proprietary algorithms that are inaccessible to customers. Clinicians must be able to understand how predictions are made.

Explainability at the population level isn’t sufficient; clinical stakeholders need granular, individual-level evidence to maximize the impact of their limited resources. For example, an individual may be flagged as high risk for heart failure in the coming months, but it may be difficult to understand what distinguishes them from the larger population. If AI-based models can surface the raw data from that patient’s health history, such as a decline in left ventricular ejection fraction, care teams will better understand impactability. This specific evidence is essential for targeting and personalizing interventions.
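
One common way to produce this kind of individual-level evidence is with SHAP values, which attribute a single prediction to the features that drove it. The sketch below is illustrative only: the heart-failure features and data are hypothetical, and SHAP is one attribution technique among several, not necessarily what any given vendor uses.

# Illustrative per-patient explanation using SHAP values. Feature names
# and data are hypothetical; SHAP is one common attribution technique,
# not a specific vendor's method.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = ["ejection_fraction", "bnp_level", "age", "prior_hf_admits"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] < -0.5).astype(int)  # synthetic: risk driven by low EF

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain one flagged patient

# Older shap versions return a list per class; newer ones return a
# single (samples, features, classes) array.
contrib = (shap_values[1][0] if isinstance(shap_values, list)
           else shap_values[0, :, 1])
for name, value in sorted(zip(features, contrib), key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.3f}")

Sorted this way, the output reads as patient-specific evidence: the factors pushing this individual’s risk up or down, in order of influence.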

Continuous Monitoring & Maintenance
Attention to a project can’t stop once it’s up and running; data scientists must continually monitor and audit a model even after it’s deployed. If a model goes unchecked, data quality issues, shifts in the underlying population, and the dynamic nature of healthcare terminologies (e.g., new drug codes appearing in prescription fill data as new drugs hit the market) and quality measure values can all degrade accuracy, leading models to make inaccurate and potentially harmful predictions.
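
As a sketch of what that monitoring can look like, the example below computes the Population Stability Index (PSI), a common drift metric, for a single feature. The data and the alerting thresholds are illustrative assumptions.

# Minimal drift check using the Population Stability Index (PSI).
# Data and thresholds are illustrative assumptions.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare a feature's production distribution against its training
    baseline; larger values indicate more drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    e = np.histogram(expected, edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, 10000)  # feature at training time
current = rng.normal(0.5, 1.2, 10000)   # same feature in production

score = psi(baseline, current)
# Common rule of thumb (an assumption, tune per model): < 0.1 stable,
# 0.1-0.25 investigate, > 0.25 consider retraining.
print(f"PSI = {score:.3f}")

Checks like this, run on every input feature and on the model’s output scores, are what turn “monitoring” from a slogan into an auditable routine.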

The pandemic is a textbook example of why models must be monitored and retrained. A respiratory-illness model trained only on 2019 data quickly became outdated once the pandemic began. Without retraining, such a model would produce inaccurate predictions, jeopardizing clinician trust in AI and potentially contributing to worse health outcomes.

The Future of AI in Healthcare
Right now, all eyes are on AI and its promises and drawbacks. Within the healthcare industry, there are specific, critical challenges that need to be overcome for AI-powered predictive models to succeed. Building models that make it off the science fair floor and into operation requires a strategic approach, a focus on explainability, and ongoing performance monitoring. With these tenets in mind, healthcare organizations and their data scientists have a promising opportunity to harness AI and ML for measurably better healthcare outcomes at a lower cost.


About Andrew Eye

Andrew Eye is the CEO and Co-Founder of ClosedLoop, the leading healthcare data science platform. His executive and entrepreneurial experience spans 20 years in B2C and B2B, including startups and Fortune 500 companies.