Dive Brief:
- About 65%, or 1,696, of U.S. hospitals surveyed reported using artificial intelligence or predictive models integrated with their electronic health record (EHR) system, according to a study published in Health Affairs on Monday. The study used data from the 2023 American Hospital Association Annual Survey Information Technology Supplement.
- Predictive models can include an AI or machine learning component. Hospitals reported using the models for care-related decisions, such as predicting patients' health risks, and for administrative tasks including billing and scheduling.
- More hospitals need to check models for accuracy and bias using their own data, the study’s authors wrote. Of hospitals that reported using predictive models, 61% evaluated them locally for accuracy and just 44% assessed them locally for bias.
Dive Insight:
The use of predictive models has come into focus as regulators develop new policies to address concerns about transparency and bias in healthcare AI.
Earlier this week, the Food and Drug Administration issued draft guidance explaining what information developers should include in premarket submissions for AI-enabled devices.
A final rule by the Assistant Secretary for Technology Policy, formerly known as the Office of the National Coordinator for Health Information Technology, went into effect in January. The rule requires health IT companies to provide certain information about decision support tools, such as how the models were validated and approaches to reduce bias, and encompasses predictive tools that aren’t regulated as medical devices.
The study didn’t show how many predictive models used by hospitals were medical devices, but it described how the technology is being used and who developed it.
The tools were most commonly used to predict health trajectories or risks for inpatients (92%), identify high-risk outpatients for follow-up care (79%) and for scheduling (51%).
About 79% of hospitals that used predictive models said the models came from their EHR developer, while 59% used tools from other third-party developers and a little over half reported using self-developed models.
Hospitals that developed their own models were more likely to evaluate them locally for accuracy and bias, the researchers found.
Local evaluation is important because a model trained on one population’s data may not perform well in a different setting. Algorithmic bias can worsen health inequities by adding barriers to care or underrepresenting certain patient groups.
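The study doesn’t prescribe how such checks should be done, but as a rough illustration of what a local accuracy-and-bias evaluation can involve, the Python sketch below scores a deployed risk model against locally observed outcomes, both overall and by demographic subgroup. The column names, the subgroup variable and the 0.05 flag threshold are all hypothetical, not drawn from the Health Affairs study.

```python
# A minimal sketch of a local model check: compare a deployed model's risk
# scores against outcomes observed at this hospital, overall and by subgroup.
# Column names and the 0.05 gap threshold are illustrative assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate_locally(df: pd.DataFrame, group_col: str = "race_ethnicity") -> None:
    """Expects 'risk_score' (model output) and 'outcome' (observed 0/1)."""
    overall_auc = roc_auc_score(df["outcome"], df["risk_score"])
    print(f"Overall AUC on local data: {overall_auc:.3f}")

    # Simple bias check: does discrimination differ across patient subgroups?
    for group, sub in df.groupby(group_col):
        if sub["outcome"].nunique() < 2:
            continue  # AUC is undefined without both outcome classes present
        auc = roc_auc_score(sub["outcome"], sub["risk_score"])
        flag = "  <-- review" if abs(auc - overall_auc) > 0.05 else ""
        print(f"{group}: AUC {auc:.3f} (n={len(sub)}){flag}")
```

A check along these lines is one reason self-developed models may be easier to audit: the hospital already holds the local outcome data needed to run it.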
“It is concerning that 56 percent of these hospitals did not report evaluating their deployed models for bias and therefore were not systematically protecting patients from possibly biased or unfair AI,” the researchers wrote.
They also cautioned against the idea that administrative tools are lower risk than clinical tools, pointing to studies that showed most patients are uncomfortable with models used to predict bill payment or missed appointments.
A limitation of the study is that excitement about the potential of AI and concerns about algorithmic bias could have led hospitals to overstate their use or evaluation of AI models.
The authors concluded that independent hospitals with fewer resources need support to ensure the use of accurate and unbiased AI, and that “the growth and broad impact of providers’ self-developed models that are currently outside the scope of federal regulation could warrant additional consideration.”