Stop allowing MA plans to use AI to deny care without review, lawmakers urge CMS

A bipartisan group of lawmakers is urging the Biden administration to prohibit Medicare Advantage insurers from using artificial intelligence tools to deny care until the government completes a systematic review of the tools' accuracy and their effects on patients.

In a letter sent Tuesday, the lawmakers cited a STAT investigation in calling on the Centers for Medicare and Medicaid Services to beef up oversight of AI and algorithmic tools that discriminate against old and sick patients.


“These tools apply a generalized need for care to an individual beneficiary’s situation, resulting in generalizations instead of person-centered approaches to care, which is antithetical to the mission of the Medicare program,” the lawmakers wrote. The letter was written by Sen. Elizabeth Warren (D-Mass.), Rep. Jerry Nadler (D-N.Y.), and Rep. Judy Chu (D-Calif.). It was signed by 52 lawmakers, including Sen. Mike Braun (R-Ind.).

Last year, Warren publicly called for the government to halt the use of these tools until it knew whether they complied with Medicare’s coverage rules. Nadler, Chu, and other Democratic House members also wrote a letter last year that said the government wasn’t doing enough to police this technology.

The letter on Tuesday also called on CMS to establish a process for reviewing AI products used to make coverage decisions, instead of deferring to insurers’ own internal assessments about the accuracy and validity of tools they are using to issue denials.


“Given that we do not know what inputs are used for the algorithms and AI tools currently being used, it is difficult to know the accuracy of the information they generate and whether the inputs comply with the regulations,” the letter stated.

A CMS spokesperson did not immediately respond to a request for comment.

The letter comes after a STAT investigation in 2023 found that Medicare Advantage insurers were using an AI tool to cut off care to patients struggling to recover from grave illnesses and injuries such as cancer, severe strokes, and amputations. The investigation found that the owner of the algorithm, UnitedHealth Group, pressured its own clinicians to cut off rehabilitative care so that patients' stays fell within 1% of the days projected by the algorithm.

It also exposed flaws in the algorithm, including its reliance on data from past patients with different backgrounds, living situations, and medical issues to predict the care future patients will need. Because those comparisons are never exact, using the predictions to cut off care invites bias and bad decisions that can harm patients physically and financially.

UnitedHealth and its subsidiary, NaviHealth, are facing a class-action lawsuit stemming from their use of the algorithm to deny care, as is Humana, another large Medicare Advantage insurer that has used the UnitedHealth-owned algorithm. Meanwhile, UnitedHealth has eliminated the NaviHealth name and rebranded the company as Optum Home & Community Care.

CMS has already finalized new rules to crack down on insurers' use of algorithms and AI tools in making coverage determinations, and has promised to step up audits of their denials this year.

But the lawmakers said those steps do not go far enough to prevent abuse, and they called on the agency to take several additional actions to monitor the use of the tools within Medicare Advantage plans, which have become a profit center for insurers.

In addition to conducting a “systematic review” of these products, the lawmakers urged CMS to clarify how the agency distinguishes between tools that account for patients’ individual circumstances and those that base decisions on generalizations. They noted that insurers often stick with an algorithm’s initial prediction about a patient’s length of stay in a medical facility, instead of reassessing patients for changes in their condition that might affect their need for additional care.

The lawmakers also called on CMS to issue more detailed guidance on when insurers can use their own internal criteria to make determinations about a patient’s need for services, asserting that continued “ambiguity” is leading to confusion and opaque decisions.

They also proposed a two-week grace period before an insurer can reissue a denial after an initial denial has been overturned on appeal. STAT’s investigation found that UnitedHealth Group was instructing its frontline clinical staff to immediately issue new denials to patients after they won appeals, leaving families in a near-constant battle over coverage for severely ill relatives.

The lawmakers pointed out that, despite the stepped-up scrutiny from lawsuits and government investigations, Medicare Advantage insurers are continuing to use AI and algorithmic tools with no transparency into how they were developed.

“CMS has shared that current AI tools are not able to self-correct when an incorrect decision is made, yet plans continue to use these tools exclusively,” the letter stated, adding that CMS should take the steps proposed to “prevent future AI-related harms in health care.”