AI-guided decisions can devastate lower-income Americans. This group wants to change that

In 2016, Legal Aid of Arkansas was flooded with calls. People on a Medicaid program that provided in-home care assistance were having their caregiving hours cut. Reducing the hours for somebody with quadriplegia or cerebral palsy was devastating, said Kevin De Liban, an attorney with the organization at the time.

“People are lying in their own waste, people are getting bed sores from not being turned. They’re being totally shut in and shut out of the world — and they don’t know why,” said De Liban. “When they ask the state’s nurse, ‘Why are you cutting my hours?’ the state’s nurse would just say, ‘It’s not me, it’s the computer.’”


The Arkansas Department of Human Services had implemented a new computerized tool for determining eligibility, but hadn’t told anyone. Disabled recipients of this care fought the state for improperly reducing their hours, and won.

“When I was a kid, my grandmother [had polio] and most often was in a wheelchair. And things like accessibility, how to get into and out of buildings, whether a bathroom was available, how to use it because she couldn’t always get on and off the toilet by herself — those were all real considerations that as a kid I was part of,” said De Liban. 

Eight years later, artificial intelligence has made deeper inroads into many aspects of life in the United States. A new report published Tuesday details how AI is often responsible for making crucial decisions that affect a person’s life. Health care decisions account for the bulk of this exposure — more than 73 million people interface with AI through Medicaid eligibility and enrollment processes — but similar technology is also used to make critical decisions about housing, schooling and employment.


A faulty automated system at Michigan’s Unemployment Insurance Agency wrongly accused 40,000 people of fraudulently receiving benefits. And child protective services agencies in several states have adopted controversial predictive tools, such as the Allegheny Family Screening Tool, to help decide which reports of child abuse or neglect to investigate, though critics allege these tools are racially biased.

“While popular discourse has recently centered on the newest versions of AI that generate answers, reports, or images in response to users’ questions or prompts, such technologies derive from a lineage of automation and algorithms that have been in use for decades with established patterns of harm to low-income communities,” writes De Liban, the report’s author and the founder of a new organization called TechTonic Justice, which hopes to help low-income Americans navigate this rapidly shifting landscape.

With the recent advent of generative AI, artificial intelligence is only poised to grow its reach. STAT has extensively documented how existing biases permeate the many uses of AI in health care and complicate lives as companies increasingly look to AI to increase profits. Federal oversight of AI is nascent and liable to change with the reelection of former President Donald Trump. STAT spoke with De Liban about why he penned this report, how people are affected by this issue, and how TechTonic plans to provide relief to people unjustly affected by AI decision-making.

This interview has been edited for clarity and length.

What are you hoping to achieve with this report on how AI decides how low-income people “work, live, learn, and survive” in the United States?

This report is, to my knowledge, the first comprehensive look at all the ways that AI is used to make decisions in the lives of low-income people, to quantify how pervasive it is, and to explain the harm it causes and the way it’s different from human decision-making. It lays out a path for what needs to change for there to be anything approaching meaningful accountability or justice with respect to AI’s usage.


All this hype about generative AI and GPT — that’s not really the thing to be concerned with right now. What we need to do is look at the injustices that AI is perpetrating, come up with ways to minimize the harm, and create long-term structural change that prevents the harm from ever happening in the first place.

After you first experienced this kind of issue while working at Legal Aid of Arkansas, what happened?

I started getting calls from advocates in other states and even in Europe who were facing similar battles, mostly around public benefits. And at the time no central resource existed. So with my colleagues from the National Health Law Program and Upturn, we formed something called the Benefits Tech Advocacy Hub, which was focused on helping advocates work through issues related to public benefits. The hub has been successful, but we realized it’s not enough, because there are so many more uses of AI and algorithms beyond public benefits: housing, employment, schooling.

Your report spends a lot of time detailing how “it’s the computer” is increasingly the answer for who’s making the decisions that govern people’s lives. But AI affects many people’s lives, regardless of class or status. Why focus on low-income people, specifically?

There’s all these cascading harms in the lives of low-income people, right? What happens if a big-box employer’s AI technology decides you’re not productive enough, and then suddenly maybe you qualify for more public benefits, where you’re gonna face an AI system that denies you or is hard to navigate. Or maybe you fall behind on rent and then can’t get an apartment because a tenant screening algorithm says you’re not gonna be a good bet. And maybe that increases the chances that a child welfare investigation is opened up about you. People with low incomes are exposed to this incredibly dangerous technology in all fundamental aspects of life, and they have less ability to absorb the fallout from the harm it causes and fewer resources to fight it.


You mention how this shows up in Medicaid, private insurance and many other health care-related domains. Why is AI so prevalent in these contexts?

Health care, it’s all about restricting utilization and cutting benefits. Whether it’s a state or a private insurer, they want to keep costs down, and AI is the best modern way to do that. I think [that’s] the reason you see so much AI used in health care. It’s the perfect tool for the job because it has this veneer of objectivity while still being a way to make a decision about what somebody needs.

AI has become an incredibly broad term that could mean everything from holding conversations to generative AI that can spit out full pictures and renderings. How are you defining it in this report? 

I like the very broad definition: some sort of machine or computer-based system that takes input or initial information, processes or analyzes it, and puts out some sort of decision or recommendation. I’m not worried so much about how technically sophisticated the system is, because algorithms considered technically simple often cause the same kind of harm as those that are more complex. And they have the same issues, right? You can’t understand them easily. You might not know they’re happening. They’re incredibly difficult to fight. They are not sufficiently regulated by laws.

In some places, we don’t even know it’s being used, or it’s not easy to find out. Once you know it’s being used, it’s still hard to understand how it works. And even if you understand how it works, it’s hard to actually fight back against it. So there’s something sort of fundamental about the nature of AI that makes it a different problem from anything we’ve ever faced before.

How are you planning to push back against this AI “exposure”?

We are out to train as many legal aid providers, other frontline advocates, and affected communities as want training on how AI shows up in the lives of poor people. That’s number one. Then we’ll provide in-depth technical assistance for people who are fighting a particular instance.


As long as the equation is big tech and government vendors versus civil society, I don’t think the chances for meaningful protections are very good. If we can change that calculus to civil society plus an organized constituency of people who are affected by it and invested in some sort of better future, then I think we might be able to tip the balance.