Psshh. Psshh. Psshh.
The steady push of air from a CPAP machine can be lifesaving for people with sleep apnea. It also gives health care professionals a stream of information showing whether a person is breathing through the night.
A new report from the Center for Democracy and Technology and the American Association of People with Disabilities shows how health technologies that rely on artificial intelligence and algorithmic systems can be a double-edged sword for people with disabilities.
It’s the latest report to document how a person’s identities can affect the care they receive from health technologies, and how such systems often struggle to adequately serve people from marginalized communities. Until recently, much of the research in this space has centered on race and gender, but more researchers are turning their attention to the 20% of Americans with disabilities.
“Technology is a new lever of discrimination, but what we’re seeing isn’t inherently new,” said Ariana Aboulafia, one of the report’s co-authors. “People with disabilities have faced concerns in the health care system for decades. That is worse for multiply-marginalized people with disabilities, for women with disabilities, for disabled people of color.”
For much of the 20th century, people with disabilities were shut away in segregated facilities. While recent laws and court decisions have shuttered many of these hospitals and integrated disabled people into society, many people’s health status requires near-constant oversight. The report offers recommendations for how providers, hospitals, and people with disabilities might navigate AI-powered technologies.
STAT talked with the report’s co-authors: Aboulafia, the disability rights in technology policy lead at the Center for Democracy and Technology, and Henry Claypool, a technology policy consultant at AAPD.
This interview has been edited for length and clarity.
Why did y’all write this report?
Ariana Aboulafia: People, both with and without disabilities, are interacting with AI and algorithmic technologies. Whether they know it or not is a different question. For people with disabilities, specifically, there’s a real risk of discrimination. Technologies are not being developed, deployed, or audited in ways that are inclusive of people with disabilities. Integrating them into potentially high-risk environments like health care creates an ecosystem that stands to be risky or potentially harmful for people with disabilities.
Henry Claypool: It’s been difficult for disabled people to really understand what the implications are for working with AI tools. It is helpful to just talk about it in really concrete ways, so that more [people] in our advocacy community can relate to how these automated decision-making tools can impact our population.
Why are technological systems struggling to meet the needs of people with disabilities?
Aboulafia: There are all sorts of reasons why training data sets are not properly inclusive of disability. One of them is absolutely stigma. Another is that there is so much variance in the definition of disability: if someone building out a data set asks, “do you have a disability?” without providing a definition, a person may very well have a disability but not know that they fall under that particular definition.
Another reason why data sets might be under-inclusive of people with disabilities is that people with disabilities are disproportionately in hospitals, in institutions, and disproportionately incarcerated. These are not areas where lots of data outreach is happening.
Is this just a problem with data collection? Or is the technology insufficient?
Aboulafia: Both. One good example is facial recognition. A lot of the time, facial recognition just won’t work on certain people with facial differences. And part of that is because the training data wasn’t inclusive of people with facial differences.
Let’s say that someone wants to adopt retinal scanning for something. But there’s no consideration of the fact that there are people who don’t have retinas, such as someone who has a prosthetic eye. The overarching concern that undergirds a lot of this is that people with disabilities aren’t being properly considered from the beginning.
Claypool: If somebody is using 800 catheters or something in a 90-day period, that’s often not what an AI tool is calibrated for. It’s calibrated toward meeting the needs of a population that doesn’t include disabled people. Without an audit of this nature, you’re turning on a tool without checking how it handles utilization for a population whose needs exceed what the tool is built for.
Who’s to blame for these faulty technology systems?
Aboulafia: The issues of collecting accurate disability data are widespread in government and among tech developers. Things like stigma are going to attach either way. Whether you’re talking about census data or a developer building out a training data set, having more people with disabilities who can bring awareness to this and help correct it is really the best way to ameliorate some of these concerns.
Claypool: It’s not going to be something that I think a federal remedy can address just by putting more money into the census and counting disabled people. It could help; certainly there are areas where we could definitely improve. But I think we still have a fair amount of work to do.
One of the most visible ways that these gaps show up in health care is through in-home monitoring systems. Are there any potential benefits for people with disabilities to have more of these systems?
Claypool: It’s a way to reach people. Transportation can be such a hassle, and public transportation isn’t always reliable. If people are having transportation issues, it’s difficult to get in to see a physician or other clinician, and these are tools that can really help keep a close watch on health status. People can watch their blood pressure and get regular reports.
CPAP machines are almost always monitoring people’s sleep events. Those data are fairly accessible to clinicians and can allow them to identify points in time when people are not breathing enough. And just think about all the people who live with diabetes, and how this technology has allowed them to stay on top of their blood sugar levels.
And what about the drawbacks?
Aboulafia: Any sort of monitoring technology that people are using in their own homes, from smart home surveillance systems to wearable technologies, tends to run footage or data through algorithms to check for problems. Technology sometimes relies on the internet or electricity, right? Let’s say you are really relying on this technology to take care of someone with a disability. And let’s say that person has an internet outage. All of a sudden, you don’t know what’s going on anymore.
Anytime you’re talking about surveillance, there’s a privacy concern, right? Because if you’re increasing surveillance, you’re decreasing privacy. There are absolutely privacy-related concerns with some of these technologies, particularly the in-home monitoring systems that may take footage and then run it through an algorithm. We recommend in the report that if people with disabilities do want to use these sorts of technologies, they should choose ones that come from their providers as opposed to third parties.
And any hospital or health care provider that wants to use AI should do so without the goal of replacing human providers.
AI is prone to “hallucinating,” basically making data up. How could this affect people with disabilities?
Aboulafia: As AI continues to proliferate throughout the health care system, both in administrative contexts and in some other contexts as well, there may be “administrative errors” that are found in your [electronic health records].
AI is being used in what was previously considered to be a lower-risk context within health care, for things like administrative transcription of visits. The recommendation to review your own EHR stands even for people without disabilities, because they may be able to catch mistakes made by AI software.
Are these systems being audited?
Aboulafia: It’s difficult to know the answer to that.
Sometimes audits are done on algorithmic or AI technologies, and then they’re marketed as being “bias tested.” But that audit may not have included disability. It may have included race, it may have included gender. It may have included race and gender. And so folks may genuinely believe that they’re implementing an algorithmic system that’s been “bias tested.”
In a perfect world, we’d have what’s called a pre-deployment audit and a post-deployment audit. For a lot of these systems, the ship has sailed on implementation. But that’s not to say there aren’t many more systems that people are considering implementing in high-risk settings.
What are some of the other forces that would impact the implementation of these technologies?
Claypool: There’s a shortage of direct care workers. To some extent, technology might be able to help at the margins there, with workers’ schedules. But this is by no means a measure that should be employed when you’re thinking about whether or not to make cuts to Medicaid.
If you take more money out of the system, you’re likely going to see some reduction in hours for a population that relies on what are called long-term services and supports. When states are making these decisions based on budget shortfalls, it can often be a really crude tool: they will dial down the number of hours that an individual is eligible to receive, but that doesn’t correspond with the person’s need. And so you’ll end up with people’s care compromised because they’re not getting enough hours to cover their needs.
Aboulafia: If [these technologies] are used in a way that replaces in-person care, that’s particularly problematic. One of the recommendations that we make is that these at-home monitoring technologies are not viable replacements for in-person care. That’s not to say that disabled people shouldn’t use them, but we consider them a supplement rather than a replacement.
It feels like there are a lot of steps that disabled folks have to go through to make sure they’re not getting scammed or monitored in an awful way in health care situations.
Claypool: Historically, this is the reality for disabled people. We’ve had fraught interactions with the health care system since its inception. We’re the classic example of someone who perhaps can’t be cured by a profession that’s designed to deliver people from their circumstances.
We are on our way to achieving more progress. If we work with the technology developers, we can have better outcomes. And I think that’s the spirit in which these best practices are offered up.