When it comes to artificial intelligence (AI) in healthcare, policymakers must balance innovation with potential harms to patients and healthcare workers, lawmakers and witnesses said during a Senate hearing Wednesday.
“I have concerns because when we talk about the promises of AI, we need to also talk about its risks,” subcommittee chair Sen. Edward Markey (D-Mass.) said during the Subcommittee on Primary Health and Retirement Security hearing. “We have learned time and again that left to self-regulate, big tech puts profit over people almost every time. We cannot afford to repeat that mistake by not regulating artificial intelligence now.”
Markey kicked off the hearing with remarks on the potential dangers of unregulated experimentation involving AI in health. Unregulated AI technology could “fuel our next pandemic” or “supercharge existing inequalities in our healthcare system,” he said. Failing to produce effective policy on AI in healthcare could jeopardize patient privacy or lead to misdiagnoses or mistreatment of patients, he warned.
Markey promoted two bills he introduced: one that would allow HHS to respond to AI-powered biosecurity threats, and another that would stop companies from implementing untested technology without their workers or customers knowing.
“We don’t need big tech treating our healthcare system like a lab to experiment on patients and workers,” Markey said. “We need a healthcare system that prioritizes … heart rhythms over bots run by algorithms.”
Sen. Roger Marshall (R-Kan.), the ranking member of the subcommittee, agreed that regulation is needed to protect patients and healthcare professionals, but cautioned that over-regulation would stifle innovation and the promise of AI in healthcare.
“As I’ve always said, those closest to the industry know the challenges [and] they understand the opportunities and the risks the best,” he said. “They also know the most practical and impactful solutions as we look for guardrails that protect Americans but at the same time promote innovation.”
“After all, [AI] and machine learning [have] been making remarkable discoveries and improving healthcare for some five decades without much government interference,” Marshall added.
However, several witnesses invited to testify shared Markey’s concerns. Thomas Inglesby, MD, director of the Johns Hopkins Center for Health Security in Baltimore, outlined steps that Congress should take to prevent possible biological risks caused by future AI models.
“We only have one chance to get things right for each new open-source model release,” Inglesby said. “These measures taken together will reduce the risk of high-consequence malicious and accidental events derived from AI that could trigger future pandemics.”
Christine Huberty, a supervising attorney at the Greater Wisconsin Agency on Aging Resources, emphasized that AI needs to be transparent to ensure patient privacy and trust.
“It is unrealistic to eliminate AI completely from the healthcare system,” Huberty said. “If the machine itself can’t be dismantled, then patients should at a minimum have a clear view of its moving parts. And when the algorithm gets it wrong, patients need to be compensated and both the insurance companies and their subcontractors must be penalized.”
On the other hand, Keith Sale, MD, vice president and chief physician executive of Ambulatory Services at the University of Kansas Health System, attested to AI’s potential in alleviating healthcare provider burnout.
“There’s a great opportunity for AI technology to assist and remove that burden of documentation and administrative tasks that have become commonplace in healthcare, and are truly challenging our physicians and our healthcare workers, as you try and keep up with a growing demand of patient care,” Sale said. It should be considered a tool and “not something that should replace what I decide in practice or how I make decisions that affect my patients,” he added.
Subcommittee members emphasized their goal of striking a balance between protecting patients and allowing innovation to flourish, and each stressed the need for thoughtful, effective policy to ensure that AI in healthcare serves the patients and professionals who will use it daily. Markey noted that the testimony would help the committee move forward with drafting that policy.
“[AI] must be paired with a voice for workers in determining their own working conditions, more treatments and cures for all patients, and better access to healthcare,” Markey said. “Otherwise, we are innovating for the sake of profit and that isn’t really innovation at all. It is greed. We can act now to prevent the next cautionary tale.”
Michael DePeau-Wilson is a reporter on MedPage Today’s enterprise & investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.