Three out of five nurses said they don’t trust their employer to place patient safety as the most important factor when using artificial intelligence (AI) tools, according to a survey from National Nurses United (NNU), the largest union of RNs in the U.S.
Of more than 2,300 RNs and NNU members surveyed from January 18 through March 4, 60% disagreed with the statement, “I trust my employer will implement AI with patient safety as the first priority,” as detailed in a press release from the union.
Nurses said they are alarmed by the growing influx of unregulated AI tools in healthcare, including predictive models used to determine staffing assignments, computer-generated reports that replace nurse-to-nurse handoffs, and chatbots that replace triage nurses, which they argue undermine clinical judgment and compromise patient care.
“Nurses are not against technology,” said Deborah Burger, RN, president of the NNU, in a recent media briefing. “We embrace technology that enhances patient care and is used in conjunction with our own extensive education and clinical experience. However, we are opposed to this Wild West of unregulated and unchecked gadgetry parading as a panacea for all that ails healthcare.”
“We want the public to hit the pause button on [the] current reckless push by ‘Big Tech’ and healthcare companies to force AI technologies onto our patients, us, and healthcare workers without our knowledge or consent,” she added. “We will not allow our patients to be the guinea pigs.”
The NNU recently published the Nurses and Patients’ Bill of Rights: Guiding Principles for AI Justice in Nursing and Health Care, which underscores the need for safety, privacy, transparency, and high-quality person-to-person care, as well as the right to exercise professional judgment. “The burden of demonstrating safety should rest with developers and deployers, not patients and their caregivers,” the document notes.
In addition, hundreds of union nurses gathered outside Kaiser Permanente San Francisco Medical Center late last month to protest the center’s Integrated Care Experience, which highlighted the use of advanced analytics and AI.
In the NNU survey, 40% of nurses said their employer had introduced “new devices, gadgets, and changes to the electronic health records” in the past 12 months. Half of those surveyed said their employers use algorithms based on electronic health record data to determine patient acuity and need for nursing care.
Of those respondents, 69% said their own assessments clashed with computer-generated acuity metrics. Those metrics rely on real-time charting, which the nurses argue is unrealistic given patient loads and chronic understaffing, and fail to account for the educational, psychosocial, and emotional needs of patients and families.
“The result of relying on the algorithmically driven acuity measurements is that, on a daily basis, in unit after unit, we have multiple patients whose acuity is underrepresented, which means there are not enough nurses to provide optimal care in a timely manner,” said Cathy Kennedy, RN, a vice president of NNU and the president of the California Nurses Association, who works at Kaiser Permanente’s Roseville Medical Center, in the press release.
Additional key findings from the survey included:
- 12% of respondents said they have seen patient handoffs, traditionally completed through direct communication between nurses, transition to computer-generated reports
- 48% of the nurses using such handoffs said the automated reports do not align with their personal assessments
- 29% of nurses in facilities that use devices capturing images and sound from patients reported that they cannot change AI-generated assessments or categorizations, such as pain scores and wound assessments
- 40% of nurses in facilities where automated systems are used to predict patient outcomes, risks for complications, and discharge needs said they are unable to modify those scores to reflect their own clinical judgment
Michelle Mahon, RN, assistant director of nursing practice at NNU, pointed out that the Joint Commission has already published warnings to the public and clinicians about the dangers of inadequate handoffs leading to wrong-site surgeries, delays in treatment, falls, medication errors, and death.
Jeff Breslin, RN, a vice president of NNU and a board member of the Michigan Nurses Association, who works at University of Michigan Health-Sparrow in Lansing, shared his first-hand experiences with these technologies.
“Previously, when a patient was being transferred from the emergency department to another unit in the hospital, the nurse taking responsibility for the patient would have an opportunity to discuss the patient’s clinical status with the nurse from the ER and ask any follow-up questions,” he said.
Now, critical patient care information is sometimes missing from automated reports that would not have been missed in a nurse-to-nurse conversation, he explained. For example, Breslin said he once walked into a patient’s room unmasked and without any personal protective equipment because he had not been alerted that the patient was COVID-positive.
In another instance, the automated report failed to alert Breslin that a new patient was immunocompromised when he was also caring for patients with COVID and the flu.
“It was my ability to assess, and my experience as a nurse for almost 30 years, that allowed me to provide the best care for this patient, not the computer-generated report that missed some very critical information,” Breslin noted.
Despite all the hype around AI, it cannot replicate human intelligence, Mahon stressed. “AI can recognize patterns in data, but healthcare data are often incomplete or biased … It takes experience and expertise of a human nurse to understand that data in context, identify what’s missing, and holistically assess the needs of individual patients.”
Shannon Firth has been reporting on health policy as MedPage Today’s Washington correspondent since 2014. She is also a member of the site’s Enterprise & Investigative Reporting team.