What You Should Know:
– ECRI, a non-profit organization focused on healthcare safety, has released its annual Top 10 Health Technology Hazards report for 2025.
– This year, the report highlights the potential risks associated with artificial intelligence (AI) in healthcare, emphasizing the need for careful assessment and management of AI-enabled technologies to ensure patient safety.
AI: A Double-Edged Sword in Healthcare
While AI offers tremendous promise for improving efficiency, accuracy, and patient outcomes in healthcare, it also poses significant risks if not implemented responsibly. ECRI experts caution that AI systems can produce false or misleading results, perpetuate biases, and even harm patients when they are not properly validated and monitored.
Top 10 Health Technology Hazards for 2025
ECRI’s Top 10 Health Technology Hazards for 2025 are:
- Risks with AI-enabled health technologies: Potential for bias, hallucinations, and inaccurate results.
- Unmet technology support needs for home care patients: Challenges in using complex medical devices at home.
- Vulnerable technology vendors and cybersecurity threats: Risks from dependence on third-party vendors, including service disruptions and cybersecurity vulnerabilities.
- Substandard or fraudulent medical devices and supplies: Potential harm from counterfeit or faulty medical products.
- Fire risk from supplemental oxygen: Fire hazards associated with oxygen use in healthcare settings.
- Dangerously low default alarm limits on anesthesia units: Risk that patient deterioration goes undetected because default alarm limits are set too low to trigger timely alerts.
- Mishandled temporary holds on medication orders: Potential for medication errors due to unclear or poorly documented hold orders.
- Poorly managed infusion lines: Infection risks and tripping hazards from improper management of infusion lines.
- Harmful medical adhesive products: Skin injuries and complications from inappropriate use of medical adhesives.
- Incomplete investigations of infusion system incidents: Failure to thoroughly investigate infusion-related incidents can hinder future prevention efforts.
Key Recommendations
The report offers detailed recommendations to help healthcare organizations and industry stakeholders mitigate these risks and improve patient safety. These include:
- AI Governance and Oversight: Establish clear guidelines and oversight for AI implementation, including data quality, bias mitigation, and performance monitoring.
- Home Care Technology Support: Provide comprehensive training and support for patients and caregivers using medical devices at home.
- Cybersecurity and Vendor Management: Strengthen cybersecurity measures and carefully vet third-party technology vendors.
- Supply Chain Integrity: Implement robust processes to ensure the quality and authenticity of medical devices and supplies.
- Medication Management: Develop clear workflows for medication hold orders and ensure proper management of infusion lines.
- Patient Safety Culture: Foster a culture of safety that encourages incident reporting and thorough investigation.
“The promise of artificial intelligence’s capabilities must not distract us from its risks or its ability to harm patients and providers,” said Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI. “Balancing innovation in AI with privacy and safety will be one of the most difficult, and most defining, endeavors of modern medicine.”