Transforming Healthcare Through Ethical AI: Enhancing Trust

David Ting, founder/CTO at Tausight

In late October, President Biden signed a new Executive Order – titled “Safe, Secure, and Trustworthy Artificial Intelligence” – that promises to introduce new national AI regulations focused on safety and responsibility in the use of this revolutionary new technology. Fast on the heels of this high-profile EO, the Biden Administration has already begun writing actual standards for the safe use of generative AI. In late December, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced a deadline of February 2, 2024, for public input on federal guidance for testing and safeguarding AI systems.

This represents significant federal government interest in AI. The question is: What does this mean for healthcare?

New standards, for one. By including healthcare in this broad new EO, the Biden Administration is signaling very clearly that health systems should expect new safety, security, and equity standards for AI very soon.

More broadly, the Biden EO reflects a transformative moment for the entire healthcare industry. The use of AI has brought us to a before-and-after moment. We have now entered the era of “Ethical AI” – a period in healthcare technology in which our use of AI must be matched by our commitment to patient care and must comport with the new standards being established by the government. Our ability to meld these influences will determine the degree to which the healthcare industry benefits from the revolutionary potential of AI.

Privacy and cybersecurity are two areas of concern cited by healthcare leaders in any discussion of AI. Is there an approach that aligns with the EO’s mission to enhance privacy while still enabling clinical efficiency? Can we protect patients’ most valuable information – such as electronic protected health information (ePHI), a primary target for cybercriminals – and still provide top-quality care?

Let’s take a closer look to find out.

The EO and NIST

To fully grasp the origins of President Biden’s EO, it’s important to understand the NIST AI Risk Management Framework (RMF), published as NIST AI 100-1. Developed as directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the RMF is intended to serve as a resource for organizations that design, develop, deploy, or use AI systems, helping them manage the many risks of AI and promoting trustworthy and responsible development and use of AI systems.

For healthcare, the EO and RMF provide a useful dual framework for enhancing privacy when using AI. The standards outlined in NIST AI 100-1 provide guidance for gaining visibility into the life of ePHI: meticulously tracking where it resides, learning how it is transmitted, and maintaining a detailed log of who accesses it and when. The EO and NIST both encourage cutting-edge encryption methods and secure data transmission to safeguard patient information. This means healthcare leaders should place privacy at the forefront of their AI solutions.
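As a concrete illustration of what visibility into the life of ePHI can look like in practice, here is a minimal sketch of an access-audit record that captures where a piece of ePHI resides, where it is transmitted, and who touches it. The field names, the `EphiAccessEvent` structure, and the hashing of patient identifiers are illustrative assumptions only; they are not prescribed by the EO, NIST AI 100-1, or any specific product.

```python
# Minimal, hypothetical sketch of an ePHI access-audit record.
# Fields and structure are illustrative assumptions, not a prescribed standard.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional


@dataclass
class EphiAccessEvent:
    """One entry in an ePHI access log: who touched what, where, and how."""
    timestamp: str              # when the access occurred (UTC, ISO 8601)
    actor: str                  # user or service that accessed the record
    action: str                 # e.g. "read", "transmit", "update"
    patient_ref: str            # one-way hash of the patient identifier
    source_system: str          # where the ePHI resides, e.g. "ehr-prod"
    destination: Optional[str]  # where it was sent, if transmitted
    encrypted_in_transit: bool  # whether the channel was encrypted


def hash_patient_id(patient_id: str) -> str:
    """Keep only a one-way hash of the patient identifier in the log."""
    return hashlib.sha256(patient_id.encode("utf-8")).hexdigest()


def log_ephi_access(actor: str, action: str, patient_id: str,
                    source_system: str, destination: Optional[str] = None,
                    encrypted_in_transit: bool = True) -> str:
    """Build a JSON audit line; a production system would ship this to a
    tamper-evident log store rather than simply returning it."""
    event = EphiAccessEvent(
        timestamp=datetime.now(timezone.utc).isoformat(),
        actor=actor,
        action=action,
        patient_ref=hash_patient_id(patient_id),
        source_system=source_system,
        destination=destination,
        encrypted_in_transit=encrypted_in_transit,
    )
    return json.dumps(asdict(event))


if __name__ == "__main__":
    # A clinician reads a record, then the system transmits it to a
    # hypothetical billing service over an encrypted channel.
    print(log_ephi_access("dr.smith", "read", "MRN-12345", "ehr-prod"))
    print(log_ephi_access("ehr-prod", "transmit", "MRN-12345", "ehr-prod",
                          destination="billing-svc"))
```

In a real deployment, records like these would feed the detailed, transparent ePHI access logging that the EO and NIST guidance encourage.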

A Blueprint for Executive Assurance

To navigate the complexities of AI integration with confidence, healthcare executives can use the EO and NIST frameworks to vet AI tools that help them ensure compliance while delivering reliable performance. The following are traits to look for in AI solutions designed to promote privacy and cyber protection:

  • Valid and Reliable: AI solutions should guarantee the validity and reliability of healthcare insights, providing executives with a foundation of trust in decision-making.
  • Safe: By prioritizing patient safety, AI applications should minimize risks and ensure that safety protocols evolve with the dynamic healthcare landscape.
  • Secure and Resilient: Robust security measures fortify healthcare data, guaranteeing integrity and confidentiality. AI systems should adapt to emerging threats, enhancing resilience and securing data transmission within health systems.
  • Accountable and Transparent: AI models should be designed for executive understanding, providing detailed logs of ePHI access for transparency and accountability.
  • Explainable and Interpretable: Recognizing the need for interpretability, AI models should be deliberately designed to be explainable, allowing executives to confidently interpret AI-generated insights.
  • Privacy-Enhanced: The EO’s commitment to privacy requires visibility into ePHI, actively managed access, and secure transmission, in alignment with NIST AI 100-1 standards.
  • Fair: Actively addressing biases, AI models should promote fair treatment across diverse patient populations, fostering healthcare equity and inclusivity.

Elevating Executive Assurance through Ethical AI

Health systems create a strategic advantage by deploying AI tools for adaptability and compliance. One significant area where AI can be leveraged for adaptability is in privacy-promoting solutions, where AI is more resilient at adapting to evolving regulations and changing uses for patient data. This puts healthcare executives in a position where compliance is not a mere checkbox but an ongoing commitment, ensuring they can navigate regulatory changes and new ways of using their most critical data assets.

Responding to the Biden EO and the need for privacy protections requires healthcare executives to work directly at the intersection of healthcare and AI. By deploying the right AI systems, these organizations can achieve unparalleled visibility into ePHI, secure data in transit, and adhere to NIST AI 100-1 standards. This positions them to lead confidently in a digital era where technology aligns seamlessly with the principles of patient-centric care and regulatory excellence.


About David Ting

David Ting, Tausight Founder & CTO, is the co-founder and former CTO of Imprivata. He has more than twenty years of experience developing identity and security solutions for government and enterprise environments and holds twenty-two U.S. patents, with additional patents pending.

While at Imprivata, Ting built the technology behind the OneSign solution used extensively in healthcare and oversaw the company’s evolution from a venture-backed startup to a public company and its subsequent take-private acquisition in 2016. That same year, he was appointed by the U.S. Department of Health and Human Services to the Health Care Industry Cybersecurity Task Force, authorized under the Cybersecurity Information Sharing Act of 2015, where he helped draft the recommendations for securing healthcare in the Task Force report submitted to Congress in 2017.