
The promise of AI is greater efficiency in solving complex problems. But with that promise comes the responsibility of understanding and applying governance, ethical, and reliability principles. A responsible approach to developing and deploying artificial intelligence (AI) in a safe, trustworthy, and ethical fashion is therefore essential. With 94% of IT leaders believing more attention should be paid to responsible AI development, the healthcare industry must devise strategies to address today's AI challenges.
Training data and algorithmic bias
Because data sensitivity (the risk associated with the exposure, unauthorized access, or misuse of specific data) creates barriers to compiling the data sets required for machine learning (ML) training, the training data frequently carries bias. When this occurs, the patient cohort does not represent the wider population. Similarly, if an ML training dataset lacks diversity, the resulting algorithms may fail certain demographic groups. These and other factors are breeding mistrust among health professionals and patients, and the issue is exacerbated in the US, where the lack of standardized data formats across electronic health record (EHR) systems further slows data access. Studies have shown that integrating data from various sources often requires extensive data cleaning and normalization, which delays research timelines.
Further, many institutions use different health information systems, complicating the sharing and aggregation of data for research purposes. This fragmentation creates technical barriers to timely data access. These realities often produce AI bias.
For example, suppose an ML system is trained to recognize melanoma using images drawn predominantly from patients with white skin. Due to this sampling bias, the AI may misinterpret images from patients with darker skin tones and fail to diagnose melanoma (Adamson & Smith, 2018). Despite representing only 1% of skin cancers, melanoma is responsible for over 80% of skin cancer deaths. ML developers should therefore disclose the details of their training data, including patient demographics and baseline characteristics such as age, race, ethnicity, and gender.
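As an illustrative sketch of that disclosure step (the field names and records here are hypothetical), a simple demographic audit of a training set can surface sampling imbalances like the one above before a model is ever trained:

```python
from collections import Counter

def summarize_demographics(records, fields=("skin_tone", "age_group", "sex")):
    """Report each demographic value's share of the training set."""
    summary = {}
    for field in fields:
        counts = Counter(r.get(field, "unknown") for r in records)
        total = sum(counts.values())
        summary[field] = {value: n / total for value, n in counts.items()}
    return summary

# Hypothetical image-metadata records for a melanoma classifier
training_set = [
    {"skin_tone": "light", "age_group": "40-60", "sex": "F"},
    {"skin_tone": "light", "age_group": "60+", "sex": "M"},
    {"skin_tone": "dark", "age_group": "40-60", "sex": "M"},
]

report = summarize_demographics(training_set)
# report["skin_tone"] -> {"light": 0.67, "dark": 0.33} (rounded)
```

Publishing a summary like this alongside a model gives clinicians a concrete basis for judging whether the training cohort resembles their own patient population.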
Bias is not the only issue that must be addressed when training data for AI operations. AI “hallucinations” are incorrect or misleading results caused by insufficient training data, faulty assumptions, or biases in the training data. One study of ChatGPT found responses that were inaccurate, even dangerous.
For example, AI is often used to predict sepsis or heart failure by analyzing extensive patient data with deep-learning neural networks. Because these models are difficult to interpret, clinicians who want to leverage AI predictions often struggle to understand the rationale behind them.
The Cure for AI Maladies
Gartner predicts that 50% of governments globally will enforce responsible AI policies by 2026. The U.S. healthcare system employs several practices and guidelines for AI's fair, safe, and ethical use, protecting patient well-being while allowing for innovation.
Regulatory Guidance
The U.S. Food and Drug Administration (FDA) regulates AI-based medical devices under its “software as a medical device” (SaMD) category, ensuring AI systems used in diagnostics, treatment planning, and other clinical settings meet safety and efficacy standards. The FDA also enforces post-market surveillance to monitor AI performance after deployment. The Health Insurance Portability and Accountability Act (HIPAA) mandates strict data privacy and security measures for AI systems handling protected health information (PHI). To comply with these rules, AI developers must de-identify personal data used for training models.
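As a rough illustration of that de-identification step (the field list below is hypothetical and far from a complete HIPAA Safe Harbor enumeration), developers might strip direct identifiers from records before they enter a training pipeline:

```python
# Illustrative field list only -- not a complete enumeration of the
# 18 identifier categories in the HIPAA Safe Harbor standard.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email", "ssn",
    "medical_record_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "diagnosis": "melanoma",
    "age_group": "40-60",
}
cleaned = deidentify(patient)
# cleaned == {"diagnosis": "melanoma", "age_group": "40-60"}
```

Real pipelines go further (generalizing dates, handling free-text notes, assessing re-identification risk), but the principle is the same: PHI never reaches the model.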
Bias & Transparency
In the US, there is a strong focus on minimizing algorithmic bias to avoid exacerbating healthcare disparities. Agencies such as the National Institute of Standards and Technology (NIST) are working on frameworks to identify and reduce AI bias. AI tools are scrutinized for fairness to ensure they do not disproportionately impact specific patient populations based on race, gender, socioeconomic status, or other factors.
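One common fairness check of this kind is the demographic parity gap: the spread in positive-prediction rates across patient groups. A minimal sketch, using made-up predictions and group labels:

```python
def demographic_parity_gap(predictions, groups):
    """Largest spread in positive-prediction rate across groups."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions for two patient groups
preds = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["group_a"] * 4 + ["group_b"] * 4
gap = demographic_parity_gap(preds, groups)
# group_a rate 0.75, group_b rate 0.0 -> gap 0.75
```

A large gap does not by itself prove unfairness, but it flags a disparity that developers and clinicians should investigate before deployment.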
Transparency is equally crucial for building trust in AI and making it more ethical in clinical environments. Healthcare providers and AI developers are encouraged to implement interpretable models, allowing clinicians and patients to understand how AI systems arrive at particular decisions.
Best Practices in AI Use
Clinical validation and maintaining human-in-the-loop practices are essential in ensuring the best outcomes. Before AI tools are deployed in healthcare settings, they must undergo extensive clinical validation. This involves testing the AI on diverse datasets and patient populations to ensure its predictions are accurate and reliable in real-world scenarios.
Continuous Learning and Monitoring
Keeping a watchful eye on AI healthcare systems post-deployment is essential to detect issues like model drift, where the AI’s accuracy diminishes over time. This ensures that AI remains relevant and effective in clinical environments.
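One widely used drift signal is the population stability index (PSI), which compares a model's baseline score distribution against live scores; values above roughly 0.2 are conventionally treated as notable drift. A minimal sketch, with the binning scheme and score data chosen purely for illustration:

```python
import math

def population_stability_index(expected, actual, bins=4):
    """Compare a baseline score distribution against live scores.

    PSI near 0 means the distributions match; values above roughly
    0.2 are commonly read as a sign of significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against zero-width bins

    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            i = min(max(int((s - lo) / width), 0), bins - 1)
            counts[i] += 1
        # a small floor avoids log(0) when a bin is empty
        return [max(c / len(scores), 1e-6) for c in counts]

    base, live = proportions(expected), proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(base, live))

# Hypothetical risk scores from validation time vs. production
baseline = [0.12, 0.30, 0.48, 0.55, 0.71, 0.90]
production = [0.85, 0.88, 0.90, 0.91, 0.93, 0.95]
drift = population_stability_index(baseline, production)
```

In practice such a check would run on a schedule against live inference logs, triggering review or retraining when the index crosses the chosen threshold.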
Meanwhile, some AI models employ continuous learning, incorporating new patient data to improve performance and catch problems before they arise. However, these adaptive AI systems must remain within regulatory bounds to ensure ongoing safety.
A user-centric and collaborative approach is critical to mitigate the risks of AI misdiagnosis or erroneous decisions. AI ethics experts and diverse stakeholders must be involved early in AI development. The right partners can help healthcare providers meet these requirements, safeguard patient trust, and ensure a safe future. Innovation and ethical integrity can both thrive when physicians and healthcare professionals review AI-generated insights and retain final decision-making authority.
About Pravin Twari
Pravin Twari is the Executive Vice President of FPT Software USA, where he leads and supports FPT employees globally to create sustainable long-term value for customers and partners. With two decades of senior management experience at the House of Tatas and FPT Software, he has developed many technological solutions that improve people's lives in healthcare, media, and manufacturing.