WarGames is a 1983 American Cold War science fiction film directed by the legendary John Badham. The film's premise revolves around a young computer whiz, David Lightman (Matthew Broderick), who accidentally connects to a top-secret supercomputer that has complete control over the U.S. nuclear arsenal. The supercomputer, named WOPR (War Operation Plan Response), is designed to predict possible outcomes of nuclear war. Mistaking the computer's simulation for a real-life game, David starts playing a nuclear war scenario, causing a national nuclear missile scare and almost starting World War III.
While the movie itself does not directly relate to the field of medicine, its themes of technology, ethics, and responsibility are highly relevant. In medicine, the increasing use of artificial intelligence (AI) and machine learning raises concerns similar to those in WarGames and other science fiction, such as Star Trek. For instance, the potential for miscommunication or misinterpretation of data, the ethical implications of machine decision-making, and the importance of human oversight and understanding of complex systems are all pertinent issues in today's medical field. Science fiction often serves as a metaphor for the potential risks and unintended consequences of relying heavily on advanced technology in sensitive areas such as healthcare.
The Potential of AI in Healthcare
The integration of AI in healthcare is rapidly advancing. AI has the potential to revolutionize many aspects of patient care, as well as administrative tasks within the healthcare system. This includes, but is not limited to, AI algorithms for diagnosing diseases, predictive analytics for patient outcomes, automation of routine tasks, and personalized medicine based on individual genetic makeup.
AI can also help physicians with decision-making, provide predictive insights, and improve accuracy in diagnosis and treatment. It can aid in treatment planning, patient monitoring, and even surgical procedures. AI can process vast amounts of data faster and more accurately than humans, potentially leading to earlier detection of diseases and more precise treatment plans.
Should the AI Doctor Step Out of the Virtual World?
It is important to remember that AI should be seen as a tool to aid healthcare professionals, not replace them. But deep down, many of us fear the latter possibility. Perhaps one day we will create an artificially intelligent doctor who, like Professor Moriarty in the Star Trek: The Next Generation episode "Ship in a Bottle," will clamor to leave the holodeck (a virtual reality room able to reproduce any place and any person one imagines).
An AI doctor practicing medicine at the patient’s bedside is an interesting vision for the future of healthcare.
The concept of an AI doctor is not entirely far-fetched. We are already seeing the beginnings of this with AI systems like IBM’s Watson, which can analyze a patient’s medical history and suggest potential diagnoses and treatments. Additionally, there are AI-powered virtual health assistants that can interact with patients, answer their queries, and even monitor their health conditions.
In one study, the use of a microphone on a secure smartphone allowed an ambient AI scribe to transcribe — but not record — patient encounters and then use machine learning and natural-language processing to summarize the conversation’s clinical content and produce a note documenting the visit. Study participants were reportedly “blown away” by the ability of the technology to appropriately filter the conversation from a transcript into a clinical note. The AI scribe saved doctors an hour at the keyboard every day.
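To make that workflow concrete, here is a minimal sketch in Python of the pipeline such a scribe follows: listen, filter the clinical content from the conversation, and draft a note for the clinician to review. The function names and the keyword-based filter are illustrative stand-ins of my own devising, not the study's actual system, which relied on trained machine-learning and natural-language-processing models.

```python
# Minimal sketch of an ambient-scribe pipeline: transcript in, draft note out.
# The transcription and filtering stages are hypothetical stand-ins, not any
# specific vendor's API.

from dataclasses import dataclass


@dataclass
class ClinicalNote:
    """A simple SOAP-style note scaffold."""
    subjective: str
    objective: str
    assessment: str
    plan: str


def filter_clinical_content(transcript: str) -> list[str]:
    """Toy stand-in for the NLP step that separates clinical content from
    small talk. Here we simply keep sentences containing clinical keywords;
    a real scribe uses a trained language model for this."""
    keywords = ("pain", "medication", "dose", "symptom", "follow up", "blood pressure")
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    return [s for s in sentences if any(k in s.lower() for k in keywords)]


def draft_note(clinical_sentences: list[str]) -> ClinicalNote:
    """Assemble a draft note from the filtered content. The non-subjective
    sections are left as placeholders for the clinician to complete."""
    return ClinicalNote(
        subjective=". ".join(clinical_sentences),
        objective="[vitals and exam findings]",
        assessment="[clinician to review]",
        plan="[clinician to review]",
    )


if __name__ == "__main__":
    # In production, this text would come from a speech-to-text model
    # streaming audio from the smartphone microphone.
    transcript = ("Good to see you again. The knee pain is worse at night. "
                  "Let's increase the dose.")
    note = draft_note(filter_clinical_content(transcript))
    print(note.subjective)  # small talk dropped; clinical content kept
```

The design point the sketch illustrates is the same one the study participants praised: the system filters rather than dictates, so the note contains the clinically relevant content of the conversation, not a verbatim recording of it.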
However, while AI can analyze data and provide clinical summaries and recommendations, it is important to remember that the practice of medicine involves more than just data analysis. It requires empathy, understanding, and human connection. These factors are currently beyond the capabilities of AI. Therefore, while an AI doctor might be able to assist with clinical decisions or administrative tasks, the need for human healthcare professionals who can provide compassionate care and understand the nuances of human health and disease will always remain.
This is where the concept of creating an AI doctor from a holodeck becomes intriguing. If we could create an AI doctor that not only processes data and makes clinical decisions but also interacts with patients in a human-like manner, the potential benefits could be substantial: 24/7 availability of medical care, a lighter burden on human doctors, and a consistent quality of care. The AI physician could be programmed with the most up-to-date medical knowledge and guidelines, ensuring patients receive the best possible treatment.
Lingering Questions and Concerns
However, even with this advanced technology, some challenges would remain. Ethical considerations, such as who is responsible when an AI doctor makes a mistake, would still need to be addressed. Furthermore, while a holodeck AI doctor might be able to mimic human interactions, it may still lack genuine empathy and understanding — and be no better than an android.
In one episode of Star Trek ("Requiem for Methuselah"), attempts to instill emotions in an android ("Rayna," played by Louise Sorel) overwhelmed her and caused her death. Future iterations of Star Trek, most notably Star Trek: Voyager, employed a holographic representation of a doctor, to be used primarily in medical emergencies. The Doctor is programmed to become more like people, but his attempts to build human experiences, attributes, senses, and feelings into his own subroutines are often disastrous.
So although the potential benefits of an AI doctor could be enormous, the emphasis should remain on AI complementing human healthcare professionals rather than replacing them. It is also essential to remember that the use of such technology should always be guided by the principles of medical ethics and the ultimate goal of improving patient care.
Why is it important to mention that AI should be deployed for the betterment of healthcare? Because science fiction accounts tend to portray the nefarious side of AI (think: the medical thriller The Algorithm Will See You Now, by Jennifer Lycette, MD). And let's not forget that Professor Moriarty actually seized control of the Enterprise and endangered the crew, demanding that Captain Picard find a way to transfer him into the real world.
Until the benefits of AI are fully realized, and until AI is portrayed in a less sinister or dystopian light (portrayals that contribute to public fear and misunderstanding), we should probably close the holodeck.
Arthur Lazarus, MD, MBA, is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine, including Medicine on Fire: A Narrative Travelogue and Narrative Medicine: Harnessing the Power of Storytelling Through Essays.