Over-reliance on artificial intelligence (AI) in medicine may lead to a loss of clinical knowledge, a new JAMA viewpoint argued.
Agnes Fogo, MD, of Vanderbilt University Medical Center in Nashville, Tennessee, and colleagues used an example from kidney pathology, noting that pathologists “around the world are annotating tissue specimens to feed the algorithms.” Up to 100,000 annotations “are needed before an algorithm can recognize basic subunits such as a glomerulus,” they wrote.
“Following this enormous effort, the algorithm will do its job in an instant,” they wrote, noting that pathologists in the future will get not only a pathology slide, but also data on the number of glomeruli and the area of interstitial fibrosis. “With this information readily at hand, the pathologist would only have to focus on the more complex lesions to generate a diagnosis,” they added.
The downside, however, is that “if pathologists are no longer required to evaluate the basic histology elements themselves, the skill to do so will gradually be lost.”
“[By] moving the basic elements from the kidney biopsy literally out of the pathologist’s view, these will receive less and less attention in the day-to-day practice of clinical pathology and, thereby, the real intelligence of the basic architecture of the kidney will diminish,” they wrote.
Moreover, if AI models are used to streamline or drive innovation in medicine, the capacity to understand and solve problems without AI's help could be lost, they wrote. This problem becomes more pronounced, the authors noted, if AI begins to advance medical understanding on its own.
For instance, a study published in Scientific Reports showed that unsupervised AI models were able to identify tissue areas that had not previously been named in traditional kidney pathology, the researchers noted. Yet very little effort was made to understand those newly identified tissue areas.
“It can only be hoped that humans will be able to catch up with the newly defined constructs as long as the knowledge of real histology is still there,” they wrote. “We should realize that if this is allowed to move on, the near future will be characterized by rapidly decreasing knowledge about the pathogenesis underlying disease development.”
Not being able to understand how an AI came to its conclusions is known as “black box computing.” Fogo and colleagues noted that if medicine gets to “a stage where output is defined in the black box that is fed by input, and this black box contains constructs that are no longer consistent with previously defined entities, most of today’s knowledge on disease mechanisms will be forgotten, and we will be ruled by systems that only focus on intervention strategies that will provide the best possible outcome.”
Concerns over the negative effects of AI in healthcare have centered on the technology's reliability, its potential to exacerbate medical bias, and even its possible threat to the future of human existence. Still, researchers have also shown that AI models hold great promise as support tools in clinical decision-making.
Fogo and colleagues concluded that “physicians should contemplate how to take advantage of the potential benefits from AI in medicine without losing control over their profession.”
“Physicians should recognize that keeping AI within boundaries is essential for the survival of their profession and for meaningful progress in diagnosis and understanding of disease mechanisms,” they wrote.
Michael DePeau-Wilson is a reporter on MedPage Today's enterprise & investigative team. He covers psychiatry, long COVID, and infectious diseases, among other relevant U.S. clinical news.
Disclosures
Fogo had no disclosures. Co-authors reported relationships with CSL Vifor, Otsuka Pharmaceutical, Walden Biosciences, Delta4, GSK, AstraZeneca, Boehringer Ingelheim, Aurinia, Hansa, Vera Therapeutics, and Toleranzia.
Primary Source
JAMA
Source Reference: Fogo AB, et al “AI’s threat to the medical profession” JAMA 2024; DOI:10.1001/jama.2024.0018.