A major ethical question regarding AI and healthcare
The rise of AI in healthcare—especially radiology—has launched countless conversations about ethics, bias and the difference between “right” and “wrong.” A new analysis published in La radiologia medica, the official journal of the Italian Society of Medical Radiology, explores perhaps the biggest ethical question of them all: Who is responsible for the benefits, and harms, of using AI in healthcare?
The authors focused on radiology in their commentary, but their message is one that can be applied to any specialty looking to deliver patient care through the use of AI.
“When human beings make decisions, the action itself is normally connected with a direct responsibility by the agent who generated the action,” wrote lead author Emanuele Neri, University of Pisa in Italy, and colleagues. “You have an effect on others, and therefore, you are responsible for what you do and what you decide to do. But if you do not do this yourself, but an AI system, it becomes difficult and important to be able to ascribe responsibility when something goes wrong.”
Ultimately, according to the authors, the radiologists using AI are responsible for any diagnosis it provides. AI has no free will and does not “know” what it is doing, so responsibility must fall on the radiologists themselves.
Due to this responsibility, the team added, “radiologists must be trained on the use of AI since they are responsible for the actions of machines.” That responsibility also extends to the specialists involved in an AI system’s research and development. If you helped build a dataset for AI research, in other words, one could argue that you share part of the blame if that AI makes an incorrect diagnosis. This is just one of many reasons it is so crucial to develop trustworthy AI.
Another key point in the analysis is that AI automation can actually have a negative impact on the radiologist’s final diagnosis or treatment decision.
“Automation bias is the tendency for humans to favor machine-generated decisions, ignoring contrary data or conflicting human decisions,” the authors wrote. “Automation bias leads to errors of omission and commission, where omission errors occur when a human fails to notice, or disregards, the failure of the AI tool.” Commission errors, by contrast, occur when a person acts on an incorrect AI recommendation despite evidence to the contrary.
Neri et al. concluded by looking at the “radiologist-patient relationship” in this new era of AI technology, pointing out that providers must be honest about the origins of their decisions.
“A contingent problem with the introduction of AI and of no less importance is transparency toward patients,” the authors wrote. “They must be informed that the diagnosis was obtained with the help of the AI.”