AI could change healthcare forever—but ethical questions remain

AI could have a transformative impact on healthcare, especially radiology. According to a new analysis in the European Journal of Radiology, however, there are significant ethical issues that need to be addressed for any legitimate progress to be made.

“New AI applications and start-up companies seem to emerge daily,” wrote Nabile M. Safdar, MD, of Emory University in Atlanta, and colleagues. “At the start of 2019, funding in imaging AI companies exceeded $1.2 billion. Yet, questions of algorithm validation, interoperability, translation of bias, security and patient privacy protections abound.”

It’s widely known that bias—particularly selection bias—can creep into AI algorithms. Datasets may not include findings from certain demographic groups, for example, leading to an algorithm that has been inadvertently trained to be less helpful when used in the treatment of underrepresented patient groups.

“Commercial uses of AI can also result in automation bias, which can diminish the likelihood that the healthcare provider will question an erroneous result due to the tendency to over-rely on automated systems, which after all are typically designed to reduce human error and enhance patient safety,” the authors wrote.

Another considerable issue researchers must remember is that these algorithms have largely been designed to address common health conditions. That helps patients suffering from those conditions, of course, but what about the patient experiencing a rarer ailment? Radiologists are trained to treat a wide range of conditions, and they can apply reasoning and additional research if they encounter something they know less about when treating a patient. AI algorithms, however, are trained to tackle specific issues, and such flexibility does not yet exist.

Safdar and colleagues also explored the “unsettled questions” that come with developing these “data-hungry” algorithms. Will Europe’s General Data Protection Regulation (GDPR)—a data privacy regulation first implemented in 2018—play a role in how AI is developed and perceived? Will GDPR-like policies result in researchers gathering data in areas that are less regulated? Will GDPR have a financial impact on providers in the United States? These questions, the authors noted, “will likely be settled in both courts of law and public opinion.”

Safdar and colleagues concluded by looking ahead, observing that the American College of Radiology, European Society of Radiology, Society for Imaging Informatics in Medicine, European Society of Medical Imaging Informatics, Canadian Association of Radiologists and American Association of Physicists in Medicine all co-authored a statement on the ethical use of AI in radiology in October. The authors called this document “a reliable ethical framework.”

Michael Walter, Managing Editor

Michael has more than 16 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
