Will AI’s ‘black box’ problem ever be solved?

Imaging providers continue to embrace AI technology, taking advantage of its ability to improve workflows and prioritize urgent cases. However, according to a new analysis published in the American Journal of Roentgenology, researchers still don’t truly understand how these algorithms work—and that’s a significant issue that must be addressed.

“Utilization of AI, especially deep learning research, is increasing in radiology, pathology and medicine in general,” wrote authors Adarsh Ghosh and Devasenathipathy Kandasamy, both from the department of radiodiagnosis at the All India Institute of Medical Sciences. “However, because such algorithms affect patient outcomes, the black box–like structure of deep learning algorithms remains a pet peeve. We do not know exactly how the algorithms work, and therefore we cannot anticipate when the algorithms will fail.”

When a human specialist makes a mistake, the cause can typically be investigated and documented, allowing others to learn from what happened. When an AI model errs, however, it is much harder, and often impossible, to determine why the failure occurred. This difference “hinders clinical implementation,” according to the authors.

Considering that “the ultimate aim of science” is to “bring forth the unknown using hypothesis and rebuttals,” Ghosh and Kandasamy also said that academics exploring AI technology must go beyond documenting a given model’s accuracy or sensitivity.

“Although machine learning is a very convenient method of exploring big medical data, researchers and peer reviewers should not limit themselves to accuracy-driven metrics and should attempt to explore the concrete biologic explanations underlying the opaque models being built,” the authors wrote. “In the long run, this will enable medical discovery.”
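For context, one common way researchers probe an opaque imaging model is occlusion sensitivity: masking regions of an input image and measuring how much the model’s confidence drops. The sketch below is purely illustrative and is not drawn from the authors’ analysis; the model, input size, and names such as SmallCNN and occlusion_map are assumptions made for the example.

```python
# Minimal occlusion-sensitivity sketch: slide a blank patch across the
# input and record how much the predicted probability drops at each
# position. Regions with large drops are the ones the model relied on.
# SmallCNN is a stand-in; in practice this would be a trained model.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Illustrative classifier for a 1-channel 64x64 image."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def occlusion_map(model, image, target_class, patch=8, stride=8):
    """Heatmap of confidence drops when each image region is masked."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
        _, _, h, w = image.shape
        heat = torch.zeros(h // stride, w // stride)
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                masked = image.clone()
                masked[:, :, i:i+patch, j:j+patch] = 0.0  # occlude region
                prob = torch.softmax(model(masked), dim=1)[0, target_class].item()
                heat[i // stride, j // stride] = base - prob  # larger = more important
    return heat

model = SmallCNN()
image = torch.randn(1, 1, 64, 64)  # placeholder for a real radiograph
print(occlusion_map(model, image, target_class=1))
```

A saliency map of this kind only shows *where* a model looked, not *why* a region mattered; connecting such attributions to concrete biologic findings is the harder step the authors argue researchers should pursue.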

The authors closed their analysis by looking ahead, noting that key changes are necessary for AI to reach its potential as a true game-changer.

“Scientific discovery should remain the main driving force behind research published in medical and radiology journals, and AI research should not be limited to reporting accuracy and sensitivity compared with those of the radiologist, pathologist, or clinician,” the authors concluded. “More importantly, reports of AI research should try to explain the underlying reasons for the predictions, in an attempt to enrich biologic understanding and knowledge.”

Michael Walter, Managing Editor

Michael has more than 18 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
