Interpretability of radiological AI deemed a boon, if not a prerequisite, for clinical adoption

If a medical AI algorithm performs well, why let the proverbial black box undermine confidence in it?

Or, as an opinion piece published May 27 in Radiology: Artificial Intelligence asks, “Why shouldn’t we simply trust the model and ignore why it made a specific decision?”

The issue at hand is interpretability, the degree to which the human mind can comprehend the logic behind an AI algorithm’s conclusion.

Writing in response to a review published the same day in the same journal, Despina Kontos, PhD, and Aimilia Gastounioti, PhD, both of the radiology department at the University of Pennsylvania, suggest such interpretability may not be essential, but it surely can speed AI adoption into routine clinical practice.

An understandable interpretation of an erroneous decision or prediction “helps one understand the cause of the error and delivers a direction for how to fix it,” the authors point out.

Meanwhile, an interpretation of a correct decision or prediction “helps verify the logic for a specific conclusion, making sure that causal relationships are picked up and alleviating potential suspicion about confounding or bias.”

In either case, “it is easier for radiologists and patients to trust a model that explains its decisions, including its failures, compared with a ‘black box.’”

The review, presented by Mauricio Reyes of the University of Bern in Switzerland and colleagues, draws from radiologists’ opinions on the topic at hand and includes three essential takeaways:

  • Radiology artificial intelligence (AI) systems often have numerous computational layers that can make it difficult for a human to interpret a system’s output.
  • Interpretability methods are being developed such that AI systems can be explained by using visualization, counterexamples or semantics (a minimal code sketch of the visualization approach follows this list).
  • By enhancing their interpretability, AI systems can be better verified, trusted and adopted in radiology practice.
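
For a concrete sense of the first of those methods, visualization, here is a minimal sketch of one common technique, a gradient-based saliency map. Everything in it is an illustrative assumption rather than anything drawn from the Reyes et al. review: the tiny PyTorch classifier, the random tensor standing in for a scan, and all variable names are hypothetical.

    # Minimal, hypothetical sketch of gradient-based saliency mapping in PyTorch.
    # The toy model and the random "scan" are stand-ins, not anything from the review.
    import torch
    import torch.nn as nn

    # A tiny image classifier standing in for a radiology AI model
    # (1-channel scan in, 2 class scores out).
    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),
    )
    model.eval()

    # Random tensor playing the role of a 128x128 single-channel scan.
    scan = torch.randn(1, 1, 128, 128, requires_grad=True)

    # Score the image, then backpropagate the winning class's score to the input.
    logits = model(scan)
    predicted = logits.argmax(dim=1).item()
    logits[0, predicted].backward()

    # The absolute input gradient is a crude saliency map: pixels whose small
    # perturbations would most change the score get the largest values.
    saliency = scan.grad.abs().squeeze()
    print(saliency.shape)  # torch.Size([128, 128]), one importance value per pixel

Overlaying such a map on the original image is the simplest form of the visualization the review describes: it lets a reader see which regions drove the model’s score, even when the model itself has too many computational layers to inspect directly.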

Journal publisher RSNA has posted both the Reyes et al. study and the Kontos–Gastounioti commentary in full for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
