Healthcare AI players advised to see explainability as ‘tailored interpretability’

When applying AI to help answer clinical questions, developers, researchers and clinicians should all remain mindful of the difference between interpretability and explainability.

The distinction is important because a clinician wants to know why a certain treatment was given, a researcher wants a hypothesis to test, and a patient wants to know how to get well soon. And because each of these stakeholders seeks their own form of understanding, it’s helpful to see explainability as “tailored interpretability.”

The point was made by Mihaela van der Schaar, a Turing Fellow and professor of machine learning, AI and health at the University of Cambridge in the U.K. and UCLA in California.

Delivering the keynote address at the all-digital 2020 annual meeting of the International Conference on Learning Representations (ICLR), held the last week of April, van der Schaar made the case that the field of healthcare AI needs to rethink the “complex class of problems” presented when applying the technology.

“We need new problem formulations,” van der Schaar said, according to coverage of the event posted May 6 in VentureBeat. “There are many ways to conceive of a problem in medicine, and there are many ways to formalize the conception.”

In the same article, VentureBeat reporter and AI editor Seth Colaner quotes several other experts who presented at ICLR on clinically predictive technologies in healthcare.

One is Chris Paton, a medical doctor with the University of Oxford. Speaking in a virtual workshop titled “AI for Affordable Healthcare,” Paton picked up on van der Schaar’s point about explainability, saying that understanding how clinicians think is key for AI developers striving to offer it to end users.

“When clinicians make decisions, they normally have a mental model in their head about how the different components of that decision are coming together,” Paton said. “That makes them able to be confident in their diagnosis.”

If they can’t see into AI’s proverbial black box, if “they’re just seeing a kind of report on a screen,” Paton added, “they’ll have very little understanding [with] a particular patient how confident they should be in that [diagnosis].”

Colaner closes his coverage on a forward-looking note from keynoter van der Schaar.

“Machine learning really is a powerful tool, if designed correctly—if problems are correctly formalized and methods are identified to really provide new insights for understanding these diseases,” she said. “I really believe that machine learning can open clinicians and medical researchers [to new possibilities] and provide them with powerful new tools to better [care for] patients.”

There’s considerably more on the conference in Colaner’s full coverage at VentureBeat.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
