AI with ‘transparent clinical reasoning’ might stand in for busy toxicologists

Explainable AI wouldn’t be much use diagnosing victims of poisoning when the medical toxicology is complex, as in an overdose of multiple drugs at once.

However, it’s almost as sharp as human experts when the cause is simple and straightforward, as with ingestion of a single common cleaning product.

This means the technology could be called upon during frenetic periods in emergency rooms or poison centers.

So suggest researchers who developed a probabilistic logic AI network for the task, then tested its performance against that of two medical toxicologists and a decision-tree model.

Michael Chary, MD, PhD, of Weill Cornell Medicine and colleagues at Harvard used a library of 300 synthetic cases to build an AI system capable of mimicking experienced clinicians as they make decisions based on physical-exam findings.

They gave each case five findings that would be expected in patients sickened by one or two substances.
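The paper itself is the place to go for Tak's actual design, but a toy example conveys the flavor of probabilistic logic: observed exam findings are scored against candidate substances using explicit probability rules, so every step of the reasoning can be printed and inspected. In the sketch below, all substances, findings and probabilities are invented for illustration and are not from the study.

```python
# Toy probabilistic-logic diagnoser, loosely in the spirit of Tak.
# Every substance, finding and probability here is hypothetical.

# "Rules" linking candidate toxins to exam findings: P(finding | substance).
LIKELIHOODS = {
    "acetaminophen": {"nausea": 0.8, "ruq_pain": 0.6, "jaundice": 0.3,
                      "miosis": 0.05, "bradycardia": 0.1},
    "opioid":        {"nausea": 0.4, "ruq_pain": 0.1, "jaundice": 0.05,
                      "miosis": 0.9, "bradycardia": 0.6},
    "bleach":        {"nausea": 0.7, "ruq_pain": 0.2, "jaundice": 0.02,
                      "miosis": 0.05, "bradycardia": 0.05},
}
PRIORS = {"acetaminophen": 0.4, "opioid": 0.3, "bleach": 0.3}


def diagnose(findings):
    """Score each candidate substance against the observed findings,
    printing every step so the reasoning stays inspectable."""
    scores = {}
    for substance, rules in LIKELIHOODS.items():
        score = PRIORS[substance]
        for finding in findings:
            p = rules.get(finding, 0.01)  # small floor for unmodeled findings
            score *= p
            print(f"  P({finding} | {substance}) = {p:.2f}")
        scores[substance] = score
    total = sum(scores.values())
    return {s: v / total for s, v in scores.items()}  # normalize to posteriors


# A single-substance case with five findings, echoing the study's
# five-findings-per-case design.
posteriors = diagnose(["nausea", "miosis", "bradycardia", "ruq_pain", "jaundice"])
for substance in sorted(posteriors, key=posteriors.get, reverse=True):
    print(f"{substance}: {posteriors[substance]:.3f}")
```

Because each probability in the chain is printed, a reviewer can audit exactly which findings drove the ranking. That kind of step-by-step visibility is the transparency the authors argue may earn physicians' trust.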

The AI system, which they dubbed Tak, agreed with the human experts most of the time for straightforward cases and some of the time for moderately complex cases, but it fell behind on the complicated cases.

Still, it handily beat the decision-tree classifier across the board.

Chary et al. comment that probabilistic logic networks “can model toxicologic knowledge in a way that transparently mimics physician thought.”

Underscoring that the synthetic design of the experimental cohort makes the study a proof-of-concept project, they call for further research to figure out how their approach might translate to clinical practice for medical toxicologists.

Publishing their work in the July issue of Computers in Biology and Medicine, the authors conclude:

Physicians must trust an AI-based system to include it in their evaluation and treatment of patients. An algorithm can earn that trust through proficiency on complex cases and transparency. Tak demonstrates transparent clinical reasoning. This transparency, if preserved in more accurate models, may remove barriers to the use of AI approaches in clinical decision making. Even if a more detailed analysis of the limits of probabilistic logic networks suggests an unimprovably poor performance on complex cases, a transparent AI system may be useful by automating aspects [of] routine cases and in doing so freeing up expert time for more complicated cases.

Chary’s co-authors were Ed Boyer, MD, PhD, of Brigham and Women’s Hospital, and Michele Burns, MD, MPH, of Boston Children’s Hospital.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
