AI interprets radiology reports with 91% accuracy
Researchers from New York's Icahn School of Medicine at Mount Sinai have developed a machine learning model capable of interpreting radiologist reports, according to a study published in Radiology.
"The ultimate goal is to create algorithms that help doctors accurately diagnose patients," said first author John Zech, a medical student at the Icahn School of Medicine at Mount Sinai. "Deep learning has many potential applications in radiology—triaging to identify studies that require immediate evaluation, flagging abnormal parts of cross-sectional imaging for further review, characterizing masses concerning for malignancy—and those applications will require many labeled training examples."
In this study, researchers trained artificial intelligence (AI) to interpret x-ray, computed tomography (CT) and magnetic resonance imaging (MRI) reports. They developed a set of algorithms to teach the AI words such as "phospholipid," "heartburn" and "colonoscopy."
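The article does not detail how the vocabulary was learned, so the sketch below is an illustration only: it builds a vocabulary and a simple bag-of-words representation from a few invented report snippets using scikit-learn. The snippets and preprocessing choices are assumptions for the example, not the authors' actual pipeline.

```python
# Illustrative only: learning a vocabulary from (invented) report snippets
# with scikit-learn's CountVectorizer. Not the study's actual method.
from sklearn.feature_extraction.text import CountVectorizer

reports = [
    "No acute intracranial hemorrhage or mass effect.",
    "Small chronic infarct in the left basal ganglia.",
    "Acute subdural hematoma along the right convexity.",
]

vectorizer = CountVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(reports)  # documents x vocabulary count matrix

print(sorted(vectorizer.vocabulary_))  # the "words" the model has learned
print(X.toarray())                     # word counts per report
```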
"The language used in radiology has a natural structure, which makes it amenable to machine learning," said senior author Eric Oermann, MD, instructor in the department of neurosurgery at the Icahn School of Medicine. "Machine learning models built upon massive radiological text datasets can facilitate the training of future AI-based systems for analyzing radiological images."
A total of 96,303 radiologist reports on head CT scans were used to train the AI, which then identified concepts in report text with 91 percent accuracy.
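For context on how an accuracy figure like this is typically measured, here is a minimal, hypothetical sketch of training a text classifier on labeled report snippets and scoring it on held-out data with scikit-learn. The snippets, labels and choice of model are assumptions made for illustration and do not reflect the study's actual design.

```python
# Hypothetical sketch: classify report text for a single concept
# (e.g., hemorrhage present vs. absent) and report held-out accuracy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Invented examples; a real study would use tens of thousands of labeled reports.
texts = [
    "No acute intracranial hemorrhage.",
    "Acute subdural hematoma along the right convexity.",
    "No evidence of acute hemorrhage or infarct.",
    "Small parenchymal hemorrhage in the left frontal lobe.",
] * 10
labels = [0, 1, 0, 1] * 10  # 1 = hemorrhage described as present

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```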
"Research like this turns big data into useful data and is the critical first step in harnessing the power of AI to help patients," said study co-author Joshua Bederson, MD, professor and system chair for the department of neurosurgery at Mount Sinai Health System and clinical director of the Neurosurgery Simulation Core.