AI-powered peer review catches key findings missed by radiologists

AI could provide radiologists with significant value as an advanced peer review tool, according to new findings published in Academic Radiology. This comes at a time, the authors explained, when radiologists are being asked to interpret more medical images than ever before.

“While new scanner technology has improved the quality of scans, which enables the detection of more subtle findings, it has also increased the number of images generated for each examination,” wrote first author Balaji Rao, MBBS, Yale University School of Medicine, and colleagues. “The increasing volume (both in the number of examinations and images per examination) has placed an added burden on the practicing radiologist. On average, a radiologist currently interprets one image every three to four seconds. This increased workload may increase the chances of error and compromise the quality of care provided by radiologists.”

Rao et al. focused on intracranial hemorrhage (ICH) for their research, using an FDA-approved AI solution to assess CT scans for signs of ICH. The tool was then applied retrospectively to more than 5,500 scans that, according to the interpreting radiologist or trainee, showed no signs of ICH. All scans were performed in November and December 2017 at one of eight different imaging sites.
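
As a rough illustration of that workflow, the minimal Python sketch below runs a stand-in ICH detector over studies already reported as negative and collects the discordant cases for expert review. Every name in it is hypothetical; the article does not describe the commercial tool's interface, so this is only a sketch of the general design.

```python
# Minimal sketch of the retrospective peer-review pass described above. All
# names here (Study, ai_flags_ich, peer_review) are hypothetical stand-ins;
# the study itself used a commercial, FDA-approved ICH-detection tool.

from dataclasses import dataclass
from typing import List


@dataclass
class Study:
    accession: str          # unique identifier for the CT examination
    read_as_negative: bool  # original radiologist/trainee report found no ICH


def ai_flags_ich(study: Study) -> bool:
    """Stand-in for the commercial tool's per-study ICH prediction.

    Always returns False here so the sketch runs; in practice this would call
    the vendor's inference on the study's images.
    """
    return False


def peer_review(studies: List[Study]) -> List[Study]:
    """Return negative-read studies the AI nonetheless flags for possible ICH.

    These discordant cases are what the neuroradiologist panel adjudicated
    in the workflow the article describes.
    """
    return [s for s in studies if s.read_as_negative and ai_flags_ich(s)]


if __name__ == "__main__":
    backlog = [Study(accession=f"CT-{i:05d}", read_as_negative=True) for i in range(5)]
    print(len(peer_review(backlog)), "studies flagged for neuroradiologist review")
```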

Overall, the AI solution flagged a possible ICH on 28 of the CT scans. A team of three neuroradiologists reviewed each flagged study, determining that 16 had been correctly identified by the tool. The overall false-negative rate for the readers who originally interpreted the images was 1.6%, while the AI tool's false-positive rate was 32%.

“Given the potential impact diagnostic errors can have on patient outcomes, new AI tools and technology that can assist radiologists may be of great value as clinicians strive to continuously decrease error rates and improve patient care,” the authors wrote. “Automated triage of imaging studies with AI using deep learning techniques such as convolutional neural networks has the potential to achieve this.”
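
For readers unfamiliar with the convolutional networks the authors mention, the toy PyTorch sketch below shows the general shape of such a slice-level classifier. It is purely illustrative: the architecture of the FDA-approved tool evaluated in the study is not described in the article, and all labels and dimensions here are assumptions.

```python
# A generic, minimal convolutional classifier of the kind the authors mention
# for ICH triage. Illustrative only; it does not represent the commercial tool.

import torch
import torch.nn as nn


class TinyIchClassifier(nn.Module):
    """Binary slice-level classifier: ICH present vs. absent (assumed labels)."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel CT slice
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # collapse to 32 features
        )
        self.classifier = nn.Linear(32, 1)               # logit for "ICH present"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.classifier(feats)


if __name__ == "__main__":
    # Score a batch of four hypothetical 512x512 CT slices with random values.
    model = TinyIchClassifier()
    slices = torch.randn(4, 1, 512, 512)
    probs = torch.sigmoid(model(slices)).squeeze(1)
    print(probs)  # per-slice probability of ICH under this toy model
```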

The authors did note that determining the clinical impact of this AI solution's findings remains challenging at this stage. However, at least one example from the study shows how effective AI-powered peer review could be at improving outcomes.

“One of the patients whose ICH was not identified by the radiologist re-presented after six days with a substantial increase in the size of the hemorrhage,” Rao and colleagues wrote. “It is hoped that the widespread implementation of AI solutions may prevent such adverse clinical scenarios.”

Michael Walter, Managing Editor

Michael has more than 18 years of experience as a professional writer and editor. He has written at length about cardiology, radiology, artificial intelligence and other key healthcare topics.
