Radiology: Peer review fundamentally flawed

Like much of medicine, radiology suffers from a backward perspective on performance improvement. Most industries have moved beyond punitive programs like peer review, working instead to predict and prevent errors at the system level rather than report them at the individual one, explained the authors of a June perspectives article in Radiology.

Traditional quality improvement focused on individuals as the source of quality and performance issues, with error reporting serving as the feed for performance improvement. For almost every industry, this individual level of analysis had fallen out of favor by the 1980s.

“Decades of research have shown that individual error rates derived from reporting systems do little by themselves to improve, or even accurately measure, individual performance, for a variety of reasons,” offered David B. Larson, MD, MBA, from the department of radiology at Cincinnati Children’s Hospital in Ohio, and John J. Nance, JD, of John Nance Productions in Seattle.

Airlines provide a prototypical example. "Since the mid-20th century, aviation has been transformed from a relatively high-risk human endeavor into the extremely low-risk enterprise that it is today," noted Larson and Nance; a fatal accident now occurs in fewer than one out of every 9 million flights. Before moving away from individual error reporting, however, the industry's safety record was grimmer.

At present, one of the most common radiology quality assurance programs is peer review, typified by the American College of Radiology’s RadPeer. Under this and other initiatives, radiologists review samples of their peers’ reports and document whether they believe the reading radiologist was correct.

“After relying on a similar approach to human error for decades, it became increasingly apparent to aviation experts by the 1970s that the underlying causes of human failures were a systemic problem and had to be treated as such,” the authors noted.

A major turning point came in 1974, with the crash of Trans World Airlines (TWA) Flight 514. Initially treated as a case of individual pilot error, compounded by inclement weather, the mistake that left 92 people dead was shown by further investigation to have occurred repeatedly in near misses, and to be the result of a poorly designed aviation reporting system.

According to Larson and Nance, researchers in aviation and other sectors began to study why people make mistakes. Rather than finding that errors occurred as isolated cases or were clearly associated with poor individual performance, they discovered that "human errors tend to occur in relatively predictable ways and with relatively predictable frequency."

“It is a question of whether to study the what, when and how of an event or to simply focus on the who,” the authors continued. They argued that peer review was being operated as a punitive mechanism, intended either to coach or to judge the individual but frequently eliciting defensiveness by singling people out.

Measuring errors doesn’t fix them

Larson and Nance offered a host of reasons countering radiology’s reliance on error reporting for quality improvement. “By itself, quality measurement does not improve quality any more than a batting average improves hitting.” And because knowing error rates does not specify what needs to be fixed or how the department should go about preventing the problems, documenting errors can be a waste of resources.

What is more, the discrete act of measuring individuals’ errors ignores the systemic nature of these problems. An error committed by one person is likely to be committed by others, research indicates.

More specifically, the authors cited studies revealing that peer-review data are subjective, inaccurate and frequently marred by problems such as underreporting and bias. Larson and Nance also argued that, because error reporting is highly quantitative, it gives a false impression of accuracy.

Peer review error reporting “implicitly assumes that some radiologists are good and that some are bad. Research reveals this assumption to be overly simplistic; performance is highly domain specific.”

Larson and Nance posited a group of advantages to switching from individual error reporting to a systemic approach to quality improvement:
  • Feedback to individuals is still necessary, in part to ensure that mistakes are not repeated.
  • A systemic methodology enables individuals to learn from others’ mistakes without committing those errors themselves.
  • The approach is proactive; instead of waiting for outliers to commit errors, everyone can undergo education and training.
  • Areas of suboptimal performance can help identify systemic points that contribute to poor quality or performance.

Peer review should play an important role in acquiring these data and identifying cases of error and how and why they are likely to occur—rather than emphasizing who caused them. In this way, radiology and patients stand to gain significantly from peer review and performance improvement—but the impetus needs to shift from individual remediation to systematic improvement.
