JAMA commentary: Beware surveillance bias
Although health care is trending toward accountability and standards, more attention must be paid to the quality of the measurement tools themselves, according to Johns Hopkins University researchers in a commentary published in the June 15 issue of the Journal of the American Medical Association.
The science of outcomes reporting is young and lags behind the desire to publicly report adverse medical outcomes, wrote Elliott R. Haut, MD, associate professor of surgery at the Johns Hopkins University School of Medicine in Baltimore, and Peter J. Pronovost, MD, PhD, a Johns Hopkins professor of anesthesiology and critical care medicine.
Variation in surveillance is an important source of measurement error, the researchers remarked. “Surveillance bias, a nonrandom type of information bias, refers to the idea that ‘the more you look, the more you find,’” they wrote. “It occurs when some patients are followed up more closely or have more diagnostic tests performed than others, often leading to an outcome diagnosed more frequently in the more closely monitored group.”
For example, deep vein thrombosis (DVT) is a significant cause of preventable harm and a commonly monitored quality of care measure, the authors noted. “Because injured patients are at increased risk for DVT, some clinicians use duplex ultrasound to screen high-risk asymptomatic trauma patients for DVT. Other clinicians argue this approach is neither clinically necessary nor cost-effective and therefore do not routinely screen for DVT in trauma patients. This clinical uncertainty leads to variability in the use of screening duplex ultrasound, creating variability in rates of DVT identified and reported.”
One key to stopping DVT from becoming deadly is to prevent it, or to find it early and treat it. So the more a hospital tests for DVT, the higher its reported DVT rate. If a hospital has a high DVT rate, Haut posited, is it a place a patient should avoid? “Or is it a place that looks for DVT more aggressively—before any symptoms appear—and prevents DVT from progressing to a much more serious complication?” A reported DVT rate therefore says little about hospital quality, because it does not distinguish a hospital that is ignoring a potential complication from one that is successfully preventing a worse one.
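The effect is easy to see in a toy simulation. The Python sketch below (all numbers are hypothetical, chosen only for illustration) gives two hospitals an identical true DVT incidence but different screening intensities; the hospital that looks harder reports a much higher DVT rate even though its patients are at no greater risk.

```python
import random

random.seed(42)

TRUE_DVT_RATE = 0.10      # hypothetical true DVT incidence among trauma patients
SYMPTOMATIC_SHARE = 0.30  # hypothetical share of DVTs that cause symptoms and are found anyway
N_PATIENTS = 10_000

def reported_dvt_rate(screening_rate: float) -> float:
    """Simulate one hospital: every patient carries the same true DVT risk,
    but a DVT is diagnosed only if the patient is screened or symptomatic."""
    found = 0
    for _ in range(N_PATIENTS):
        has_dvt = random.random() < TRUE_DVT_RATE
        screened = random.random() < screening_rate
        symptomatic = random.random() < SYMPTOMATIC_SHARE
        if has_dvt and (screened or symptomatic):
            found += 1
    return found / N_PATIENTS

# Identical underlying risk, different surveillance intensity
print(f"Hospital A (screens 90% of patients): {reported_dvt_rate(0.90):.3f}")
print(f"Hospital B (screens 10% of patients): {reported_dvt_rate(0.10):.3f}")
```

Under these assumptions, Hospital A reports a DVT rate of roughly 0.093 and Hospital B roughly 0.037, a 2.5-fold gap produced entirely by how hard each hospital looks.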
Pronovost and Haut recommended several steps to help reduce the error caused by surveillance bias:
Those developing and reviewing outcome measures should ensure that the methods for surveillance are clearly explicated. “Ideally, evidence-based clinical guidelines should specify which patients at risk should be studied and clearly convey exact testing modalities and frequency,” they wrote.
Policy makers need to examine the costs and benefits of proposed outcome measures to enable rational prioritization of which measures to mandate; a toy illustration of this kind of analysis follows these recommendations. “Formal analysis such as the value of information analysis may help prioritize which outcome measures to collect and evaluate whether the costs of collecting a specific outcome measure are worth the benefits.”
Performance measures could link a process of care with adverse outcomes when defining incidences of preventable harm. “When standardized surveillance is too costly or risky, processes of care among those sustaining the outcome could be examined.”
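The “value of information analysis” mentioned above has a simple core: compare the best decision a policy maker can make under current uncertainty with the decisions that could be made if the uncertainty were resolved; the difference is the most it is worth spending to resolve it. The sketch below is a toy illustration only, with the probability and payoffs invented for the example rather than drawn from the commentary.

```python
# Toy expected-value-of-perfect-information (EVPI) calculation.
# Decision: mandate collection of a proposed outcome measure, or skip it.
# Uncertainty: does the measure mostly reflect true quality ("valid"),
# or mostly reflect surveillance intensity ("biased")?

p_valid = 0.5  # hypothetical probability that the measure is valid

# Hypothetical net benefits (benefit minus collection cost), arbitrary units
payoff = {
    ("mandate", "valid"):  10,   # measure drives real improvement
    ("mandate", "biased"): -4,   # collection cost plus a misleading signal
    ("skip",    "valid"):   0,
    ("skip",    "biased"):  0,
}

# Best expected payoff under current uncertainty
ev_mandate = p_valid * payoff[("mandate", "valid")] + (1 - p_valid) * payoff[("mandate", "biased")]
ev_skip = p_valid * payoff[("skip", "valid")] + (1 - p_valid) * payoff[("skip", "biased")]
best_now = max(ev_mandate, ev_skip)

# With perfect information, pick the best action in each state, then average
ev_perfect = (p_valid * max(payoff[("mandate", "valid")], payoff[("skip", "valid")])
              + (1 - p_valid) * max(payoff[("mandate", "biased")], payoff[("skip", "biased")]))

print(f"Best expected payoff now: {best_now:.1f}")    # 3.0 -> mandate anyway
print(f"With perfect information: {ev_perfect:.1f}")  # 5.0
print(f"EVPI: {ev_perfect - best_now:.1f}")           # 2.0
```

In this toy setup the policy maker should mandate the measure today, but would pay up to 2 units for a study that settles whether the measure is valid before doing so.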
“Performance measurement is essential for improving quality and reducing costs of medical care,” the authors concluded. “However, most outcome measures in use do not sufficiently standardize surveillance for events and those at risk for events, likely introducing substantial measurement error. If outcome measurement is to fulfill its purpose, greater attention to surveillance bias is needed.”