Circ: Two hospital ranking methods found lacking
An increasing focus on improving the quality of patient care has led to a burgeoning of individual performance measures, Zubin J. Eapen, MD, of the Duke Clinical Research Institute at Duke University Medical Center in Durham, N.C., and colleagues wrote. But assessing care based on many individual measures is cumbersome, which has prompted the use of composite measures for ranking hospitals.
“As the purpose of performance measures expands from aiding quality improvement efforts to potentially influencing reimbursement, the methodology used in combining several individual measures into a composite score gains increasing importance,” Eapen and colleagues wrote. “Given the numerous methodologies available for creating a composite score, developers must explore different approaches and compare the conclusions that different composite scores offer on the same set of measures.”
The researchers used data from the American Heart Association’s Get With The Guidelines-Coronary Artery Disease (GWTG-CAD) program obtained between 2006 and 2009 to compare the two principal methods. The opportunity-based method, which is applied by the Centers for Medicare & Medicaid Services (CMS) in the pay-for-performance program, tallies the number of times a required measure is performed and divides the sum by the total number of eligible opportunities across all patients at a hospital. The all-or-none method counts the total number of eligible patients who received all of the required measures divided by the total number of patients eligible for the care processes.
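The difference between the two methods can be illustrated with a short sketch. The code below is a minimal illustration on made-up patient data, not the study's actual computation: the function names and the toy records are assumptions, but the two formulas follow the definitions above (performed measures over all eligible opportunities vs. the fraction of eligible patients who received every measure for which they were eligible).

```python
# Minimal sketch (hypothetical data) contrasting the two composite scoring
# methods. Each patient is a list of care measures, each flagged with whether
# the patient was eligible for it and whether it was performed.

def opportunity_based_score(patients):
    """Performed measures divided by eligible opportunities, pooled across patients."""
    performed = sum(m["performed"] for p in patients for m in p if m["eligible"])
    eligible = sum(1 for p in patients for m in p if m["eligible"])
    return performed / eligible

def all_or_none_score(patients):
    """Fraction of eligible patients who received every measure they were eligible for."""
    eligible_patients = [p for p in patients if any(m["eligible"] for m in p)]
    perfect = sum(
        all(m["performed"] for m in p if m["eligible"]) for p in eligible_patients
    )
    return perfect / len(eligible_patients)

# Two toy patients: patient A misses one of three eligible measures;
# patient B receives both of two eligible measures.
patients = [
    [{"eligible": True, "performed": True},
     {"eligible": True, "performed": False},
     {"eligible": True, "performed": True}],
    [{"eligible": True, "performed": True},
     {"eligible": False, "performed": False},
     {"eligible": True, "performed": True}],
]

print(opportunity_based_score(patients))  # 4 of 5 opportunities -> 0.8
print(all_or_none_score(patients))        # 1 of 2 patients perfect -> 0.5
```

Note how a single missed measure leaves the opportunity-based score high (0.8) while cutting the all-or-none score to 0.5, which is consistent with the wider spread the study observed for all-or-none scores.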
For their analysis, the researchers identified 194,245 records of acute myocardial infarction patient cases submitted by 334 sites participating in both GWTG-CAD and the CMS pay-for-performance program between July 1, 2006, and June 30, 2009. They evaluated six measures for the composite score comparison, with primary outcomes of 30-day risk-standardized all-cause mortality and readmission rates.
They found the median opportunity-based score was 95.5 percent while the median all-or-none score was 88.9 percent. Both scores skewed positively, but the all-or-none scores were more evenly distributed.
Both composite measures correlated modestly with the 30-day risk-standardized mortality rate, but neither correlated with the 30-day risk-standardized readmission rate. Modifying either score by adding measures produced similar changes in rankings.
The authors wrote that greater variation in scores helps to tease out best practices among hospitals, but noted that because both methods ranked hospitals similarly, it was difficult to judge either superior. They recommended that additional studies be done to better gauge composite indices’ ability to discern quality.
“The lack of correlation between composite score and 30-day readmission is of even more concern,” Eapen and colleagues wrote. “Identifying process measures that are associated with early readmission and/or a composite scoring method that can integrate recommended processes of care together with 30-day readmission rates is of significant importance to hospitals as they face potential reductions in Medicare payments for excess hospital readmissions, beginning on Oct. 1, 2012.”
They cautioned that use of registry data was a study limitation because participation is voluntary and might not be representative of hospitals nationally. Additionally, the records submitted may not reflect the full spectrum of care at a given hospital.