State Medicaid directors criticize new CMS scorecard
CMS has released a new “scorecard” tracking state quality measures within Medicaid and the Children’s Health Insurance Program (CHIP), but the National Association of Medicaid Directors (NAMD) said it fails to offer a true apples-to-apples comparison of state performance.
The first version of the scorecard compiles measures voluntarily reported by states as well as federally reported measures on state health system performance, state administrative accountability and federal administrative accountability, like how long it takes CMS to review Medicaid waiver plans.
“Despite providing health coverage to more than 75 million Americans at a taxpayer cost of more than $558 billion a year, we have lacked transparency in the performance and outcomes of this critical program,” CMS Administrator Seema Verma, MPH, said in a statement. “The scorecard will be used to track and display progress being made throughout and across the Medicaid and CHIP programs, so others can learn from the successes of high performing states. By using meaningful data and fostering transparency, we will see the development of best practices that lead to positive health outcomes for our most vulnerable populations.”
Because much of the scorecard relies on voluntary reporting, not all states provided data on each measure. For example, the scorecard displayed no performance data for the use of opioids at high dosage in persons without cancer because only 14 states reported that measure.
Verma declined to discuss specific findings in the scorecard, telling reporters: “I will let you look at the data and make your own conclusions.”
Verma had previewed the scorecard in her speech at last year’s NAMD conference. The association called parts of the report “commendable,” such as its inclusion of federal performance measures, but said the scorecard does not account for differences between states’ Medicaid populations.
“A common axiom is: ‘You’ve seen one Medicaid program, you’ve seen one Medicaid program,’” the group said in a statement. “In other words, differences in eligible patient populations, covered benefits, delivery models, and types of measures used impact availability of data and calculation of results. Until these fundamental variances are addressed in the Scorecard, it will not be possible to make apples-to-apples comparisons between states.”