eCQMs & CDS: Two Sides of the Same Coin
Electronic clinical quality measures (eCQMs) and clinical decision support (CDS) share many similarities and common requirements, all with the goal of improving healthcare quality.
Recognizing the potential to better link the two, in late March the Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare & Medicaid Services (CMS) launched the Clinical Quality Framework (CQF), an open source project to set standards to better harmonize CDS and eCQMs. This work is being facilitated through the ONC’s Standards & Interoperability Framework.
“We know we need to do both to improve care,” says Amy Hewig, MD, MS, acting chief medical officer for the Office of the Chief Medical Officer at ONC—speaking during a kickoff event unveiling the CQF initiative. “One of the most important strategies we have at ONC is linking e-clinical quality measures to clinical decision support so providers can not only measure performance, but really improve it as well.”
“We have learned a lot since Stage 1 of Meaningful Use. We’ve learned what works and what doesn’t work so well about the electronic clinical quality measures, their development and implementation, certification and alignment—or not—with other programs,” says Kate Goodrich, MD, MHS, director of the Quality Measurement and Health Assessment Group at CMS, who also spoke during the event. “It’s important that CMS and ONC collaborate to share these learnings and develop a strategic and tactical plan to reach our next stage with standards development as well as clinical quality measure development.”
Meanwhile, 700-bed Medical University of South Carolina (MUSC), in Charleston, is a rare example of an organization that has managed to drive its eCQM reporting requirements through a coordinated CDS program. Its single, comprehensive organizational blueprint for meeting quality measure requirements helps its clinicians with both reporting eCQMs and practicing evidence-based medicine.
Gathering commonalities
Harmonization of the standards used for CDS and eCQM is required to reduce implementation burdens, promote integration between these two domains and facilitate care quality improvement, federal officials say.
“Part of the goal of this initiative is to come up with the things that are in common in clinical improvement and measuring quality, and to identify those things that should be in common, and then develop a common language to describe the things that are the same and the things that are different,” explains Doug Fridsma, MD, PhD, chief science officer and director at the ONC’s Office of Science and Technology, who spoke at the event.
Standards used for the electronic representation of CDS and eCQMs were not developed with each other in mind, and they take different approaches to patient data and computable expression logic, says Ken Kawamoto, CQF co-coordinator, speaking during the kickoff event.
However, “they are really two sides of the same coin in terms of measuring and improving healthcare quality,” he says. Both eCQMs and CDS require the ability to identify cohorts of patients based on logical combinations of patient data.
For example, a quality measure might identify patients with diabetes and whether they have gotten a hemoglobin A1c test during a particular timeframe. If they have, they have met that criterion; if they have not, they missed it, explains Kawamoto. For CDS, similar logic would prompt the clinician to order a hemoglobin A1c test if a patient with diabetes has not had one done within a specific timeframe.
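The overlap Kawamoto describes can be sketched in code: one cohort test serves both purposes, with the eCQM counting patients who meet it and the CDS flagging patients who don't. The patient fields, dates and 180-day lookback window below are illustrative assumptions, not taken from any real measure specification.

```python
from datetime import date, timedelta

# Hypothetical patient records; field names are illustrative only.
patients = [
    {"id": 1, "has_diabetes": True,  "last_a1c": date(2014, 1, 10)},
    {"id": 2, "has_diabetes": True,  "last_a1c": None},
    {"id": 3, "has_diabetes": False, "last_a1c": None},
]

def a1c_current(patient, as_of, window=timedelta(days=180)):
    """True if the patient has an HbA1c result within the lookback window."""
    last = patient["last_a1c"]
    return last is not None and (as_of - last) <= window

def in_denominator(patient):
    """Measure population: patients with diabetes."""
    return patient["has_diabetes"]

def in_numerator(patient, as_of):
    """Patients with diabetes who have a recent HbA1c result."""
    return in_denominator(patient) and a1c_current(patient, as_of)

as_of = date(2014, 4, 1)

# eCQM use: compute a performance rate over the cohort.
denom = [p for p in patients if in_denominator(p)]
numer = [p for p in denom if in_numerator(p, as_of)]
rate = len(numer) / len(denom)          # 1 of 2 diabetic patients -> 0.5

# CDS use: the same logic, inverted at the point of care,
# flags patients for whom an HbA1c order should be suggested.
needs_a1c_order = [p["id"] for p in denom if not in_numerator(p, as_of)]
```

The point of the sketch is that `in_numerator` is written once and reused: the measure aggregates it retrospectively, while the CDS rule evaluates it prospectively for the patient in front of the clinician.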
While CQMs measure adherence to a standard plan of care or care processes, CDS helps a physician or other stakeholder follow that standard plan of care. Moreover, Meaningful Use requires implementation of CDS rules for improvements related to certain outcomes of eCQMs. “There is a lot of clear overlap,” Kawamoto says.
Case study
It’s currently difficult to share logic between CDS and eCQM. However, executives from MUSC have developed a unique, coordinated CDS program that supports meeting quality measure requirements so “you can see everything on one page and isolate where things overlap and where they are filled,” according to Elizabeth Crabtree, MPH, PhD, director of evidence-based practices and assistant professor, who spoke at the Healthcare Information & Management Systems Society’s annual conference in February.
Crabtree later told Clinical Innovation + Technology that she knows of no other providers linking eCQMs and CDS this way.
To help MUSC handle the abundance of reporting and performance programs required by certification bodies, federal agencies, payers and the organization itself, the CDS program drives a single reporting structure at the enterprise level.
“We have so many quality measures flying around, and they all look the same but they are all different,” says Crabtree, who noted MUSC reports on 250 different measures. As such, the organization decided to review the nuances between different measures and establish consistent definitions across the board. But getting there meant breaking down the silos of the many quality improvement initiatives in the organization and creating a unified structure to get data out.
To handle this, the organization developed the CDS Oversight Committee, which designs and implements CDS tools that drive evidence-based practice with data capture for reporting built in, directly linking quality measures to care. “This tackles workflow, not just lists,” she says.
This committee takes on the arduous task of harmonizing the measure concepts used in multiple reporting programs that are applied with differing specifications, versions of specifications and submission mechanisms. MUSC reviews measures across care settings and groups them by family—or a group of CQMs related to a single care process or disease state, such as diabetes—to identify common structured data requirements and assess their impact on measurement.
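The family-grouping step described above can be illustrated with a short sketch: group measures by family, then intersect each family's data requirements to surface the structured data elements shared across measures. The measure names and data-element labels here are hypothetical placeholders, not MUSC's actual specifications.

```python
from collections import defaultdict

# Illustrative measure inventory; names and data elements are hypothetical.
measures = [
    {"name": "measure_a", "family": "diabetes",
     "data_elements": {"diagnosis", "hba1c_result", "encounter"}},
    {"name": "measure_b", "family": "diabetes",
     "data_elements": {"diagnosis", "eye_exam", "encounter"}},
    {"name": "measure_c", "family": "hypertension",
     "data_elements": {"diagnosis", "blood_pressure", "encounter"}},
]

# Group measures by family (e.g., all diabetes-related CQMs together).
families = defaultdict(list)
for m in measures:
    families[m["family"]].append(m)

# For each family, intersect data requirements to find the elements
# every measure in that family needs -- candidates for shared capture.
common_elements = {
    fam: set.intersection(*(m["data_elements"] for m in ms))
    for fam, ms in families.items()
}
```

Here `common_elements["diabetes"]` would contain only `diagnosis` and `encounter`, the elements both diabetes measures require, which is the kind of shared structured-data requirement the committee harmonizes once rather than per measure.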
Within the committee is a CDS workgroup that works in collaboration with MUSC’s EHR development and operations committee and ad-hoc content expert teams on a number of activities, including identifying targets for new and improved CDS interventions; designing, validating, developing and integrating such interventions into clinical workflows; and evaluating the interventions’ effect, says Itara Barnes, manager of regulatory analytics at MUSC, who also spoke at the HIMSS session.
“It allowed us to shift focus not on the measurement for the sake of measurement, but to see how it is meaningfully implemented to improve care,” says Crabtree. “Everyone has a place in the process.”
Barnes says the changes are starting to work, but they are still waiting to learn their final reporting rates. “I expect we’ll see this reflected in the final performance rates,” she says.
Moving forward
With federal health agencies driving efforts to harmonize the standards underlying eCQMs and CDS, the hope is to achieve something similar to what MUSC is accomplishing: better quality care.
Bringing these elements together fits with a key focus in the recommendations for Meaningful Use Stage 3: support of CDS. Also, the next edition of EHR technology certification criteria calls for improved interoperable exchange for transitions of care and CDS.
CDS and eCQM harmonization is expected to move rapidly, with the S&I Framework conducting a series of pilots. The framework team is working to fill the gaps by defining syntax and semantics for each quality measure, says Fridsma.
Once the necessary tools are in place, ONC will look to adopt them as part of the certification process, which entails vetting from federal advisory committees and the public to make sure they are appropriately validated, says Fridsma. “This is the beginning of important activities that will impact not only the work we’re doing on Meaningful Use, but other things that are happening out there in the healthcare ecosystem.”