Beyond Alerts: Linking CDS to Evidence-Based Medicine
Alliance of Chicago’s dashboards can track performance at a glance, including type 2 diabetes quality measures. Screenshot courtesy of Chicago Alliance of Community Health Centers.
In a recent review of 70 controlled trials, researchers from Duke University Medical Center and Old Dominion University’s College of Health Sciences found that CDS systems significantly improved clinical practice in more than two-thirds of trials. The research identified four CDS features that are predictors of improved clinical practice:
- Automatic provision of decision support as part of clinician workflow
- Provision of recommendations, rather than assessments only
- Provision of decision support at the time and location of decision making
- Computer-based decision support
In the scope of meaningful use, the Department of Health and Human Services (HHS) defines CDS systems or tools as those that provide practitioners with “general and person-specific information, intelligently filtered and organized, at appropriate times, to enhance health and healthcare.”
The evidence base
Meaningful use wasn’t on the radar when the Chicago Alliance of Community Health Centers embarked on its CDS strategy some five years ago. “We started out to use Health IT to improve healthcare quality and access,” says Fred D. Rachman, MD, CEO of the Alliance, which began as a group of four safety net hospitals and is now engaged with 24 additional healthcare centers in 10 states.
The Alliance utilized funding from the Department of Health and Human Services’ Health Resources and Services Administration (HRSA) and the Agency for Healthcare Research and Quality (AHRQ) to use commercial technology. “We partnered with GE Healthcare to use its Electronic Health Record System and data warehouse technology, and with the AMA [the American Medical Association] to layer in performance and consensus practice standards that they were developing for key conditions,” Rachman says. “We preloaded relevant data from the paper record that would help us populate these measures, so we’ve been tracking some of them virtually since go-live. As the data accumulates, we are able to report on more of the measures.”
“We are able to publish aggregate quality and health status data via a data warehouse that draws de-identified patient data from all sites that are live on the EMR. These are issued as quality performance dashboards across all of the participating centers in our initiative,” he says. “These show comparison across organizations, and can be drilled down by each individual organization by site, by practitioner, or by health disparity group, or other populations.”
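The roll-up Rachman describes—aggregating de-identified records in a warehouse, then letting each organization drill down by site, practitioner, or population—can be sketched as a simple grouped calculation. This is an illustrative example only; the Alliance’s actual warehouse schema and measure names are not described in the article, so every field and value below is invented.

```python
# Hypothetical sketch of a quality-dashboard roll-up. Column names,
# measure names, and data are invented for illustration.
from collections import defaultdict

# Each row: a de-identified patient record noting site, practitioner,
# and whether the patient met a given quality measure.
records = [
    {"site": "A", "practitioner": "Dr1", "measure": "hba1c_tested", "met": True},
    {"site": "A", "practitioner": "Dr2", "measure": "hba1c_tested", "met": False},
    {"site": "B", "practitioner": "Dr3", "measure": "hba1c_tested", "met": True},
]

def performance_by(records, level):
    """Percent of patients meeting the measure, grouped at the given
    level (site, practitioner, ...) -- the drill-down the dashboard exposes."""
    met = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r[level]] += 1
        met[r[level]] += r["met"]
    return {k: round(100 * met[k] / total[k], 1) for k in total}

print(performance_by(records, "site"))          # {'A': 50.0, 'B': 100.0}
print(performance_by(records, "practitioner"))
```

The same grouping function serves every drill-down level, which is why a single warehouse can feed dashboards for organizations, sites, practitioners, or disparity groups alike.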
Numbers are only part of the story
Based on five years of accumulating data, “it’s very humbling because we came from an era where we were doing quality audits on very limited samples, where we got to identify which patients were included. Now we have universal data on all patients who meet criteria, whether they are actively engaged in care or not. When we first looked at the performance measure data when we went live and were finally seeing a true, universal, population picture, the results were surprising and disappointing. As time has gone on, we have seen gradual improvement in the measures, for the most part,” says Rachman.
Universal population reporting also means taking on the challenges of harder-to-reach populations and problems: “When we undertake efforts to improve performance on particular measures, we are more keenly aware that there are many significant patient, community and health system factors that are beyond what an individual institution can impact,” he says.
“We’re reporting on AMA Physician Consortium Measures [which are now NQF-endorsed measures] and are by and large forming the core measures … for both pay-for-performance and meaningful use,” he says.
One of the challenges going forward is that “all of these uses of technology are predicated on having efficient and accurate data capture strategies. We must assure that clinicians are capturing data at the point of care, in the right way for the system to recognize it and use it for CDS, and to be recognized in calculating a performance measure,” says Rachman.
A second challenge is the way CDS is presented to the clinician and where it is integrated in the workflow, “which is both an art and a science,” Rachman adds. “As we’re seeing more practices adopt technology and get more experience with its use, there will be continued development and learning about forms and strategies of [CDS] and how it is integrated into practice.”
A dashboard view
Summit Medical Group is using Clinical Quality Solutions (CQS), a CDS tool integrated with its Allscripts EHR, to build its evidence base—and physicians are seeing results in point-of-care decision support, says Robert W. Brenner, MD, MMM, CMO of Summit Medical Group, a for-profit, physician-owned multispecialty group based in Berkeley Heights, N.J. Summit employs about 140 FTE clinicians at nine sites in four counties.
CQS was originally designed for PQRI queries for CMS, Brenner says. “CQS allows us to have a dashboard for each physician so that when a patient arrives, a physician can click on an icon within [the EHR] and up comes a dashboard of quality metrics that physician has decided to monitor.” Physicians can see how they measure up to evidence-based metrics, and can access patients’ previous values to see what the trend has been.
“A lot of our quality data and outcomes data are retrospective: This gives real-time [data],” he says. “Not only can our physicians see what they’re doing with that patient, they can see how they’re doing overall with all their patients compared with our group and the national standard.”
“For disease management, this puts us ahead of the curve,” he says. “It’s giving us a lot of leverage with our physicians to make some behavioral changes in how they’re treating quality measures and outcomes.”
CQS has been fully implemented at Summit for about a year, and physicians must show they are measuring their self-selected quality metrics, and whether patient care has improved. “They’re starting to see the value of it in patient care,” Brenner says. “And there are rewards related to it—we’re getting PQRI dollars from this.” Last year, Summit received $239,000 from CMS based on improved outcomes, he says.
Summit uses its registries to identify diabetic patients with multiple complicating conditions, which are flagged as high-risk in the system. “We’re getting [those patients] to the office the week before they’re scheduled for an appointment [to] see what things they’re missing that might be helpful. Did they have an appointment with an endocrinologist? Did they have an appointment with a cardiologist?” Brenner says.
This information shows up on the EHR when the patient arrives for an appointment, alerting the physician to take additional action. Summit call center personnel also assist patients in setting up appointments and calling them with reminders, he adds.
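The registry workflow Brenner describes—identify diabetic patients with multiple complicating conditions, flag them as high-risk, and surface the specialist visits they are missing before the appointment—can be sketched as follows. This is not Summit’s actual CQS logic; the patient fields, condition names, and specialist list are all hypothetical.

```python
# Illustrative registry sketch (hypothetical data model, not CQS itself):
# flag diabetic patients with multiple complicating conditions and list
# the specialist visits they lack, for pre-visit outreach by a call center.

def flag_high_risk(patients, required_specialists=("endocrinologist", "cardiologist")):
    """Return high-risk diabetic patients and the specialists they have not seen."""
    flagged = []
    for p in patients:
        # "High-risk" here is an assumed rule: diabetes plus at least
        # two other complicating conditions.
        if "diabetes" in p["conditions"] and len(p["conditions"]) >= 3:
            missing = [s for s in required_specialists if s not in p["visits"]]
            flagged.append({"id": p["id"], "missing": missing})
    return flagged

patients = [
    {"id": 1, "conditions": {"diabetes", "hypertension", "ckd"},
     "visits": {"endocrinologist"}},
    {"id": 2, "conditions": {"diabetes"}, "visits": set()},
]

print(flag_high_risk(patients))
# [{'id': 1, 'missing': ['cardiologist']}]
```

In Summit’s workflow the output of a query like this feeds two channels: a flag on the EHR screen when the patient arrives, and a work list for call center staff to schedule the missing appointments.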
Since implementing the system, Summit’s compliance with diabetes care guidelines has risen from 8 percent on the three-measure bundle—lipids, hypertension and hemoglobin A1C—to more than 19 percent, or more than twice the national average, according to Brenner. Likewise, Summit’s complete pediatric immunization rate has risen from 68 percent to 81 percent, he says.
“We’ve changed these things by querying our database and using registries, then going back and using our call center. It has had a major impact.”
In addition to physician-selected measures, Summit reports on HEDIS and PQRI measures, Brenner says.
Beyond the usual structures
“About 12 years ago, we embarked on a strategy, the MemorialCare Physician Society, to look at a variety of quality and educational and other physician-related issues beyond the boundaries of the usual medical staff structures,” says Harris R. Stutman, MD, executive director of clinical informatics and research at MemorialCare Health System, a five-hospital organization based in Fountain Valley, Calif.
Since its inception in 1997, the focus has been on developing and implementing evidence-based medicine and best practices. Clinicians are divided into 15 best practice teams, and each team has, where appropriate, at least one physician from each hospital as well as nurses, pharmacists and other clinicians, says Stutman. These teams define best practices and evidence-based medicine, “particularly as it relates to order sets, clinical guidelines and clinical pathways for our organization,” he adds.
For several years, however, “as the evidence-based content was finalized, it went onto paper, which was filed away on each hospital unit, and nobody could find it when they needed it,” says Stutman. “That was a major impetus for us to electronify our approach to dissemination of the ‘approved’ content [data].”
Accordingly, MemorialCare started an EHR procurement process in 2003, selected a vendor (Epic) in 2004, and began taking the EHR live in 2006. The last hospital in the network went live with the EHR earlier this year, he says. “We’ve been very pleased with [the ability] to seamlessly deliver all of these best practice order sets and now, by extension, clinical guidelines and rules and reminders and order defaults, and all sorts of other things at the exact point of clinical decision-making through the mechanism of our EHR.”
MemorialCare uses Zynx content to “prime the pump,” migrating content from its staging area to the EHR, says Stutman. MemorialCare’s Best Practice Group clinicians build the CDS in Zynx, and after a comment period results in broad clinical consensus, the EHR Design and Build Group builds it into the Epic EHR, “using the Zynx-based content as the template,” Stutman says.
“Physicians not only have a say in what’s in the best practice order sets and experience-based medicine, but they can much more easily access them, and physicians’ use of these order sets has been excellent,” says Stutman.
Physicians are less receptive to interactive reminders, rules and alerts, according to Stutman, because they often find them interruptive to their work flow. “But we’ve tried to keep those kinds of things very focused.” The rate at which a rule, reminder or alert results in a physician changing an ordering decision is between 10 and 30 percent at MemorialCare, depending on the specific type of alert. That rate “is similar to what you find in the informatics literature,” he says.
The organization reports on 80 to 100 quality measures, plus “specific reports” required for California agencies and organizations. There have been no major surprises, Stutman says: “We anticipated that we needed a robust library of order sets going forward. We probably have about 900 order sets [and] about 125 are uniquely evidence-based. Another 600 to 650 are experience-based or convenience-based order sets.”
MemorialCare also sets “bold goals” at the enterprise level: “They’re marks we put on the wall higher than we can currently jump,” Stutman says. “One of our bold goals is that we want our patients, based on the core measures defined by CMS, to get perfect care greater than 95 percent of the time. A goal of 100 percent might be impossible, but the challenge has significantly improved our care process.” For AMI (acute myocardial infarction) patients, for example, there are seven core measures plus two other related measures: in-hospital mortality and readmission rate.
“We want to be compliant on all nine measures for every one of our AMI patients,” he says, and the organization is using a variety of electronic capabilities to meet that goal.
Not just ‘plug and chug’
“Clinical decision support is the bedrock that drives your patient quality,” says Robert Murphy, MD, chief medical informatics officer at Memorial Hermann Healthcare System, a network of 11 hospitals in the Greater Houston area. When it comes to CDS and evidence-based medicine, “often order sets, alerts and reminders are the first things people focus on, and they may have some of the greatest opportunities, but I think to be effective, it’s a really comprehensive workflow analysis of how you can inject the right information at the right time to improve care,” Murphy says.
“Part of the analytics question of CDS is, ‘is this making a difference to improve care?’ [And not just in terms of] ‘what was the alert/response rate?’ But it is looking at the clinical outcomes that are tied to those interventions,” says Murphy. “So over the last nine months, we have been producing monthly [reports on] the safety events related to patient harm. Beneath that, we have our near misses and we have what we call the good catches. These are [definite] prevention of medical error, and the CDS is the base of that pyramid on the good catches,” he says.
Memorial Hermann now reports approximately 750 errors per month “that are absolutely prevented by the CDS,” he says. “Once you have these [tools] in place, they’re always on the watch to protect and make systems—especially medication management systems—as safe as possible.”
In addition to required core measure reporting, the clinical and operational leadership at Memorial Hermann determines what to measure, such as blood glucose measurement in intensive care units. “There’s not a federal requirement for us to report this, but it’s an important part of patient care, so we have robust reporting around that,” Murphy says.
He cites two examples where evidence-based medicine in the CDS system made a difference in care at Memorial Hermann. In the first instance, “with our CDS interventions and supporting education, we achieved a 29 percent relative reduction in the administration of warfarin with an [international normalized ratio] INR greater than 3.5. This resulted in a lower administration of vitamin K as an antidote, lower episodes of bleeding, lower number of blood transfusions, shorter length of stay and overall costs, with a total estimated annual cost reduction of $917,000 for these patients,” he says.
“There are times when an alert fires, it may be overridden … and when you look only at the raw data—did the alert get overridden?—it can be a little discouraging,” he says. Despite the overrides, however, changes in behavior occurred downstream: “We did see the correlation of the improvements in warfarin administration related to [having] the alert in the computer,” Murphy says. “Even though the alert rates were in the range of acceptance of only about 40 percent, the clinical actions that were changed within six hours were almost 80 percent.”
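The distinction Murphy draws—acceptance measured at the moment the alert fires versus clinical actions changed within a window afterward—can be made concrete with two different metrics computed from the same alert log. The field names, timestamps, and window below are hypothetical, chosen only to illustrate why the two rates can diverge.

```python
# Hypothetical alert log: each entry records whether the alert was
# accepted when it fired, when it fired, and when (if ever) a
# corrective clinical action followed. All data is invented.
from datetime import datetime, timedelta

alerts = [
    {"accepted": False, "fired": datetime(2010, 1, 1, 8, 0),
     "action": datetime(2010, 1, 1, 12, 30)},   # overridden, but acted on later
    {"accepted": True,  "fired": datetime(2010, 1, 1, 9, 0),
     "action": datetime(2010, 1, 1, 9, 5)},     # accepted immediately
    {"accepted": False, "fired": datetime(2010, 1, 1, 10, 0),
     "action": None},                           # overridden, never acted on
]

def acceptance_rate(alerts):
    """Percent of alerts accepted at the moment they fired."""
    return 100 * sum(a["accepted"] for a in alerts) / len(alerts)

def action_rate(alerts, window=timedelta(hours=6)):
    """Percent of alerts followed by a clinical action within the window."""
    acted = sum(1 for a in alerts
                if a["action"] and a["action"] - a["fired"] <= window)
    return 100 * acted / len(alerts)

print(round(acceptance_rate(alerts)))  # 33 -- raw override data looks discouraging
print(round(action_rate(alerts)))      # 67 -- downstream behavior change is higher
```

The first alert in the sample is the interesting case: it was overridden at the bedside yet still produced a corrective action within the window, which is exactly the gap between Memorial Hermann’s roughly 40 percent acceptance and almost 80 percent downstream action.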
“There can be multiple interventions, not just the alert alone, but I’m pleased to see that it was one component that helped us continue to improve care,” Murphy says.
The second example was the case of erythropoiesis-stimulating agents (ESAs) and hemoglobin level. “In February 2007, a major article was published in The Lancet warning that this class of drugs should not be administered when the patient’s hemoglobin was greater than 10, rather than the previous threshold of 12,” says Murphy. “We implemented our CDS interventions in June 2007, six months before the FDA issued its black box warning, and showed a 30 percent relative reduction in inappropriate ESA administration, and corresponding decreases in thrombotic-related complications, length of stay and cost.”
Both cases illustrate the integration of CDS and evidence-based medicine, as well as the underlying human factors at work. “The power of the computer is it can quickly introduce [evidence] enterprisewide. All of the other elements are important as well as adjuncts, but left alone, you see how slowly evidence diffuses out among caregivers and other mechanisms of communication.” CDS and evidence-based medicine require both “a rapid assessment of evidence and literature, and the human factors of education and reinforcement,” says Murphy.
“It’s not just a plug-and-chug and a rule on your computer.”