AHIMA: Standard terminologies required for MU
CHICAGO—The goals of Meaningful Use are meant to be accomplished by using the EHR to encode, structure and communicate data with providers and systems, according to speakers at a session on standard terminology challenges at the American Health Information Management Association’s (AHIMA) 84th Conference & Exhibition.
Many standards exist, but standardization is needed for interoperability, said Amy Sheide, RN, of 3M Health Information Services. The common Meaningful Use dataset has 16 elements that should be recorded as structured data and used in transmitting data and summaries of care. There are limitations and variations among the standard terminologies; they apply terms differently, for example. "Compliance can be quite intensive as a result," she said.
No single terminology covers all healthcare domains, Sheide said. Utilization and application are inconsistent among users; architectures, formats and release schedules vary; and maintaining standard terminologies is time- and resource-intensive, taking away from time spent on patient care. Standard terminologies also may not contain local concepts.
Homegrown terminologies that use local codes must be mapped to better-known vocabularies to be considered valid for Meaningful Use requirements. When it comes to meeting clinical quality measure objectives with standard terminologies, multiple values need to be captured, and those values may live in multiple value sets and in multiple measures, Sheide said. “Differences between the measures are going to be observed because of inconsistent application by the measure stewards,” she said.
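The kind of local-to-standard mapping Sheide describes is often implemented as a simple crosswalk table. A minimal sketch is below; the local codes, target terminologies and target codes are illustrative placeholders, not verified clinical content.

```python
# Sketch of a homegrown-to-standard code crosswalk. All codes here are
# illustrative placeholders, not verified clinical content.
LOCAL_TO_STANDARD = {
    # local code -> (standard terminology, standard code)
    "DX-ESRD":    ("SNOMED CT", "46177005"),  # assumed: end-stage renal disease
    "LAB-GLU-01": ("LOINC", "2345-7"),        # assumed: a glucose lab result
}

def to_standard(local_code):
    """Return the (terminology, code) pair for a local code, or None if unmapped."""
    return LOCAL_TO_STANDARD.get(local_code)
```

Unmapped local codes returning `None` are exactly the gap Sheide warns about: data recorded under them cannot be validated against Meaningful Use value sets.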
For example, some people use the term end-stage renal disease while others use N-stage renal disease. Such discrepancies undermine consistent capture of patient data and make it difficult to ensure that everyone receives the same quality of care across organizations. They also limit the interoperability of systems, especially when trying to achieve the most accurate data capture.
The question now, Sheide said, is how to make standard terminologies operate within systems and how to integrate legacy data. Up-to-date mapping is required so that providers not only meet Meaningful Use but also achieve correct data capture, she said.
Terminologies are context specific, said Susan Matney, MSN, RN, also with 3M, and each has its own identifiers. The best way to run decision support is to have one source with a single type of identifier you can map to, she said. “You need a standalone terminology or vocabulary server that is system-agnostic.” Whatever data you’re interfacing should link to the code you need. “Vocabulary servers need breadth and depth of content,” and the best one available right now is SNOMED CT. “It’s not perfect and we know that, but it’s a robust container that you can query and sort.”
Facilities use either point-to-point or centralized mapping, Matney said, but “point-to-point mapping is a mess when you’re trying to do queries. Centralized mapping works the best.”
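Matney’s preference for centralized mapping can be illustrated with a back-of-the-envelope count (a sketch; the function names are ours, not hers): with point-to-point mapping, every pair of interfaced systems needs its own map, while a central terminology needs only one map per system.

```python
def point_to_point_maps(n_systems):
    # One map for every unordered pair of systems: n * (n - 1) / 2.
    return n_systems * (n_systems - 1) // 2

def centralized_maps(n_systems):
    # One map from each system to the central terminology.
    return n_systems
```

For 10 interfaced systems, that is 45 pairwise maps to build and maintain versus 10 centralized ones, and the gap widens quadratically as systems are added.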
Systemwide updates are needed when you have clinical quality measures because they “don’t contain the latest and greatest codes,” Matney said. This summer, the National Library of Medicine (NLM) and the Office of the National Coordinator for Health IT launched an aggressive effort, in collaboration with the value set authors, to ensure the validity of all Meaningful Use Stage 2 value sets, and the effort is nearing completion, she said.
“We’re excited about that,” she said, because it establishes the NLM as a single authority and a value set authority center. The old Meaningful Use value sets were created by domain experts, but now the NLM is going to make them consistent and, eventually, Stage 2 compliant.