Deep learning improves identification of adverse drug events from EHRs

Using a deep-learning model, a University of Massachusetts Lowell research team was able to significantly improve the extraction of adverse drug events (ADEs) from electronic health records (EHRs).

The findings underscore that deep learning could help better identify ADEs, which are costly injuries for patients and facilities alike and a leading cause of hospitalizations.

In testing, the model achieved an F1 score of 65.9 percent, outperforming the best existing model for extracting ADEs, which scored just 61.7 percent, according to research published in JMIR Medical Informatics. (The F1 score, the harmonic mean of precision and recall, is a standard measure of extraction performance.)
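For readers unfamiliar with the metric, the F1 score balances precision (how many extracted ADEs are correct) against recall (how many true ADEs are found). A minimal sketch, using hypothetical precision and recall values rather than figures from the study:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example (not the study's actual precision/recall):
# 70% precision and 62% recall yield an F1 of about 0.658,
# in the neighborhood of the 65.9% reported by the researchers.
print(round(f1_score(0.70, 0.62), 3))
```

Because F1 is a harmonic mean, it penalizes imbalance: a model that finds every ADE but with poor precision, or vice versa, scores worse than one that balances both.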

“Our deep-learning model achieved state-of-the-art results, which is significantly higher than that of the best system in the Medication, Indication and Adverse Drug Events (MADE) 1.0 Challenge,” Hong Yu, PhD, of the department of computer science at the University of Massachusetts Lowell and corresponding author of the research, et al. wrote. “Deep-learning models can significantly improve the performance of ADE-related information extraction.”

An ADE is described as an injury that results from a medical drug intervention, and ADEs account for about 41 percent of all hospital admissions, according to researchers. Because of this, ADEs typically also bring prolonged hospital stays, increasing the economic burden on a facility. For example, the annual cost of ADEs for a 700-bed hospital is about $5.6 million, which is why ADE detection and reporting are “crucial” for drug-safety surveillance, researchers said.

Traditionally, ADEs are discovered using the FDA Adverse Event Reporting System (FAERS). However, underreporting and missed drug-exposure patterns are the most common challenges associated with the FDA’s system, and other headwinds exist as well.

“First, the objective and content of the report in FAERS change over time, which may confuse physicians and the general public,” Yu et al. wrote. “Second, patients may choose not to mention some reactions, due to which practitioners fail to report them. Third, ADEs with long latency or producing unusual symptoms may be unrecognized.”

For their deep-learning model, researchers trained and tested it with the MADE 1.0 Challenge dataset, which consists of more than 1,000 EHR notes of cancer patients.

“All our models outperformed the existing systems in the MADE 1.0 Challenge, which may be because of the following reasons: First, our models benefited from deep learning that is able to learn better from the data,” Yu et al. wrote. “Second, we enriched the features of deep-learning models; therefore, our model outperformed the system that used similar deep-learning models as ours.”

After seeing promising results, the research team said it believes its “results can facilitate research on ADE detection, (natural-language processing) and machine learning.”


Danielle covers Clinical Innovation & Technology as a senior news writer for TriMed Media. Previously, she worked as a news reporter in northeast Missouri and earned a journalism degree from the University of Illinois at Urbana-Champaign. She's also a huge fan of the Chicago Cubs, Bears and Bulls. 
