AI quickly sniffs out COVID in virtual-visit notes
It was late April when the CDC added impaired taste and/or smell to its list of COVID-19 symptoms. Thanks to AI and natural language processing (NLP), researchers at the Medical University of South Carolina had beaten the federal agency to the punch.
In fact, the team had already recommended its ER staff ask about taste and smell as part of a standard COVID workup.
The accomplishment “demonstrates the value of a data-driven approach for the identification of relevant symptoms in novel infections such as the one at the root of this rapidly evolving pandemic,” write MUSC team members in a case report posted online May 25 in the Journal of the American Medical Informatics Association.
Jihad Obeid, MD, Leslie Lenert, MD, MS, and colleagues arrived at their finding by using deep learning to parse unstructured clinical notes whose content had been gathered verbally during telehealth visits.
The technique achieved only modest performance (AUC = 0.729) for predicting COVID-positive lab tests. The authors suggest this may have been due to significant noise in the notes, an unavoidable byproduct of templated text and patient-entered data.
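For readers curious how such a pipeline fits together, below is a minimal illustrative sketch of the general approach the article describes: training a text classifier on free-text telehealth notes and scoring its discrimination with AUC. It is not the MUSC team's actual model, whose deep learning architecture is detailed in the JAMIA report; the scikit-learn baseline and the placeholder notes here are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the MUSC team's published model.
# Shows the general recipe: classify unstructured clinical notes,
# then evaluate discrimination with AUC (the paper reports 0.729).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical placeholder data: free-text telehealth notes paired
# with COVID-19 lab results (1 = positive, 0 = negative).
notes = [
    "patient reports loss of taste and smell, mild fever",
    "cough and congestion, no anosmia reported",
    "fever, fatigue, and new loss of smell for two days",
    "seasonal allergies, sense of smell intact",
]
labels = [1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    notes, labels, test_size=0.5, stratify=labels, random_state=0
)

# Turn free text into TF-IDF features, then fit a linear classifier.
# (The MUSC study used deep learning; a bag-of-words baseline keeps
# this sketch short and runnable.)
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Score held-out notes and measure discrimination with AUC, the same
# metric the authors report for their model.
scores = clf.predict_proba(vectorizer.transform(X_test))[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```

In a real deployment, the labels would come from lab results in the EHR, and the risk scores could be used to rank patients for testing, which is essentially the risk stratification the authors describe.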
Despite the noise, the model’s results were strong enough to prompt a recommendation to prioritize testing of patients who mentioned trouble with taste or smell.
“Even with an imperfect model, it was possible to risk-stratify the population, helping direct resources to patients in most need,” the authors comment in their discussion.
Informatics tools such as NLP and AI methods, they conclude, “can have significant clinical impacts when applied to data streams early in the development of clinical systems for outbreak response.”