Physicians’ behaviors are nearly all AI needs to head off faulty drug prescriptions

Counterintuitively, many errors in drug ordering are caused or worsened by the intricacies of the EHR itself. These problems often stem from mistakes made at the clinician–computer interface.

To avert the resulting risk, researchers have designed an AI-based system that can flag errors based solely on the ordering clinician's behavior and the context in which the order is placed.

The innovation may not only reduce prescription-drug errors but also relieve pharmacists’ workloads while better ensuring patient privacy and security, the team suggests.

The work was carried out at New York University and is described in a study published Oct. 5 in JAMIA Open.

To build and test their system, Martina Balestra, PhD, of NYU’s Center for Urban Science and Progress in Brooklyn and colleagues collected data on drug prescribers’ actions over a two-week period at an academic medical center.

From these data they built a machine learning classification model to predict which drug orders were questionable enough to warrant a pharmacist's review.

Internally validating the model, they found it achieved an area under the receiver operating characteristic curve of 0.91 and an area under the precision-recall curve of 0.44.
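The study itself does not publish code, but the minimal sketch below illustrates how a behavior-based classifier and the two reported metrics (AUROC and AUPRC) might be computed with scikit-learn. The feature names, model choice and synthetic data are illustrative assumptions for this sketch, not the authors' actual method.

```python
# Hypothetical sketch: train a classifier on clinician-behavior features and
# report AUROC and AUPRC, the two metrics cited in the study.
# All features, the model choice and the data below are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n = 5000

# Assumed behavioral/context features captured around order entry:
# seconds spent on the order screen, recent alert overrides,
# hour of day, and whether the workflow was interrupted.
X = np.column_stack([
    rng.exponential(scale=40, size=n),   # time_on_order_screen_sec
    rng.poisson(lam=1.5, size=n),        # recent_alert_overrides
    rng.integers(0, 24, size=n),         # hour_of_day
    rng.integers(0, 2, size=n),          # interrupted_workflow flag
])

# Synthetic label: orders flagged as needing pharmacist review (rare class),
# loosely tied to the features so the example has signal to learn.
logits = -3.0 + 0.02 * X[:, 1] * X[:, 3] - 0.01 * X[:, 0] + 0.4 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# AUROC summarizes ranking quality overall; AUPRC is more informative
# for rare positives such as error-prone orders.
print(f"AUROC: {roc_auc_score(y_test, scores):.2f}")
print(f"AUPRC: {average_precision_score(y_test, scores):.2f}")
```

On real, imbalanced order data, a high AUROC alongside a much lower AUPRC, as reported in the study, is a common pattern when the flagged class is rare.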

Balestra and co-authors note that conventional EHR drug alerting has entailed reviewing drug orders alongside the patient’s medical records.

The shortcoming in this approach, they maintain, is its lack of attention to the EHR's inherent vulnerabilities to error and how these potential fail points may show up in the ordering clinician's actions.

By contrast, the NYU team's machine learning modeling

"offers us a novel perspective on the factors influencing order entry by focusing on the behavior of the provider and errors that arise from the workflow around the EHR. Whereas previous models predicting errors ingest patients' medical records, by focusing on the behavior of the clinician, we also reduce the risk to the privacy and security of these patients' data while still being useful to pharmacists."

The study is available in full for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

