Predicting psychiatric readmission proves ‘hard for humans, hard for machines’

Machine learning is only fair to middling at predicting readmission for discharged psychiatric inpatients from narrative discharge summaries and other clinical notes.  

Still, researchers have found that some AI models do better, on average, than expert psychiatrists at the difficult task.

Senior author Roy Perlis, MD, of Massachusetts General Hospital and colleagues there and at MIT and the University of Massachusetts used EHR data and notes from more than 5,000 patients to develop and test several predictive AI models.

The best language-based performer used standardized sets of keywords, or “topics,” from discharge summaries. These referred to comorbidities (for example, orthopedic injuries), psychosocial features (family relationships, homelessness), symptoms (psychosis, substance abuse) and medications.

Perlis and team report that this model performed better than, or at least as well as, models that relied on bag-of-words sets or coding data. Still, it too was disappointing enough for the authors to remark on its “markedly poorer” performance compared with models predicting nonpsychiatric hospital admissions.
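To illustrate the general approach, the sketch below shows one common way text-derived “topic” features can be compared against a plain bag-of-words baseline for a readmission-style prediction task. It is not the authors’ code: the toy notes, labels, topic count and choice of classifier are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the study's pipeline): compare a bag-of-words
# baseline with a topic-based model for predicting readmission from note text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical discharge-summary snippets and readmission labels (1 = readmitted).
notes = [
    "patient with psychosis and ongoing substance abuse, unstable housing",
    "orthopedic injury noted, strong family support, discharged on sertraline",
    "history of homelessness, medication nonadherence, prior admissions",
    "mood stabilized on lithium, follow-up arranged with outpatient clinic",
]
readmitted = [1, 0, 1, 0]

# Bag-of-words baseline: raw word counts fed directly to a classifier.
bow_model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
bow_model.fit(notes, readmitted)

# Topic-based model: word counts are first compressed into a small number of
# "topics" (clusters of co-occurring terms such as psychosocial features,
# comorbidities or medications), and those topic weights become the features.
topic_model = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=3, random_state=0),
    LogisticRegression(max_iter=1000),
)
topic_model.fit(notes, readmitted)

new_note = ["psychosis with substance abuse and no stable housing"]
print("bag-of-words risk:", bow_model.predict_proba(new_note)[0][1])
print("topic-model risk:", topic_model.predict_proba(new_note)[0][1])
```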

“We would underscore that this likely reflects the challenging nature of the task; indeed, clinical features strongly predictive of readmission remain unclear, and there are no validated biomarkers or markers of disease progression,” they write.

An interesting side finding: The researchers included a study arm pitting non-clinician raters against the physicians as well as the algorithms—and the non-clinicians solidly outperformed the experts.

In addition, the non-clinician participants improved their performance when given feedback, while the clinicians did not.

On this point, Perlis and co-authors comment that the experimental task was “distinct from standard clinical practice.”

“[I]t may be the case that nonexperts are more easily able to conform to new tasks because they have fewer incorrect priors, whereas experts are harder to shift from their existing frameworks,” the authors write. “This latter set of observations is consistent with decades of prior evidence that clinician predictions, when compared to real-world outcomes, often do not substantially exceed chance.”

The study report was published online in Translational Psychiatry Jan. 11 and is available in full for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
