COVID crisis burnishes AI’s promise while magnifying chinks in its armor

The COVID crisis has spurred clinical researchers to look at myriad ways AI might help win the war against the virus. But has the investigatory boomlet ended up raising as many nettlesome concerns as positive possibilities?

Smithsonian magazine has posted a news-analysis piece contemplating the paradox and its ramifications for real-world patient care.

“Artificial intelligence had already been in use by hospitals, but the unknowns with COVID-19 and the volume of cases created a frenzy of activity around the United States,” writes freelance reporter Jim Morrison. “Models sifted through data to help caregivers focus on patients most at risk, sort threats to patient recovery and foresee spikes in facility needs for things like beds and ventilators. But with the speed also came questions about how to implement the new tools and whether the datasets used to build the models were sufficient and without bias.”

With that as a jumping-off point, Morrison surveys a handful of academic medical centers that have pursued AI-inclusive strategies for fighting the pandemic. Most have succeeded in advancing the science, at least within research settings. Some have run into snags translating concepts into care.

Three examples of the latter:

Stanford researchers “expressed concern that artificial intelligence—which appears objective but is not—is being over-relied on for allocation of resources like ventilators and intensive care beds,” Morrison reports. “These tools are built from biased data reflecting biased healthcare systems and are thus themselves also at high risk of bias—even if explicitly excluding sensitive attributes such as race or gender,” the researchers wrote in a study published in JAMIA.

Researchers at the University of Virginia Medical Center have struggled to put predictive software for COVID treatment guidance into clinical practice. “These algorithms have been proliferating, which is great, but there’s been far less attention placed on how to ethically use them,” one medical scientist tells Morrison. “Very few algorithms even make it to any kind of clinical setting.”

And at the Cleveland Clinic, where researchers developed sophisticated algorithms to help with capacity planning, ICU preparation and rehospitalization risks, the team is scrambling as the novel coronavirus mutates. “The issue isn’t that there isn’t enough data,” a researcher says. “The issue is that data has to be continuously reanalyzed and updated and revisited with these models for them to maintain their clinical value.”

Morrison also notes worries over the FDA approving AI-based tools before they’re sufficiently validated, as well as widely raised concerns over the tools’ proclivity for amplifying racial and socioeconomic biases.

Read the whole thing at Smithsonian.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

