HIMSS speakers see standards, possibly ‘nutrition labels’ in healthcare AI’s future

Freely hope for the best, but diligently prepare for the worst. Applied to end users of healthcare AI, that adage could have been a key takeaway at last week’s annual meeting of the Healthcare Information and Management Systems Society (HIMSS) in Las Vegas.

“The good news is your optimism for AI is justified,” Mayo Clinic Platform head John Halamka, MD, told the audience in one session. However, he added, “there are caveats.”

Indeed. Enough of those were flagged that even AI vendors urged caution.

“We should think of any machine learning algorithm that is predicting a condition for somebody as a lab test,” said Tanuj Gupta, MD, MBA, an executive with EHR supplier and AI developer Cerner Corp. “If [its outputs are] off, and you potentially cause some morbidity and mortality issue, it’s a problem.”

The quotes are from a rundown filed by Stat News reporter Casey Ross, who was on hand to take in both the dark and the bright sides of the perspectives offered by the invited speakers. The outlet posted his coverage Aug. 16.

One steady refrain seems to have been a call for standards by which algorithms could be assessed for safety and efficacy.

Fair enough, but what body is going to draw up, let alone enforce, any such standards?

“The FDA and the Government Accountability Office have created high-level frameworks for regulating artificial intelligence,” Ross points out, “but those proposals do not address the specific dilemmas created by the algorithmic products already making their way into care.”

He cites a recent Stat News investigation showing algorithms embedded in Epic EHR systems outputting iffy info to clinicians treating seriously ill patients.

The fumbles and misgivings are unlikely to slow healthcare AI’s momentum. As several HIMSS speakers underscored, according to Ross, existing and emerging algorithms have plenty of upside.

What’s more, some powerful players are working on creative ways to tamp down healthcare AI’s risks without sacrificing its rewards.

For example, Duke University has proposed a way to label algorithms like food products.

Mayo’s Halamka is all for that:  

“Shouldn’t we as a society demand a nutrition label on our algorithms saying this is the race, ethnicity, the gender, the geography, the income, the education that went into the creation of this algorithm? Oh, and here’s … some statistical measure of how well it works for a given population. You say, ‘Oh well, this one’s likely to work for the patient in front of me.’ That’s how we get to maturity.”
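To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of fields such a label might carry, based on the attributes Halamka lists (race, ethnicity, gender, geography, income, education and a per-population performance measure). The class name, field names and the 0.75 threshold are illustrative assumptions only, not Duke’s actual proposal or any published standard.

```python
from dataclasses import dataclass, field


@dataclass
class AlgorithmNutritionLabel:
    """Hypothetical 'nutrition label' for a clinical algorithm.

    Fields mirror the attributes Halamka mentions plus a statistical
    performance measure per subpopulation. This is an illustrative
    sketch, not an actual specification from Duke, Mayo or anyone else.
    """
    model_name: str
    intended_use: str
    # Demographic makeup of the training data, e.g. {"female": 0.52, "male": 0.48}
    race_ethnicity: dict[str, float] = field(default_factory=dict)
    gender: dict[str, float] = field(default_factory=dict)
    geography: dict[str, float] = field(default_factory=dict)
    income_brackets: dict[str, float] = field(default_factory=dict)
    education_levels: dict[str, float] = field(default_factory=dict)
    # Reported performance per subpopulation, e.g. {"female": {"auroc": 0.81}}
    performance_by_population: dict[str, dict[str, float]] = field(default_factory=dict)

    def likely_fit_for(self, population: str, metric: str = "auroc",
                       threshold: float = 0.75) -> bool:
        """Crude check: does reported performance for this population clear
        a chosen threshold? (The threshold itself is a policy decision this
        sketch does not resolve.)"""
        return self.performance_by_population.get(population, {}).get(metric, 0.0) >= threshold


# Example use: the "is this likely to work for the patient in front of me" check
label = AlgorithmNutritionLabel(
    model_name="sepsis-risk-v1",          # hypothetical model
    intended_use="early sepsis risk screening",
    gender={"female": 0.52, "male": 0.48},
    performance_by_population={"female": {"auroc": 0.81}, "male": {"auroc": 0.78}},
)
print(label.likely_fit_for("female"))  # True under the illustrative 0.75 threshold
```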

Read Ross’s full report.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
