Stanford study questions how medical AI devices are evaluated

New AI applications are coming to market every day, and medical devices are a prime target for healthcare innovation. But while the Food and Drug Administration (FDA) has approved more than 130 such tools, some experts say the review process itself needs to be reevaluated.

That’s the conclusion of a group of Stanford researchers who wanted to know how much regulators and doctors actually know about the accuracy of the AI devices they approve and tout. For their study, published in Nature Medicine, the researchers analyzed every AI medical device approved by the FDA between 2015 and 2020, and the evidence reveals some of the technology’s shortcomings.

They found that the approval process for AI devices was starkly different from the one used for pharmaceuticals.

The biggest problem lies with the historical data used to train AI algorithms, which in many cases is outdated. Many algorithms are never actually tested in a clinical setting before being approved, and many devices were evaluated at only one or two sites, limiting the racial and demographic diversity of the patient data behind them.

“Quite surprisingly, a lot of the AI algorithms weren’t evaluated very thoroughly,” said James Zou, the study’s co-author, who is an assistant professor of biomedical data science at Stanford University as well as a faculty member of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

This also meant that AI devices weren’t being assessed on live patients in real-world settings. Instead, their predictions and recommendations were based on retrospective data.

As a result, evaluations of AI medical devices may fail to capture how healthcare providers will actually use these tools in clinical settings, or how the tools will perform across different patient demographics.

To prove their point, the researchers examined a deep learning model that analyzes chest X-rays for signs of a collapsed lung. While the model performed accurately on one cohort of patient data, it was 10% less accurate when tested on data from two other sites. Accuracy was also higher for white patients than for Black patients.
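
To make that kind of evaluation concrete, here is a minimal Python sketch of stratified accuracy reporting, the sort of breakdown the researchers argue is needed. The pandas library is real, but the records, column names and values below are entirely hypothetical, made up for illustration.

```python
# Minimal sketch of stratified model evaluation. The data is hypothetical;
# in practice, each row would be one model prediction checked against
# ground truth from a multi-site test set.
import pandas as pd

df = pd.DataFrame({
    "site":    ["A", "A", "A", "B", "B", "C", "C", "C"],
    "race":    ["White", "Black", "White", "Black", "White",
                "Black", "White", "Black"],
    "correct": [1, 1, 1, 0, 1, 0, 1, 0],  # 1 = prediction matched ground truth
})

# A single overall number can hide site- and subgroup-level gaps.
print("Overall accuracy:", df["correct"].mean())

# Accuracy broken down by the site the test data came from.
print(df.groupby("site")["correct"].mean())

# Accuracy broken down by patient race.
print(df.groupby("race")["correct"].mean())
```

Reporting all three breakdowns, rather than only the overall figure, is essentially what multi-site, demographically diverse evaluation amounts to in practice.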

“It’s a well-known challenge for artificial intelligence that an algorithm may work well for one population group and not for another,” Zou said.

The findings could inform regulators about the challenges of evaluating AI medical devices and point to a need for stricter approval requirements.

“We’re extremely excited about the overall promise of AI in medicine,” Zou said. “We don’t want things to be overregulated. At the same time, we want to make sure there is rigorous evaluation especially for high-risk medical applications. You want to make sure the drugs you are taking are thoroughly vetted. It’s the same thing here.”

Amy Baxter

