Another itemized set of actions Washington could take to better regulate medical AI

Inadvertently complementing a to-do list suggested by the Pew organization earlier this month, two more close observers of AI are urging Congress to equip the FDA for tighter regulation of medical algorithms.

Epidemiologist and public intellectual Abdul El-Sayed, MD, PhD, and Soleil Shah, a Fulbright scholar and Stanford medical student, make their case by way of an opinion piece published Oct. 7 in Scientific American.

In short, they state, medical AI has been running amok:

“From sexual trauma victims being unfairly labeled as ‘high-risk’ by substance-abuse-scoring algorithms to diagnostic algorithms failing to detect sepsis cases in more than 100 health systems nationwide to clinical decision support (CDS) software systematically discriminating against millions of Black patients by discouraging necessary referrals to complex care—this problem abounds. And it extends to our pandemic as well. In a review of 232 machine-learning algorithms designed to detect COVID-19, none were of clinical use.”

The primary problem underlying the mess, they maintain, is that most of the problematic algorithms required no FDA review. And of the ones that did, none was required to demonstrate safety and efficacy in clinical trials.

El-Sayed and Shah stress the need for regulation to keep better pace with innovation. To that end, they submit three action items for Congress to take up:

1. Lower the threshold for FDA evaluation.

The understanding of device equivalency under the 510(k) process “should be narrowed to consider whether the datasets or machine learning tactics used by the new device and its predicate are similar,” the authors write. “This would prevent a network of algorithms, such as kidney disease risk tools, from being approved simply because they all predict kidney disease.”

2. Dismantle systems that foster overreliance on medical algorithms by healthcare workers.

In general, unless patient harm results from poor medical decision-making, physicians “should not face significant penalties for using their own clinical judgment instead of following recommendations from medical algorithms,” Shah and El-Sayed comment. “An algorithm may label a patient as high-risk for drug abuse, but a doctor’s understanding of that patient’s history of trauma adds critical nuance to the interpretation.”

3. Establish systems of algorithmic accountability for technologies that can evolve over time.

For medical algorithms intended for clinical settings, the Federal Trade Commission “could require more frequent assessments to monitor for any changes over time,” they write, noting that several bills addressing this shortfall have stalled in Congress. “Hopefully, there will be added momentum in the months ahead.”

“We know algorithms in healthcare can often be biased or ineffective,” Shah and El-Sayed conclude. “But it is time for America to pay more attention to the regulatory system that lets these algorithms enter the public domain to begin with. For in healthcare, if your decisions affect patient lives, ‘do no harm’ must apply—even to computer algorithms.”

The full opinion piece is available from Scientific American, and the Pew organization’s October call to action for Washington was covered earlier this month.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
