Operating in healthcare, AI could land someone in medicolegal limbo

If medical AI makes a goof and harms a patient, will the provider using the technology be liable for malpractice? Or will the AI vendor be on the hook?

It depends. And either way, it’s complicated.

For example, if the vendor ends up in the defendant’s chair, part of the legal puzzle may be parsing out preemption, the doctrine that, in theory, shields drug and device makers once the FDA has approved their products.

However, algorithms don’t just sit there unchanged for their lifetime; they learn as they go. As a result, the FDA can’t know how an AI will have changed after it has spent, say, a year reading medical images and, presumably, sharpening its skills along the way.

Radiologist Saurabh Jha, MD, of Penn Medicine fleshes out the most pressing questions likely to arise in any malpractice scenario in which a human is pointing a finger at a machine. Or vice versa.

Writing for STAT March 9, Jha notes that who bears responsibility for AI’s ongoing changeability, the software vendor or the healthcare provider, “depends on the outcome of the first major litigation in this realm. The choice of who to sue, of course, may be affected by deep pockets. Plaintiffs may prefer suing a large hospital instead of a small, venture-capital-supported start-up.”

When the plaintiff’s attorneys set their sights on the provider, that clinician may be held liable even if he or she disagreed with, and therefore disregarded, the algorithm’s conclusion.

In fact, the doctor might be blamed not only for missing a critical finding but also for going against an AI-based recommendation.

“A string of such lawsuits would make radiologists practice defensively,” Jha suggests. “Eventually, they would stop disagreeing with AI because the legal costs of doing that would be too high. Radiologists will recommend more imaging, such as CT scans, to confirm AI’s findings.”

And of course, such defensive ordering would undercut a key part of AI’s promise: making healthcare more efficient, more accurate and, as a result, less expensive.

“The adoption of artificial intelligence in radiology will certainly be influenced by science,” Jha writes. “But it will also be shaped by the courts and defensive medicine. Once a critical mass of radiologists use AI clinically, it could rapidly diffuse and, in a few years, reading chest x-rays, mammograms, head CTs and other imaging without AI will seem old-fashioned and dangerous.”

It’s ironic, then, that the courts may end up keeping AI and radiologists “tethered to each other, granting neither complete autonomy.”

To read Jha’s thought exercise in its entirety, click here.

Dave Pearson

