Operating in healthcare, AI could land someone in medicolegal limbo

If medical AI makes a goof and causes a patient harm, will the provider using the technology be liable for malpractice? Or will the AI vendor be on the hook?

It depends. And either way, it’s complicated.

For example, if the vendor ends up in the defendant’s chair, part of the legal conundrum may involve parsing out preemption, the doctrine that, in theory, shields drug and device makers from liability once the FDA has approved their products.

However, algorithms don’t just sit there unchanged for their lifetime; they learn as they go. As a result, the FDA can’t know how an AI will have changed after it has spent a year, say, reading medical images and, in the process, supposedly growing in its prowess.

Radiologist Saurabh Jha, MD, of Penn Medicine fleshes out the most pressing questions likely to arise in any malpractice scenario in which a human is pointing a finger at a machine. Or vice versa.

Writing for STAT March 9, Jha notes that the responsibility for AI’s ongoing changeability—software vendor vs. healthcare provider—“depends on the outcome of the first major litigation in this realm. The choice of who to sue, of course, may be affected by deep pockets. Plaintiffs may prefer suing a large hospital instead of a small, venture-capital-supported start-up.”

When the plaintiff’s attorneys set their sights on the provider, that clinician may be held liable even if he or she disagreed with, and therefore disregarded, the algorithm’s conclusion.

In fact, the doctor might be blamed not only for missing a critical finding but also for going against an AI-based recommendation.

“A string of such lawsuits would make radiologists practice defensively,” Jha suggests. “Eventually, they would stop disagreeing with AI because the legal costs of doing that would be too high. Radiologists will recommend more imaging, such as CT scans, to confirm AI’s findings.”

That kind of defensive ordering would, of course, undercut a key part of AI’s promise: making healthcare more efficient, more accurate and, as a result, less expensive.

“The adoption of artificial intelligence in radiology will certainly be influenced by science,” Jha writes. “But it will also be shaped by the courts and defensive medicine. Once a critical mass of radiologists use AI clinically, it could rapidly diffuse and, in a few years, reading chest x-rays, mammograms, head CTs and other imaging without AI will seem old-fashioned and dangerous.”

It’s ironic, then, that the courts may end up keeping AI and radiologists “tethered to each other, granting neither complete autonomy.”

To read Jha’s thought exercise in its entirety, click here.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

