Liability for following AI treatment recommendations not so clear-cut, but emerging patterns suggest safe pathways

Legal scholars have argued that acting on AI-based recommendations for nonstandard treatment decisions puts physicians at risk of being found liable in medical malpractice suits.

However, under certain circumstances, jurors deciding the outcome of these suits would be less likely to find the physician liable, according to a report published Sept. 25 in The Journal of Nuclear Medicine.

A team of researchers led by Kevin Tobia, JD, of Georgetown University Law Center in Washington, D.C., came to this conclusion by conducting an online experimental study involving a nationally representative sample of 2,000 U.S. adults. They asked participants to review one of four scenarios in which a physician had used AI to obtain a treatment recommendation.

Each scenario contained one of two AI recommendations—standard or nonstandard care—and the physician’s decision of whether to accept or reject it. In all scenarios, the physician’s decision caused a harm. Participants then assessed the physician’s liability for that harm.

Based on these assessments, the researchers determined that physicians who accept advice from an AI system to provide standard care can reduce the risk of liability.

AI offers no such liability shield, however, when it recommends nonstandard care and the physician rejects that advice in favor of standard care.

Or, as the authors put it in their discussion:

“We find that two factors reduce lay judgment of liability: following standard care and following the recommendation of AI tools. These results provide guidance to physicians who seek to reduce liability, as well as a response to recent concerns that the risk of liability in tort law may slow the use of AI in precision medicine. Contrary to the predictions of those legal theories, the experiments suggest that the view of the jury pool is surprisingly favorable to the use of AI in precision medicine.”

Tobia et al. state that their study is the first to supply experimental evidence about physicians’ potential liability for using AI in precision medicine.

Julie Ritzer Ross, Contributor
