Liability for following AI treatment recommendations not so clear-cut, but emerging patterns suggest safe pathways

Legal scholars have argued that taking up AI-based recommendations for nonstandard treatment decisions puts physicians at risk of being found liable in medical malpractice suits.

However, under certain circumstances, jurors deciding the outcome of these suits would be less likely to find the physician liable, according to a report published Sept. 25 in The Journal of Nuclear Medicine.

A team of researchers led by Kevin Tobia, JD, of Georgetown University Law Center in Washington, D.C., came to this conclusion by conducting an online experimental study involving a nationally representative sample of 2,000 U.S. adults. They asked participants to review one of four scenarios in which a physician had used AI to obtain a treatment recommendation.

Each scenario contained one of two AI recommendations—standard or nonstandard care—and the physician’s decision of whether to accept or reject it. In all scenarios, the physician’s decision caused a harm. Participants then assessed the physician’s liability for that harm.

Based on these assessments, the researchers determined that physicians who accept advice from an AI system to provide standard care can reduce the risk of liability.

The AI offers no such liability shield, however, when it recommends nonstandard care and the physician rejects that advice in favor of standard care.

Or, as the authors put it in their discussion:

“We find that two factors reduce lay judgment of liability: following standard care and following the recommendation of AI tools. These results provide guidance to physicians who seek to reduce liability, as well as a response to recent concerns that the risk of liability in tort law may slow the use of AI in precision medicine. Contrary to the predictions of those legal theories, the experiments suggest that the view of the jury pool is surprisingly favorable to the use of AI in precision medicine.”

Tobia et al. state that their study is the first to supply experimental evidence about physicians’ potential liability for using AI in precision medicine.

Julie Ritzer Ross, Contributor
