4 hurdles thwarting AI from conquering clinical practice

AI will not earn a place in the daily practice of medicine until its developers definitively answer some pressing questions about its fitness and appropriateness for clinical use.

The unresolved issues needing attention include data quality and ownership, transparency in governance, trust-building in “black box” medicine and legal responsibility for medical errors in which AI is implicated.

That’s according to the authors of an opinion piece posted this week in the Medical Journal of Australia.

After summarizing AI’s proven prowess in several layers of healthcare—diagnostics, image interpretation and predictions/prognostications—senior author Ben Freedman, MBBS, PhD, of the University of Sydney and colleagues flesh out the nagging issues:

1. Health disparities, excluded populations and data biases. Existing inequities in healthcare delivery stand to be exacerbated by AI if developers fail to include population-representative data when training algorithms, the authors point out.

“This is not a new problem, and we must do better science and be awake to the limits of data quality and evidence-based medicine,” they comment.

2. Data sovereignty and stewardship. When Google-owned DeepMind came out with an AI-based app for patients with kidney disease, consumer watchdogs cried foul over the developers’ nontransparent use of patient data.

“Issues of data sovereignty … threaten the existence of effective AI,” Freedman and colleagues write. “Patient data should not be provided to technology giants without a good governance structure to protect data sovereignty.”

3. Changing standards of care. Healthcare providers will have no choice but to change care protocols as AI makes inroads into daily practice. In part this is because it may become poor practice not to use the technology when it’s available and deemed a preferred approach by clinical guidelines.

“We will see a time when all medicine and allied health work as a team with AI,” the authors write. “Those who refuse to partner with AI might be replaced by it.”

4. Legal responsibility for AI-caused injury. Physicians using AI should “own” their care decisions when they’re aided by the technology, the authors argue. However, as AI teaches itself to function independently, at least in theory, doctors might not be wrong to blame the technology itself if they’re sued for malpractice.

Further, it “seems unfair for doctors to be held responsible for an AI decision when they are unable to deduce how and why that decision was made,” the authors write, alluding to AI’s “black box” problem. “Such matters are outside the scope of clinicians’ expertise and best dealt with legally as a product liability claim.”

“AI has already arrived in healthcare,” Freedman et al. write, “but are we ready for the kind of changes that it will introduce?”

“Much effort is needed,” they conclude, “to translate algorithms into problem-solving tools in clinical settings and demonstrate improvement in clinical outcomes with saving of resources.”

The journal has posted the paper in full for free.

Dave Pearson

