Dangers of AI in clinical settings are still unknown

There’s a lot of buzz about the applications of AI in the healthcare sector, but the innovations are still in their infancy, leaving the full potential of AI in clinical settings unknown.

One challenge to the growth of AI in clinical settings is the lack of guarantees of benefit to patients or to patient safety, according to a study published in BMJ Quality & Safety. Researchers set out to analyze current AI research from a quality and safety perspective, highlighting important questions that must be answered for the technologies to succeed.

Much of the focus of AI in healthcare is on machine learning techniques that help solve complex problems. Machine learning-based clinical decision support systems are increasingly widespread as a way to reduce clinical errors, with the bulk of research focused on diagnostic decision support in specific domains such as radiology. Some of this research has made its way into clinical settings, with machine learning algorithms helping to diagnose patients and make treatment recommendations.

In the future, these AI systems could even help triage patients and prioritize access to clinical services, first author Robert Challen, MD, of the University of Exeter College of Engineering, Mathematics and Physical Sciences in Exeter, UK, and colleagues wrote. However, this creates a new set of problems, including ethical issues such as perpetuating inequalities in patient access, as well as safety concerns, according to the researchers.

“Translation of ML research into clinical practice requires a robust demonstration that the systems function safely, and with this evolution different quality and safety issues present themselves,” Challen and colleagues wrote.

One source of bias in machine learning algorithms is simply the data or images on which the system was trained. If training and operational data are mismatched, or “out-of-sample,” the AI system’s performance can diverge from what was observed during development. Sample selection bias can also have an adverse impact on machine learning systems, since high-quality data are not yet the norm and the mix of index cases and controls in medicine is rarely balanced.
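To illustrate the out-of-sample concern in general terms (this sketch is not from the study; the data, threshold and use of a two-sample Kolmogorov-Smirnov test are illustrative assumptions), a deployed system might routinely compare the distribution of incoming patient data against the data it was trained on and flag features that have drifted:

```python
# Illustrative sketch: flag "out-of-sample" operational data by comparing each
# feature's distribution against the training data with a two-sample KS test.
# All data and thresholds here are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature matrices: rows are patients, columns are measurements.
train_features = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
# Operational data drifted on the third feature (e.g., a new scanner or population).
live_features = rng.normal(loc=[0.0, 0.0, 0.8], scale=1.0, size=(200, 3))

for col in range(train_features.shape[1]):
    stat, p_value = ks_2samp(train_features[:, col], live_features[:, col])
    if p_value < 0.01:
        print(f"Feature {col}: possible distribution shift (KS={stat:.2f}, p={p_value:.1e})")
```

In practice, a flag like this would prompt a review of whether the model needs retraining or whether its outputs should be treated with extra caution for the affected population.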

How machine learning systems should report bias is another open question. A frequently cited issue with AI in healthcare is the black box problem, where algorithms make predictions or output results without much explanation, making errors harder to detect. In addition, some machine learning algorithms can report estimates of confidence alongside their results; those that cannot leave the clinician without a fail-safe.
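As a rough sketch of what such a fail-safe could look like (again, an illustration rather than the authors’ system; the model, data and confidence threshold are assumed), a decision support tool might only surface a recommendation when its predicted probability clears a preset bar, and otherwise defer to the clinician:

```python
# Illustrative sketch: a classifier that reports its confidence and defers to
# the clinician when it is unsure. Data, model and threshold are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

CONFIDENCE_THRESHOLD = 0.80  # arbitrary illustrative cut-off

for i, probs in enumerate(model.predict_proba(X[:5])):
    confidence = probs.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        print(f"Case {i}: suggest class {probs.argmax()} (confidence {confidence:.2f})")
    else:
        print(f"Case {i}: confidence {confidence:.2f} below threshold, refer to clinician")
```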

Similarly, clinicians can become too reliant on algorithms, a concept called automation complacency, which can lead to errors going uncaught when people place too much trust in imperfect systems.

According to the researchers, many of these issues need to be addressed before AI systems can move “from laboratory to bedside,” starting with asking the right questions.

“As with all clinical safety discussions we need to maintain a realistic perspective,” Challen et al. wrote. “Suboptimal decision-making will happen with or without [machine learning] support, and we must balance the potential for improvement against the risk of negative outcomes.”

Amy Baxter


