Oxford experts: ‘Ethically unacceptable’ to bypass impact testing of AI-powered clinical decision support
AI-based clinical decision support (CDS) tools that perform well in clinical trials will flounder on the way to clinical practice if they’re not evaluated early and thoroughly for their effects on real-world clinical decision-making.
So warn scholars and clinicians led by a team at the University of Oxford in England. The signatories’ commentary on the matter is running in Nature Medicine as a letter to the editor.
Surgical sciences professor Peter McCulloch and colleagues additionally hold that algorithm creators must consider differences between target patient populations and the populations used for training, testing and validating AI-based CDS tools.
“Because it cannot be assumed that users’ decisions will mirror the algorithm’s recommendations, it is … crucially important to test the safety profile of new algorithms not only in silico but also when used to influence human decisions,” the authors write. “Skipping this step and moving directly forward to large-scale trials would expose a considerable number of patients to an unknown risk of harm, which is ethically unacceptable. Suboptimal safety standards led to disastrous consequences in the early days of pharmacological trials; there is no need to repeat these mistakes with clinical AI.”
To that point, the authors suggest modeling medical AI research on the phased trials used to evaluate new drugs for safety and efficacy.
In a news release, Oxford doctoral candidate Baptiste Vasey, the letter’s lead author, adds that, once approved and installed, algorithms should be immediately monitored for performance with actual patients.
“We are convinced that human clinicians should and will remain at the center of patient care, and therefore [we’re] aiming to improve the way in which AI-based clinical decision support systems are evaluated when used to enhance rather than replace human intelligence,” Vasey says. “A critical phase of this process is when such systems are assessed when first used by clinicians in real-life settings.”
The correspondence is signed by the DECIDE-AI Steering Group, whose acronym stands for Developmental and Exploratory Clinical Investigations of Decision support systems driven by Artificial Intelligence. The group is creating guidelines for lab-to-clinic development and implementation of AI-based CDS tools.
The letter is available in full for free.