Medical statistician reviewing COVID AI studies: ‘Poorly reported … horrible’
As researchers have turned to AI in the battle against COVID-19, raising hopes and making headlines, the shortcomings of the resulting studies have drawn the ire of numerous statistics experts.
Their chief shared concern is quality.
“[It] scares a lot of us because we know that [AI] models can be used to make medical decisions,” says Maarten van Smeden, a medical statistician at University Medical Center Utrecht in the Netherlands. “If the model is bad, they can make the medical decision worse. So they can actually harm patients.”
Van Smeden made his comments to Discover magazine, which assigned a reporter to get to the heart of the alarm and published her article July 6.
Writer Allison Whitten reports that van Smeden is co-leader of a formidable team of international researchers who are evaluating COVID-19 machine-learning models using standardized criteria.
The project has so far recruited 40 reviewers, and they’re updating their critiques whenever a new model comes out. The BMJ is publishing the work as its first-ever “living review,” we learn.
“So far, their reviews of COVID-19 machine learning models aren’t good: The models suffer from a serious lack of data and of necessary expertise from a wide array of research fields,” Whitten writes.
Van Smeden tells her the COVID-19 AI studies reviewed so far are “so poorly reported that I do not fully understand what these models have as input, let alone what they give as an output. It’s horrible.”
Read it all: