Digital symptom checkers aided by AI are largely operating in the dark

Options are increasing for healthcare consumers looking to check their symptoms with an AI digital platform for self-diagnosis. However, research into the use, accuracy and regulation of these technologies is woefully scant.

The researchers who arrived at this conclusion after reviewing the literature are urging action to close the gap between these platforms’ growing use and what’s actually known about them.

Stephanie Aboueid of the University of Waterloo in Ontario and co-authors published their findings May 1 in JMIR Medical Informatics.

They name as examples of such platforms the Mayo Clinic symptom checker, Babylon Health, the Ada health app and the K Health app.

The team scoured seven literature databases for studies involving self-diagnosis, digital platforms and the public or patients. They whittled an initial yield of some 2,536 articles down to 19 for review.

Aboueid and team found that most of the work has so far been conducted in the U.S., followed by the U.K. Topics include accuracy or correspondence with a doctor’s diagnosis, regulation, user experience, ethics, and privacy and security.

They further found the consumers most likely to use digital self-diagnosis are those who lack access to clinicians and those who perceive their condition as stigmatizing.

Additionally, the diagnostic accuracy of the platforms varies greatly depending on the disease as well as the platform.

Women and the highly educated are more likely to choose the right diagnosis from the possibilities listed on the digital platform.  

In their discussion, Aboueid et al. noted the value of exploring how AI-guided self-diagnosis might be used to improve patient engagement and reduce unnecessary in-person visits.

They also voiced concern over what’s lacking so far in the literature.

“Given the direct-to-consumer approach of these platforms, it is worrisome that only a few studies have focused on the use of this technology,” they wrote. “It is important that future research and resources are directed to understanding the accuracy and regulation of self-diagnosing AI digital platforms.”

A worthwhile next step may involve creating an app library indicating which platforms are backed by a trustworthy healthcare organization, they suggested.

“It should be noted that patient engagement is necessary in the development of these platforms to ensure that they allow a high proportion of individuals—irrespective of gender and education—to choose the right diagnosis,” the authors added. “Importantly, user experience is crucial to consider as the public may be skeptical of this technology.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

