5 hills AI must take and hold to make it big in healthcare

If it’s to progress from capturing the public’s imagination to earning widespread clinical implementation, healthcare AI has a long road to travel.  

Researchers in Canada break down the major hurdles in a paper published July 10 in the Journal of Medical Internet Research.

Slotting machine-learning use cases into two primary categories, automation and decision support, James Shaw, PhD, of the University of Toronto, and colleagues compare the market penetration of AI with that of other technologies according to the NASSS framework.

The acronym stands for Nonadoption, Abandonment and Challenges to the Scale-up, Spread and Sustainability (of health and care technologies).

The authors suggest decision-support applications will lead those for automation, at least in the short term.

Following the NASSS framework, they outline the issues most likely to thwart implementation of AI, focusing primarily on decision support. Among the sticking points they underscore:

1. Meaningful decision support. Clinical decision-making is “a complex process involving the integration of a variety of data sources, incorporating both tacit and explicit modes of intelligence,” the authors explain.

To inform this decision-making process more intuitively, they add, AI developers are adding communication tools such as data visualization. “The nature and value of these communication tools are central to the implementation process, helping to determine whether and how algorithmic outputs are incorporated in everyday routine practices.”

2. Explainability. How do healthcare AI models achieve their results? Too often, the answer remains unknown even to the computer scientists who create them, Shaw and colleagues point out.

“The lack of understanding of those mechanisms and circumstances poses challenges to the acceptability of machine learning to healthcare stakeholders,” they write. “Although the issue of explainability relates clearly to decision support use cases of machine learning as explained here,” they add, “the issue may apply even more profoundly to automation-focused use cases as they gain prominence in healthcare.”

3. Privacy and consent. Legislation and guidance are lacking on the proper use of data from wearable devices. Meanwhile, many health-related apps have unclear consent processes governing the flow of data generated through their use, the authors note.

On top of those two glaring concerns, data that are de-identified may be reidentifiable when linked with other datasets. “These considerations create major risks for initiatives that seek to make health data available for use in the development of machine learning applications, potentially leading to substantial resistance from healthcare providers,” the authors write.

4. Algorithmic bias. “Algorithms are only as good as the data used to train them,” Shaw et al. write.

“In cases where training data are partial or incomplete or only reflect a subset of a given population, the resulting model will only be relevant to the population of people represented in the dataset. This raises the question about data provenance and represents a set of issues related to the biases that are built into algorithms used to inform decision making.”

5. Scalability and normal accidents. As AI applications mushroom across the healthcare landscape, it’s inevitable that some algorithmic outputs will confound, contradict or otherwise confront others.

“The effects of this interaction are impossible to predict in advance, in part because the particular technologies that will interact are unclear and likely not yet implemented in the course of usual care,” the authors write. “We suggest that implementation scientists will need to consider the unintended consequences of the implementation and scale of ML in health care, creating even more complexity and greater opportunity for risks to the safety of patients, health care providers, and the general public.”

Shaw and team also flesh out implementation snags around the role of corporations and the changing nature of healthcare work.

In concluding their observations and predictions, the authors call the future of machine learning in healthcare “positive but uncertain.” To a large extent, they suggest, acceptance and adoption of the technology rest in the collective hands of all healthcare stakeholders: patients, providers and AI developers alike.

“[A]s applications of machine learning become more sophisticated and investment in communications strategies such as data visualization grows, machine learning is likely to become more user-friendly and more effective,” Shaw and colleagues write. “If the implementation science community is to facilitate the adoption of machine learning in ways that stand to benefit all, the issues raised in this paper will require substantial attention in the coming years.”

The paper is available in full from the Journal of Medical Internet Research.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
