AI vendors discuss the biggest mistakes being made in healthcare today

More healthcare providers are starting to investigate how AI can improve patient care, but many of them hurt their own chances of success by making costly mistakes.

At RSNA 2019 in Chicago, we asked numerous vendors about the biggest mistakes they see providers making on a regular basis when it comes to AI development and implementation. Perhaps the faults they’ve witnessed can help guide the work of others in the years ahead.

A selection of vendor responses can be read below:

  • Imad B. Nijim, chief information officer, MEDNAX Radiology Solutions:

“One of the biggest mistakes is focusing on the technology. As with any nascent technology, there is a learning curve and new terminology to pick up. Ultimately, it doesn’t matter how many models you have or what the ROC curve is. Many early adopters are drawn to the technical details, which is great—and fun—but what problem are you trying to solve? How does it connect to your overarching strategy? Those are the questions that should drive your technology decisions.

The maturity path for AI adoption starts with ‘experimental’ research and ends with an AI-enabled organization. There are many steps in between, such as governance, operational support and monitoring. I recommend you understand what those steps are for your organization and build a deliberate path forward.”

  • Karley Yoder, vice president and general manager of AI, GE Healthcare:

“Successful AI products require a community/ecosystem working together. The single biggest mistake we see providers make is trying to solve the ‘AI implementation hurdles’ on their own. Some of these hurdles include:

  1. No streamlined way to integrate AI solutions into existing clinical workflows. Healthcare providers can waste weeks to months trying to overcome this challenge, often achieving only middling success at the end of the process. Vendors must take on this integration work and provide a fully integrated solution back to providers.
  2. Inconsistency of marketing and claims. With 150+ startups in the medical imaging space, claims and numbers are often thrown around. If providers trust solutions that don’t have full regulatory clearance and exhaustive clinical validation studies, they are setting themselves up for disappointment.  
  3. Not thinking big enough. The true power of the AI revolution will come when data from multiple data sources flows together in a hospital. If providers invest in the wrong infrastructure and platform partners, they will have invested in the wrong ‘roads and freeways’ and will have to ‘dig up’ what they install today to make room for what will be available in the future.”

  • Gene Saragnese, CEO, MaxQ AI:

“I love healthcare, and there’s nothing more rewarding than being hugged by someone who is in tears because they just got great news thanks to something you’ve done. But the reality is that if you want to succeed in healthcare today and impact outcomes … there has to be a cost equation. And if there’s not, everything else is irrelevant. What you should do is work back from that cost equation. Too many people are working forward toward it.

People are so caught up on whether the technology works or whether AI is real. The technology does work. But that’s not what is going to determine its success. What determines success in healthcare is: Is it integrated? Can I trust it? Are you solving a relevant problem?”

  • Woojin Kim, MD, chief medical information officer, Nuance:

“The first question you need to ask is whether the AI model you are developing or purchasing is clinically relevant. Once you decide that the AI model you are considering can solve real problems, you need to ask how that algorithm was developed if you are purchasing it instead of developing it in-house. For example, you will want to know how many images were used for training and what the training data set looks like in terms of population demographics, scanner types, imaging protocols, annotation methods used, validations performed, etc.

Thanks to several research papers and many AI experts in radiology speaking about it, many of us are becoming more aware of the ‘brittleness’ issue with AI models. What I mean by brittleness is that an AI model may perform well at the institution where it was trained, but that same model may perform poorly at other institutions. This is why you cannot merely go by the performance metrics of an AI model from a vendor without performing your own validation at your institution using your own data.

Also, if the AI model is not well-integrated into the workflow, it won't be used. Radiologists have a tremendous workload, and they will not tolerate any solutions that slow them down unnecessarily. No radiologist I know wants more mouse-clicks and screen popups added to their daily workday. Hence, tight integration into the workflow is one of the essential requirements for a successful implementation of AI.”
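Kim’s caution about brittleness cannot be settled by a vendor’s published numbers; it has to be checked against local data. Below is a minimal sketch of that kind of local validation in Python, assuming a hypothetical vendor model exposed as `vendor_model.predict()` that returns a probability per exam, plus a locally curated set of labeled exams; the names are illustrative, not taken from any product mentioned in the interviews.

```python
# Minimal local-validation sketch: score a vendor model on your own labeled exams
# and compare the results against the vendor's published performance claims.
# `vendor_model`, `exams` and `labels` are hypothetical placeholders.
from sklearn.metrics import roc_auc_score, confusion_matrix

def validate_locally(vendor_model, exams, labels, threshold=0.5):
    """Return AUC, sensitivity and specificity on institution-specific data."""
    scores = [vendor_model.predict(exam) for exam in exams]  # probability per exam
    preds = [int(s >= threshold) for s in scores]

    tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
    return {
        "auc": roc_auc_score(labels, scores),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "n": len(labels),
    }
```

Running the same check stratified by scanner type, protocol or patient demographics, as Kim suggests, is what surfaces brittleness before go-live rather than after.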

  • Elad Walach, co-founder and CEO, Aidoc:

“Healthcare providers tend to focus on accuracy because that's the starting point, but algorithm accuracy isn't a substitute for real outcomes like faster turnaround times, better detection rates in clinical use, reduced hospital length of stay, etc. Until recently, data on real outcomes wasn't available, so accuracy was all we had to go by. Now, though, we are beginning to see real clinical data about the impact of different AI solutions, enabling healthcare providers to make more informed decisions.

To ensure an AI solution is actually performing well, it's critical to measure value on a continuous basis. Installing the product and leaving everything to the radiologists isn't enough; healthcare providers should assign a clinical implementation team and continuously work with AI vendors to ensure that they're getting the most out of the technology.”
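As an illustration of the continuous measurement Walach describes, the sketch below computes a recurring comparison of report turnaround time for AI-flagged versus unflagged studies; the record format and field names are assumptions for the example, not any vendor’s actual interface.

```python
# Rolling value check: median report turnaround time (minutes) for studies the AI
# flagged as positive vs. everything else, summarized per week.
# The input records and field names are illustrative placeholders.
from collections import defaultdict
from statistics import median

def weekly_turnaround(studies):
    """studies: iterable of dicts with 'week', 'ai_flagged' (bool) and 'tat_minutes'."""
    buckets = defaultdict(list)
    for s in studies:
        buckets[(s["week"], s["ai_flagged"])].append(s["tat_minutes"])
    return {key: median(values) for key, values in sorted(buckets.items())}

# Example: a falling turnaround time for flagged studies over successive weeks is
# the kind of real-outcome signal a clinical implementation team can review with the vendor.
print(weekly_turnaround([
    {"week": "2019-W49", "ai_flagged": True, "tat_minutes": 22},
    {"week": "2019-W49", "ai_flagged": False, "tat_minutes": 41},
]))
```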

  • Kyuhwan Jung, co-founder and chief technology officer, VUNO:

“Unrealistically high expectations of medical AI can raise the bar for its performance and capabilities to the point that they ultimately delay actual implementation or make medical professionals uneasy. On the other hand, some critics still believe that the medical AI we are developing today is no more than an upgraded version of the machine learning-based diagnosis solutions we knew in the past. That view certainly restricts the potential applications of AI in the clinical environment. We need to have a more thorough understanding of the technical strengths and weaknesses of AI, put it to work in actual healthcare settings and gain practical experience to enhance confidence through proven track records of clinical validation.”

  • Marcel Nienhuis, vice president of marketing, VIDA:

“Most providers are approaching AI appropriately. They are planning for AI adoption, investigating available applications and asking the right questions around clinical validation, training datasets, and so on.

While many product evaluations and early deployments are taking place, I would encourage providers to approach AI with more agility than a traditional equipment procurement. Most, if not all, of the AI platform providers offer free trials and have designed their solutions to be easy to evaluate with minimal effort. I realize product evaluations are not free in terms of human capital; however, they provide a fantastic low-risk opportunity to experiment with the wide range of AI applications available in the market in order to better understand the ones that will offer value for an organization.”

Michael Walter, Managing Editor

