Prominent tech scholar: AI ‘feels like a runaway train that we’re chasing on foot’

Cynthia Rudin, PhD, is a highly regarded computer scientist who’s been eyeing the advance of artificial intelligence into society with equal parts enthusiasm and concern.  

At her home base, Duke University in Durham, North Carolina, Prof. Rudin directs a lab focused on interpretable machine learning while also teaching and conducting research in statistical science, biostatistics and bioinformatics.

Looking outward, Rudin has called on the federal government to regulate AI companies, warning that a lack of centralized oversight could bring unintended consequences that no one but pure profiteers would welcome.

Or, as she put it a few weeks ago in The Hill: AI technology “feels like a runaway train that we’re chasing on foot.”

Attention must be paid: Rudin's recent honors include a 2022 Guggenheim Fellowship and, in 2021, the $1 million AAAI Squirrel AI Award, conferred by the Association for the Advancement of Artificial Intelligence on those who have helped steer the technology toward the broad benefit of humanity. (The Squirrel has been called the "Nobel Prize of AI.")

Rudin took questions from HealthExec, opting to answer in writing via email. Here’s the exchange with our questions in bold type.

Professor Rudin, would you begin by telling us which clinical use cases you're especially excited about within healthcare AI?

Right now I'm working on computer-aided mammography, neurology for critically ill patients and analytics for wearable heart-monitoring devices (think of a smartwatch). I'm excited about the possibility of detecting arrhythmia much faster than ever before (it used to be that you might not detect it until the person was in the hospital for some terrible situation).

A lot of my work is on techniques, which are cross-cutting, so they can be used across domains. We're designing techniques for building medical scoring systems, which are tiny little formulas that look like someone might have created them by hand (they could fit on an index card), but they are actually deceptively accurate machine learning models. Those can be used in almost any area of medicine.
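To make the idea concrete for readers, here is a minimal sketch of what such an index-card scoring system looks like once learned. Everything in it is hypothetical, invented purely for illustration: the feature names, point values and intercept are placeholders, whereas real systems of the kind Rudin describes learn small integer weights from patient data.

```python
# Minimal sketch of an index-card-style medical scoring system.
# The features, point values and intercept are hypothetical,
# invented for illustration only; real systems learn small
# integer weights from data.
import math

# Hypothetical point table: feature name -> integer points if present.
POINTS = {
    "age_over_75": 2,
    "prior_stroke": 3,
    "hypertension": 1,
    "diabetes": 1,
}
INTERCEPT = -4  # hypothetical offset

def risk(patient: dict) -> float:
    """Sum the points for the patient's risk factors, then map the
    integer score to a probability with the logistic function."""
    score = INTERCEPT + sum(
        pts for feat, pts in POINTS.items() if patient.get(feat)
    )
    return 1.0 / (1.0 + math.exp(-score))

# Example: a hypothetical patient with two risk factors (3 + 1 - 4 = 0).
print(f"{risk({'prior_stroke': True, 'diabetes': True}):.2f}")  # 0.50
```

Because the entire model is the point table, a clinician can audit any individual prediction by hand, which is exactly the interpretability property Rudin is after.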

For the mammography project, we’re building interpretable neural networks, and I’m excited that these are as accurate as black box neural networks, so they can replace black box networks across any clinical domain that uses images.
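One widely cited family of interpretable image networks, including work from Rudin's own group, reasons by comparing patches of a new image against learned prototype patches ("this looks like that"). The sketch below is a bare-bones, hypothetical PyTorch version of that general idea, not the mammography model itself: the tiny backbone, sizes and class count are all placeholders.

```python
# Bare-bones sketch of a prototype-based ("this looks like that")
# interpretable classifier. The backbone and sizes are placeholders,
# not the actual mammography architecture.
import torch
import torch.nn as nn

class ProtoSketch(nn.Module):
    def __init__(self, n_prototypes=10, n_classes=2, d=64):
        super().__init__()
        # Placeholder feature extractor; a real model uses a deep CNN.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, d, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),
        )
        # Learned prototype vectors; after training, each can be
        # visualized as the training-image patch it lies closest to.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d))
        # Linear layer turning prototype similarities into class scores.
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, x):
        f = self.backbone(x)              # (B, d, 7, 7) feature map
        f = f.flatten(2).transpose(1, 2)  # (B, 49, d): one vector per patch
        protos = self.prototypes.unsqueeze(0).expand(f.size(0), -1, -1)
        d2 = torch.cdist(f, protos) ** 2  # (B, 49, n_prototypes) distances
        d_min = d2.min(dim=1).values      # closest patch per prototype
        sim = torch.log((d_min + 1) / (d_min + 1e-4))  # high when very close
        return self.classifier(sim)       # class logits

model = ProtoSketch()
logits = model(torch.randn(4, 1, 224, 224))  # 4 dummy grayscale images
print(logits.shape)                          # torch.Size([4, 2])
```

The prediction is a weighted sum of similarities to prototypes a human can look at, which is what lets such networks compete with black boxes on accuracy while remaining inspectable.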

Are there any clinical applications over which you have strong misgivings?

I really don't like what people are doing with explaining black boxes and telling everyone they actually understand what the black box is doing, because they don't! There have been more than a couple of cases where FDA-approved models went wrong, and because they are black boxes, no one really knows why. If you care, and if it's important, build an interpretable model.

Everyone wants to use ChatGPT for medicine. Given that it provides wrong answers that it’s confident in, I don’t know why you’d want to trust it for high-stakes decisions. Just because it’s amazingly cool does not make it trustworthy.

Any thoughts on nonclinical uses of AI in healthcare—billing and coding, medical debt collections, administrative efficiencies and so on?

It's a good idea to use AI to find patterns that might be important, but if it's something that matters, a human should be making the final decision. I know there were some articles about debt collections that were totally automated and not designed very well, which prevented people from reaching a human to sort out the issue. There were also algorithms, again not designed very well, that provided service in a racially biased way because they predicted the cost of healthcare as a proxy for how sick the patient was (which is a racially biased estimate).

The problem is that AI gets implemented way too soon. People get excited about what it can do and push it out there when it’s not very good and hasn’t been tested properly. It would be lovely, though, if the AI could fill in repetitive forms for us.

Do you really believe we need a new federal department to oversee AI out of Washington, D.C.?  

Yes, but not for healthcare, mainly for other things. The FDA should be able to handle AI growth within healthcare. It should not allow algorithms to be used when they haven't been tested. We wrote some papers on in vitro fertilization, detailing cases where patients are being sold AI predictions for thousands of dollars even though the AI wasn't properly tested.

What sorts of big-picture opportunities and challenges would you expect to emerge from healthcare AI over the next several years?

Along with heart monitoring, I’d love to see AI better influence cancer care, so people could get more personalized treatments with fewer side effects. There seems to be a lot of AI in drug discovery. It’s not clear yet how successful that has been, and I’m not an expert.

I'm slightly concerned about genetics, but perhaps that's more sci-fi. I don't know how much we're going to truly be able to glean about people's health and personalities from their genetics. But the more we find, the more it needs to be heavily protected. And since genetic information is shared among family members, the privacy problem becomes even more difficult.

On balance, are you more eager than anxious about AI in healthcare—or vice versa?

I'm definitely excited about it. I've got concerns about AI's growth in other areas, but healthcare, I think, is overall a great space for AI to expand into. The major challenge is getting hold of data, since there is a lot of data-hugging that prevents AI researchers from doing their best work. A second important challenge is how to evaluate all these AI models before they get launched!

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
