10 questions clinicians—and patients—ought to ask about every AI they encounter

Technology educators, tech-policy wonks and hospital clinical leaders from three countries have collaborated to produce a helpful guide for end-users of healthcare-specific AI tools—and the patients they serve.

Released this week, the 24-page digital publication functions as a consumer-friendly primer on the proper use of AI to support decisionmaking by clinicians, administrators and other provider staff likely to engage with AI in the near- or long-term future.

Produced by the Korea Advanced Institute of Science and Technology in South Korea in cooperation with the U.K.-based Sense About Science and the Lloyd’s Register Foundation Institute for the Public Understanding of Risk at the National University of Singapore, the guide summarizes AI’s ascent in healthcare, supplies a brief glossary of terms and describes ways AI is used to treat patients.

Further in, it advises users of healthcare AI tools to find out whether the source of the data used for training and testing is known, whether the data has been collected or selected for the purpose the end-user is pursuing, whether limitations and assumptions for that purpose have been clearly stated, whether biases have been addressed, and whether the model has been tested and validated in real-world settings.

Next, the guide suggests and fleshes out a number of specific questions that stakeholders, as well as close observers such as policymakers and journalists, might ask before using, considering or covering healthcare AI.

Among these:

  1. Does the data represent the patients for whom the AI is being used?
  2. Are the patterns and relationships identified by the AI accurate?
  3. What assumptions is the AI making about patients and disease?
  4. Are the variables excluded from the model truly irrelevant?
  5. Are the results generalizable?
  6. Does the AI eliminate human prejudice from decisionmaking?
  7. How much decision weight can we put on it?
  8. How well does the AI really perform?
  9. Has its reliability been properly scrutinized?
  10. Does it make a useful real-world recommendation?

“By applying these questions, society can ensure AI developers’ solutions to modern healthcare challenges are making good use of the data and knowledge available, with minimal error, across different countries and populations, without deepening inequalities that are already high,” the authors write. “These are the AIs that will make useful real-world recommendations that clinicians can have confidence in.”

More:

“From misdiagnosing a serious disease to exacerbating racial and economic health inequalities, AI gone wrong can have life-or-death implications. There’s confusion and fear out there—fear about robots taking people’s jobs, fear about data privacy, fear of who’s ultimately responsible if an AI-supported decision turns out to be wrong. Rather than throwing out tools that can help us, we’ll be better off if we discuss the right questions now about the standards AIs should meet.”

To download the guide, titled “Using Artificial Intelligence to Support Healthcare Decisions: A Guide for Society,” click here.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
