6 serious risks associated with AI in healthcare
The rapid rise of AI could change healthcare forever, leading to faster diagnoses and allowing providers to spend more time communicating directly with patients. According to a new report from the Brookings Institution, however, there are also risks associated with AI in healthcare that must be addressed.
These are six potential risks of AI that were identified in the nonprofit organization’s report:
1. Injuries and error: “The most obvious risk is that AI systems will sometimes be wrong, and that patient injury or other healthcare problems may result,” wrote report author W. Nicholson Price II of the University of Michigan Law School. “If an AI system recommends the wrong drug for a patient, fails to notice a tumor on a radiological scan, or allocates a hospital bed to one patient over another because it predicted wrongly which patient would benefit more, the patient could be injured.”
Errors related to AI systems would be especially troubling because they can impact so many patients at once. In addition, patients and their families and friends are unlikely to react well if they find out “a computer” is the reason a significant mistake was made. And in this era of online patient reviews, it would not take long for word to get out that a provider’s AI capabilities could not be trusted.
2. Data availability: The logistics related to the patient data needed to develop a legitimate AI algorithm can be daunting. Even just gathering all of the necessary data for a single patient can present various challenges. As Price II explained, patients “typically see different providers and switch insurance companies, leading to data split in multiple systems and multiple formats.”
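The report stops at describing the problem, but a minimal sketch can make it concrete. In the toy Python example below, the record formats, field names, and values are entirely hypothetical; the point is the normalization work that fragmented data forces on developers before even one patient’s history can be assembled:

```python
import json

# Hypothetical example: the same patient's records arrive from two
# sources in different formats, with different field names and date styles.
clinic_csv_row = "P-1001,1964-03-02,metformin,HbA1c:7.1"
insurer_json = '{"member_id": "P-1001", "dob": "03/02/1964", "rx": ["metformin"]}'

def normalize_clinic(row: str) -> dict:
    """Parse a clinic's flat CSV row into a common schema."""
    pid, dob, drug, lab = row.split(",")
    name, value = lab.split(":")
    return {"patient_id": pid, "dob": dob, "meds": [drug], "labs": {name: float(value)}}

def normalize_insurer(payload: str) -> dict:
    """Parse an insurer's JSON payload and reformat its US-style date."""
    rec = json.loads(payload)
    month, day, year = rec["dob"].split("/")
    return {"patient_id": rec["member_id"], "dob": f"{year}-{month}-{day}",
            "meds": rec["rx"], "labs": {}}

def merge(a: dict, b: dict) -> dict:
    """Combine two normalized records for the same patient."""
    assert a["patient_id"] == b["patient_id"]
    return {
        "patient_id": a["patient_id"],
        "dob": a["dob"],
        "meds": sorted(set(a["meds"]) | set(b["meds"])),
        "labs": {**a["labs"], **b["labs"]},
    }

print(merge(normalize_clinic(clinic_csv_row), normalize_insurer(insurer_json)))
```

Multiply this two-source reconciliation across every provider, insurer, and format a patient touches, and the scale of the data-availability challenge the report describes becomes clear.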
3. Privacy concerns: When you’re collecting patient data, the privacy of those patients should be a top concern. Researchers may work to ensure that patient data remains private, but malicious hackers are always waiting in the wings to exploit mistakes. Even a massive company such as Google can experience problems related to patient data and privacy, showing that it’s something everyone involved in AI must take seriously.
“AI could implicate privacy in another way: AI can predict private information about patients even though the algorithm never received that information,” Price II added.
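To make this concrete, here is a toy sketch of that kind of inference. All of the data is synthetic and the variable names are invented; the model is never given the sensitive attribute, yet it recovers the attribute from innocuous-looking features that happen to correlate with it:

```python
# Toy illustration of "proxy inference" on synthetic data: the model is
# trained only on ordinary-looking features, never the sensitive attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
sensitive = rng.integers(0, 2, n)  # e.g., an undisclosed condition (invented)

# Two innocuous features that happen to correlate with the attribute:
pharmacy_visits = rng.poisson(2 + 3 * sensitive)
avg_claim_cost = rng.normal(100 + 80 * sensitive, 30, n)
X = np.column_stack([pharmacy_visits, avg_claim_cost])

X_tr, X_te, y_tr, y_te = train_test_split(X, sensitive, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Accuracy at inferring the withheld attribute: {model.score(X_te, y_te):.2f}")
```

Nothing in the training step references the sensitive column directly; the correlation alone is enough, which is exactly the privacy exposure the report flags.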
4. Bias and inequality: If the data used to train an AI system contains even the faintest hint of bias, according to the report, that bias will carry over into the resulting AI.
“For instance, if the data available for AI are principally gathered in academic medical centers, the resulting AI systems will know less about—and therefore will treat less effectively—patients from populations that do not typically frequent academic medical centers,” Price II wrote. “Similarly, if speech-recognition AI systems are used to transcribe encounter notes, such AI may perform worse when the provider is of a race or gender underrepresented in training data.”
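A small synthetic sketch can illustrate the mechanism Price II describes. The groups, features, and sample sizes below are invented, but the pattern is the one the report warns about: a model trained mostly on one population performs worse on an underrepresented one.

```python
# Synthetic sketch of training-data bias: a model fit almost entirely on
# group A generalizes poorly to group B, whose feature-outcome
# relationship differs and who made up only 5% of the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, shift):
    """Generate a group whose feature-outcome relationship depends on `shift`."""
    X = rng.normal(shift, 1.0, (n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Training data: 95% group A, 5% group B (underrepresented).
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Held-out test sets of equal size for each group.
Xa_t, ya_t = make_group(1000, shift=0.0)
Xb_t, yb_t = make_group(1000, shift=1.5)
print(f"Accuracy on well-represented group A:  {model.score(Xa_t, ya_t):.2f}")
print(f"Accuracy on underrepresented group B: {model.score(Xb_t, yb_t):.2f}")
```

Run as written, the model should score noticeably worse on the group it rarely saw during training, mirroring the academic-medical-center example above.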
5. Professional realignment: One long-term risk of implementing AI technology is that it could lead to “shifts in the medical profession.”
“Some medical specialties, such as radiology, are likely to shift substantially as much of their work becomes automatable,” Price II wrote. “Some scholars are concerned that the widespread use of AI will result in decreased human knowledge and capacity over time, such that providers lose the ability to catch and correct AI errors and further to develop medical knowledge.”
6. The nirvana fallacy: The nirvana fallacy, Price II explained, occurs when a new option is compared to an ideal scenario instead of what came before it. Patient care may not be 100% perfect after the implementation of AI, in other words, but that doesn’t mean things should remain the same as they’ve always been.
Could this fallacy lead to inaction in the American healthcare system?