3 key requirements for AI in healthcare to reach its full potential
AI could change healthcare forever. But for the technology to reach its full potential, researchers must develop and deploy it responsibly, according to a new report from the National Academy of Medicine (NAM).
The report, “Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril,” offers readers an in-depth examination of the current state of AI in healthcare. The 245-page report, featuring contributions from 27 authors, can be read in full on the NAM website. Michael E. Matheny, MD, MS, MPH, of Vanderbilt University Medical Center and the Department of Veterans Affairs, and Sonoo Thadaney Israni, MBA, of Stanford University, served as co-chairs for the project.
Matheny and Israni also co-wrote an opinion piece for JAMA covering many of the report’s findings, joined by a third co-author, Danielle Whicher, PhD, MHS, of Mathematica Policy Research, who was not among the original report’s authors.
“The promise of AI in healthcare offers substantial opportunities to improve patient and clinical team outcomes, reduce costs, and influence population health,” Matheny et al. wrote in JAMA. “Current data generation greatly exceeds human cognitive capacity to effectively manage information, and AI is likely to have an important and complementary role to human cognition to support delivery of personalized healthcare.”
However, certain challenges remain when it comes to realizing AI in healthcare’s full potential. According to Matheny and his co-authors, these are three crucial points that all AI stakeholders—researchers, data scientists, physicians and vendors alike—must keep in mind for AI development and implementation to be successful:
1. Algorithms must be trained and validated on population-representative data:
More healthcare data is available now than ever before. But the quality of the data used to develop AI algorithms still often falls short of what it should be.
“The current challenges are grounded in patient and healthcare system preferences, regulations, and political will rather than technical capacity or specifications,” the authors wrote. “It is prudent to engage AI developers, users and patients and their families in discussions about appropriate policy, regulatory and legislative solutions.”
On a related note, the team also highlighted the importance of “scrutinizing the underlying biases” of AI algorithms well before they are deployed in a clinical setting.
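To make that point concrete, the sketch below shows one common way such bias scrutiny is carried out in practice: checking a model’s discrimination separately for each demographic subgroup before deployment. Neither the NAM report nor the JAMA commentary prescribes code; the dataset, column names, and model here are hypothetical stand-ins.

```python
# Illustrative sketch: auditing a model's performance across demographic
# subgroups before clinical deployment. All data below is synthetic; the
# column names ("subgroup", "lab_value", etc.) are assumptions for the demo.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for an EHR-derived dataset, with one subgroup
# deliberately underrepresented (10% of patients).
df = pd.DataFrame({
    "age": rng.normal(55, 15, n),
    "lab_value": rng.normal(1.0, 0.3, n),
    "subgroup": rng.choice(["A", "B", "C"], n, p=[0.6, 0.3, 0.1]),
})
logit = 0.04 * (df["age"] - 55) + 2.0 * (df["lab_value"] - 1.0)
df["outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = df[["age", "lab_value"]]
train_idx, test_idx = train_test_split(df.index, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X.loc[train_idx], df.loc[train_idx, "outcome"])
test = df.loc[test_idx].copy()
test["score"] = model.predict_proba(X.loc[test_idx])[:, 1]

# A strong overall metric can mask subgroup-level failures, so report
# AUROC (and sample size) for each subgroup separately.
print(f"overall AUROC: {roc_auc_score(test['outcome'], test['score']):.3f}")
for name, grp in test.groupby("subgroup"):
    auc = roc_auc_score(grp["outcome"], grp["score"])
    print(f"subgroup {name}: n={len(grp)}, AUROC={auc:.3f}")
```

Reporting subgroup sample sizes alongside performance matters here: a reassuring overall number can hide degraded performance in the smallest, least-represented groups, which is exactly the risk the authors flag with non-representative training data.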
2. Remember that the current focus is augmented intelligence, not fully autonomous AI:
Matheny et al. noted that today’s researchers are working to support healthcare providers, not replace them outright. The public, and physicians themselves, should be reminded of this whenever possible to help avoid backlash rooted in uncertainty about the technology.
“Focusing on this reality is essential for developing user trust because there is an understandable low tolerance for machine error, and these tools are being implemented in an environment of inadequate regulation and legislation,” the authors wrote.
3. Put effective training and educational programs into place:
Over time, these technologies will have such a significant effect on healthcare that proper training and educational programs will be a necessity.
“The curricula must be multidisciplinary and engage AI developers, implementers, health care system leadership, frontline clinical teams, ethicists, humanists, patients, and caregivers,” the authors wrote. “Each group brings much-needed perspectives, requirements, and expertise.”
Consumers will also need to be informed about “consent, privacy and healthcare AI savviness,” the team added.