Reality-checking 3 assumptions about healthcare AI
AI enthusiasts have varying aims and incentives for pushing the technology into healthcare, but many parrot a common set of justifications:
- AI will take care of repetitive tasks so physicians can spend more time with patients.
- By augmenting diagnostics and guiding treatments, AI will paradoxically personalize medicine.
- Machine learning will increase efficiencies and cut costs.
The chorus comes in for a tone check in an opinion piece running in the May edition of the AMA Journal of Ethics.
“If these assumptions are not acknowledged and addressed now,” write Mathew Nagy, MPH, and Bryan Sisk, MD, “then novel technologies might exacerbate, rather than mitigate, current challenges” to “the healing patient-clinician relationship.”
Nagy is a second-year medical student at the Cleveland Clinic Lerner College of Medicine of Case Western Reserve University. Sisk is a third-year clinical fellow in pediatric hematology and oncology at Washington University School of Medicine in St. Louis.
Here are synopses of the three assumptions the authors put forth and flesh out in detail:
Assumption 1: AI will relieve physicians of tedious work.

Reality check: "[A]lthough many current tasks of clinical care might be offloaded to an algorithm in the future," Nagy and Sisk write, "the time demand and intentional effort required to provide high-quality clinical care might not decrease and could in fact increase."

Assumption 2: AI will increase administrative as well as clinical efficiencies, benefiting all stakeholders.

Reality check: "If AI decreases the time required for a patient visit, the healthcare system might respond by increasing the volume of patients seen per day rather than allowing time for relationship development and shared decision making."

Assumption 3: Equipped with more time and richer data, clinicians will be able to engage meaningfully in relationship-building activities with patients.

Reality check: "[M]any clinicians report low confidence in their ability to engage in difficult or emotionally charged conversations as a reason for not engaging in shared decision making. Similarly, some clinicians avoid discussing their patient's psychosocial concerns because they are unsure how to respond."
“Without forethought and planning, the implementation of new technologies might diminish the patient-clinician relationship in the name of efficiency, accuracy or cost reduction,” Nagy and Sisk conclude. “As such, clinicians, technology developers, administrators and patient advocates should take steps to maintain the centrality of the healing relationship in medical care as AI technologies are developed and further integrated into the healthcare system.”
The AMA has posted the piece in full for free.