Medical staff must have a say in what AI is adopted for patient care, says AMA
As artificial intelligence (AI) tools become more common in healthcare, physicians and medical staff are not always given a say in which AI is adopted and used. That prompted physicians to bring a new policy resolution calling for such collaboration to the American Medical Association (AMA) House of Delegates 2024 meeting.
While healthcare AI tools hold great promise to enhance care, they also come with potential risks. Since the medical staff is ultimately responsible for what happens to patients, the AMA said “that status quo is untenable.”
The AMA physician delegates agreed and adopted the policy. It states that medical staff are responsible for providing quality care and improving health outcomes, which includes the development, selection and implementation of augmented intelligence. But the AMA said these activities depend on the mutual accountability, interdependence and responsibility of the medical staff and the hospital governing body for the proper performance of their respective obligations.
“Organized medical staff should be an integral part at the outset of choosing, developing and implementing augmented intelligence and digital health tools in hospital care. That consideration is consistent with organized medical staff’s primacy in overseeing safety of patient care, as well as assessing other negative unintended consequences such as interruption of, or overburdening, the physician in delivery of care,” the new policy states.
The new policy builds on AMA principles for AI in healthcare, approved by the AMA Board last fall, that address the development, deployment and use of healthcare AI. These include a call for more transparency in the use of AI and the need for regulatory oversight.
The U.S. Food and Drug Administration (FDA) does regulate AI-enabled clinical devices, which include software that has a direct impact on patient care. To date this includes about 800 FDA-cleared healthcare algorithms. But the AMA noted the FDA does not have oversight of AI-enabled technologies that fall outside its scope, including AI that may have clinical applications, such as some clinical decision support functions. Thousands of non-regulated AI algorithms are already in use in many hospital IT systems involved in patient care and hospital management, including features to enhance workflow in clinical reporting systems, billing systems, inventory control and analytics software.
“With a lagging effort towards adoption of national governance policies or oversight of AI, it is critical that the physician community engage in development of policies to help inform physician and patient education, and guide engagement with these new technologies. It is also important that the physician community help guide development of these tools in a way that best meets both physician and patient needs, and help define their own organization’s risk tolerance, particularly where AI impacts direct patient care,” the AMA stated in its principles of AI document.
The AMA said it is committed to ensuring that AI can meet its full potential to advance clinical care and improve clinician well-being. But this can only be accomplished, it said, by ensuring that physicians engage only with AI that satisfies rigorous standards for advancing health equity, prioritizing patient safety, and limiting risks to both physicians and patients.