Mental-health professionals urged to step up human oversight of ‘robot therapists’

Academic and popular writings on the use of “embodied” AI in mental healthcare are piling up fast. But where’s the guidance for psychiatrists, psychotherapists and clinical social workers looking to use robots, avatars and chatbots with real patients?

It has yet to be produced, leaving a yawning gap particularly around possible ethical implications, according to a systematic survey conducted at the Technical University of Munich in Germany and published in the Journal of Medical Internet Research.

Amelia Fiske, PhD, and colleagues reviewed the relevant literature and examined established principles of medical ethics, then analyzed the ethical and social aspects of embodied AI applications currently or potentially available to behavioral-health workers.

Examples of these technologies include robot dolls that help autistic children learn to communicate, avatars that calm patients experiencing psychosis, and virtual chat sessions for people with anxiety or depressive disorders.

Embodied AI, the authors found, “is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies to negotiate best research and medical practices in innovative mental healthcare.”

Fiske and team offer a number of recommendations for high-priority areas in need of concrete ethical guidance. These include:

  • Professional associations in mental health should develop guidelines on the best use of AI in mental health services. This includes thinking through how to train and prepare young doctors for the widespread use of embodied AI in mental healthcare, for example in blended care models, the authors write.
  • AI tools in mental health should be treated as an additional resource in mental health services. “They should not be used as an excuse for reducing the provision of high-quality care by trained mental health professionals,” Fiske et al. add, “and their effect on the availability and use of existing mental health care services will need to be assessed.”
  • Embodied AI should be used transparently. Guidance on how to implement applications in a way that respects patient autonomy needs to be developed, for example, “regarding when and how consent is required and how to best deal with matters of vulnerability, manipulation, coercion and privacy.”

In a press release sent by the university, study co-author Alena Buyx, MD, PhD, underscores that medical science has to date produced very little information on how people are affected by therapeutic AI.

Through contact with a robot, she adds as an example, “a child with a disorder on the autism spectrum might only learn how to interact better with robots—but not with people.”

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

