Mental-health professionals urged to step up human oversight of ‘robot therapists’

Academic and popular writings on the use of “embodied” AI in mental healthcare are piling up fast. But where’s the guidance for psychiatrists, psychotherapists and clinical social workers looking to use robots, avatars and chatbots with real patients?

It has yet to be produced, leaving a yawning gap, particularly around possible ethical implications, according to a systematic survey conducted at the Technical University of Munich in Germany and published in the Journal of Medical Internet Research.

Amelia Fiske, PhD, and colleagues reviewed the relevant literature and examined established principles of medical ethics, then analyzed the ethical and social aspects of embodied AI applications currently or potentially available to behavioral-health workers.

Examples of these technologies include robot dolls that help autistic children communicate, avatars that calm patients suffering from psychosis, and virtual chat sessions for people with anxiety or depression.

Embodied AI, the authors found, “is a promising approach across the field of mental health; however, further research is needed to address the broader ethical and societal concerns of these technologies to negotiate best research and medical practices in innovative mental healthcare.”

Fiske and team offer a number of recommendations for high-priority areas in need of concrete ethical guidance. These include:

  • Professional associations in mental health should develop guidelines on the best use of AI in mental health services. This includes thinking through how to train and prepare young doctors for the widespread use of embodied AI in mental health, such as in blended care models, the authors write.
  • AI tools should be treated as an additional resource in mental health services. “They should not be used as an excuse for reducing the provision of high-quality care by trained mental health professionals,” Fiske et al. add, “and their effect on the availability and use of existing mental health care services will need to be assessed.”
  • Embodied AI should be used transparently. Guidance on how to implement applications in a way that respects patient autonomy needs to be developed, for example, “regarding when and how consent is required and how to best deal with matters of vulnerability, manipulation, coercion and privacy.”

In a press release sent by the university, study co-author Alena Buyx, MD, PhD, underscores that medical science has to date produced very little information on how people are affected by therapeutic AI.

Through contact with a robot, she adds as an example, “a child with a disorder on the autism spectrum might only learn how to interact better with robots—but not with people.”

Dave Pearson
