New global guidance from WHO on developing and adopting healthcare AI

In the five or so years healthcare AI has been firing imaginations worldwide, most of the attention has focused on applications wowing crowds in medical research arenas. Study findings have spurred predictions about future scenarios rather than descriptions of techniques ready to adopt today.

That’s changing now. With AI-inclusive devices routinely earning regulatory approval for use in clinical settings, the time seems ripe for helping the technology translators talk above the din of the visionary futurists.

So suggests the World Health Organization in a 150-page report released June 28.

In its overview of the publication, “Ethics and governance of artificial intelligence for health,” the WHO cautions:

“While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development and to support governments carrying out public health functions, including surveillance and outbreak response, such technologies … must put ethics and human rights at the heart of its design, deployment and use.”

The organization says the report is the fruit of 18 months’ worth of discussion and debate among thought leaders with expertise in ethics, digital technology, law and human rights, as well as experts from governmental health agencies and departments.

The report lays out six principles intended to help AI implementers “limit the risks and maximize the opportunities intrinsic to the use of AI for health.”

The principles are:

  • Protecting human autonomy
  • Promoting human well-being and safety and the public interest
  • Ensuring transparency, explainability and intelligibility
  • Fostering responsibility and accountability
  • Ensuring inclusiveness and equity
  • Promoting AI that is responsive and sustainable

In the report’s foreword, Dr. Soumya Swaminathan, the WHO’s chief scientist, writes:

“If employed wisely, AI has the potential to empower patients and communities to assume control of their own healthcare and better understand their evolving needs. But if we do not take appropriate measures, AI could also lead to situations where decisions that should be made by providers and patients are transferred to machines, which would undermine human autonomy, as humans may neither understand how an AI technology arrives at a decision, nor be able to negotiate with a technology to reach a shared decision. In the context of AI for health, autonomy means that humans should remain in full control of healthcare systems and medical decisions.”

Read the full report here.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
