2-year chatbot mission unites scores of co-developers, yields ‘trustworthy and friendly Rosa’

Women’s health specialists have demonstrated the customization of a commercial AI-based chatbot platform for patients with hereditary breast and ovarian cancer.  

The pilot project took two years, many hands and much manual labor to complete, but the team suggests the payoffs have been worth the effort.

First among these was expanding the initial prototype’s scope from 500 to 2,257 predefined questions and from 67 to 144 corresponding answers.

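In practice, a bank of that size amounts to a many-to-one mapping: thousands of predefined question phrasings funneling into a much smaller set of vetted answers. The sketch below is ours, not the study’s; the dictionary names, sample wording and exact-match lookup are hypothetical stand-ins for whatever the commercial platform actually does, shown only to illustrate the questions-to-answers ratio described above.

```python
# Minimal sketch (illustrative only): many predefined question phrasings
# mapping onto a smaller set of curated answers.

ANSWERS = {
    "brca_inheritance": "BRCA1 and BRCA2 variants are inherited in an autosomal dominant pattern ...",
    "test_logistics": "Genetic testing is usually performed on a blood sample ...",
}

# Several phrasings of the "same" question point to one vetted answer,
# which is how 2,257 questions can resolve to 144 answers.
QUESTION_BANK = {
    "can my children inherit this?": "brca_inheritance",
    "is the brca mutation hereditary?": "brca_inheritance",
    "how is the genetic test done?": "test_logistics",
    "what happens at the blood test?": "test_logistics",
}

def answer(user_question: str) -> str:
    """Return the curated answer for a recognized phrasing, else a safe fallback."""
    key = QUESTION_BANK.get(user_question.strip().lower())
    if key is None:
        return "I'm not sure. Please ask your genetic counselor."
    return ANSWERS[key]

if __name__ == "__main__":
    print(answer("Can my children inherit this?"))
```
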
Next came feedback from patients, who widely indicate they forgive the tool’s limited functionality because they like its interface and judge its avatar, “Rosa,” trustworthy and user-friendly.

The work was conducted in Bergen, Norway, and is described in a study published in Patient Education and Counseling.

Lead author Elen Siglen of the Western Norway Familial Cancer Center at Haukeland University Hospital and colleagues assembled a multidisciplinary team to select the software and modify it iteratively with ongoing input from team members.

These included patient representatives, IT engineers, lab and administrative staff, genetic counselors and clinical geneticists from around the country, Siglen and co-authors note.

Altogether some 58 individuals contributed “based on their knowledge of what the patients normally ask for before, during and after genetic counseling for hereditary breast and ovarian cancer,” the authors report.

What’s more, the patient representatives were women with a BRCA mutation who had undergone prophylactic surgeries.

“This setting made them also experts in the field, providing the patient’s perspective at every stage,” Siglen and co-authors note. “Ideally, patient representatives should have been present at every workshop, preferably different participants each time.”

Since the end of the initial development phase, project leaders have conducted interviews with patients who had access to Rosa before, during and after genetic counseling and testing.

The authors state that findings from these interviews will inform the next iteration, producing a final app version and marking the end of the pilot phase.

Sharing lessons learned so far, Siglen et al. underscore the “challenging and time-consuming” character of their undertaking.

They place this observation in the context of the satisfaction they derived from building a friendly chatbot that can serve as a competent source of information about, in this case, hereditary breast and ovarian cancer.

More:

“When AI reaches the level of being able to create adequate answers itself and recognize the nuances in written language that differentiate two similar questions, we expect to observe a paradigm shift in the use of chatbots in healthcare. In genetic counseling services, the patients will have the ability to prepare and educate themselves before meeting the genetic counselor, through the chatbot giving them correct information, in their own environment at their own pace and time. Both patients and genetic counselors may benefit from chatbots as counseling sessions may be even more personalized and tailored to the need of the patients. Chatbots may thus serve as the perfect companion to genetic counseling.”

Read the full study.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
