Google Assistant tops AI-powered devices for medication help

More adults are using AI-powered voice assistants to help manage their medications, but not all devices perform equally well. According to new research published in Nature, Google Assistant outpaces its peers, including Amazon's Alexa and Apple's Siri.

Researchers from Klick Health, an independent, full-service marketing and commercialization agency for life sciences, tested how well three voice assistants comprehended speech from 46 participants.

The study underscored how well this technology supports healthcare objectives at a time when an estimated 46% of Americans use voice assistants. These interfaces are also increasingly being used to gather health information, which can be helpful to patients at home but also carries a risk of inaccurate or inconsistent advice. Past studies have shown that patients can't rely on voice assistants for sound medical advice.

Part of the problem is that voice assistants might not be able to recognize the speech of different people when they ask about medications.

“If voice assistants cannot initially recognize the speech of different people when asked about various medications, then any subsequent response from the device would inevitably be inaccurate,” wrote Adam Palanica, PhD, a behavioral scientist at Klick and co-author of the study, and colleagues.

As such, the researchers set out to determine how well each device (Alexa, Google Assistant and Siri) comprehended the generic and brand names of the 50 most dispensed drugs in the nation. Those 50 drugs represent about 40% of total dispenses in the U.S., roughly 1.8 billion in 2016.

Google Assistant had the best results, with comprehension accuracy of 91.8% for brand medication names and 84.3% for generic medication names, according to the study. Its accuracy on generic names was even higher than the rate of fully correct participant pronunciations, which was 55.6%.

Siri performed far worse, with 58.5% accuracy for brand names and 51.2% for generic names. In addition, Google Assistant showed little variation in accuracy across accent types, while comprehension rates for the other assistants differed by 8% to 11%.

Alexa had the worst comprehension performance but took the shortest amount of time to respond.

Google Assistant's accuracy may stem from its ability to filter out extraneous speech sounds better than its peers, the researchers surmised.

“One potential reason for this result is that Google seemed to better edit speech sounds when listening for specific queries compared to Siri or Alexa. That is, Google would retroactively edit text to remove stuttering from participant voice recordings and any unnecessary ‘the’s’ and ‘um’s’, etc.,” Palanica and colleagues wrote.

Siri and Alexa, by comparison, seem to keep those words, which can contribute to misinterpretation.

The findings highlight the broad importance of accurate speech comprehension in voice assistants, particularly as AI systems continue to be adopted for healthcare uses.

“If an AI system cannot first recognize the speech of a user, then it will fail in every other subsequent task that it tries to perform,” Palanica et al. said.

“Digital voice assistants are becoming popular tools for gathering health information, so a lack of comprehension puts them at risk of providing poor, inconsistent, and potentially dangerous advice,” Yan Fossat, vice president of Klick Labs and fellow study co-author, said in a statement. “We were encouraged by some of the comprehension rates in our research, but more work needs to be done to ensure people’s health and safety.”

Amy Baxter

Amy joined TriMed Media as a Senior Writer for HealthExec after covering home care for three years. When not writing about all things healthcare, she fulfills her lifelong dream of becoming a pirate by sailing in regattas and enjoying rum. Fun fact: she sailed 333 miles across Lake Michigan in the Chicago Yacht Club "Race to Mackinac."
