Report: Understanding AI in healthcare essential to stop fake news
It is well known that AI has the potential to upend several areas of medicine, including targeted treatment and diagnostics. However, a lack of knowledge about AI in the healthcare space could do harm by fueling the spread of misinformation and fake news.
Only 17% of surveyed people with heart and circulatory diseases said they were aware of any current uses of AI in the diagnosis and treatment of these conditions, according to a report from the UK’s All-Party Parliamentary Group (APPG) on Heart and Circulatory Diseases, supported by the British Heart Foundation.
“It is vital that efforts to engage the public on AI in healthcare begin right away,” the report reads. “If the knowledge gap with regards to AI in healthcare is not filled by the correct information, it will be filled by misinformation.”
AI awareness
The report looked at how AI can have an impact in medicine for patients with heart and circulatory diseases. Seven million people in the UK live with these conditions, which account for one-quarter of deaths in the country.
Despite the low level of public awareness of the topic, 90% of respondents in the report agreed it was the responsibility of the National Health Service (NHS) to inform the public about current and future uses of AI in healthcare. Nearly three-quarters said it was the responsibility of the government, and 91% said the public should be well-informed about how AI is used in the healthcare sector.
And there are several AI-led efforts to improve health, including work by Google DeepMind to predict risk factors for heart and circulatory diseases. AI can also enhance the role of a general practitioner by triaging patients who need to be seen or automatically creating GP notes during a patient visit, freeing up clinicians to interact with patients. Still, without informing the public of these critical uses in medicine, the rise of AI could instill fear and fuel fake news.
“I’m not sure the general public know that much about AI, personally,” one patient representative stated in the report. “My fear is that the perception is that it is all to do with robots, which I’ve heard many times. I really don’t think the message has got anywhere near out there yet.”
To ensure patients are informed and engaged with AI, the report recommends the NHS hold discussions with charities, the public and others to understand patient views and concerns, identify what patients need in terms of information sharing, develop routes for information to flow between policymakers and patients, and explore the best ways to engage in this effort.
Rising AI challenges
One significant challenge for policymakers is keeping regulation apace with the rise of technology and the use of AI in medicine. As more patient data, powered by AI, is utilized in care, it is important that patients are part of the conversation around design and development.
“Evidence shows that involving patients in the development of healthcare innovations, for example through co-production, can improve health outcomes and lower costs,” the report found.
However, policymakers should look to the past to avoid repeating mistakes when it comes to engaging the public with AI. For example, in 1998 a biochemist and nutritionist claimed that genetically modified potatoes could damage the stomach lining and immune systems of rats; the Royal Society later found no convincing evidence of adverse effects from the potatoes. Still, media interest in the biochemist’s claims “created an environment of mistrust,” according to the report.
Fortunately, the public is likely to be receptive to education efforts, as the vast majority support the use of AI in the healthcare space. Nearly half (48%) of survey respondents in the report strongly supported doctors using AI technologies to assist them in diagnosing and treating heart and circulatory diseases. Another 37% supported such use, 13% were unsure, and just 1% did not support or strongly did not support it.
This strong support for AI holds as long as the human element of healthcare remains intact, according to the report, which underscores that interactions between clinicians and patients cannot be replicated. Most respondents (87%) were either very comfortable or comfortable being diagnosed by a human doctor who used their own judgment as well as an AI assistant to inform their decision. Without the input of a human doctor, that comfort drops to just 15%, with 58% reporting they would be uncomfortable or very uncomfortable with that scenario.
The patient perspective is therefore critical to the success of AI in healthcare, as is preparing clinicians for how they can leverage these technologies.
As AI technologies continue advancing, governments and health associations should be at the forefront of engaging the public on their uses in healthcare. With the right resources in place, the public may embrace AI as an instrumental part of the care environment rather than fall prey to fear and misinformation.
See the full report here.