AI spots budding conspiracy theories on social media, helping public health officials intervene
A new machine learning program can identify COVID-19-related conspiracy theories on social media as they develop, helping public health officials combat misinformation online.
That’s according to the computer scientists and data experts who used nearly 2 million Twitter posts from the onset of the pandemic to build a set of algorithms. The model tracked four conspiracy theories and classified each tweet as COVID-19 misinformation or not.
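For readers curious how such a classifier works, here is a minimal Python sketch of supervised text classification on tweets. It is an illustration only, not the authors’ actual pipeline: the example tweets, labels, and choice of TF-IDF features with logistic regression are assumptions, and the real study worked with far more data and richer features.

```python
# Illustrative sketch only -- not the study's pipeline. Assumes scikit-learn
# is installed; the tiny hand-labeled sample below stands in for the
# ~2 million real tweets the researchers used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: 1 = conspiracy-related misinformation, 0 = not
tweets = [
    "5G towers are spreading the virus, wake up",
    "vaccines will implant microchips to track us",
    "new CDC guidance recommends masks in indoor public spaces",
    "local clinic expands covid testing hours this week",
]
labels = [1, 1, 0, 0]

# Bag-of-words TF-IDF features feeding a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# Score a new post: estimated probability it resembles the misinformation class
new_post = ["they want to microchip everyone through the shots"]
print(model.predict_proba(new_post)[0][1])
```

In practice, a system like the one described would also track how the language of each conspiracy theory shifts over time, rather than scoring posts in isolation.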
Co-author Dax Gerts said false tweets typically contain more negative sentiment than factual posts, and that conspiracy theories evolve over time, absorbing details from unrelated theories and real-world events.
“Because people tend to believe the first message they encounter, public health officials could someday monitor which conspiracy theories are gaining traction on social media and craft factual public information campaigns to preempt widespread acceptance of falsehoods,” Gerts, a computer scientist with Los Alamos National Laboratory in New Mexico, said in a statement.
The authors offered an example: in a March Reddit discussion, Bill Gates highlighted research he had funded to develop injectable invisible ink that could be used to record vaccinations. Soon afterward, there was an increase in words associated with anti-vaccine conspiracy theories implying the shots would secretly microchip patients for population control.
These are the kinds of theories the authors say public health officials need to track and push back against.
“If not, they run the risk of inadvertently publicizing conspiracy theories that might otherwise ‘die on the vine,’” said Courtney Shelley, a postdoctoral researcher in the Information Systems and Modeling Group at Los Alamos National Laboratory.
The full study was published in the April issue of the Journal of Medical Internet Research.