Experiments suggest AI implementation without AI education ‘leads to increasing human stupidity’

Asked to identify which of six fictional persons is most likely to be a terrorist, 85% of 1,500 participants in a psychology experiment selected one of the least likely suspects.  

And they did so specifically because they’d seen an AI robot make the ridiculous choice first.

Those who did not first observe the AI in action made more reasonable selections.

Based on these and other results from this experiment and a second one, lead researcher Michal Klichowski, PhD, of Adam Mickiewicz University in Poland arrived at two conclusions.

First, people trust AI. “Its choice can make absolutely no sense, and yet people assume that it is wiser than they are (as a certain form of collective intelligence),” Klichowski comments. Most people seem susceptible to this effect, he adds, “and in the future it will have even greater impact because the programmed components of intelligent machine operation have started to be expressly designed to calibrate user trust in AI.”

Second, developing AI without educating people about its limitations as well as its potential “leads to increasing human stupidity,” Klichowski notes, citing prior research.

And this troubling phenomenon, he warns, could be “driving us toward a dystopian future of society characterized by widespread obedience to machines.”

Klichowski’s in-depth description of this work appears in Frontiers in Psychology.

“[I]f we truly want to improve our society through AI so that AI can enhance human decision making, human judgment and human action, it is important to develop not only AI but also standards on how to use AI to make critical decisions, e.g., related to medical diagnosis, and, above all, programs that will educate the society about AI and increase social awareness on how AI works, what its capabilities are and when its opinions may be useful,” he concludes.

“Otherwise, as our results show, many people, often in very critical situations, will copy the decisions or opinions of AI, even those that are unambiguously wrong or false … and implement them.”

The journal has posted the paper in full for free.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.

