Researchers have found that conversations with AI chatbots can weaken people’s belief in conspiracy theories by an average of 20 percent. The effect appeared even in individuals who said the theories were central to their worldview, and it persisted for two months after the experiment. The experiments used large language models, like the one powering ChatGPT, to deliver targeted rebuttals to conspiracy believers, an approach that proved more efficient than people trying to persuade one another offline.

While up to half of the U.S. population buys into conspiracy theories, rational arguments based on facts and counterevidence seldom change people’s minds. Psychological theories suggest that conspiracy beliefs persist because they satisfy unmet needs to feel knowledgeable, secure, or valued. The effectiveness of AI chatbots at challenging these beliefs adds a new dimension to understanding the phenomenon. Chatbot conversations are also known to improve moral reasoning, and the study marks a significant step forward in the psychological understanding of conspiracy theories.

The experiments, involving thousands of participants, tested AI’s ability to change beliefs about conspiracy theories. Participants wrote down a conspiracy theory they believed, provided the evidence they felt supported it, and then engaged in conversation with an AI chatbot named DebunkBot. Before and after the conversation, they rated their conviction in a summary of that belief on a scale from 0 to 100. On average, conviction weakened by 20 percent after talking with the AI, and some participants dropped from above 50 (believing) to below 50 on the scale. The chatbot also reduced general conspiratorial thinking, beyond the specific theory being discussed.
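To make the setup concrete, the sketch below shows how a DebunkBot-style dialogue could be wired up with a general-purpose chat-completion API. This is a minimal illustration under assumptions: the model name, prompts, and function are placeholders, not the researchers’ published implementation.

```python
# Illustrative sketch of a DebunkBot-style debunking dialogue.
# Assumes the OpenAI Python SDK with an API key in the environment;
# the model name and prompt wording are assumptions, not the study's actual system.
from openai import OpenAI

client = OpenAI()

def run_debunk_session(belief_summary: str, supporting_evidence: str, turns: int = 3) -> list[dict]:
    """Hold a short persuasion dialogue targeting one stated conspiracy belief."""
    messages = [
        {
            "role": "system",
            "content": (
                "You are a respectful assistant. The user holds the belief below. "
                "Using accurate facts and counterevidence, gently challenge the "
                "specific claims they raise, without mockery or political slant.\n"
                f"Belief: {belief_summary}\n"
                f"Their supporting evidence: {supporting_evidence}"
            ),
        },
        {"role": "user", "content": "Here is why I think this is true: " + supporting_evidence},
    ]
    for _ in range(turns):
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # assumption: any capable chat model would do
            messages=messages,
        )
        text = reply.choices[0].message.content
        print("DebunkBot:", text)
        messages.append({"role": "assistant", "content": text})
        user_turn = input("Your response (press Enter to stop): ")
        if not user_turn:
            break
        messages.append({"role": "user", "content": user_turn})
    return messages

# In the study, conviction was rated on a 0-100 scale before and after the dialogue;
# a drop from, say, 80 to 64 corresponds to the reported ~20 percent average reduction.
```

The key design point the study relies on is that the model sees the participant’s own stated belief and evidence, so each rebuttal can be tailored to the specific claims rather than delivered as a generic fact sheet.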

Professional fact-checking confirmed that the chatbot’s responses were accurate and free of political bias. While these findings offer promise for combating misinformation and conspiracy theories, applying them in the real world may be challenging. Research shows that conspiracy theorists are among the least likely to trust AI, which could hinder adoption of such tools for persuasion. There are also concerns that AI could be misused to spread misinformation rather than debunk it.

Overall, the study highlights the potential of AI chatbots to influence belief systems, particularly by challenging conspiracy theories. Despite the limitations and challenges of putting these findings into practice, AI-driven interventions against misinformation could become a valuable tool for promoting critical thinking and curbing the spread of conspiracy theories.
