Why artificial intelligence is better than humans at talking people out of their conspiracy theories

LOS ANGELES — Approximately half of Americans subscribe to some type of conspiracy theory, and their fellow human beings haven't had much success persuading them out of their rabbit holes.

Maybe they could learn a thing or two from an AI-powered chatbot.

In a series of experiments, an artificially intelligent chatbot managed to shake more than a quarter of participants' confidence in their favorite conspiracy theory. The average conversation lasted less than 8½ minutes.

The results were published Thursday in the journal Science.

The failure of facts to persuade people that we really did land on the moon, that Al-Qaeda really was responsible for the attacks of September 11, and that President Biden really did win the 2020 election has fueled fears, among other things, of a post-factual era that favors personal beliefs over objective evidence.

“People who believe in conspiracy theories rarely, if ever, change their minds,” said study leader Thomas Costello, a psychologist at American University who studies political and social beliefs. “In some ways, it feels better to believe that there is a secret society that controls everything than to believe that entropy and chaos reign.”

But the study suggests the problem lies not in the persuasiveness of the facts, but in our inability to supply the right combination of facts to address the specific reasons for a person's skepticism.

Costello and his colleagues attributed the chatbot's success to the detailed, tailor-made arguments it crafted for each of the 2,190 study participants with whom it interacted.

For example, a person who doubted that the Twin Towers could have been brought down by airplanes because jet fuel doesn't burn hot enough to melt steel was told that jet fuel reaches temperatures of up to 1,832 degrees, enough to cause steel to lose its structural integrity and trigger a collapse.

A person who didn't believe that Lee Harvey Oswald had the skills necessary to assassinate President John F. Kennedy was told that Oswald was a sniper in the Marines and would have had little trouble making an accurate shot from about 80 yards away.

And someone who believed Princess Diana was killed so Prince Charles could remarry was reminded of the eight-year gap between Diana's fatal car crash and the future king's second wedding, undermining the argument that the two events were related.

The results suggest that “any kind of belief that people have that is not based on good evidence could be changed,” said study co-author Gordon Pennycook, a cognitive psychologist at Cornell University.

“It’s really reassuring to know that evidence actually matters,” he said.

The researchers first asked Americans to rate the extent to which they agreed with 15 common conspiracy theories, including the theory that the virus responsible for COVID-19 was developed by the Chinese government and the theory that the U.S. military has concealed evidence of a UFO landing in Roswell, New Mexico. After completing another task, participants were asked to describe a conspiracy theory they found particularly compelling and explain why they believed it.

The prompt led 72% of them to share their feelings about a conspiracy theory. Of this group, 60% were randomly chosen to discuss the theory with the large language model GPT-4 Turbo.

The conversations began with the chatbot summarizing the human's description of the conspiracy theory. The human then rated the degree to which he or she agreed with the summary on a scale of 0 to 100.

From there, the chatbot set out to prove that nothing was fishy. To make sure it wasn't stretching the truth to appear more convincing, the researchers hired a professional fact-checker to review 128 of the bot's claims about various conspiracies. One was deemed misleading; the rest were true.

The bot also proved to be charming. In one case, it praised a participant for “critically engaging with historical events” while reminding him that it's “important to distinguish between what is theoretically possible and what is supported by evidence.”

Each conversation included three rounds of evidence from the chatbot, each followed by a response from the human. (You can try it yourself here.) Participants then looked at their summarized conspiracy statements again. Their agreement ratings dropped by an average of 21%.

In 27% of cases, the decline was so great that the researchers considered the person to no longer be sure of their conspiracy belief.

The 40% of participants who served as a control group were also given summaries of their favorite conspiracy theory and rated them on a scale of 0 to 100. They then chatted with the chatbot about neutral topics such as the U.S. healthcare system or the relative merits of cats and dogs. When these people were asked to reconsider their conspiracy theory summaries, their ratings dropped by an average of just 1%.

The researchers checked back with participants 10 days and two months later to see whether the effects had worn off. They had not.

The team repeated the experiment with a different group, asking people about their views on conspiracy theories in a much more roundabout way. This time, discussing their chosen theory with the bot resulted in a 19.4% drop in their ratings, compared with a 2.9% drop for those who chatted about something else.

The conversations “really changed people’s minds,” said co-author David Rand, a computational social scientist at MIT who studies how people make decisions.

“The effect did not vary significantly depending on which conspiracy was named and discussed,” Rand said. “It worked for classic conspiracies like the JFK assassination and the rumors about the moon landing and the Illuminati, things like that. And it also worked for modern, more politicized conspiracies like those surrounding election fraud in 2020 or COVID-19.”

In addition, people who discussed one conspiracy theory with the AI chatbot also became more skeptical of others. After the conversations, their affinity for the 15 common theories declined significantly more than that of people in the control group.

“People are less conspiracy-minded,” Rand said. “They are also more likely to ignore or block social media accounts that spread conspiracy theories or to argue with people who promote those conspiracy theories.”

Another encouraging sign is that the bot was unable to dissuade people from believing in conspiracies that were actually true, such as the secret MK-Ultra project, which tested whether drugs, torture, or brainwashing could improve interrogations of unsuspecting subjects. In some cases, the chatbot discussions made people believe in these conspiracies even more.

“It wasn't mind control, it was just, you know, getting people to do what they wanted,” Rand said. “Basically, it was following the facts.”

Researchers who weren't involved in the study called it a welcome advance.

In an essay that accompanied the study, psychologist Bence Bago of Tilburg University in the Netherlands and cognitive psychologist Jean-François Bonnefon of the Toulouse School of Economics in France said the experiments showed that “a scalable intervention to recalibrate false beliefs may be within reach.”

The researchers ran one experiment in the week following the July 13 assassination attempt on former President Trump. After helping the AI program find credible information about the attack, they found that talking to the chatbot reduced people's belief in assassination-related conspiracy theories by 6 to 7 percentage points, which Costello called “a noticeable effect.”

Bago and Bonnefon also questioned whether conspiracy theorists would be willing to engage with a bot. Rand said he doesn't think that would be an insurmountable problem.

“One advantage of this is that conspiracy theorists are often not ashamed of their beliefs,” he said. “You could imagine just going to conspiracy forums and inviting people to do their own research by talking to the chatbot.”

Rand also suggested buying search engine ads so that when someone searches for “deep state,” they'll see an invitation to discuss it with an AI chatbot.

Robbie Sutton, a social psychologist at the University of Kent in England who studies why people believe in conspiracy theories, called the new work “an important step forward.” But he noted that most people in the study stuck to their beliefs despite receiving “high-quality, factual counterarguments” from a “highly competent and respectful chatbot.”

“In this sense, there is more resistance than openness,” he said.

Sutton added that the findings don't shed much light on what draws people to conspiracy theories in the first place.

“Interventions like this are basically an ambulance at the bottom of the cliff,” he said. “We need to focus our efforts more on what's happening at the top of the cliff.”

___

©2024 Los Angeles Times. Visit www.latimes.com. Distributed by Tribune Content Agency, LLC.

