AI lie detectors lead people to make more false accusations, study finds

Jul 10, 2024

Participants given a lie-detecting AI were more likely to trust it, readily agreeing even when it falsely labeled a statement as a lie.

Researchers have discovered that participants were more likely to accuse others of lying when supported by a lie-detecting AI assistant, suggesting that proponents of this technology should pause before pushing for its wider implementation.

“Our society has strong, well-established norms about accusations of lying,” said Nils Köbis, the study’s lead author at the University of Duisburg-Essen in Germany, in a press release. These norms stem from the fact that being falsely accused or making false accusations can greatly harm one’s social standing.

Since humans are terrible at detecting lies, the risk of making a false accusation is even higher — but technology is now changing this calculation. 

Systematic fact-checking protects the accused from false accusations. If automated and scaled up, the once time-consuming process may become more widely available. “[However,] the real technological game changer may consist of automatic lie detection that decreases the accountability of the accuser rather than automated fact-checking that reduces harm to the accused,” warned Köbis and colleagues in their paper.

Lie-detecting AI  

Lie-detection technology is advancing on the back of algorithms trained to spot falsehoods via physical reactions, behaviors, and even patterns in writing. The team therefore hypothesized, “If this AI technology continues to improve and becomes massively available, it may disrupt the way people largely refrain from accusing each other of lying.”

To test their assumption, the team had 986 participants write one true and one false statement about their upcoming weekend plans. To develop a lie-detecting algorithm, the team used an open-source language model developed by Google. They split the statements into five equally sized groups, repeatedly training the algorithm on four of the groups (80% of the data) and testing it on the remaining group (the other 20%).

The accuracy of the algorithm was determined by testing its overall performance on all five test datasets. After training, the algorithm performed significantly better than humans and correctly identified a true or false statement 66% of the time.   
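For readers curious what this training-and-testing cycle looks like in practice, here is a minimal sketch of five-fold cross-validation for a text classifier. A simple TF-IDF plus logistic regression pipeline stands in for the fine-tuned Google language model the team actually used, and the statements and labels below are hypothetical placeholders rather than the study’s data.

```python
# Minimal sketch of 5-fold cross-validation for a text-based lie detector.
# A TF-IDF + logistic regression pipeline stands in for the fine-tuned
# Google language model used in the study; the statements and labels
# below are hypothetical placeholders, not the study's data.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

statements = np.array([
    "I am visiting my grandmother this weekend.",  # true statement
    "I am running a marathon on Saturday.",        # false statement
] * 50)
labels = np.array([1, 0] * 50)  # 1 = true, 0 = lie

kfold = KFold(n_splits=5, shuffle=True, random_state=0)
accuracies = []
for train_idx, test_idx in kfold.split(statements):
    # Train on four folds (80% of the data), test on the held-out fold (20%)
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(statements[train_idx], labels[train_idx])
    accuracies.append(model.score(statements[test_idx], labels[test_idx]))

# Overall accuracy is the performance pooled across all five held-out folds
print(f"Mean cross-validated accuracy: {np.mean(accuracies):.2f}")
```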

A further 2000 participants were recruited to judge whether the statements were true or not. These volunteers were divided into four groups: baseline, forced, blocked, and choice. The baseline group received no assistance, and the forced group got AI assistance whether or not they asked for it. The blocked and choice groups were both able to ask for AI assistance, but only the choice group received it.

The baseline group was able to correctly identify true or false statements 46% of the time. However, despite knowing that half of the statements were lies, only 19% of the participants flagged a statement as a lie, demonstrating how reluctant people are to make accusations without the help of lie-detecting AI.

In the forced group, which received mandatory AI help, the rate of accusations rose, with over a third of participants making an accusation of falsehood. Notably, this was significantly higher than in the two groups that received no assistance.

A tendency to agree with AI

Overall, the participants also tended to agree with the algorithm. Only 13% of people disagreed when the AI predicted a statement was true, but 40% agreed with the AI when it flagged a statement as false. The result was more dramatic in the choice group, where AI assistance could be requested.

A whopping 84% of participants went along with the algorithm and made an accusation of a lie. “It shows that once people have such an algorithm on hand, they would rely on it and maybe change their behaviours,” said Köbis. “This is quite alarming, and it shows we should be really careful with this technology.” 

Surprisingly, most participants declined AI assistance when given the chance. Despite being told that the algorithm was better than humans at detecting false statements, only a third of participants in the blocked and choice groups requested help from the machine.

Köbis believes this could be due to a known propensity for humans to overestimate their lie detection capability. “We’ve seen in various studies that people are overconfident in their lie detection abilities, even though humans are really bad at it,” he said. 

The team recommends that policymakers reconsider using lie detection technology on important or highly charged matters such as evaluating asylum claims at borders. Several studies show that algorithms do make mistakes and can reinforce biases, yet people still believe they are infallible.  

“There’s such a big hype around AI, and many people believe these algorithms are really, really potent and even objective,” said Köbis. “I’m really worried that this would make people over-rely on it, even when it doesn’t work that well.”

Reference: Alicia von Schenk, et al. Lie detection algorithms disrupt the social dynamics of accusation behavior, iScience (2024). DOI: 10.1016/j.isci.2024.110201 

