Image credit: camilo jimenez on Unsplash
Misinformation permeates our digital spaces to an alarming degree. On platforms such as Facebook, false claims can receive six times as much engagement as factual news, as reported in a separate study. Professional fact-checking is the main method of combating the proliferation of falsehoods: news outlets and tech companies task an individual or a small team with verifying an article's claims. The problem is that this approach does not scale, since fact-checking is a highly labor-intensive process in a breakneck news cycle. However, a new study published in Science Advances has demonstrated a secret weapon against misinformation: the wisdom of crowds.
The study, conducted by researchers from the Massachusetts Institute of Technology (MIT) and the University of Regina, describes how collective wisdom might help keep the news accurate. The "wisdom of crowds" phenomenon refers to how the aggregated opinions of a crowd of laypeople can align with the opinions of a small group of experts.
To test this, the researchers collected 207 articles that had been flagged by an internal Facebook algorithm. Articles were flagged on suspicion of inaccuracy, because they had been widely shared, or because they covered topics involving politics or health. These articles were then evaluated by two groups: 1,128 regular readers recruited from the United States public, dubbed "laypeople," and three professional fact-checkers.
The laypeople each received the headline and first sentence of 20 news stories and were asked seven questions about how accurate and unbiased those stories were. Concurrently, the fact-checkers received all 207 articles in full, researched the validity of each one, and then rated their accuracy using a similar set of questions.
All three fact-checkers agreed on the validity of about half of the articles. Two of the three fact-checkers agreed on another 42% of the articles, and on the remaining 9%, all three disagreed.
To compare these findings with the laypeople's ratings, the researchers first balanced the lay group for political ideology. This new group, which had an equal number of self-identified Democrats and Republicans, was an effort to remove as much political bias from the results as possible and to mimic a real-world sample of random individuals. The researchers found that the ratings of this group correlated strongly with those of the fact-checkers, indicating that a politically balanced group drawn from the public has fact-checking ability very similar to that of professionals.
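The statistical intuition behind this result can be illustrated with a toy simulation. This sketch is not the study's data or method; the article counts, noise levels, and rating scale below are all hypothetical. It shows how averaging many noisy lay ratings tracks expert ratings far better than any single layperson does:

```python
import random
import statistics

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical setup: each article has a "true" accuracy score on a
# 1-7 scale; expert ratings are lightly noisy, lay ratings much noisier.
n_articles, crowd_size = 200, 25
truth = [random.uniform(1, 7) for _ in range(n_articles)]
experts = [t + random.gauss(0, 0.3) for t in truth]

# One layperson's ratings vs. the mean rating of a crowd of laypeople.
single_lay = [t + random.gauss(0, 2.0) for t in truth]
crowd_mean = [
    statistics.fmean(t + random.gauss(0, 2.0) for _ in range(crowd_size))
    for t in truth
]

print(f"one layperson vs. experts:   r = {pearson(single_lay, experts):.2f}")
print(f"crowd of {crowd_size} vs. experts: r = {pearson(crowd_mean, experts):.2f}")
```

Because independent errors cancel when averaged, the crowd mean's noise shrinks with the square root of the crowd size, which is the core of the wisdom-of-crowds effect the study leverages.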
“These readers weren’t trained in fact-checking, and they were only reading the headlines and lead sentences, and even so, they were able to match the performance of the fact-checkers,” said Jennifer Allen, a Ph.D. student at MIT’s Sloan School of Management and author of the study. Allen and her co-authors argue that the wisdom-of-crowds phenomenon is at play, and that this study shows it holds up even in the face of polarizing information.
While these results are promising, a caveat lies in how these crowdsourced fact-checkers are chosen. As this study demonstrates, any future, scaled-up application of the idea requires a politically balanced group of public fact-checkers. If the group tilts toward one side of the political spectrum, its accuracy ratings will likely tilt the same way. Misinformation would then continue to spread wherever it aligns with the group's political majority, a concern the authors acknowledge.
“Most people don’t care about politics and care enough to try to influence things,” explained David G. Rand, co-author and marketing professor at MIT. “But the concern is that if you let people rate any content they want, then the only people doing it will be the ones who want to game the system. Still, to me, a bigger concern than being swamped by zealots is the problem that no one would do it. It is a classic public goods problem: Society at large benefits from people identifying misinformation, but why should users bother to invest the time and effort to give ratings?”
Social media is an unfortunate breeding ground for misinformation. As tech giants like Facebook and Twitter begin rolling out their own versions of crowdsourced fact-checking, they face two major problems. First, how can they get a seemingly apathetic audience, caught in echo chambers of its own beliefs, to care about fact-checking news and content? Second, assuming that interest can be generated, how can these organizations ensure that their groups of laypeople are politically balanced?
However, there may be an even bigger issue at play: Why should these websites even care? In this era of the click economy, where increased engagement means increased revenue, social media and news sites likely benefit from the spread of misinformation. So, while it is ethically crucial for these sources of news and engagement to be factually accurate, it may prove lucrative to look the other way.
Reference: Jennifer Allen et al., Scaling up fact-checking using the wisdom of crowds, Science Advances (2021). DOI: 10.1126/sciadv.abf4393. Quotes adapted from a press release by MIT.