The study specifically examined how rapidly fake news spreads on Twitter, classifying news as true or false using information from six fact-checking organizations. The study authors aimed to be apolitical in distinguishing what was true or false.
"Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information", the team, led by Sinan Aral of the Massachusetts Institute of Technology, wrote in the journal Science. The top 1 percent of tweets carrying false news routinely reached between 1,000 and 100,000 people; true news-related tweets, in contrast, rarely reached more than 1,000.
In the study, Aral's team started with around 3,000 stories that had been classified as fact or fiction by fact-checking organizations such as Snopes, PolitiFact, and FactCheck.
All told, "falsehoods were 70% more likely to be retweeted than the truth", even though the accounts most responsible for circulating fake stories often had fewer followers, were less active on Twitter, and were more often unverified. The researchers also identified common themes in the phrasing of replies to false rumors: users more frequently expressed disgust and surprise when commenting on untruths.
"False news is more novel, and people are more likely to share novel information".
Concern over bogus stories online has escalated in recent months because of evidence that the Russians spread disinformation on social media during the 2016 presidential campaign to sow discord in the USA and damage Hillary Clinton. The study also found that falsehoods were retweeted by more unique users than true statements at every depth of a cascade.
"No matter how you slice it, falsity wins out", said co-author Deb Roy, who runs MIT's Laboratory for Social Machines and is a former chief media scientist at Twitter.
"We conclude that human behavior contributes more to the differential spread of falsity and truth than automated robots do", they wrote; the researchers cite estimates that as many as 60 million "bots" post automated updates on Facebook and up to 48 million on Twitter. As for the data itself, "We don't know of any other way to get a more rigorous data set", Aral said, than using independent fact-checkers that are almost unanimous.
Why do people fall for it, whether it's from a bot or a real friend? False political stories - researchers didn't separate conservative versus liberal - and stuff that was surprising or anger-provoking spread faster than other types of lies, Aral said. Plus, people like to repeat information that seems to affirm their beliefs.
On any given news item, the fact-checking organizations agreed between 95 and 98 percent of the time. Some falsehoods, of course, are far more pernicious than others, such as conspiracy theories about the Parkland high school shooting in Florida.