On Twitter, ‘supersharers’ spread majority of fake news


Less than 1 percent of users shared 80 percent of fake news, study finds

Release Date: January 24, 2019


“While it is necessary to study how foreign and state actors influenced the spread of fake news, it is equally as important to understand why these seemingly ordinary people decided to heavily promote and embed themselves within this content.”
Kenny Joseph, assistant professor
Department of Computer Science and Engineering

BUFFALO, N.Y. - A new report on the influence of social media during the 2016 election found that a massive amount of fake news was produced and consumed by a very small number of users.

The report, published online Thursday in the journal Science, was co-authored by University at Buffalo computer scientist Kenny Joseph.

Joseph said he was surprised “at how far a small number of people went to promote fake news.”

“We suspect some people used automation tools typically reserved for large organizations in order to share large volumes of fake news,” said Joseph. “While it is necessary to study how foreign and state actors influenced the spread of fake news, it is equally as important to understand why these seemingly ordinary people decided to heavily promote and embed themselves within this content.”

What is fake news?

There is not widespread agreement on what constitutes “fake news.”

The report’s lead author, David Lazer, a Distinguished Professor of Political Science and Computer and Information Science at Northeastern University, defines it as a “subgenre of misinformation,” calling it “information regarding the state of the world that’s constructed with disregard of the facts and invokes the symbols of existing truth-tellers. It misinforms by appealing to the very worst of human nature, and undermines truth-tellers at the same time.”

Lazer and colleagues used his definition of fake news sources to guide the study: outlets that “lack the news media’s editorial norms and processes for ensuring the accuracy and credibility of information.” Sites they classify this way include truthfeed.com, dailycaller.com and gatewaypundit.com.

In order to stop the spread of fake news for the 2020 U.S. elections, Lazer said social media companies may need to limit how frequently their users are allowed to post.

Who are the ‘supersharers’?

In the study, the researchers found that 5 percent of political news generated in 2016 came from fake news sources. On Twitter, 0.1 percent of users shared 80 percent of that fake news, and exposure was just as concentrated: about 1 percent of users saw 80 percent of the fake news shared on Twitter, the study found.

“You have a really small group of people who just cranked out a ton of stuff, and a small number of people who got exposed to a ton of fake news,” Lazer said. The report calls these groups of people “supersharers” and “superconsumers.”
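
To make that kind of concentration concrete, here is a minimal sketch (not the study’s actual code) of how one might measure it: rank accounts by how many fake-news links they shared, then find the smallest fraction of accounts that covers 80 percent of all shares. The user IDs and counts below are invented for illustration.

```python
from typing import Dict

def fraction_of_users_behind(share_counts: Dict[str, int], target: float = 0.8) -> float:
    """Return the smallest fraction of users that together account for
    `target` (e.g., 80%) of all shares, given per-user share counts."""
    counts = sorted(share_counts.values(), reverse=True)  # heaviest sharers first
    total = sum(counts)
    running, users_needed = 0, 0
    for c in counts:
        running += c
        users_needed += 1
        if running >= target * total:
            break
    return users_needed / len(counts)

# Toy example: three prolific accounts dominate a pool of mostly quiet users.
toy = {"user_a": 500, "user_b": 300, "user_c": 200, **{f"user_{i}": 1 for i in range(100)}}
print(fraction_of_users_behind(toy))  # a small fraction of accounts covers 80% of shares
```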

To track the prevalence of fake news among people on Twitter, Lazer and his colleagues matched U.S. voter registration records to Twitter accounts. Using this method, the researchers could sift out bot accounts and focus solely on human users. After an additional vetting process, the researchers were left with more than 16,000 accounts linked to real, voting U.S. residents that they used for the study.
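
The paper describes this record linkage in detail; as a rough illustration only, the sketch below joins a voter file to Twitter profiles on a normalized name and keeps only unambiguous one-to-one matches. All field names and records here are hypothetical, and the study’s actual linkage and bot-filtering steps were considerably more involved.

```python
import pandas as pd

# Hypothetical voter records and Twitter profiles (illustrative fields only).
voters = pd.DataFrame([
    {"name": "Jane Doe", "state": "NY", "city": "Buffalo"},
])
profiles = pd.DataFrame([
    {"handle": "@jdoe", "display_name": "Jane Doe", "location": "Buffalo, NY"},
])

def normalize(name: str) -> str:
    # Lowercase and collapse whitespace so "Jane  Doe" matches "jane doe".
    return " ".join(name.lower().split())

voters["key"] = voters["name"].map(normalize)
profiles["key"] = profiles["display_name"].map(normalize)

matched = voters.merge(profiles, on="key", how="inner")
# Keep only one-to-one matches so a common name cannot link to several accounts.
unique = matched.groupby("key").filter(lambda g: len(g) == 1)
print(unique[["name", "handle"]])
```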

The researchers found that these “supersharers” of fake news sources were people from across the country but “disproportionately aged 50 or above, Republican, and female,” the report reads.

In general, “superconsumers,” or the people who had high proportions of information from fake news sources in their newsfeeds, were “more likely to be right-leaning,” the report reads. Lazer’s research doesn’t specifically delve into why right-leaning users tend to share and consume more fake news than their left-leaning counterparts.

How can we stop fake news?

Since his research shows that it’s a small group of people who “cranked out a ton” of fake news leading up to the 2016 election, Lazer suggested that one way to curb the spread of fake news in 2020 is to put a limit on the number of times a user can post in a given period.

“We’ve found that sharing tons of content is correlated with sharing tons of garbage; this super-sharing disproportionately affects fake news and misinformation,” he said. “If you put a speed limit on how often you can share, it would dramatically cut down the spread of fake news.”
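
As a toy illustration of the “speed limit” idea (a hypothetical sketch, not Twitter’s policy or the authors’ proposal rendered in code), a sliding-window limiter caps how many posts an account can make within a fixed period:

```python
import time
from collections import deque
from typing import Optional

class PostingSpeedLimit:
    """Sliding-window limiter: allow at most `max_posts` shares per `window_s` seconds."""
    def __init__(self, max_posts: int, window_s: float):
        self.max_posts = max_posts
        self.window_s = window_s
        self.timestamps = deque()

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop posts that have aged out of the window, then check capacity.
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_posts:
            self.timestamps.append(now)
            return True
        return False

limiter = PostingSpeedLimit(max_posts=3, window_s=60.0)
print([limiter.allow(now=t) for t in (0, 1, 2, 3, 70)])  # [True, True, True, False, True]
```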

Other tools, like muting and blocking, are already available to Twitter users, Lazer said. These tools can prevent fake news from ever crossing a user’s screen by cutting off the account that’s sharing it in the first place.

Joseph added: “Our work suggested that the most useful things that platforms like Twitter can do to reduce the spread of fake news are to develop better tools to monitor and demote content from highly active automated accounts, to focus on removing content from the few websites that are responsible for the vast majority of fake news, and to leverage measures of audience overlap between websites to identify when new fake news sources are emerging.”
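
The last point, flagging emerging fake news sources through audience overlap, can be illustrated with a simple set-based similarity. The sketch below uses Jaccard similarity on hypothetical sets of sharer IDs; the measures used in the study itself may differ.

```python
def audience_overlap(site_a_users: set, site_b_users: set) -> float:
    """Jaccard similarity between two sites' sharer audiences."""
    if not site_a_users and not site_b_users:
        return 0.0
    return len(site_a_users & site_b_users) / len(site_a_users | site_b_users)

# Hypothetical sharer IDs for a known fake news site, a new site, and a mainstream outlet.
known_fake = {"u1", "u2", "u3", "u4"}
new_site = {"u2", "u3", "u4", "u9"}
mainstream = {"u5", "u6", "u7", "u8"}

print(audience_overlap(known_fake, new_site))    # high overlap -> candidate for review
print(audience_overlap(known_fake, mainstream))  # low overlap
```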

These solutions will merely stem the flow of fake news, though, Lazer said. He doubts that there’s a way to eliminate fake news entirely.

“It’s not a problem that’s going to go away,” he said. “It may be a problem that can be managed, and I hope it is. But as long as there are people who believe crazy stuff, they’re going to keep sharing that stuff.”

Media Contact Information

Cory Nealon
Director of Media Relations
Engineering, Computer Science
Tel: 716-645-4614
cmnealon@buffalo.edu