
Racism and anti-Semitism surged in corners of the Web after Trump’s election, analysis shows

September 6, 2018 at 12:30 a.m. EDT
Clashes at the "Unite the Right" rally in Charlottesville, Va., in August 2017. (Evelyn Hockstein for The Washington Post)

Racist and anti-Semitic content has surged on shadowy social media platforms, spiking around President Trump’s Inauguration Day and the “Unite the Right” rally in Charlottesville and spreading hate speech and extremist views to mainstream audiences, according to an analysis published this week.

The findings, from the Network Contagion Research Institute, a newly formed group of scientists that studied hundreds of millions of social media messages, bolster a growing body of evidence about how extremist speech online can be fueled by real-world events.

The researchers found the use of the word “Jew” and a derogatory term for Jewish people nearly doubled on the “Politically Incorrect” discussion group on 4chan, an anonymous online messaging board, between July 2016 and January 2018. The use of a racial slur referring to African Americans grew by more than 30 percent in the same period.

Gab.ai, a social media site modeled on Twitter but with fewer restrictions on speech, has seen even more dramatic increases since it began operating in August 2016. The use of the term “white,” which often occurred in connection with white-supremacist themes, also surged on both platforms.

These two forums, although small relative to leading social media platforms, exerted an outsize influence on overall conversation online by transmitting hateful content to such mainstream sites as Reddit and Twitter, the researchers said. Typically this content came in the form of readily shareable “memes” that cloaked hateful ideas in crass or humorous words and imagery. (Facebook, the largest social media platform, with more than 2 billion users, is harder to study because of its closed nature and was not included in the research.)

“There may be 100 racists in your town, but in the past they would have to find each other in the real world. Now they just go online,” said one of the researchers, Jeremy Blackburn, an assistant professor of computer science at the University of Alabama at Birmingham. “These things move these radicals, these outliers in society, closer, and it gives them bigger voices as well.”

Social media has created an era of easy anonymity and instant communication among disparate groups around the world. But it has also created the conditions for outbreaks of extremist ideas.

The research, which has not yet been peer reviewed, sheds new light on how niche hate movements once relegated to dark corners of the Web can abruptly burst into the mainstream. The Charlottesville rally last year, for example, won crucial publicity through racist and neo-Nazi conversations on social media and websites such as the Daily Stormer.

“You can’t act as if the fringe will stay on the fringe, especially in the era of the Web,” said Heidi Beirich, director of the Southern Poverty Law Center’s Intelligence Project, who was not involved in the new research.

Efforts to portray the Parkland, Fla., school shooting as a hoax and its survivors as professional actors initially coalesced on fringe forums on Reddit, 4chan and 8chan, an offshoot for those who consider 4chan too restrictive, before shooting to the top of YouTube’s “Trending” list.


The QAnon conspiracy theory began circulating on the same platforms last fall and exploded into public view in August, after months in which adherents refined its central allegation, purportedly from a top-secret government agent, that President Trump is secretly battling a shadowy cabal of sex rings, death squads and deep-state elites.

The 4chan and Gab forums showed similar surges of terms referring to racial identity and white supremacy, with racially and ethnically charged terms increasing steeply on both sites after data collection began.

They also hit dramatic peaks in late January 2017, when Trump’s inauguration was celebrated by members of the “alt-right,” a movement that espouses racist, anti-Semitic and sexist views. A second, higher peak — with posts containing the terms amounting to about 10 percent of all comments on the forums — came in the days surrounding the Charlottesville alt-right rally, in August 2017, which ended in violence and the death of a counterprotester.

When asked for comment on the findings, Gab responded via its Twitter account: “Hate is a normal part of the human experience. It is a strong dislike of another. Humans are tribal. This is natural. Get over it.”


Hiroyuki Nishimura, the owner of 4chan, didn’t respond to emailed requests for comment but has in the past defended the site as a bastion of free speech.

The new research draws on a theory that such outbreaks of hate speech resemble contagious diseases that should be confronted as early as possible, before they become widespread epidemics.

“You can’t fight the disease if you don’t know what it’s made of and how it spreads,” said Joel Finkelstein, the group’s director and the research’s lead author, who recently received his doctorate in psychology from Princeton University.


The Network Contagion Research Institute formed in May as a nonprofit group. The paper on the surge in hate speech is its first, although two members of the research group, Blackburn and Savvas Zannettou, a graduate student in computer science at Cyprus University of Technology, did related work for a paper in May on how memes are created on fringe online forums. Those two and a third member of the group, Barry Bradlyn, an assistant professor of physics at the University of Illinois at Urbana-Champaign, also were co-authors on a February paper about the prevalence of hate speech on Gab.

The researchers used data analytics to show how words and images hopscotched across the online world, as different forums played distinctive roles and influenced each other.

They found, for example, that the 4chan “Politically Incorrect” board — with its high volume of posts, user anonymity and anything-goes ideology — served as the most prolific source for many of the most offensive memes that eventually spread widely across the Internet.

That board also exported anti-Semitic memes to Reddit’s “The_Donald” board, created in 2015 for supporters of Trump, and to Twitter.

Reddit spokeswoman Anna Soellner said the site implemented a new policy in October 2017 forbidding content that encourages, glorifies or calls for violence against an individual or a group of people. “We know there is more to do, and we will continue to evolve our site-wide policies, enforcement tools, and community support resources to ensure that Reddit is a welcoming place for everyone,” she said.


Taken together, the findings suggest that some users of the various platforms worked in tandem to propel hateful and conspiratorial ideas to the attention of audiences who might not seek out such content but could have their views shaped by it.

"That is very much the stated goal of many of the meme makers who open up on openly neo-Nazi spaces on 4chan and the Daily Stormer,” said Becca Lewis, who researches online extremism for the New York think tank Data & Society and has no involvement in the new group. “They see themselves as attempting to influence mainstream culture through their memes.”

Network Contagion Research Institute researchers attempted to measure the spread and influence of memes by tracking their first appearance, where they later appeared and how they mutated while moving from platform to platform. They also sketched out what they called “neighborhoods” of memes clustered by theme or prominence.

They found that, like a mutating virus, recognizable and popular images often were co-opted and contorted into hateful memes, as anonymous users skewed the features of the originals to demean ethnic groups or popularize racial hostilities.
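To make that kind of tracking concrete, here is a minimal sketch of one way near-duplicate images can be grouped by perceptual hash, in the spirit of the meme analysis the researchers describe. The specifics (Python, the Pillow and ImageHash packages, and the distance threshold) are illustrative assumptions, not the institute’s actual pipeline:

    # A minimal sketch: group meme images whose perceptual hashes are close.
    # Assumes the Pillow and ImageHash packages; filenames are hypothetical.
    from PIL import Image
    import imagehash

    def perceptual_hash(path):
        # phash yields a 64-bit fingerprint; visually similar images hash closely.
        return imagehash.phash(Image.open(path))

    def cluster_images(paths, max_distance=8):
        # Greedy single-pass clustering: an image joins the first cluster whose
        # representative hash is within max_distance bits (Hamming distance).
        clusters = []  # each entry: (representative_hash, list_of_paths)
        for path in paths:
            h = perceptual_hash(path)
            for rep, members in clusters:
                if h - rep <= max_distance:  # ImageHash subtraction = Hamming distance
                    members.append(path)
                    break
            else:
                clusters.append((h, [path]))
        return clusters

Variants of one meme (recaptioned, recolored or lightly cropped) tend to land in the same cluster, so the earliest timestamped post in a cluster can stand in for a meme’s first appearance when tracing its path across platforms.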


One meme known as the “Happy Merchant,” an offensive Jewish caricature, was used repeatedly to convey anti-Semitism. It also proved effective at reaching mainstream audiences: One comic showing the caricature faking anti-Semitic attacks such as spray-painting swastikas onto a temple, subtitled “Makin’ Hate Crimes great again!” spread from 4chan to Gab to Reddit, the country’s fifth-most popular website, in about two weeks in early 2017, according to the research.

The meme made at least 1,100 appearances across Reddit between mid-2016 and mid-2017, including on “The_Donald” board, the researchers found. After the meme’s posting on “The_Donald,” it routinely showed up on Twitter and other platforms tracked by the researchers, suggesting the board was unusually influential in aiding the meme’s spread.

Reddit’s “The_Donald” board had by that time implemented a policy saying racism and anti-Semitism “would not be tolerated.” Moderators for the board declined to comment.


Racial slurs on the sites became increasingly common, the research found. The words “Jew” and “white,” as well as a derogatory term for Jewish people, appeared in 7 percent of the roughly 100 million posts on Gab and the “Politically Incorrect” board combined.

An epithet for African Americans was used in more than 2 percent of all posts. Use of the hateful term spiked on July 8, 2016, the day after a black man killed five police officers during a mass shooting in Dallas.

The findings, researchers wrote, suggested a “worrying trend of real-world action mirroring online rhetoric” — and a possible feedback loop of online and offline hate.

"When people aggregate together” in these communities, Finkelstein said, “they end up radicalizing each other.”

Even as the researchers illuminated how hateful speech and images spread, they struggled with how to combat such powerful online forces. They worry that censoring content or banning users pushes racist and anti-Semitic content to other forums.

Instead, Finkelstein and his colleagues hope that illuminating the way hate speech spreads will make it easier to challenge, through what he calls “a digital immune system.” But it is not entirely clear how such a system would form, who would deploy it or whether it would be up to the task of defusing hateful ideas.