
Illustration by Steve McCracken

How Memes and Internet Irony Are Hijacked for Radicalization

Often humorous, bizarre, or bewildering, Internet in-jokes, hashtags, and memes form a fast-moving language that divides insiders from outsiders, building communities of like-minded people. However, not all of these communities are positive. Many, particularly online, use the coded language of memes to spread hateful or extremist ideology under the guise of humor.

In recent years, multiple violent far-right attacks have been linked to “chan culture”—subcultures built and proliferated on websites and message boards, such as 4chan, 8chan, or 8kun. Attackers uploaded manifestos and livestreams to the sites themselves, and some—including Christchurch, New Zealand, mosque shooter Brenton Tarrant—included violent, racist, or anti-Semitic memes and in-jokes from the sites in their manifestos.

“The connection between chan sites and violence is concerning not only because of the chans’ tangible connection to specific far-right attacks but of the widespread community support that exists within these online subcultures—in which violence is both trivialized and glorified,” found the authors of Memetic Irony and the Promotion of Violence within Chan Cultures, a report released in December 2020 by the Centre for Research and Evidence on Security Threats (CREST).

The report, which analyzed memes and visual culture on 12 chan sites from March 2020 through June 2020, found that memes are “deployed to promote extremist narratives under the guise of pop-cultural aesthetics, humor, and irony, thus lowering the barrier for participation.” This attracts a younger generation of digital natives, who may be initially drawn in by the visual culture and community component of message boards and then slowly become more tolerant of radical and extreme ideologies—including racism, misogyny, and bigotry—due to prolonged exposure.


The in-group element cannot be overemphasized, say Blyth Crawford and Florence Keen, two of the King’s College London researchers who authored the report. Many of the memes and jokes, when taken out of context, seem innocuous or merely odd. But when viewed by insiders—who have been soaking in the culture, language, and themes of the message board—the memes take on a different, more insidious tone.

“Aesthetics deployed within these memes are really important to that chan culture way of drawing people in,” Keen says. “So, it might look to someone unversed in that community or in misogynistic, racist narratives like sort of a joke, and in that way, it would lower the barrier to entry to some of these more extremist mind-sets.”


Furthermore, memes and in-jokes evolve over time, picking up additional layers of meaning. For example, the smirking cartoon image of Pepe the Frog was co-opted by alt-right extremists, despite the original artist’s best intentions. Over the past few years, it has morphed into a myriad of versions to suit different situations, from a nihilistic, disengaged frog cozied up in a blanket to watch events unfold to depictions of a Pepe character committing graphic acts of violence against Jewish people.

The researchers found that the intent and context of an image’s use are paramount to understanding its meaning on a message board, which can make it particularly challenging for outsiders to evaluate potential threats and risks that emerge on chan sites, Keen and Crawford say. Additionally, the visual element—which can begin with a benign image of a smug cartoon frog—“provides users with a degree of inherent deniability,” the report said. “The harsh depictions of violence when juxtaposed with such a trivial aesthetic, not only allows extreme visuals to be masked by a guise of humor, but allows users to mock outsiders who might take the brutality of the images seriously by responding with shock or condemnation.”

Memes are used across all spectrums of intent, whether that means communicating with an in-group that shares the context behind racist memes and tropes, editing images to heighten the joke or make the message more extreme, or deliberately targeting people or companies in mainstream spaces by flooding media and websites with incendiary or racist images, Crawford says.


Memes could also be weaponized in response to a major event, fomenting conflict. During the research period for the report, Crawford and Keen saw memes and dark in-jokes about COVID-19 transition from being anti-Chinese to anti-Semitic. The rise of Black Lives Matter protests in the United States triggered a spike in the use of racist memes about race wars or civil war.

“That idea of accelerationism—driving society to collapse and rebuilding it from that point of collapse—has really become something that’s so important to the far-right more consciously in recent years, and it has been taken up on chan sites in particular,” Crawford says. “That was a trend we saw developing—memes developed in response to Black Lives Matter being taken as an incitation for race war.”

In addition, chan culture appears to simultaneously glorify, trivialize, and gamify violence—challenging users to achieve “high scores” by committing acts of real-world violence. On some chan sites—including 4chan, which attracts approximately 27.7 million unique visitors per month—Christchurch shooter Tarrant has been lauded as a “saint,” and subsequent attackers like El Paso, Texas, shooter Patrick Crusius or Stephan Balliet, who attacked a synagogue in Halle, Germany, in 2019, have been dubbed Tarrant’s “disciples.”

The mass alt-right protests, counterprotests, and subsequent fatal violence in Charlottesville, Virginia, in 2017 should have served as a warning sign for governments and private security professionals about the potential risks of Internet subcultures—especially those that foster white supremacy or other extremist views, says Alex Goldenberg, lead intelligence analyst for the Network Contagion Research Institute (NCRI). However, many institutions remain unprepared to respond to threats spawned from online message boards and coded, context-dependent content, as seen when online subculture spilled over into a physical security incident during the riot at the U.S. Capitol on 6 January 2021. While the attack was shocking, it was not unprecedented, Goldenberg says.

“Disinformation is often trafficked through coded language and memes,” Goldenberg adds. Viral messaging—the wildfire spread of memes, in-jokes, and hashtags—can be leveraged to attack an organization, institution, or person, significantly shortening the period between flash and bang—the initial warning sign and a crisis.


For example, the use of hashtags to proliferate slogans and push extremist or conspiracy theorist views into mainstream media can rapidly escalate an incident. In 2020, online furniture retailer Wayfair was broadsided by a conspiracy theory that children were being trafficked in its cabinets. The rumor was easily debunked, Goldenberg says, but it went viral, with the hashtag #SavetheChildren being co-opted by QAnon supporters and others who were duped into believing the theory.

Shortly after the conspiracy theory took off, the Wayfair website was overrun with negative reviews and ratings, and message boards were awash with theories and discussions on how to short company stock. The company’s CEO, who had previously been largely unknown to the public, saw a more than 1,000 percent increase in online mentions, in addition to death threats over the trafficking conspiracy theory, Goldenberg adds. Overall, the incident—which could have happened to any company, he notes—impacted brand integrity, the bottom line, and employee safety.


During the COVID-19 pandemic, multiple conspiracy theories disrupted public health measures and mitigation strategies, and they often started in the coded language and chan culture of message boards. In January 2021, a group of anti-vaccine and far-right protesters blocked the entrance to a vaccination site at Dodger Stadium in Los Angeles, California, delaying motorists in line to be vaccinated. Right before the protest, however, there was a huge spike in the hashtag #scamdemic across message boards, warning of impending action, Goldenberg says.

Monitoring messaging sites for early spikes in potentially inflammatory content could help law enforcement and security personnel prepare for potential outbursts of protests or violent incidents. There is also a public health element to monitoring for changes in tone across chan sites, because analyzing shifts could better inform public outreach campaigns.
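As a rough illustration of that kind of monitoring, the sketch below flags hours in which a tracked term’s mention count jumps well above its recent baseline. It is a minimal example, assuming hourly counts for a term are already available from whatever collection tool an organization uses; the function name, threshold, and sample figures are invented for illustration rather than drawn from any specific monitoring product.

```python
# Minimal sketch of hashtag spike detection against a rolling baseline.
# Assumes hourly mention counts for a monitored term are already collected;
# the numbers below are illustrative, not real data.
from statistics import mean, stdev

def detect_spikes(counts, window=24, threshold=3.0):
    """Flag hours whose count exceeds the rolling mean by `threshold` standard deviations."""
    spikes = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma >= threshold:
            spikes.append(i)
    return spikes

# Illustrative hourly mention counts for a term such as "#scamdemic"
hourly_counts = [4, 6, 5, 7, 3, 5, 6, 4, 5, 6, 7, 5,
                 4, 6, 5, 7, 6, 5, 4, 6, 5, 7, 6, 5,
                 6, 5, 48, 3]  # hour 26 shows an abrupt surge

for hour in detect_spikes(hourly_counts):
    print(f"Possible spike at hour {hour}: {hourly_counts[hour]} mentions")
```

A simple rolling baseline like this would only be a first-pass filter; in practice, flagged spikes still need a human analyst to judge the context behind the surge.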

“Disinformation related to vaccines has the potential to kill tens of thousands of people,” Goldenberg notes.

There is also an insider threat element to meme monitoring, he says. Especially as organizations prepare to return to in-person workplaces, security professionals should be aware that employees’ discontent—which may have built up after long months of isolation or time spent in online forums—can bleed over into the workplace. Additional threat monitoring or risk reporting mechanisms could be valuable in detecting early signs of radicalization so that the organization can intervene prior to an incident.

While there are many push factors for radicalization, online subcultures play a large role. Anecdotally, Crawford says, researchers observed a rise in message board comments about social isolation during the COVID-19 pandemic, which has pushed more people to seek social connection online—including on chan sites.

The CREST researchers recommend the development of a database for hateful memes, which would include contextual factors to help analysts better assess intention behind their use and any potential threats. Currently, widespread meme databases like Know Your Meme exist, but having a shorthand version specifically for extremist content could streamline analysts’ work.
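To illustrate what such contextual factors might look like in practice, the hypothetical record structure below pairs each entry with its coded meanings, benign uses, and escalation indicators. The field names and example values are illustrative only; they are not a schema drawn from the CREST report or from Know Your Meme.

```python
# Hypothetical record structure for an extremist-meme reference database.
# Field names and values are illustrative, not taken from the CREST report.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemeRecord:
    name: str                      # common name or shorthand for the meme
    description: str               # what the image or phrase depicts
    origin: str                    # where and when it first appeared, if known
    coded_meanings: List[str]      # extremist readings layered onto the surface joke
    benign_uses: List[str] = field(default_factory=list)            # contexts where the same image is innocuous
    associated_communities: List[str] = field(default_factory=list)
    escalation_indicators: List[str] = field(default_factory=list)  # variants linked to threats or violence

example = MemeRecord(
    name="Pepe the Frog (violent variants)",
    description="Cartoon frog edited into scenes depicting or celebrating violence",
    origin="Originally a benign webcomic character, later co-opted on chan sites",
    coded_meanings=["anti-Semitic violence framed as humor"],
    benign_uses=["original comic and general reaction images"],
    associated_communities=["4chan", "8kun"],
    escalation_indicators=["pairing with 'saint' or 'high score' references"],
)
print(example.name, "-", len(example.coded_meanings), "coded meaning(s) on file")
```

The point of fields such as benign uses and escalation indicators is the report’s central caveat: the same image can be harmless or threatening depending on context, so the record has to capture both readings.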

Additionally, Crawford and Keen recommend investing in media literacy education and resources. Giving people more tools and tips on how to identify disinformation, coded language, and offensive material online can help them to avoid inadvertently spreading malicious content and normalizing extremist views.

Crawford notes that it is essential to take care of threat analysts—particularly their mental health. Spending every day monitoring extremist message boards can quickly affect analysts’ wellbeing—especially when explicit written content is evaluated along with images.

Keen adds, “Having good mental health checks in place in any kind of workplace where you would be monitoring these could be really critical and sometimes can be missing.”
