An Explosive Situation: Strategies to Prevent Youth Radicalization
As January rolled into February 2022, U.S. historically Black colleges and universities (HBCUs) began receiving an unprecedented number of bomb threats.
More than 30 of these 101 institutions responded to 49 total bomb threats in the first two months of the year, according to the U.S. Office for Bombing Prevention (OBP) at the U.S. Cybersecurity and Infrastructure Security Agency (CISA). As the academic year progressed, other HBCUs were targeted—raising the tally of threatened institutions to more than 50 as of November 2022.
While authorities did not recover or detonate any explosives in response to these threats, they continued to investigate who was responsible for them. Their work led them to believe that threats targeting at least 19 institutions between 8 February and 2 March may have been made by individuals located overseas. But investigators maintain that most of the other threats against HBCUs were made by a juvenile, according to an FBI announcement in November 2022.
“Given the federal limitations for charging under-age perpetrators with federal crimes, the Department of Justice worked with state prosecutors to hold them accountable on charges unrelated to the specific threats to the HBCUs,” the Bureau said in a press release. “This individual is under restrictions and monitoring of his online activities.”
In a U.S. House Homeland Security Committee hearing later that month, FBI Director Christopher Wray declined to provide more specifics about the juvenile thought responsible for the HBCU threats. He did, however, reiterate that the greatest terrorism risk that the United States faces is posed by lone actors or small cells of individuals who typically radicalize to violence online and use easily accessible weapons to attack soft targets.
And the domestic violent extremists that pose the greatest risk are those categorized as Racially or Ethnically Motivated Violent Extremists (RMVEs) and Anti-Government or Anti-Authority Violent Extremists (AGAAVEs), Wray added.
One concerning trend is the number of minors who are being exposed to extremist ideologies—including white-supremacist ideology and themes—online and via video games. In an annual survey in 2021, the Anti-Defamation League (ADL) found that nearly one in 10 gamers ages 13 to 17 had been exposed to these ideologies.
“Our 2022 survey finds that adult exposure to white supremacy in online games has more than doubled to 20 percent of gamers, up from 8 percent in 2021,” according to the ADL’s 2022 report, Hate is No Game: Hate and Harassment in Online Games 2022.
While there is no scientifically proven connection between video games and real-world violence, there is research that shows negligence in moderating hate in an online space allows extremist ideologies to be normalized. Individuals who believe those ideologies may then commit acts of violence.
For instance, New Zealand’s government released a report on the Christchurch attack that showed the gunman began to be radicalized via online multiplayer games where he expressed “far-right views” without constraint from the community or the platform, the ADL said.
Understanding how far-right extremists spread their ideologies to young people is something that Cynthia Miller-Idriss, director of the Polarization and Extremism Research and Innovation Lab (PERIL) and a professor at American University, has been studying for decades.
Her latest book, Hate in the Homeland: The New Global Far Right, explored where far-right nationalists are recruiting young people—including on college campuses, online gaming chat rooms, and social media sites.
“We might need to ask a different set of questions—not just how and why this happens, but also where it happens,” Miller-Idriss says. “Once you look at it from a question of where, you can start thinking about interventions in a different way.”
Thinking about the spread of extremist views and recruitment in this way has been key during the COVID-19 era, when everyone—especially young people—has been spending more time online interacting with close connections and new acquaintances alike. Approximately 100 million young people, for instance, were doing some type of distance learning during COVID-19, according to Lydia Bates, program manager, partnerships, in the Intelligence Project Department at the Southern Poverty Law Center (SPLC).
“People had more time online—more unsupervised time online,” Bates adds. “And extremists who wanted to spread their harmful rhetoric took advantage of that.”
Historically, to find hateful, conspiratorial, or anti-government content, individuals had to seek it out, Miller-Idriss says.
“You’d go find a group or backwoods militia or a hate group or a Ku Klux Klan group and join it with initiation rites, believe their manifesto, and there are membership lists and the whole thing,” she says. “Now, it’s much more likely that that content comes and finds you wherever you are.”
There are some places where individuals are more vulnerable to this kind of content—such as in online gaming, and some parts of the mixed martial arts (MMA) and combat sports worlds—but generally everyone is about two clicks away from harmful online content, Miller-Idriss says.
“As long as there are user-generated content options, there’s going to be harmful online content because the content moderation can’t keep up with it by a long shot and there’s bad actors,” she explains. “People are either sharing it inadvertently, who are trying to monetize the situation, or are trying to recruit and actually draw people in and get them to click on a URL that takes them to another site or an encrypted chat room.”
Once young people connect with that content or the individuals sharing it, there are a variety of reasons why they might continue to interact with it. One is that the messaging might provide them with a sense of purpose, belonging, or meaning that’s greater than their individual self.
“We know that this generation of young people is more isolated than any other generation, that they’re actually extremely lonely,” Miller-Idriss says. “Even though they’re connected to other people, they are not very connected in person.”
This can be especially true for boys and young men, who might find a sense of brotherhood, structure, and purpose by connecting with groups or individuals who espouse extreme beliefs. The recruit might be encouraged to “get off their couch, get off the game console, start exercising, and develop a plan for themselves,” she adds. This might lead to personal life improvements, along with the adoption of a new ideological package.
“We see that with a lot of modern far-right and white supremacist movements in particular, where they are promoting a straight-edged lifestyle with a huge nostalgia about this being the way you’re going to get a wife and have a life that you’ve always wanted, but they’re also railing against immigrants or those who are coming to replace you—the scapegoats—for why all these problems existed to begin with,” Miller-Idriss says.
Many parents and teachers in 2020 were unprepared to respond once they knew the young people in their lives had made these connections with harmful content.
Parents “knew something was happening. Their kids were encountering stuff online but they didn’t know what to do about it and by the time they realized it—it was harder to intervene the further down the rabbit hole they’d gone,” Miller-Idriss adds.
Sometimes concerned parents, teachers, or peers would report an individual to law enforcement to see if they could help. But if the individual isn’t engaged in illegal activity—such as planning to commit an act of violence—then there is little law enforcement can do.
To try to address this gap and give parents and teachers better tools to detect and intervene when a young person might be engaging with harmful content, Miller-Idriss and PERIL worked with the SPLC to create emergency guides for parents and caregivers, as well as resource materials, on what online radicalization is, signs that a young person might be engaging with extremist ideas, and how to respond effectively to prevent further engagement.
Some of the resources include how to talk about male supremacy and gender-based violence, as well as red flags for parents or individuals who might see a young person talking or posting about these topics.
Including male supremacy in the conversation is important because many of the extremist groups that PERIL and SPLC study—white nationalist, Ku Klux Klan, and anti-government groups—are premised on traditional gender roles, Bates explains.
“Men are hypermasculine; their identity is premised on this idea of protection—protecting the home, women, the country, protecting white people, essentially,” she says. “Women’s roles are premised on traditional, stay-at-home-taking-care-of-the-family, raising and birthing white babies and children. So, many of the groups that we look at are very much premised on that gendered dichotomy.”
For individuals in a school setting, including school resource officers, the guides reinforce that it’s important for everyone to be trained on recognizing red flags and warning signs of students who might be exposed to, or sharing, extremist content and who to report it to—such as a school counselor or principal.
Once it’s reported and an adult is having a conversation with a student, however, it is crucial that the adult not react to the situation by attempting to shame the student—such as saying “this goes against our school values” or “you weren’t raised this way,” Miller-Idriss adds.
“In a way that makes them feel even more excluded or shamed or rejected, and that can drive them further online where they find more supportive people who say, ‘Oh, they’re just a bunch of triggered snowflakes. They don’t understand. They can’t take a joke. This is where you belong,’” Miller-Idriss explains. “That can actually create a further spiral into the rabbit hole.”
Instead, PERIL and SPLC encourage adults to put the young person in the position of the expert teaching them about where they spend time online and what those environments are like—to engage the young person with curiosity instead of shutting him or her down.
The two groups also published a new guide at the end of 2022 for trusted adults, mentors, and community leaders. The resources in Building Networks & Addressing Harm: A Community Guide to Online Youth Radicalization provide aids for family members, coaches, religious leaders, and others who interact with young people outside of a school setting—the extended support network.
“This guide tries to fill in the cracks so the approach to preventing radicalization is a whole-of-community approach,” Bates adds.
The community guide provides an overview of what might drive a young person to be susceptible to extremist radicalization, warning signs of youth radicalization, how to respond to extremist rhetoric and violence, strategies for prevention and resilience building, and additional resources and support for specific types of caregivers (extended family members, after-school caregivers, youth mentors, and more).
It also includes strategies and recommendations for specific caregivers, including youth mentors who may have more in common with young people than other caregivers.
And to assess if those materials are effective, PERIL and SPLC conduct pre- and post-testing of parents who attend webinars or read their guides.
Based on those surveys, Miller-Idriss says they found it only takes parents on average seven minutes of reading to show they’re more aware of disinformation and misinformation online, that they feel more empowered to intervene when they come across it, and that they know where to get more help if they need it. Just one group failed to improve its scores—the group of most highly educated parents, who came into the training much more confident that they could already detect extremist content, misinformation, and disinformation online.
“That data helped us understand that we have to do more outreach to the highly educated—and especially highly educated and urban parents—who came in pretty confident that they already knew what their kids were seeing online and what all the risks are, and that they didn’t need the help,” Miller-Idriss adds.
For more insights on threats against HBCUs and resources to improve responses to bomb threats, read more from Security Management in “Amid Bomb Threats, Schools Seek Support and Resources” and “13 Steps Organizations Can Take to Prepare for a Bomb Threat.”
Megan Gates is senior editor at Security Management. Connect with her at [email protected] or on LinkedIn. Follow her on Twitter (@mgngates) or on Mastodon (@[email protected]).