Extreme Internet Control
Print Issue: January 2017
Last June, the United Nations Human Rights Council determined that Internet access is a basic human right. However, many countries and organizations continue to limit access to the Internet. Last August, Russia briefly turned off Internet access in Crimea for unclear reasons. Ghana switched off its Internet during the country’s November elections. Bangladesh has been testing Internet lockdowns since August. And countless countries block selected social media platforms, news websites, and other content, often under the auspices of national security.
It may be unsurprising that oppressive regimes are throttling Internet access. But national security leaders in many nations around the world are working with social media platforms to restrict content that encourages violent extremism, which privacy advocates say is no different from the Internet censorship taking place in North Africa and the Middle East.
ISIS and other extremist groups are using social media platforms extensively, and effectively, to recruit members, raise money, and spread their ideologies. In May, technology companies including Facebook, Twitter, YouTube, and Microsoft signed a European Commission Code of Conduct agreeing to remove “illegal online hate speech” from their sites. Since then, Twitter has stepped up its monitoring of users’ content, deleting hundreds of thousands of accounts linked to radical extremism. Facebook will remove any content celebrating terrorism. And Google redirects people searching for information about ISIS to anti-extremism websites.
However, privacy advocates note that there is no settled definition of illegal online hate speech, and no way for censorship by social media platforms to be objective. Indeed, Facebook is working with Israeli officials to remove pro-Palestinian posts that incite violence against Israel. In September, Israeli officials noted that Facebook, Google, and YouTube are complying with 95 percent of the government’s requests to delete content.
“What is extremist speech? The state doesn’t know,” says Shahid Buttar, director of grassroots advocacy at the Electronic Frontier Foundation, a nonprofit civil liberties defense organization. “And when it’s tried to define it, online or offline, it has always swept up constitutionally protected speech. It’s well documented that people silence themselves when they know they’re being watched.”
Mark Wallace, the CEO of the Counter Extremism Project (CEP), helped develop an algorithm to flag such content automatically. Wallace explains that the nonprofit CEP “fills the gaps” when it comes to fighting extremists in a theater that has moved from sea, land, and air to online. Wallace worked with Hany Farid, who previously developed an algorithm to identify child pornography online, to find a way to report violent extremist images. The technology uses hashing, which extracts the unique digital signature of an audio, video, or image file and scans a database for matches; in this case, the database holds violent beheading videos and other powerful extremist recruiting tools. The algorithm automatically reports matching content to the host platform, which will ostensibly remove it.
“We have collected systematically thousands of video, audio, and photographic items that we think are extremist content,” Wallace tells Security Management. “We can take that database, and it immediately identifies that content wherever it resides on those platforms, including at the Internet Service Provider (ISP) level. The Internet has been a very welcoming place to the cyber jihadi. We hope our algorithm will be the mechanism to make the Internet and social media companies no longer a welcoming place for them.”
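The matching process Wallace describes can be sketched in a few lines. This is illustrative only: the CEP tool uses robust (perceptual) hashing, which tolerates re-encoding and minor edits; the cryptographic hash below is a simplified stand-in that matches exact copies only, and all data in the example is hypothetical.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Compute a digital signature for a piece of media."""
    return hashlib.sha256(media_bytes).hexdigest()

# Hypothetical database of signatures of known flagged content.
known_signatures = {fingerprint(b"<bytes of a known extremist video>")}

def check_upload(media_bytes: bytes) -> bool:
    """Return True if an upload matches known flagged content."""
    return fingerprint(media_bytes) in known_signatures

# An exact copy of flagged content is caught...
assert check_upload(b"<bytes of a known extremist video>")
# ...but any altered copy slips past a cryptographic hash, which is
# why robust hashing is needed in practice.
assert not check_upload(b"<same video, re-encoded>")
```

The key design point is that only signatures, not the media itself, need to be shared with platforms, which is why the same database can be checked "wherever it resides on those platforms."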
Wallace notes that researchers are responsible for initially identifying extremist content, but the same content tends to emerge repeatedly. He points to the messages of Anwar al-Awlaki, an al Qaeda recruiter and U.S. citizen who was killed in 2011 by a CIA drone strike.
“If you look at the domestic terror prosecutions here in the United States, a majority of those tried were radicalized by al-Awlaki’s videos from the grave,” Wallace says. “That’s content we know, and hopefully will be able to remove from social media platforms instantaneously.”
Free speech activists also identify al-Awlaki as a prime example of censorship, but for different reasons. At the time of al-Awlaki’s death, a federal court proceeding brought by his family was seeking due process for him, but he was killed before the courts could address the case, experts say.
Wallace and the CEP are currently working with social media platforms and governments around the world to deploy their algorithm “in a manner that is effective and responsible,” he says.
“I think we can all agree that removing the worst of the worst content is a good starting place and should be uncontroversial,” Wallace says. “Maybe the next Jihadi John will realize that no longer is a video of a terrorist with his knife at the neck of some poor soul used as a tool to glorify a terrorist group, to propagandize, to call others to act, to fundraise, and to recruit.”
Meanwhile, the Middle East, North Africa, and Russia are still dealing with an increase in state-mandated Internet shutdowns. William Buchanan, a computing professor at Edinburgh Napier University, explains that Internet traffic goes through a countrywide firewall. In times of crisis, the country’s leaders can control the main firewall and drop service if necessary. He suggests that in the coming years, most countries will articulate plans for when and how they can take over the firewall.
“What happens in an emergency is people swamp the network with traffic, so I think many countries will have a plan to cut citizens off the network for a certain amount of time while they cope with something like a cyberattack,” Buchanan says. He believes countries like Bangladesh are testing whether they can take over the network and ensure government traffic has priority over everything else.
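The kill-switch policy Buchanan describes can be sketched as a simple forwarding decision: in normal operation everything flows, while in an emergency only an allowlist of priority networks stays reachable. This is a minimal illustration; the network prefixes and the `emergency_mode` flag are hypothetical, not details of any real country’s firewall.

```python
from ipaddress import ip_address, ip_network

# Hypothetical allowlist of networks that keep service in an emergency.
EMERGENCY_ALLOWLIST = [
    ip_network("10.10.0.0/16"),  # hypothetical first-responder network
    ip_network("10.20.0.0/16"),  # hypothetical government network
]

def forward_packet(dst: str, emergency_mode: bool) -> bool:
    """Decide whether the national firewall forwards a packet to dst."""
    if not emergency_mode:
        return True  # normal operation: all traffic flows
    # Emergency: drop everything except allowlisted destinations.
    addr = ip_address(dst)
    return any(addr in net for net in EMERGENCY_ALLOWLIST)

assert forward_packet("8.8.8.8", emergency_mode=False)      # normal day
assert not forward_packet("8.8.8.8", emergency_mode=True)   # citizens cut off
assert forward_packet("10.10.5.5", emergency_mode=True)     # responders kept
```

Centralizing this decision at a single national chokepoint is exactly what makes both the emergency use and the malicious use Buchanan goes on to describe possible.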
Buchanan sees the use of firewall control during a major event as justified because it allows emergency and first responders to communicate in a timely manner, but he says in countries with high political tensions, blocking the Internet can be done maliciously. For example, when Bangladesh tested its network control, it blocked news outlets that reported on antigovernment organizations, he notes. And during the coup in Turkey last July, the government cut off access to YouTube, Facebook, and Twitter to quell any uprisings.
Many countries “play the terrorism card” to justify controlling the Internet or viewing private data, Buchanan says, which isn’t logical because terrorists know how to hide their tracks. “Operating systems that boot from USB sticks and leave no presence on devices, VPNs, and proxies…those are the types of tools that a terrorist or criminal will use, and invest a lot of time and energy to create.”
This kind of reasoning, as well as roundabout laws such as Saudi Arabia’s ban on all use of encrypted traffic, is a slippery slope for privacy and affects law-abiding citizens more than the troublemakers, Buchanan notes.
“The more that we use encryption tunnels, the less chance that law enforcement will have in actually tracing the real criminals,” Buchanan explains. “What they’ll end up doing is monitoring everyone else for the normal things, and then a data breach at an ISP could release information about the president or prime minister, and everyone else whose information was collected.”
Whether it’s a complete shutdown of Internet access or careful monitoring of potentially dangerous content, countries and companies around the world are taking advantage of the possibilities, and the power, inherent in controlling what citizens see online. As criminals and extremists move their activities from land and sea to technology, governments must figure out how to counter digital warfare while simultaneously respecting and protecting citizens’ basic human right to Internet access.