
By Michael Haggard, The Haggard Law Firm

On 29 May 2020, Dave Underwood, a security guard for a federal building in Oakland, California, was shot and killed by a man associated with the anti-government extremist "boogaloo" movement. In January 2022, Underwood's sister, Angela Underwood, filed a lawsuit in California's Alameda County Superior Court against Facebook's parent company, Meta Platforms Inc.

The complaint alleges that the algorithm Facebook uses to feed content to users based on their comments and posts helped link the shooter with other extremists and ultimately sent him an invitation to join a group associated with the boogaloo movement. There, the shooter and another extremist began communicating and ultimately plotted to target a federal government employee in Oakland. As alleged by Underwood's estate, Facebook aided and abetted the murder.

Whether in political debates, while reviewing our own Facebook feeds, or in conversations with friends, many of us have discussed how Facebook's underlying mechanics surround a user with content that reinforces the user's existing views. Rarely does Facebook suggest content that conflicts with those views unless the person goes out of their way to search for contrasting viewpoints.

On one hand, Facebook joins like-minded people, bringing together users around the world who can communicate and form bonds over shared interests. On the other hand, there is an argument to be made that, if not for Facebook's algorithm, the extremists who plotted Underwood's murder would never have met and never have planned an attack on a federal agent. Underwood would still be alive.

In response to the Underwood lawsuit, Facebook maintains that it is merely a "platform" and that it has responsibly removed some extremist groups from the site. The lawsuit therefore raises the questions of whether the company must do more, and whether Facebook can be held liable for violence that grows out of connections the platform helped facilitate.

Some security experts may see this lawsuit as a novel concept and be quick to argue that Facebook should be shielded from any liability for violent conduct the company did not knowingly and intentionally help facilitate. This theory of liability, however, is not novel. For instance, Backpage.com started as an online classified ad network meant to rival Craigslist but quickly became known as a central hub for soliciting prostitution and sex trafficking. Just as everyone knows that Facebook's mechanics help bring people together, including, at times, people who share violent and extremist desires, everyone knew that Backpage was bringing together people engaged in illicit sexual conduct. Because Backpage did little to combat this, a wave of lawsuits attacked its role in propagating sex trafficking, and today the platform is no more.

In court, Backpage's intent was immaterial. Successful lawsuits helped compensate families and individuals victimized by conduct the platform aided. Whether the same path is even possible given Facebook's mammoth scale is only part of the conversation. More importantly, lawsuits such as these may determine whether Facebook must alter its algorithm or implement more proactive security measures to identify and regulate extremist groups seeking to bring violence to our streets. If so, this heightened duty placed on Facebook will have reverberating effects throughout the tech community.

In the end, we can all agree that Facebook's platform aids extremist groups, even though that was never the intent of the platform or its founders. Should Facebook be able to knowingly ignore this consequence of its platform and simply point to whatever actions it takes against extremist groups as evidence of security practices above and beyond what is required? Or should Facebook be required to do more: to search for, locate, and stop these groups from using the platform to communicate in the first place? Some say it is Facebook's duty to do this and, in turn, to protect the community from future violence. Others will argue that this forces Facebook into a gatekeeper role, allowing it to decide which views are "extremist" and must be shut down and which views are acceptable. Where is that line? What security measures will become the standard? Only time will tell.
