How to Broaden Digital Threat Assessments for Executive Protection
Is executive protection (EP) any different for female versus male principals?
It’s a nuanced question. When it comes to conducting a good assessment, advance, or close protection operation, any differences are minimal. However, there are noteworthy differences in the threats, especially digital threats, directed at female executives.
“Male and female executives are kind of equal in threats,” says David Muse, CEO of cybersecurity firm ZeroFox. “Now, some of that is just the nature of male executives, unfortunately, having a higher population. But where it differs is the type and tone of abuse. On the male side, you’re going to see threats of physical violence, more reputational harm, a lot of financial scams from a corporate level… From a female perspective, unfortunately it’s sexualized commentary, misogynistic.”
There are also different motivating factors for harassers, says Pat Butler, executive vice president of strategic engagement at analytics and threat intelligence company Babel Street. Although most threats against male VIPs and executives focus specifically on the principal’s actions, threats against female VIPs can cite the woman’s actions or statements as well as her family’s actions or statements.
“It appears that women executives get targeted more in terms of trying to go after their family members,” Butler says. Although male executives have family members they care deeply about, too, “it’s an expectation on the part of the attacker that women are more vulnerable on that front, and so they will exploit that vulnerability.”
Muse also notes that women are more likely to be active within social spaces—including online—around their families, so it can be easier to map out a path to the executive through those relations. In one case, a threat actor used a four-year-old obituary of a family member to map out a female executive’s family contacts to stalk her more closely, he says.
Although women are more likely to be targeted through family members or with sexually explicit threats, men aren’t exempt. So, it’s worth examining how to tailor protective intelligence accordingly.
For organizations and EP professionals conducting open-source intelligence (OSINT) threat assessments for male and female clients, adding family members to protective intelligence programs dramatically expands the amount of ground to cover. Analysts have to search for threats referencing the principal’s family members, friends, and other affiliates, as well as the principal, plus any number of variations on all of those names.
For one person, that could include a set of names, nicknames, and alternative spellings: Katherine Jones, Kathy Jones, Catherine Jones, etc. Then the analyst would layer those searches with terms for specific threats that might apply to that individual. Searches that relate to female principals might also include sexually explicit threat terminology.
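To make that layering concrete, here is a minimal sketch in Python of how an analyst’s tooling might expand a watch list into query permutations. Every name and threat term below is an invented placeholder; real programs use far larger, curated lexicons.

```python
from itertools import product

# Invented placeholders for illustration only.
principal_variants = ["Katherine Jones", "Kathy Jones", "Catherine Jones"]
family_variants = ["Mark Jones", "Emma Jones"]        # hypothetical relatives
threat_terms = ["kill", "hurt", "home address", "stalk", "expose"]

def build_queries(names: list[str], terms: list[str]) -> list[str]:
    """Cross every name variant with every threat term, one query each."""
    return [f'"{name}" {term}' for name, term in product(names, terms)]

queries = build_queries(principal_variants + family_variants, threat_terms)
print(f"{len(queries)} queries to run")   # 5 names x 5 terms = 25 queries
for q in queries[:3]:
    print(q)
```

Even this toy example shows the multiplicative effect: every family member added multiplies the query count by the full threat lexicon.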
If a threat assessment professional runs an OSINT search the same way every time (with variations of the principal’s name and a standard list of threat terms, but skipping family names and variations), he or she is likely to miss something, especially for female executives, Butler adds. “With most male executives, you don’t have to think down that path.”
In addition, changes in online behavior and social media platforms have made real-time threat monitoring and response even more complicated. Fifteen years ago, there were a limited number of frequently used social media platforms, for example. But now, those platforms have splintered, with hundreds of smaller forums, country-specific social media platforms, chat apps, and more.
The threats on these platforms may not have changed much over the years, but it’s more difficult than ever to find relevant information. Analysts need to check across dozens of different platforms and use multilingual keywords as their search terms. That often makes the volume of items to review cost- and time-prohibitive, Butler says. As a result, analysts will pare back the number of terms they search for to only the most egregious threats, which means they could easily miss a pattern of concerning behavior or an actionable threat.
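A back-of-the-envelope calculation shows why the volume becomes prohibitive. All of the counts below are illustrative assumptions, not figures from any particular program.

```python
# Illustrative counts for one protective intelligence program.
name_variants = 8          # principal plus family, with spellings and nicknames
threat_terms = 40          # a full threat lexicon
languages = 5              # multilingual keyword translations
platforms = 30             # forums, chat apps, regional social networks

full_coverage = name_variants * threat_terms * languages * platforms
print(f"Full coverage: {full_coverage:,} query/platform combinations")  # 48,000

# Paring the lexicon back to only the most egregious terms shrinks the
# workload, but at the cost of pattern visibility.
pared_terms = 5
pared_coverage = name_variants * pared_terms * languages * platforms
print(f"Pared coverage: {pared_coverage:,} combinations")               # 6,000
```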
More recently, artificial intelligence (AI) tools with risk models layered on top show promise in enabling analysts to once again perform more comprehensive threat searches across social apps. However, this comes with limits. The Internet and social networks are rife with sarcasm, easily misinterpreted comments, troll farms, creepy memes, and dark humor, which machines and AI can struggle to interpret correctly. Analysts need to leverage contextual metadata from multiple points to determine which commenters could make the leap offline and become a physical threat.
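One way to picture a “risk model layered on top” is a score that weighs contextual signals against discounts for likely sarcasm or trolling. This is a hand-rolled sketch with assumed fields and weights, not any vendor’s actual model.

```python
from dataclasses import dataclass

@dataclass
class CommentContext:
    """Contextual metadata for one flagged comment. Fields and weights
    below are illustrative assumptions, not a production risk model."""
    explicit_threat_language: bool
    cites_location_or_schedule: bool
    repeat_commenter: bool
    in_coordinated_group: bool
    likely_sarcasm_or_meme: bool   # classifier output or analyst judgment

def risk_score(ctx: CommentContext) -> float:
    """Combine signals into a 0-1 score with hand-picked weights."""
    score = 0.0
    if ctx.explicit_threat_language:
        score += 0.35
    if ctx.cites_location_or_schedule:
        score += 0.30
    if ctx.repeat_commenter:
        score += 0.15
    if ctx.in_coordinated_group:
        score += 0.20
    if ctx.likely_sarcasm_or_meme:
        score -= 0.40   # discount trolling, memes, and dark humor
    return max(0.0, min(1.0, score))

print(round(risk_score(CommentContext(True, True, False, False, False)), 2))  # 0.65
```

The point of the sketch is the shape, not the numbers: contextual signals push a comment up the queue, and humor or troll indicators push it down, with an analyst reviewing whatever crosses a threshold.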
AI-generated content, including deepfakes, has increased the speed and believability of fake content and threats online, and the underlying deepfake technology gets cheaper and more accessible every year, Muse says. Now it takes only two to three seconds of a vocal sample to generate a convincing fake voice message, and increasingly sophisticated deepfake video avatars can swiftly cause reputational and emotional harm, particularly when deployed at a larger scale or volume, he adds.
“It’s important to wrap the full threat landscape together, both digitally and physically,” Muse says. “The use case I tell everybody is if I’ve got protesters outside my window and they’re becoming more and more loud, they’re throwing stuff, they’re yelling—I would call the cops immediately. This is happening all the time on social networks. So, this is a reality that we’re seeing, this convergence from digital to physical.”
Sentiment analysis and behavioral analysis over time provide a more accurate picture of the individual’s risk, Muse says.
Analysts need to evaluate individuals’ networks—did the comment come from someone in a community that is specifically targeting an executive or company? Is the commenter more of a lone actor online who comments about a different executive every week? Analysts should look for signs of an organized group, since those have a higher likelihood of action, Butler says.
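In code form, the distinction between a fixated or organized commenter and a scattershot one can be sketched as a pair of simple heuristics. The thresholds and labels below are assumptions for illustration, not established triage criteria.

```python
from collections import Counter

def assess_commenter(history, group_memberships, monitored_groups):
    """history: list of (date, target_name) tuples for one commenter.
    Flags sustained fixation on one principal and membership in a
    community known to target the executive or company."""
    if not history:
        return "no data"
    targets = Counter(name for _, name in history)
    top_target, top_count = targets.most_common(1)[0]
    fixated = top_count >= 5 and top_count / len(history) >= 0.6
    organized = bool(set(group_memberships) & set(monitored_groups))
    if organized:
        return "elevated: activity within a group targeting the principal"
    if fixated:
        return f"elevated: sustained fixation on {top_target}"
    return "lower: scattershot pattern, likely lone online actor"
```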
At that point, analysts start watching closely for signs that the individual could gain access to the principal, whether that is geographic proximity, travel plans, or in-depth knowledge of the principal’s itinerary. Those details help make a compelling case to take further action, whether that’s enhancing close personal protection, changing the executive’s schedule or travel arrangements, or adding more advance assessments.
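Those access indicators map naturally onto a small decision table. A sketch, with hypothetical indicator names and the candidate responses the experts describe:

```python
def escalation_actions(indicators: dict) -> list[str]:
    """Map observed access indicators to candidate protective responses.
    Indicator keys and responses are illustrative, not a prescribed playbook."""
    actions = []
    if indicators.get("geographic_proximity"):
        actions.append("enhance close personal protection")
    if indicators.get("detailed_itinerary_knowledge"):
        actions.append("change the executive's schedule or travel arrangements")
    if indicators.get("travel_plans_toward_principal"):
        actions.append("add advance assessments at upcoming venues")
    return actions or ["continue monitoring"]

print(escalation_actions({"geographic_proximity": True}))
```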
Plus, EP professionals may need to proactively educate family members and the principal about potential risks, such as oversharing online or potential phishing or social engineering attacks to collect information, Muse says. This awareness training dovetails into an overall holistic and proactive risk management strategy for executives.
Such a strategy involves taking a deep look at what content is available about an executive or her family online, including family members’ digital footprints, any breached or stolen data, or publicly available contact information. Then, teams should repeat that analysis regularly to reassess any changes and adjust security postures accordingly.
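One concrete, repeatable piece of that review is checking known email addresses against public breach data. Below is a minimal sketch using the Have I Been Pwned API; the assumption is that the team holds an HIBP API key, which the service requires.

```python
import requests

def breach_exposure(email: str, api_key: str) -> list[str]:
    """Check one address against Have I Been Pwned's breach index.
    Returns breach names, or an empty list if none are found."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": api_key, "user-agent": "ep-footprint-review"},
        timeout=10,
    )
    if resp.status_code == 404:   # HIBP returns 404 when the account is clean
        return []
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

# Run this periodically across the principal's and family members' known
# addresses and diff the results against the previous baseline.
```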
The family element has made the argument for proactive security and EP measures more compelling, Muse says.
“In the last quarter, we’ve had a large number of clients where we’ve had to engage because their families were targeted,” he says. “Their kids were targeted via social networks and compromised credentials… So, let’s be as proactive as possible. If we can leverage all this pattern recognition and experience to put you in the best defensive posture—and in some cases, offensive posture—we should do that.”
Claire Meyer is editor-in-chief of Security Management. Connect with her on LinkedIn or email her directly at [email protected].