
Illustration by iStock; Security Technology

How AI May Change the Way Security Teams Use OSINT

The assassination attempt against former U.S. President Donald Trump reminded the security community just how challenging it is to provide effective security in today’s politically charged, unstable world.

It also highlighted the difficulty law enforcement faces in detecting a “clean skin” lone wolf (i.e., an attacker with no prior arrest record, presumably operating without the support of any group).

As former Obama Administration Press Secretary Josh Earnest said in a White House briefing, citing then-FBI Director James Comey, trying to find a lone wolf online is like trying to “detect a particular piece of hay before it turns into the needle in the haystack.”

The investigation into Trump’s attacker is ongoing. While no clear pre-attack signal on social media has yet surfaced, the FBI testified in July that the shooter had searched for details of the JFK assassination.

This assassination attempt is a stark reminder for security stakeholders in both the public and private sector that an attacker only needs to “get it right” once, and that those charged with protecting people must assume that any gap or vulnerability can and will be exploited.

In an attempt to level the playing field, public officials and, increasingly, the private sector have turned to proactively monitoring Open Source Intelligence (OSINT)—the digital dust created on social media, in chat rooms, and on the deep/Dark Web—for signs and signals that someone may be targeting their employees, facilities, or operations. Usually, the public only hears about failures to detect online chatter when an attack comes to fruition (consider the case of the New Zealand shooter who published his manifesto hours before launching an attack in 2019).

What the public doesn’t hear about are the thousands of times that law enforcement or private sector security operatives identify a threat and intervene to prevent a potential tragedy. Most large companies use some form of OSINT monitoring to scrape public-facing chat rooms and social media for keywords and Boolean search strings—e.g., “[Company] OR [executive name]” AND “[kill].”
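For illustration only, a minimal sketch of that kind of keyword and Boolean matching might look like the following. The subject names, threat terms, and sample posts are hypothetical placeholders, not any vendor’s actual rules.

```python
import re

# Hypothetical watchlist: protectee names paired with threat terms.
SUBJECTS = ["Acme Corp", "Jane Doe"]        # assumed company / executive names
THREAT_TERMS = ["kill", "shoot", "bomb"]    # assumed threat keywords

def matches_watchlist(post: str) -> bool:
    """Flag a post that mentions a subject AND a threat term, mirroring a
    Boolean string like "[Company] OR [executive name]" AND "[kill]"."""
    text = post.lower()
    mentions_subject = any(s.lower() in text for s in SUBJECTS)
    mentions_threat = any(re.search(rf"\b{re.escape(t)}\b", text) for t in THREAT_TERMS)
    return mentions_subject and mentions_threat

posts = [
    "Great quarter for Acme Corp, congrats to the team",
    "someone should kill the CEO of Acme Corp",
]
print([p for p in posts if matches_watchlist(p)])  # only the second post is flagged
```

In practice, a hit like this is simply queued for an analyst to review; the matching itself is deliberately crude, which is exactly the limitation AI-driven tools aim to improve on.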

That said, there are inherent limitations, particularly for private sector stakeholders.

When social media was just starting to proliferate in the early 2000s, the rules around data aggregation and monitoring were hazier. The reality now is that most users can—and often do—configure their settings in ways that limit or eliminate detection by aggregation tools. Subjects also inherently understand that information is discoverable, which has likely made them less inclined to telegraph incidents or attacks beforehand—especially sophisticated actors or groups.

But there is still a vast amount of online content—ranging from videos to chatrooms in the darker corners of the Internet—that is discoverable and aggregable. OSINT tools already use sophisticated search algorithms to sift through thousands of posts. Now, with the advent of artificial intelligence (AI) disrupting entire industries, platforms are harnessing the power of machine learning to improve the speed and accuracy of detection.

Here are a few examples of how machine learning and AI have changed—or are on the cusp of changing—OSINT as a discipline.

First, the ability to convert images and video to natural language—which can then be queried like a search engine—now exists. Tools can crawl troves of images and videos to detect firearms, other weapons, or even client logos.
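One plausible open-source sketch of that image-to-text-then-query pattern, assuming an off-the-shelf captioning model from the Hugging Face Transformers library (the model choice and keyword list are illustrative assumptions, not a description of any particular OSINT platform):

```python
# Caption images with a vision-language model, then search the captions for terms of interest.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def caption_and_flag(image_paths, keywords=("gun", "rifle", "weapon")):
    """Return (path, caption) pairs whose generated caption mentions a keyword."""
    hits = []
    for path in image_paths:
        caption = captioner(path)[0]["generated_text"]  # e.g., "a man holding a rifle"
        if any(k in caption.lower() for k in keywords):
            hits.append((path, caption))
    return hits

# Hypothetical file names standing in for media pulled from public posts.
print(caption_and_flag(["post_photo_01.jpg", "post_photo_02.jpg"]))
```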

AI can also be leveraged to correlate seemingly disparate user data sets—at scale and in real time. Historically, this would have taken an analyst hours, particularly for similar names or usernames (e.g., determining that John Smith in Portsmouth isn’t John Smith in Portland). AI and machine learning will make, and on some platforms already are making, these queries a near-real-time exercise, with confidence scores and analysis at the click of a button.
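The underlying idea can be sketched very simply with a toy record-matching function: compare names and locations, then blend the results into a single confidence score. Real entity-resolution systems use far richer features and learned weights; the fields and weights below are assumptions for illustration.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string similarity in [0, 1] from Python's standard library."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_confidence(rec_a: dict, rec_b: dict) -> float:
    """Blend name and location similarity into one score (weights are illustrative)."""
    name_score = similarity(rec_a["name"], rec_b["name"])
    loc_score = similarity(rec_a["location"], rec_b["location"])
    return 0.7 * name_score + 0.3 * loc_score

a = {"name": "John Smith", "location": "Portsmouth, NH"}
b = {"name": "John Smith", "location": "Portland, OR"}
print(f"confidence: {match_confidence(a, b):.2f}")  # same name, different city -> lower score
```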

While AI tools provide efficiency, humans in the loop remain absolutely necessary. These powerful tools make intelligence professionals better at their tasks, but they do not replace human judgment.

What does this mean for security stakeholders—particularly in the private sector?

First, it means that duty of care requirements may change. Right now, an ordinary, reasonably prudent company likely monitors for overt threats against assets, employees, or executives. If AI makes this process even easier and nearly frictionless, it could signal a sea change in how the legal system views duty of care, and courts may subsequently impose a heightened standard.

In the same way that social media upended how the private sector detects a potential attack, AI will undoubtedly disrupt how intelligence analysts correlate and detect open-source posts. As with any new technology or paradigm shift, the challenge will be avoiding the bleeding edge while ensuring you aren’t left behind.

Orwellian fears aside for a moment, one wonders whether law enforcement, armed with the full force of a finely tuned AI co-pilot, could have correlated search results to detect Trump’s shooter before he even arrived at a field in Butler, Pennsylvania, with the intent to commit violence.

Ben Joelson, CPP, is principal and head of security risk & resilience at The Chertoff Group.

© Ben Joelson, The Chertoff Group
