Illustration by Security Technology; iStock

How SOC Teams Can Best Implement AI as a Security Co-Pilot

The ubiquity of artificial intelligence (AI) within the enterprise is approaching that of the cloud: In 2023, generative AI app usage among enterprise employees increased by 400 percent, and more than 10 percent of enterprise employees accessed at least one generative AI application every month, compared to just 2 percent in 2022.

Cybersecurity professionals are adopting these tools at an even higher rate. Fifty-six percent reported already working with AI and machine learning (ML), while 36 percent of cybersecurity practitioners said they are exploring the possibilities of doing so, according to research from CompTIA.

Practitioners told CompTIA they believe AI can help them monitor network traffic and detect malware (cited by 53 percent of security professionals), analyze user behavior patterns (50 percent), automate incident response (48 percent), and automate security infrastructure configuration, predict where future breaches may occur, and test defenses (45 percent each).

Clearly, AI can provide notable support to security operations center (SOC) teams. These teams must continuously monitor an organization’s entire IT landscape to detect potential threats in real time and thwart them as quickly and effectively as possible. It’s a big job, and one that brings immense stress and often burnout: 84 percent of security workers have considered leaving their companies or changing their roles or career paths.
Given that level of stress and attrition, we should position AI as a helpful office assistant, or a reliable co-pilot, that enables SOC staffers to streamline activities related to incident detection, alert analysis, data-set enrichment, and more.

Immediate Capabilities

To further illustrate, here are three capability areas where AI can help immediately, or after only a relatively brief initiation period.

Accelerating onboarding. This represents a great way to get started. Incoming analysts will likely arrive with an understanding of core tools from vendors such as CrowdStrike or Microsoft. But they will need to be brought up to speed on matters such as implementation details and organizational structure. This often includes informing new hires of:

  • Whom the security team reports to (e.g., a business unit, IT, the CFO, or elsewhere).

  • Other technology teams or organizations that the SOC works with directly on a regular basis.

  • Technologies that were approved or tried prior to the individual’s arrival.

AI can help new analysts absorb this information much faster, allowing the SOC to bring them on board more swiftly and successfully.
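To make the idea concrete, the pattern behind such an onboarding assistant is retrieval-grounded question answering: the model answers from the team's own documentation rather than general knowledge. The following is a minimal sketch under that assumption; the document snippets and word-overlap scoring are illustrative stand-ins, and a real deployment would retrieve from the team's wiki or ticketing system and hand passages to a generative model for synthesis.

```python
# Minimal sketch of a retrieval-grounded onboarding assistant.
# The documents and scoring below are illustrative sample data.

ONBOARDING_DOCS = {
    "reporting": "The security team reports into IT under the VP of infrastructure.",
    "partners": "The SOC works directly with the network engineering and cloud platform teams.",
    "tooling": "Approved tools include CrowdStrike Falcon for EDR and Microsoft Sentinel for SIEM.",
}

def tokens(text: str) -> set:
    """Lowercased words with surrounding punctuation stripped."""
    return {w.strip(".,?!").lower() for w in text.split()}

def retrieve(question: str) -> str:
    """Return the snippet whose words overlap most with the question."""
    q = tokens(question)
    best = max(ONBOARDING_DOCS, key=lambda k: len(q & tokens(ONBOARDING_DOCS[k])))
    return ONBOARDING_DOCS[best]

print(retrieve("Which teams does the SOC work with directly?"))
```

Grounding answers in retrieved internal documents, rather than letting the model answer freely, is also the main guard against the hallucination risk discussed later in this article.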

Making recommendations. Here is where AI emerges as a valuable co-pilot. By asking the right questions, teams can get AI to provide guidance on integrating products within the enterprise environment or on specific threats that could affect their organization.
Take the scenario of a new SOC member joining: understanding the infrastructure, the tooling, how things are configured, and even the team’s processes all takes ramp time, and a co-pilot can shorten it. The co-pilot can quickly surface information on technology usage and supply supporting details such as software configurations, ownership, and potential system exposure.

Simulating attack vectors. Over time, AI should be able to develop relevant examples of attack vectors that teams can test their capabilities against, such as a sophisticated email phishing scheme. In this sense, enhanced explainability and visualization will greatly improve adversary emulation efforts, allowing teams to test controls more effectively against the tactics, techniques, and procedures (TTPs) that cybercriminals deploy.
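Adversary-emulation harnesses often express this as a mapping from TTPs to executable checks. Below is a hedged sketch of that structure: the MITRE ATT&CK IDs (T1566 Phishing, T1003 OS Credential Dumping) are real identifiers, but the check functions are placeholders I've invented for illustration; real emulation would execute benign versions of each TTP and query the SIEM or EDR for resulting alerts.

```python
# Sketch: run emulation checks keyed by MITRE ATT&CK technique ID and
# report which controls detected them. Checks here are placeholders.

def check_phishing_detection() -> bool:
    # Placeholder: send a benign test lure, confirm the mail gateway flags it.
    return True

def check_credential_dumping_detection() -> bool:
    # Placeholder: run a harmless credential-access simulator, confirm EDR alerts.
    return False

EMULATION_PLAN = {
    "T1566 Phishing": check_phishing_detection,
    "T1003 OS Credential Dumping": check_credential_dumping_detection,
}

def run_plan(plan):
    """Execute every check and record whether the control caught it."""
    return {ttp: check() for ttp, check in plan.items()}

results = run_plan(EMULATION_PLAN)
for ttp, detected in results.items():
    print(ttp, "detected" if detected else "MISSED")
```

The value of the mapping is the gap report: any technique marked MISSED becomes a concrete control-tuning task rather than a vague worry.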

Exploited vulnerabilities typically come in two forms: software vulnerabilities (bugs) and vulnerable configurations. Both are difficult to find and track across an enterprise because of the volume of data team members must sift through and remember. Understanding how a complex environment could be exploited also demands specific expertise that not every team member possesses. With AI that understands usage patterns as well as configuration and software version data, teams can cover all avenues when exploring attack vectors against their systems and data.
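Pairing inventory data with vulnerability data can be sketched simply. Everything below, including the inventory, the advisory list, and the TLS-floor check, is hypothetical sample data I've made up to show the shape of the join; a real pipeline would ingest CVE feeds and CMDB exports.

```python
# Hedged sketch: flag hosts running software versions named in known
# advisories (bugs), plus one insecure-configuration check. All data
# is illustrative, not a real vulnerability feed.

inventory = [
    {"host": "web-01", "software": "openssl", "version": "1.0.2", "config": {"tls_min": "1.0"}},
    {"host": "web-02", "software": "openssl", "version": "3.0.13", "config": {"tls_min": "1.2"}},
]

# Hypothetical advisory data: software name -> vulnerable versions.
advisories = {"openssl": {"1.0.1", "1.0.2"}}

def find_exposures(inventory, advisories):
    findings = []
    for item in inventory:
        # Software vulnerability: installed version appears in an advisory.
        if item["version"] in advisories.get(item["software"], set()):
            findings.append((item["host"], "vulnerable version " + item["version"]))
        # Vulnerable configuration: TLS minimum below 1.2.
        if item["config"].get("tls_min", "1.2") < "1.2":
            findings.append((item["host"], "weak TLS floor " + item["config"]["tls_min"]))
    return findings

for host, issue in find_exposures(inventory, advisories):
    print(host, issue)
```

The point of the sketch is the two distinct checks: one keyed on version data (the bug case) and one on configuration state, mirroring the two forms of exploited vulnerability described above.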

Implementation Guidance

However, we should always exercise caution as we look to safely incorporate generative AI into day-to-day SOC operations. Here are two best practices to consider.

Check everything out. We know that current versions of AI will get things wrong, whether through outright hallucinations or other errors. In the context of threat reporting, hallucinations may manifest as unexplainable false positives or fabricated narratives. As a result, integrated generative AI may begin blocking threats that aren’t real, jamming up your systems and sending teams off to troubleshoot an issue that doesn’t exist.

So, we can’t go from AI “baby steps” to the most advanced of capabilities without reviewing and vetting outputs carefully for accuracy and relevance. If we ask for assistance in summarizing a threat intelligence report, for example, we can’t share the results without thoroughly checking and cross-checking AI’s contributions with what we know to be true—and false.

To effectively review and vet generative AI’s outputs, you must treat it like another member of your team rather than just a tool. I have my team apply the same process to generative AI that we use for employees and their output: projects that are released must be tested, reviewed, and accepted by operations and support teams. The introduction of virtual agents and co-pilots is no different.
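One way to operationalize that "treat it like a team member" rule is a gate: AI-suggested actions land in a review queue, and nothing executes until a human signs off. A minimal sketch of the idea follows, with action names and class shapes that are my own illustration, not any particular SOAR product's API.

```python
# Minimal sketch of a human-approval gate for AI-suggested actions.
# A real SOC would wire this into its ticketing and SOAR workflows.

from dataclasses import dataclass, field

@dataclass
class SuggestedAction:
    description: str          # e.g. "block IP 203.0.113.7 at the firewall"
    source: str = "ai-copilot"
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, action: SuggestedAction):
        """AI output only ever enters the queue; it never acts directly."""
        self.pending.append(action)

    def approve(self, action: SuggestedAction, reviewer: str) -> str:
        """An analyst's sign-off is what makes the action executable."""
        action.approved = True
        return f"{action.description} (approved by {reviewer})"

queue = ReviewQueue()
suggestion = SuggestedAction("block IP 203.0.113.7 at the firewall")
queue.submit(suggestion)

# Nothing executes until an analyst signs off.
assert not suggestion.approved
print(queue.approve(suggestion, reviewer="analyst-on-duty"))
```

Keeping the approval step outside the AI's reach directly addresses the false-blocking failure mode described above: a hallucinated threat stalls in the queue instead of jamming up production systems.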

Find out what your vendors are doing. You’re only as protected as the weakest link in your chain. Just as you’re seeking to deploy AI safely, in a way that grows over time, you’ll want to ask vendors whether they’re doing the same. Ask them about their roadmap for generative AI and how it will affect their platforms.

This process needs to start at your initial engagement with the vendor, asking them whether they currently have—or plan to have—AI or generative AI implemented in their enterprise tech stacks.

For example, suppose your team engages a third party that does not currently have any AI integrations. To align with your organization’s governance and compliance program, you need to ask whether there are plans to incorporate AI or generative AI into their product within the next year. This is a critical question to revisit routinely: it not only lets you understand their inventory as a base-level control but also opens the door to more in-depth questions based on their responses.

AI can lend a helping hand to SOCs. But, as with any new team member, security pros must fully understand AI’s capabilities and limitations. By bringing the tools along in a measured way with the right pathways and guardrails—building upon success stories while acting as a check upon bad information—these pros will remove many of the burnout-causing manual tasks from their daily lives so they can focus on more strategic objectives. As a result, they’ll emerge as highly formidable defenders of the enterprise who are in it for the long haul, instead of plotting their next career transition.


James Robinson, CISSP, is the chief information security officer at Netskope with more than 20 years of experience in security engineering, architecture, and strategy. Robinson develops and delivers a comprehensive suite of strategic services and solutions that help executives change their security strategies through innovation.