Can AI Enable ESRM Growth?
Technology is a source of both opportunity and risk, and artificial intelligence (AI) is no exception. A September 2024 joint report from Human Risks and Decis Intelligence, AI Applications in Enterprise Security Risk Management, outlined some of the ways security leaders can apply the benefits of AI to their risk management practices to support efficiency improvements and aid non-expert employees.
It’s shortsighted to ignore the benefits of a megatrend like AI solely because of potential risk factors, says Douglas Gray, risk strategy manager at Human Risks. AI adoption in the private sector is being driven swiftly by pressure from boards and senior management; by security vendors and manufacturers adding AI to improve their tools and products; and by staff, who already have AI tools in their pockets and don’t always understand hesitancy to adopt efficiency-based shortcuts and collaboration tools, he adds.
“AI is the epitome of a very, very complex development trend at the moment, which is almost a black box for most teams,” Gray tells Security Management. “The most important thing is to unravel that black box, understand how it can apply to your processes, and find people who you can work with. If you don’t already have the expertise on your team to know what [AI] actually means and how it could impact you, find people to work with. Then set yourself up with a roadmap for where we could go in the future.
“Otherwise, individual security teams risk being the paper-based process off in the corner or using a typewriter in a computer age,” he adds.
The report’s authors—Gray and Andrew Sheves, founder of Decis Intelligence—reviewed a variety of current literature about AI, enterprise security risk management (ESRM), and business environments to offer several recommendations and discussion points about what AI means for business strategy and security’s involvement.
The report noted that AI currently brings four core benefits to workplaces:
- Improving the efficiency of routine tasks
- Benefiting lower-skilled workers and enabling them to get up to speed quickly
- Improving creative problem solving
- Making routine tasks more tolerable and reducing fatigue
AI models perform poorly, however, when pushed beyond their core capabilities; highly creative problem-solving tasks remain the province of skilled human employees and analysts.
“Bringing together the strategic and operational challenges faced by security risk leaders across industries alongside the benefits from application of AI in the workplace, we are able to see a number of immediate practical applications for utilizing AI tools in the security risk management industry,” the report said.
In particular, the authors found that AI can improve ESRM posture by:
- Reducing repetitive workloads to increase team efficiency and free up capacity for further value-adding activities
- Improving the value of proactive insights generated by security risk teams to support greater engagement in security objectives
- Decreasing learning curves, making it easier for stakeholders to provide high-value inputs into security risk management processes
Current AI uses are very operational, Gray tells Security Management. However, there are opportunities to expand its use into more long-term value building by saving time, increasing accuracy, and contextualizing security and risk decisions for a variety of stakeholders and risk owners to drive a more informed decision cycle.
For instance, security teams can use AI to rewrite security briefings or other texts in a more personable manner or in different languages to connect more deeply with audiences and drive engagement in ESRM objectives. AI large language models (like ChatGPT and other tools) can also be used to generate summaries of large reports or datasets to make them more digestible for a variety of audiences.
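As a rough illustration of that use case, the sketch below assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and placeholder briefing text are assumptions for illustration, not tools named in the report.

```python
# Minimal sketch: restating a security briefing for a non-expert audience.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

briefing = """(Paste the full security briefing or report text here.)"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; any capable chat model could be used
    messages=[
        {
            "role": "system",
            "content": (
                "You are helping a security risk team communicate with "
                "non-expert colleagues. Summarize the briefing in plain, "
                "engaging language, then translate the summary into Spanish."
            ),
        },
        {"role": "user", "content": briefing},
    ],
)

print(response.choices[0].message.content)
```

The same pattern could be pointed at a long report or dataset extract to produce shorter summaries tailored to different stakeholder groups.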
“At the end of the day, when you need to actually build relationships, it’s a marketing or PR campaign, right?” says Mads Pærregaard, CEO of Human Risks. “And if there’s one industry that’s using AI today, then you know, it’s the advertising and marketing industry. So, I think every tool they’re using should also be available and used for relationship building from a security risk management perspective.”
When it comes to AI enabling ESRM maturity, Gray and Pærregaard note that AI can help streamline organization analysis to maintain an accurate map of current assets and risk owners, which is an essential but tedious part of ESRM. AI tools can also help security analysts speed up their information synthesis and assessments to help risk owners make more effective, timely decisions about operations.
“AI has a lot of capability around advancing analysis or automating analysis. So, when something occurs, you can deliver insight and analysis quicker when it’s relevant, rather than a fantastic consultant’s report two weeks after the event,” Gray says. This increases the perceived strategic value and relevancy of the security risk team with stakeholders.
AI use can also expand the number of data sources that analysts can review quickly and use to offer guidance. This also lowers the weight placed on any single data source or risk assessment to give a more holistic and analytical view of a situation.
“If you had to do that manually, it would be an impossible task,” Pærregaard says.
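As a purely hypothetical sketch of that multi-source review (again assuming the OpenAI Python SDK; the source labels, file names, and prompt are invented for illustration), a team might combine excerpts from several monitoring feeds into a single synthesis request so that no single report carries undue weight:

```python
# Hypothetical sketch: synthesizing several data sources into one assessment.
# Source names and files are invented for illustration; assumes the OpenAI
# Python SDK with OPENAI_API_KEY set in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Each entry pairs a source label with a local text file of collected reporting.
sources = {
    "Open-source media monitoring": Path("media_monitoring.txt"),
    "Internal incident reports": Path("incident_reports.txt"),
    "Third-party country risk feed": Path("country_risk_feed.txt"),
}

combined = "\n\n".join(
    f"SOURCE: {label}\n{path.read_text()}" for label, path in sources.items()
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Synthesize the following sources into a short situation "
                "assessment. Note where sources agree or conflict, and flag "
                "claims that rest on a single source."
            ),
        },
        {"role": "user", "content": combined},
    ],
)

print(response.choices[0].message.content)
```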
Despite the opportunities AI presents, the report’s authors recommend that security leaders approach its adoption with a combination of speed, caution, and empathy.
“The change is here,” Pærregaard says. “…We’ve made remarkable progress in a very short time. Now, I think it will be very fast that we see how this technology is implemented in different industries,” he adds, likening the shift to the change newspapers underwent with the advent of the Internet, when successful news outlets quickly began to weave online access into everything they produced.
In addition, Gray says, malicious actors are swiftly leveraging AI tools to proliferate attacks faster and more effectively. Security teams shouldn’t disarm themselves out of principle while they wait to see where AI goes next.
The security industry is notoriously cautious when it comes to adopting new technology, though, and Pærregaard warns that if security leaders watch from the sidelines for too long, they will be unable to catch up.
That’s not to say that caution around AI is misplaced—security leaders are well-placed to offer reasoned counterarguments and caveats around hasty AI adoption, especially if they can align recommendations with recent regulations like those in the European Union, he says.
But the human element should underpin all AI discussions, Gray says.
“First, be conscious of how you’re designing your team and bringing people along the journey,” he says. “…There is a natural cause for concern for a lot of individuals because AI is often talked about as an opportunity to cut costs, reduce headcount, and automate processes.” However, security teams’ own expertise plays a significant role in determining where AI is added so that it delivers more efficiency and business value, rather than simply cutting back on human involvement.
“From an empathy perspective, it’s really just about being conscious that there are individuals in these processes and viewing [AI] as an opportunity to work with them and enhance what they’re doing rather than the opposite,” Gray says.
Want to learn more about ESRM or measure your organization's ESRM maturity? Check out these ASIS International resources.
- Enterprise Security Risk Management Guideline
- ESRM Maturity Model Self-Assessment
- Essentials of ESRM Certificate Course