Illustration by Security Management; iStock

Meeting the AI Moment: Reflections from RSAC 2024

Last year at the RSA Conference (RSAC), artificial intelligence (AI) was the buzzword you couldn’t avoid. At the 2024 event this week in San Francisco, California, AI has become an essential part of the security conversation, one that now commands the attention of top executives.

CEOs are now directly engaging in conversations about the potential value of AI and the risks it could pose to the business if not incorporated appropriately.

“Of all the technology advances that I’ve seen in the 30 years of cyber, I’m generally optimistic because it’s the first time I’ve seen CEOs just directly engage and not say, ‘I’m going to bolt [AI] on, we’ll deal with it after the fact,’” says Valerie Abend, global cyber strategy lead for Accenture, in an interview with Security Management at RSAC about AI.

Abend regularly meets with clients and CISOs to discuss their cybersecurity and AI journeys. Among the top three topics that come up in conversations with these groups are the EU Artificial Intelligence Act, which is in the final stages of being formally approved, and how businesses can harness AI technology securely.

For instance, Abend’s firm was working with a major Australian bank on its AI journey. Executives from the bank came to meet with Accenture, and Abend expected to sit down with the chief information security officer (CISO). Instead, she met with the bank’s CEO for 90 minutes to “make sure this was done right.”

They talked about setting policies, creating monitoring, and empowering the business. “Let’s figure out ways we can also empower the security teams to also use AI and be part of the story,” Abend says of the discussion.

CEOs are taking this level of interest in AI because they understand the opportunities, but also the potential risks, that the technology poses for their organizations. These include threats to intellectual property, new vectors for cyberattacks, and openings for regulatory scrutiny with the potential for big fines, up to 7 percent of global revenue for the most egregious violations of the EU AI Act, for example.

This desire to innovate securely is trickling down through organizations as security directors assess how teams throughout the business are looking to incorporate AI to achieve their goals.

At Amazon Web Services (AWS), Jenny Brinkley, director of security readiness, says AI has always been part of the DNA at Amazon. But during the past year, with the excitement in the space driven by falling computing costs and the availability of learning models, she has been meeting with teams across Amazon to see how they are thinking about using generative AI within their business.

“It was mind altering to see how so many teams were coming together and would ask about security in the very beginning, and want to make the right decision,” Brinkley adds.

Teams reviewed how they should think about their training models, how to leverage generative AI, and how to ensure they are using approved data to create quality outputs.

“The fact that they were talking about security at the very beginning, and these are people that security is not their day-to-day job,” Brinkley says. “These are designers, creators, individuals that work in everything from music to books—and to have them talk about security in that first 30 seconds of a meeting is really the most exciting thing I’ve done this past year.”

This is due, in part, to the security culture Amazon has created through its annual security awareness training, which uses relatable scenarios featuring real employees speaking in their native languages (more than 17 represented so far) to meet people where they are and make security approachable, Brinkley adds.

“It’s kind of the adage of ‘See Something, Say Something,’ but working in an environment where people feel that someone has their back and will help them,” Brinkley says. “I’ve been doing this work for the past six years, seeing that real positive relationship between our builder populations and the security organization, to not be afraid to ask for help or to understand how I can make a security-first mindset decision.”

Principal Financial Group is also assessing how it will use AI to enable its business. In 2022, when ChatGPT was first released to the public, the firm set up an AI Ethics Committee, which CISO Meg Anderson is part of, and established its AI principles. These include ensuring that humans are in the loop, that AI is not used to perpetuate bias, and “essentially to do no harm,” Anderson says.

“We’re trying to take a measured approach and making sure that between the security team, privacy, legal, and data analytics teams, as well as technology, that we’re all on the same page with what it is, how are we using it, and how are we not going to use it,” Anderson explains. “And then making sure we’re building security into any large language model development.”

For example, putting the right access controls on the information that AI systems sort through limits how potentially sensitive data is shared within the company.

“If you don’t have the right, proper, access controls on information, it’s going to be made visible to anybody who asks,” Anderson says.
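
Anderson’s point maps onto a common pattern in LLM deployments: check the requesting user’s entitlements against each document before any of it reaches the model. Below is a minimal sketch of that idea in Python; the names (Document, user_can_read, build_context) and the group-based entitlement scheme are illustrative assumptions, not a description of Principal’s systems.

```python
# Minimal sketch of permission-aware retrieval for an LLM application.
# Document, user_can_read, and build_context are hypothetical names for
# illustration; this is not Principal Financial Group's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    allowed_groups: frozenset[str]  # groups entitled to read this document

def user_can_read(user_groups: set[str], doc: Document) -> bool:
    """A user may read a document only if they share at least one group with it."""
    return bool(user_groups & doc.allowed_groups)

def build_context(user_groups: set[str], candidates: list[Document]) -> str:
    """Drop documents the requesting user cannot see before the model call."""
    visible = [d for d in candidates if user_can_read(user_groups, d)]
    return "\n\n".join(d.text for d in visible)

# Example: a salary document is withheld from a user outside the "hr" group.
docs = [
    Document("pay-bands", "2024 salary bands ...", frozenset({"hr"})),
    Document("handbook", "Company handbook ...", frozenset({"hr", "all-staff"})),
]
context = build_context({"all-staff"}, docs)
assert "salary bands" not in context
```

The design choice worth noting is that filtering happens before the model call: if the model never receives text the user could not have read directly, a cleverly phrased prompt cannot extract it.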

While AI was not widely responsible for cyber incidents or data breaches in 2023, according to the most recent Verizon Data Breach Investigations Report, organizations do need to prepare for how they will respond when an AI incident does occur.

Anderson anticipates that AI will be used in the future to attack the firm’s services, from crafting more convincing phishing messages that target customers, to writing malware for criminal groups, to increasing the volume of cyberattacks against the business.

In an RSAC session on 9 May, Matthew Olsen, assistant attorney general for the National Security Division, U.S. Department of Justice, said that AI is “just an accelerant” that “enables an asymmetric threat environment to be more effective.”

Olsen added that we will likely see actors from Russia and China using AI to gain an advantage over the United States, sowing disinformation and misinformation, as well as engaging in corporate espionage to steal AI-related intellectual property.

This means that organizations need to be prepared to respond when an AI incident occurs. In an opening session on 6 May at RSAC, researchers from Carnegie Mellon University shared best practices for creating your own AI security and incident response team based on their own experience.

Carnegie Mellon created its AI CERT team after seeing an increasing level of activity targeting AI-related systems and technologies in the wild, said Brigadier General (ret.) Gregory J. Touhill, director of the Software Engineering Institute’s CERT Division at Carnegie Mellon.

“I double-dog dare you to find any element of our society that is not being touched by this technology,” Touhill said of AI.

Lauren R. McIlvenny, technical director of threat analysis at Carnegie Mellon’s Software Engineering Institute, said they used the Computer Security Incident Response Team (CSIRT) services framework’s service areas to establish the AI CERT: vulnerability management, situational awareness, knowledge transfer, information security event management, and information security incident management.

They also defined what is involved in AI incident response, including root cause analysis, mitigation, and cataloging and explaining AI incidents (i.e., was it an attack that violated existing security policies by condition or behavior?). The team also created a process for AI vulnerability discovery, covering both passive discovery, where vulnerabilities are reported by others, and targeted discovery based on mission-centered threat intelligence. Additionally, they set up a vulnerability management process to prioritize vulnerabilities, test and verify them, and disclose them safely, a workflow sketched below.
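
As one way to picture the pipeline McIlvenny described, the sketch below models a reported AI vulnerability moving through prioritize, test-and-verify, and safe-disclosure stages. The states, the severity score, and every name here are assumptions for illustration, not Carnegie Mellon’s actual process or tooling.

```python
# Illustrative model of the workflow described in the session: prioritize
# reported AI vulnerabilities, test and verify them, then disclose safely.
# All states, scores, and names below are assumptions for illustration.

from dataclasses import dataclass
from enum import Enum, auto

class State(Enum):
    REPORTED = auto()   # passive discovery: reported by others
    TARGETED = auto()   # targeted discovery from mission-centered threat intel
    VERIFIED = auto()   # reproduced and confirmed by the team
    DISCLOSED = auto()  # safely disclosed to the affected party

@dataclass
class AIVulnerability:
    vuln_id: str
    severity: int            # hypothetical 1-10 priority score
    state: State = State.REPORTED

def triage(queue: list[AIVulnerability]) -> list[AIVulnerability]:
    """Prioritize: work the highest-severity reports first."""
    return sorted(queue, key=lambda v: v.severity, reverse=True)

def verify(vuln: AIVulnerability, reproduced: bool) -> None:
    """Test and verify: advance only once the finding is reproduced."""
    if reproduced:
        vuln.state = State.VERIFIED

def disclose(vuln: AIVulnerability) -> None:
    """Disclose safely: only verified findings leave the team."""
    if vuln.state is State.VERIFIED:
        vuln.state = State.DISCLOSED

# Example: two reports are triaged by severity, verified, and disclosed.
queue = [AIVulnerability("AIV-001", severity=6), AIVulnerability("AIV-002", severity=9)]
for v in triage(queue):
    verify(v, reproduced=True)
    disclose(v)
assert all(v.state is State.DISCLOSED for v in queue)
```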

Since the team debuted in August 2023, McIlvenny says, they’ve learned that the vulnerabilities are “really cyber vulnerabilities—it’s nothing brand new,” and that a lot of processes from cybersecurity still work when addressing AI.

“AI comes with great benefits but also risks,” Touhill said. “Cybersecurity is a risk management issue as much as it is a technology issue.”

 
