
Breaking Down the Pros and Cons of AI in Cybersecurity

The fields of artificial intelligence (AI) and machine learning (ML) are rapidly evolving as new products and techniques are developed. And while much of the discussion surrounding AI remains philosophical in nature, covering ethics, privacy concerns, and what AI means for humanity, the development of real, practical applications marches on.

Within the world of cybersecurity, AI and ML are being used both to improve defenses and to launch more effective malware. Cybersecurity companies are using AI and ML to better detect and respond to threats. The power of AI and ML, including subspecialties like deep learning, lies in the ability to rapidly mine large amounts of data, process a huge number of signals, identify anomalies, and develop predictions. Moreover, these systems continuously learn from new datasets to improve their abilities.

But these same features that make AI and ML useful for protecting systems can also be used by bad actors to identify new vulnerabilities and improve the efficacy of their attacks.

Here are some examples of how both AI and ML are being used for good—and for bad.

Beneficial - Positive Use Cases

Network intrusion detection products use AI to identify anomalies in user behavior or network traffic patterns that signal possible intrusions. They may, for example, analyze a program's particular sequence of system calls to evaluate whether it is malicious. Or they may look for unauthorized external connections that may have been set up to support an intruder's command-and-control channel. Or they may flag an unexpected escalation of a user's privileges. Older systems relied on algorithms that seek certain signatures based on a set of rules, but as the nature of attacks evolves, that rule base becomes too difficult to manage. Systems that use ML-based algorithms to dynamically augment and adjust their rule bases, however, can learn from ongoing patterns of traffic or behavior and adapt to changes over time.
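To make the system-call idea concrete, here is a minimal sketch of unsupervised anomaly detection over system-call sequences, assuming scikit-learn. The traces, the threshold, and the choice of an Isolation Forest are illustrative assumptions, not a description of any particular product; the key property shown is that the model is fit only on normal behavior, so it needs no examples of attacks.

```python
# Minimal sketch: flag anomalous system-call sequences with an
# unsupervised model (scikit-learn). All data here is illustrative only.
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical traces: each string is a process's observed syscall sequence.
normal_traces = [
    "open read read close",
    "open read write close",
    "stat open read close",
] * 50  # repeated to give the model something to fit

suspicious_trace = ["open ptrace mprotect execve"]  # unusual call pattern

# Represent each trace as counts of syscall n-grams (order matters locally).
vectorizer = CountVectorizer(ngram_range=(1, 3))
X_train = vectorizer.fit_transform(normal_traces)

# Isolation Forest learns what "normal" looks like; no attack labels needed.
model = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

X_new = vectorizer.transform(suspicious_trace)
print(model.predict(X_new))  # -1 means the trace is flagged as anomalous
```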

Rapid response to a cyberattack is important, and AI and ML techniques may be used in predictive and analytic tools to provide early alerts to potential attacks. The same anomaly detection approaches used to identify breaches after they occur can also warn of impending ones, for example by detecting network scans or attempts to deliver malware payloads that often precede an actual intrusion.
Furthermore, AI and ML tools may be used to help isolate threats before they can damage systems, or to collect forensic data that aids incident response and recovery.
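As an illustration of the kind of precursor signal such tools look for, the sketch below flags port-scan-like behavior by counting distinct destination ports per source address. This is a deliberately simple rule-based stand-in: real products would learn the threshold from traffic baselines, and all records here are hypothetical.

```python
# Minimal sketch: flag port-scan-like behavior as an early-warning signal.
# Connection records are hypothetical; a real tool would read flow logs.
from collections import defaultdict

# (source_ip, destination_port) pairs observed in one time window
connections = [
    ("10.0.0.5", 80), ("10.0.0.5", 443),                  # ordinary browsing
    *[("203.0.113.9", port) for port in range(20, 120)],  # sweeping ports
]

ports_by_source = defaultdict(set)
for src, dport in connections:
    ports_by_source[src].add(dport)

SCAN_THRESHOLD = 50  # distinct ports per window; tune to your own baseline
for src, ports in ports_by_source.items():
    if len(ports) > SCAN_THRESHOLD:
        print(f"possible scan precursor from {src}: {len(ports)} ports probed")
```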

Some video surveillance systems use AI and ML to identify actions that are potential threats, such as an object left behind that might be an explosive device, or to classify images, such as a vehicle's color or type, to aid response.

Botnets are networked groups of computers or devices that can be used to carry out a coordinated assault, such as a denial-of-service attack that floods a victim with an overwhelming amount of traffic. Botnets rely on a command-and-control structure to receive their instructions and to synchronize attacks. One attack mitigation strategy is to disrupt these command-and-control communications. But botnets often use scripted Domain Generation Algorithms (DGAs) to automatically create random domain names for the command-and-control structure they need to function, and to quickly restore that function if countermeasures interrupt their communication. Security tools that use AI to identify these automatically generated domain names are well suited to rapidly recognizing new domains and shutting them down.
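A crude but illustrative proxy for this kind of detection is character entropy: DGA-generated names tend to look far more random than human-chosen ones. The sketch below scores domains this way; production systems typically use trained classifiers over richer character n-gram features, and the threshold here is a made-up illustration.

```python
# Minimal sketch: score domains for DGA-like randomness using Shannon
# entropy of their characters. Threshold and domains are illustrative only.
import math
from collections import Counter

def char_entropy(s: str) -> float:
    """Shannon entropy of the character distribution in s (bits per char)."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

DGA_ENTROPY_THRESHOLD = 3.5  # hypothetical cutoff; a real system learns this

for domain in ["example.com", "xkqjzt7w9fbh2ncd.net"]:
    label = domain.split(".")[0]  # score only the registered label
    score = char_entropy(label)
    verdict = "DGA-like" if score > DGA_ENTROPY_THRESHOLD else "likely benign"
    print(f"{domain}: entropy={score:.2f} -> {verdict}")
```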

Detrimental - Negative Use Cases

Phishing emails are designed to lure victims into following malicious links or providing sensitive information. A person is much more likely to fall for a phishing email when it is well crafted, using personalized information or familiar references. AI can be used to digest and analyze datasets of personal information, automating the creation of more plausible phishing emails stocked with relatable detail.

Bad actors are also looking into methods of attacking AI and ML systems themselves, forcing them to misclassify data or disrupting them altogether. Training data is used during the initial development phase of a new ML system, and if a bad actor has access to the system at that stage, the data may be altered or carefully chosen to undermine the system, an approach known as data poisoning. While outsiders would not typically have access during that phase, security researchers have demonstrated that once a system is in use, small modifications to the inputs it relies on can cause errors. For example, almost imperceptible modifications to photographic or video images can change how ML systems classify those images.
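The image-perturbation result described above is commonly demonstrated with the Fast Gradient Sign Method (FGSM) from the adversarial-examples literature. Below is a minimal sketch of that recipe, assuming PyTorch; the toy linear model and random input are stand-ins, since a real demonstration would use a trained image classifier, where the prediction typically flips.

```python
# Minimal sketch of an adversarial perturbation (Fast Gradient Sign Method),
# assuming PyTorch. The tiny model and random "image" are stand-ins; the
# point is the perturbation recipe, not the specific classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input
true_label = torch.tensor([3])

# Forward pass and loss with respect to the correct label.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05  # perturbation budget; small enough to be hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```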


The malware of the future is already in development, using AI to target vulnerabilities while avoiding detection. Security researchers are working with malware code that can adapt to evade detection by antivirus systems. Rather than following a fixed script, this malware can learn from its own experience to determine what works and what does not. For example, IBM Research developed DeepLocker as a proof of concept and presented it at a recent Black Hat conference. DeepLocker is designed to behave like a normal video conferencing system until it recognizes the face of a specific targeted person, at which point it launches the WannaCry ransomware on that person's system.

Coleman Wolf, CPP, CISSP, is a senior security consultant at Environmental Systems Design, Inc. He is also the chairman of the ASIS IT Security Community and a member of the ASIS Security Architecture and Engineering Community Steering Committee.
