Vulnerabilities Aren’t New, but the Speed of Incidents Is
Little has changed in the technical aspects of modern cyberattacks, and similar tactics are reused from victim to victim. The incidents at MGM Resorts and Caesars Entertainment renewed concerns about ransomware and its operational impacts. Exploits of Progress Software's MOVEit Transfer and MOVEit Cloud showed that well-known categories of software flaws are still plaguing us. And we're still dealing with the fallout from the SUNBURST attack against SolarWinds' Orion platform, which shed light on issues with software supply chain security.
If there's one aspect of evolution we're seeing with modern cyberattacks, it's speed. No longer are months, weeks, or days adequate for detecting and responding to potential threats; the time scale we've historically used to measure and assess risk is outdated. Attackers are using the same cloud and automation technologies to perpetrate and accelerate their attacks. Victim organizations are breached within seconds or minutes, not days or weeks.
What to Expect From a Cloud Attack
When attackers succeed, there are many potential undesirable outcomes for victim organizations, including data loss, intellectual property theft, and system compromise. In the context of broader supply chains and partner ecosystems, these impacts are greatly amplified. Businesses must quickly assess operational impacts to evaluate lost revenue, privacy impacts to determine the appropriate customer response under privacy laws, and material impacts to stay compliant with SEC cybersecurity disclosure rules.
Security teams struggle to maintain visibility throughout all of their operating environments, yet observability is a necessary component of any cybersecurity strategy. Increasingly, infrastructure is less something you physically stand up in a datacenter and more a virtual, ephemeral asset that is defined in lines of code and deployed within a cloud service provider. Despite the adoption of agile and DevOps practices, systems still generally evolve slowly over time to meet business needs and maintain availability. Organizations find themselves stuck between these traditional and cloud worlds, and this hybrid combination adds complexity and operational burdens that frequently lead to vulnerabilities.
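To make "defined in lines of code" concrete, here is a minimal sketch of an ephemeral asset declared in Python, using Pulumi's SDK as one example of such tooling. The resource names, AMI ID, and tags are illustrative placeholders, not recommendations:

```python
# A minimal infrastructure-as-code sketch using Pulumi's Python SDK.
# Names and settings are illustrative, not a hardened configuration.
import pulumi
import pulumi_aws as aws

# The entire "server" is a declarative, version-controlled definition:
web = aws.ec2.Instance(
    "web-server",
    ami="ami-0123456789abcdef0",   # placeholder AMI ID
    instance_type="t3.micro",
    tags={"env": "prod", "owner": "platform-team"},
)

pulumi.export("public_ip", web.public_ip)
```

Because the definition lives in version control, the same review processes that apply to application code can, in principle, apply to infrastructure as well, but only if governance and tooling keep pace with how quickly such assets appear and disappear.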
What Organizations Can Learn
Security is an endless race of continuously inspecting and validating identities, services, applications, and networks. Analysis must also include the cloud control planes and workloads that power it all. These elements are sometimes reviewed in isolation, but it's critical that they also be evaluated as parts of a complete system, since security problems often arise at interconnections and integrations.
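As a rough illustration of why isolated review falls short, consider a hypothetical inventory in which each finding might be an accepted risk on its own, while the combination creates a critical exposure. The data structures here are invented for the sketch:

```python
# Hypothetical sketch: findings that pass isolated review can still be
# critical in combination. All names and fields are illustrative.

workloads = [
    {"name": "billing-api", "public_ingress": True,  "iam_admin": False},
    {"name": "batch-etl",   "public_ingress": False, "iam_admin": True},
    {"name": "legacy-app",  "public_ingress": True,  "iam_admin": True},
]

for w in workloads:
    # Each property alone might be tolerable; together they form a
    # direct path from the internet to the cloud control plane.
    if w["public_ingress"] and w["iam_admin"]:
        print(f"CRITICAL: {w['name']} is internet-facing with admin credentials")
```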
Trying to catch all types of security problems prior to production release is a fool's errand, since testing can't be accomplished quickly enough to support the release cadences demanded by the business. Detecting and responding to threats also can't be accomplished with log analysis alone because of its inherent latency, high data volumes, and low fidelity. Many organizations haven't fully embraced the technology and automation that make it possible to contextualize risk and determine in real time whether attackers are gaining a foothold within their environments. Until they do, they will be constantly racing against the clock as attackers move faster and cause significant business impacts.
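A simple sketch of the difference: rather than batch-querying logs after the fact, events are evaluated as they arrive and enriched with context about the workload and identity involved. The event schema, rule, and names below are hypothetical:

```python
# Sketch of evaluating events at arrival time instead of in batch log
# review. The event schema and detection rule are hypothetical.
from dataclasses import dataclass

@dataclass
class Event:
    workload: str
    action: str
    identity: str

# Context that turns a raw event into a risk decision:
PRODUCTION = {"payments-svc"}
EXPECTED_DEPLOYERS = {"ci-pipeline"}

def evaluate(event: Event) -> str | None:
    # An interactive shell in production from a non-CI identity is a
    # classic sign that an attacker has gained a foothold.
    if (event.workload in PRODUCTION
            and event.action == "spawn_shell"
            and event.identity not in EXPECTED_DEPLOYERS):
        return f"ALERT: shell spawned in {event.workload} by {event.identity}"
    return None

print(evaluate(Event("payments-svc", "spawn_shell", "jdoe")))
```

The point is not the toy rule itself but where it runs: at event time, with enough context attached to make a decision immediately rather than hours later.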
We've also seen a renewed focus on governance. The NIST Cybersecurity Framework (CSF) is well known in security circles. Version 2.0, currently in draft, expands the framework's scope beyond critical infrastructure and adds governance as a core function.
Governance ensures that tools, such as those supporting observability or threat detection and response, are not only procured but also appropriately configured. It holds teams and the organization accountable when deviations from security policies arise. It also ensures that external partners, suppliers, and regulatory authorities are made aware of potential security problems, whether within the cybersecurity program itself or arising from security incidents, and that relevant disclosure processes are followed appropriately. Traditionally, this level of oversight has been a manual, time-consuming affair, but governance can also be codified and automated using guardrails and policy-as-code approaches.
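As a rough illustration of the policy-as-code idea, a guardrail can be an ordinary, testable function run in a CI pipeline against proposed configurations. The configuration schema and policy below are hypothetical; real deployments often use purpose-built engines such as Open Policy Agent:

```python
# Minimal policy-as-code sketch: policies are ordinary, testable code
# run in CI against proposed configurations. The schema is hypothetical.

def check_bucket(config: dict) -> list[str]:
    """Return a list of policy violations for a storage bucket config."""
    violations = []
    if config.get("public_access", False):
        violations.append("buckets must not allow public access")
    if not config.get("encryption_enabled", True):
        violations.append("encryption at rest is required")
    return violations

proposed = {"name": "customer-exports", "public_access": True,
            "encryption_enabled": False}

for v in check_bucket(proposed):
    print(f"DENY {proposed['name']}: {v}")  # fail the pipeline on any DENY
```

Run on every proposed change, such a check turns a written policy into an enforced guardrail rather than a document someone must remember to consult.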
As long as there are vulnerabilities to be exploited, there will be attackers to find them. And though little has changed with the nature of vulnerabilities or misconfigurations, the rate at which they’re being exploited in cyberattacks is constantly increasing.
If organizations want to be in the best position to harden their environments against attacks, and to detect and respond when an incident does occur, they need to do the difficult work of establishing and maintaining visibility throughout all of their operating environments and workloads. They must embrace the technology and automation that enable security teams to contextualize risk, rather than leaving those same capabilities to attackers, who use them to perpetrate and accelerate attacks. Modernizing governance and ensuring that security processes are properly followed is only growing in importance. After all, for both organizations and attackers, it's a race against the clock.
Michael Isbitski, director of cybersecurity strategy at Sysdig, is a former Gartner analyst, cybersecurity leader, and practitioner with more than 25 years of experience, specializing in application, cloud, and container security. Isbitski learned many hard lessons on the front lines of IT working on application security, vulnerability management, enterprise architecture, and systems engineering. He has guided countless organizations globally in their security initiatives as they support their businesses.