Uncovering Cognitive Biases in Security Decision Making

Do you consider yourself an above-average decision maker? Does logic drive most, if not all, of your security risk mitigation decisions? Think again.

Most security professionals believe that they are better decision makers than the average person, but recent research shows this is not the case. In fact, security leaders often fall prey to the same biases as the majority of the population, and they may find themselves relying on gut feelings and prior experience instead of facts and probability. Unlike most people’s choices, however, security professionals’ biased judgments can have significant ramifications for risk management and safety decisions.

Awareness is the first step toward correcting this issue, and analysis of security professionals’ decision-making tools has revealed myriad pitfalls.


Risk Management Processes

Risk management processes—including the guidance published in the Risk Assessment Standard from the American National Standards Institute (ANSI), ASIS International, and RIMS—appear organized and can be interpreted as precise, accurate, and objective. External context, likelihoods, vulnerabilities, existing controls, risk tolerance, and options are communicated and considered fairly, driving additional risk identification, analysis, evaluation, and treatment as required.

Closer inspection, however, reveals subjective human decision making plays a major role. Each step in the process is a decision: What do we consider to be part of the context? What threats do we take into account, against what assets, and within what timeframe? Which controls do we compare and what is their effectiveness? Do we evaluate that effectiveness? If so, how and how often?

Subjective decisions like these structure the content of a security management process, and the output reflects the personal judgment of the security decision maker.

This should not be particularly surprising, given the nature of risk. The Risk Assessment Standard defines risk as the “effect of uncertainty on the achievement of strategic, tactical, and operational objectives,” and the definition clearly indicates the two main components of risk and risk assessments: effects (often referred to as consequences) and uncertainty (often expressed as likelihood or probability). Because any risk refers to a future state of affairs, it is by definition impossible to predict exactly. After all, as Sven Ove Hansson wrote in the Handbook of Risk Theory, “Knowledge about risk is knowledge about the unknown.”

The security risk field deals with malicious—and therefore man-made—risks, and this aspect of security management adds an extra dimension to the uncertainty. People performing malicious actions, such as intrusions or thefts, try to be unpredictable or hidden to evade existing risk controls. This dynamic context—with bad actors’ ever-changing modus operandi and the large variety of situations, including locations and times—adds to the uncertainty.

While past security risks and events help inform the risk assessment process, they do not provide certainty about future risks. Therefore, risk assessments are a combination of experience, expert judgment, and objective facts and evidence. And that judgment—however well informed by past experience—is susceptible to assumptions and biases.

Bias vs. Logic

Over the centuries, philosophers worldwide have explored the concepts of human judgment and decision-making processes. Eventually, they settled on maximization theories—humans are supposed to apply a form of rational decision making with the goal of achieving the best possible outcome. For example, a purely rational human would search the supermarket shelves until he or she uncovered the perfect jar of pasta sauce—weighing variables of volume, price, nutritional value, and taste.

In reality, however, few people apply that depth of rationality to everyday decisions. Instead, they grab a sauce that is good enough for what they need and move on—a concept psychologists dub “satisficing.” Many reasons cause a consumer to settle on one item instead of another or, through a security lens, lead a professional to make one risk mitigation choice instead of another. Often, those reasons hinge on personal preferences and cognitive biases.

In 1979, scholars Amos Tversky and Daniel Kahneman introduced prospect theory, which identified systematic ways in which humans make decisions that are not optimized for the best possible outcome. Decisions turn out to be less logic-based and more prone to heuristics, mental shortcuts, and biases.


Kahneman, who received the Nobel Prize in Economics in 2002 for his work, posits that there are two systems of thinking: fast and slow. Fast decisions rely on intuition and prior experience, and they are almost automatic. A firefighter arriving at the scene of a blaze may make split-second decisions based on his or her past experience with similar events. But if the firefighter confronts bright green flames or some other abnormality, decision making is likely to slow down, becoming more deliberate as the firefighter weighs information and debates possibilities. This takes significantly more brainpower, so humans tend to revert to fast decision making whenever possible.

Where security decisions are concerned, however, it can be invaluable to exert the extra effort to slow down, debate different possibilities, and bring in different points of view to make more informed, rational decisions that acknowledge biases but don’t fall prey to them.


Information and Confidence

Humans are notoriously overconfident. A famous 1981 study by Swedish researcher Ola Svenson found that 93 percent of American drivers surveyed considered themselves above average. Statistically, this is impossible. But security professionals seem to fall into the same trap when it comes to making informed decisions, believing that the rules of informed and rational decision making can be circumvented with enough professional experience.

One of the authors of this article (de Wit) has studied security risk decision making within both the physical security and cybersecurity domains in recent years as part of a doctoral research program. Across three surveys so far—covering approximately 170 security decision makers—the research explored security professionals’ relationship with information, how that information affects their decisions, and how biases influence their decision making.

According to the author’s research, 56.6 percent of security professionals indicated that even if they lack exact information on the consequences of a security risk, they can still estimate them; 60.9 percent said they could accurately estimate a risk’s likelihood, even without exact information. Three-quarters said that situations where they can estimate neither the consequences nor the likelihood rarely occur.

This lack of exact information—which the security professionals were cognizant of—did not influence the confidence they expressed in their decisions. When asked how confident they were in their security judgments concerning the consequences of risks, 73.8 percent indicated they were always confident or confident most of the time. Likewise, 67.7 percent said the same of their security judgment when it came to the likelihood of risks.

Security professionals have preferences for some information sources over others, as well. Experts (76.3 percent) and peers (56.4 percent) were the most trusted, and 62.2 percent said their own experience is very important for their judgment. Only 15 percent said information from higher management is very important for risk management decisions.

The more experience security professionals have, the more confident—and potentially overconfident—they are in their decisions, the research found. When researchers asked whether the security professionals would like more information to support their decisions, the more experienced professionals declined, choosing to rely on their gut feelings instead.


Biases to Watch

A huge number of cognitive biases have been identified in recent years, and those biases can wreak havoc on risk management decisions. The author’s research analyzed a set of biases against security decision making practices and found several that are likely to influence risk management.

Certainty effect. It’s time for a gamble: If you had to make a choice between a 100 percent chance of receiving $150 and an 80 percent chance of receiving $200, which would you choose? When the outcome is a gain, decision makers under the influence of the certainty effect will tend to prefer certainty over a 20 percent chance of receiving nothing, even though the gamble, with an expected value of $160 versus a certain $150, can be considered the optimal choice.

Security professionals show a similar level of vulnerability to this bias as laypeople. Three-quarters of security professionals selected the certain but less optimal outcome, indicating that in real-life situations they may not maximize security risk reduction or may spend resources less efficiently. Even when researchers replaced the monetary gains and losses with security risk reduction to reflect a more realistic situation, this effect still guided the security professionals’ decisions.

Reflection effect. This is similar to the certainty effect, but the reflection effect looks at losses instead of gains. Given the choice between a certain loss of $150 and an 80 percent chance of losing $200 (and therefore a 20 percent chance of losing no money at all), people regularly take the gamble.


Security professionals gamble here at a similar rate to laypersons. When a possible loss is at stake, 84 percent of security professionals take the gamble on a possibly higher loss rather than accepting a certain but lower loss. Because one might expect risk-averse behavior from security risk professionals, this finding is surprising.
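For readers who want to check the arithmetic, the short sketch below works through both gambles using the illustrative dollar figures from the examples above (they are not data from the surveys). On pure expected value, the gamble is the better choice for gains and the worse choice for losses, which is the opposite of what most respondents chose in each case.

```python
# A minimal sketch of the expected-value arithmetic behind the certainty
# and reflection effect gambles. Dollar amounts are the illustrative
# figures from the text, not data from the author's research.

def expected_value(probability: float, amount: float) -> float:
    """Expected value of a gamble with a single non-zero outcome."""
    return probability * amount

# Gains: a sure $150 versus an 80 percent chance of $200.
print(expected_value(1.0, 150))   # 150.0 -- the certain option most people pick
print(expected_value(0.8, 200))   # 160.0 -- yet the gamble is worth more on average

# Losses: a sure loss of $150 versus an 80 percent chance of losing $200.
print(expected_value(1.0, -150))  # -150.0 -- the smaller expected loss
print(expected_value(0.8, -200))  # -160.0 -- yet most respondents took this gamble
```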

Isolation effect. Very few decisions happen in isolation, and when a decision contains several stages, decision makers tend to ignore the first stages and focus only on the last one. This bias reflects a failure to take a comprehensive view of a combination of decisions—how one factor will influence another—and it can lead to suboptimal outcomes.

A layered defense strategy, one of the fundamental principles of security practice, illustrates how this effect can play out. Layered defense is the implementation of multiple, independent risk reduction measures. These layers, when taken in combination—not isolation—should reduce risk to an acceptable level.

Susceptibility to the isolation effect can be tested, unfortunately with sobering results: 83 percent of security professionals in the research chose suboptimal outcomes. What might this look like in concentric layers of security? Most likely, the isolation effect comes into play when one of the layers receives an outsized amount of attention and the rest of the layers are neglected.
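As a rough, hypothetical illustration of why layers have to be judged together, the sketch below compares three balanced layers with a lopsided arrangement in which one layer gets most of the attention. The bypass probabilities are assumed values chosen for illustration, and the layers are treated as independent, as the layered-defense principle intends; this is not a model from the research.

```python
# A rough illustration of judging security layers in combination rather
# than in isolation. Bypass probabilities are assumed, illustrative values,
# and the layers are treated as independent.

def breach_probability(bypass_probs):
    """Chance an attacker gets past every independent layer."""
    result = 1.0
    for p in bypass_probs:
        result *= p
    return result

balanced = [0.2, 0.2, 0.2]    # three layers, each bypassed 20 percent of the time
lopsided = [0.05, 0.6, 0.6]   # one heavily resourced layer, two neglected ones

print(breach_probability(balanced))  # ~0.008 -- about 1 in 125 attempts get through
print(breach_probability(lopsided))  # ~0.018 -- more than twice as likely overall
```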

Nonlinear preferences. One percent is one percent, no matter which percent it is, right? Wrong—at least where human decision making is concerned.

This bias (also known as value function or probability distortion) demonstrates that the perception of 1 percent when changing from 100 percent to 99 percent is very different than when changing from 21 percent to 20 percent. This also works in larger percentage jumps—100 to 25 versus 80 to 20, for example. Both were divided by four, but the change in perceived value from 100 percent to a quarter feels significantly more drastic. (Research has determined that the single percentage change between 100 and 99 percent is weighted to hold the value of 5.5 percent, oddly enough.)

Small probabilities tend to be overrated as a result of nonlinear preferences, which can strongly affect security decisions. For example, the probability of a terrorist attack is usually quite low, but security professionals are likely to devote outsized resources to mitigating terrorism risks, both because of their potentially high impact and because nonlinear preferences add extra weight to the low probability.
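One way to see this weighting on paper is the probability weighting function from Tversky and Kahneman’s later work on prospect theory. The sketch below uses one published parameter estimate (gamma = 0.61); the exact weights depend on the parameterization chosen and are not figures from the author’s surveys.

```python
# A minimal sketch of probability weighting (nonlinear preferences) using
# the Tversky-Kahneman (1992) weighting function. The parameter gamma = 0.61
# is one published estimate; exact weights vary with the parameter chosen.

def weight(p: float, gamma: float = 0.61) -> float:
    """Decision weight w(p) = p^g / (p^g + (1 - p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# The same one-point drop in probability is perceived very differently...
print(weight(1.00) - weight(0.99))  # ~0.09: giving up certainty feels like a big step
print(weight(0.21) - weight(0.20))  # ~0.006: the same drop mid-range barely registers

# ...and small probabilities are overweighted.
print(weight(0.01))  # ~0.055: a 1 percent chance carries roughly 5.5 percent of the weight
```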

Conjunction fallacy. Consider two scenarios in which you are the security manager of a private pharmaceutical company:

  1. How would you estimate the likelihood of experiencing a successful attempt to extract intellectual property during the upcoming year?
  2. How would you estimate the likelihood of experiencing a successful attempt to extract intellectual property by suspected state-affiliated attacker groups specifically targeting COVID-19 research, using one or more insiders during the upcoming year?

Did the additional conjunctions—“by suspected state-affiliated attacker groups,” “specifically targeting COVID-19 research,” and “using one or more insiders”—change your risk assessment? Logic leads to the conclusion that the short version is more likely, because each additional detail makes the case more specific and can only reduce the likelihood. The results of research on security professionals show the opposite effect.

The survey participants were divided into two groups and were presented with either the short or the long version of the scenario. On average, the likelihood of the longer scenario was estimated 12.5 percent higher. Nearly three-quarters of security professionals assessed the detailed case study as more likely than the shorter one, with no significant influence from security training or education level.

This is an example of the conjunction fallacy—the more detailed a scenario, the more realistic and likely it feels. It has potentially serious implications for real-life security risk assessments. More information on a security risk almost automatically and unconsciously raises an individual decision maker’s assessment of that risk when, logically, the more specific details should reduce the likelihood. This could lead a retailer, for example, to invest time and resources in addressing the risk of a high-profile flash robbery at a flagship store—which is a specific, low-frequency incident at a specific location—instead of shoplifting as a whole.
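A back-of-the-envelope calculation makes the logic concrete. The probabilities below are made-up illustrations rather than survey results, but however the numbers are chosen, each added condition can only keep the joint probability the same or shrink it.

```python
# A minimal sketch of the conjunction rule: adding conditions can only
# keep or shrink a probability. All numbers are made-up illustrations.

p_ip_theft = 0.30           # P(successful IP extraction attempt this year) -- assumed
p_state_affiliated = 0.40   # P(state-affiliated group | a theft occurs) -- assumed
p_covid_target = 0.50       # P(targets COVID-19 research | the above) -- assumed
p_uses_insider = 0.60       # P(uses one or more insiders | the above) -- assumed

# P(A and B and C and D) = P(A) * P(B|A) * P(C|A,B) * P(D|A,B,C)
p_detailed = p_ip_theft * p_state_affiliated * p_covid_target * p_uses_insider

print(p_ip_theft)   # 0.3    -- the short scenario
print(p_detailed)   # ~0.036 -- the detailed scenario can never be more likely
```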


Corrective Action

The research indicated that security professionals are as vulnerable as laypeople to the studied cognitive biases. As a result, their decisions are likely to be influenced by bias and may turn out to be less optimal, efficient, or effective. Professional security experience, security training, and level of education showed no observable, significant effect on circumventing cognitive bias.

However, all hope is not lost. Once security professionals begin to recognize biases at work in their decision-making processes, they can take action to mitigate them—at least on less time-sensitive, large-scale decisions like organizational strategy or broad risk mitigation efforts. There are many techniques available to root out the influence of bias and mitigate its risks, and while extensive research has been done on this topic elsewhere, a few simple suggestions to start with are listed below.

Gather a group. Multiple viewpoints and healthy debate can help identify cognitive missteps and uncover unorthodox solutions. If appropriate, bring in uncommon participants—such as interns, security officers, HR professionals, or facilities staff—for additional perspectives. Consider appointing a devil’s advocate within the group to challenge every assumption and point out potential pitfalls. This person is meant to be somewhat exasperating, so appoint the naysayer with care and outline his or her responsibilities to the group.

Slow down. Fast thinking often relies on snap decisions and intuition, rather than reason. If the situation allows, plan to make decisions over longer periods of time and use that time to gather additional information and input.

Aim for options. Don’t stop after you reach one strong contender to mitigate risk. Aim for five instead. By fixating on the first strong solution, decision makers fall into systems of fast thinking. Requiring additional options will force a decision maker to slow down, reconsider available information and possibilities, and likely arrive at a better-reasoned conclusion.

Undercut the optimism. Optimistic thinking is a hallmark of many decision-making missteps. Harvard Business Review recommends performing a premortem, in which decision makers imagine a future failure and then work backward to explain its cause. This technique helps identify problems that the optimistic eye for success fails to spot. At the same time, it helps decision makers prepare backup plans and highlights factors that may influence success or failure.

The most important step in this process is to recognize the “flaw in human judgment,” as Kahneman calls it. And despite security professionals’ proclaimed confidence in their own judgment, they are as vulnerable as anyone else to bias and similar cognitive peculiarities. Knowing what you don’t know and acting on that awareness should make security practitioners more reasoned in their decisions.

Johan de Wit works for Siemens Smart Infrastructure as the technical officer, enterprise security, and he is involved in Siemens’ global portfolio development. He holds a master’s degree in security science and a PhD research position at Delft University of Technology, where he is exploring the characteristics of security risk assessments. He is a member of various expert communities and advisory boards, including the Dutch National Cyber Security Center, ASIS International, the Information Security Forum, and the U.S. State Department’s Overseas Security Advisory Council (OSAC) in The Netherlands. He is a regular speaker at conferences and universities and has published multiple papers and articles.

Claire Meyer is managing editor of Security Management. Connect with her on LinkedIn, on Twitter @claireameyer, or email her directly at [email protected].
