A guide to helping people overcome “cyber attack fatigue”


15 August 2019

| Author: Keil Hubert


The more we warn people about threats that never manifest in their personal lives, the more likely they are to become desensitised to our warnings. Rather than send fewer warnings, we need to make context and consequences integral parts of our warnings, to help overcome people’s unconscious decision-making processes.

How would you feel about receiving fewer warnings about emerging cyber threats? Do you think you’d be more productive if you didn’t have to interrupt your work to read, digest, and act on “breaking news” alerts? Or would you feel more vulnerable and less confident, knowing that your organisation’s technology experts were aware of dangerous cyber threats and weren’t keeping you informed?

This problem has been getting some much-needed attention in Security Awareness circles recently thanks to parallel discussions in the severe weather community.

In July, Washington Post reporter Tim Henderson’s article [1] “Consequences of alert fatigue can be deadly” highlighted how forecasters, news agencies, and government offices are considering a change to how they publish hazard alerts. Henderson led with a story of a man who ignored emergency warnings and drove his truck into a flooded section of road – with deadly results.

“The deadly situations illustrate what experts increasingly see as two common reasons for unnecessary storm deaths,” Henderson wrote. “Unfamiliar terrain that leads to bad decisions, and people ignoring too-familiar warnings that haven’t panned out in the past.”

That ominous statement resonates with security awareness practitioners; pushing news of dangers and countermeasures is our stock in trade. We warn our colleagues about new threats so that people can recognise signs of an attack and take the appropriate defensive actions.

What if – as the article suggests – our frequent warnings are actually desensitising our colleagues to the threat universe and are therefore making them more vulnerable to preventable mistakes? Are we making things worse?

Or are we just wasting our time and annoying everyone else? You have to wonder …

Henderson suggests (and I agree with him) that the answer is more complex than a simple yes or no. “The background noise of too many warnings can be just as dangerous as no warning at all.” People do get emotionally and cognitively desensitised to repeated warnings.

Repeated exposure to frightening inputs can cause a person to react to warnings with progressively less fright or anxiety. The more we cry “wolf!” the less people react.

There are likely additional factors in play beyond desensitisation, and one in particular might be entirely our own fault: survivorship bias. This is a logical error in which people unconsciously focus on experiences that worked out all right (for themselves or for others) while overlooking cases where things went wrong, because those failures are invisible to them or outside their personal experience.

From a hazardous weather perspective, this might manifest as follows: a driver comes across a flooded section of road during a heavy rainstorm. She recalls the times that she’s safely driven across similar flooded stretches of road and decides this occurrence must also be safe to cross. In such an instance, a “flash flood warning” might not be compelling enough to influence the driver’s behaviour.

After all, her experience shows that driving through flooded areas is safe, especially if she doesn’t personally know anyone who has been harmed in such a gamble. She chooses to proceed despite the warning, gets stuck, and drowns.

Now, consider that same scenario from a cybersecurity perspective: a user hears about a neat new mobile application for his smartphone and finds a download site on the Internet. He recalls his security awareness training about only downloading approved apps from approved sites. He also recalls the times that he’s safely downloaded apps to his personal smartphone and decides this app must also be safe to install.

Therefore, a “new malware” article or a “breaking news” cyberthreat alert might not be compelling enough to influence his behaviour. His experience shows that downloading Android apps from random websites is safe.

He chooses to proceed despite a stack of mandatory policies prohibiting it, infects his phone with malware, and thereby infects the network, causing massive damage. As a result of his wilful noncompliance with security regulations and his dismissal of a timely warning, he later finds himself subject to significant administrative action.

Imagine explaining at your next interview that the reason you left your previous role was that you wiped out the company’s entire production network by exposing it to ransomware. Think they’ll hire you?

This user’s bad decision-making process may be complicated by just how good his company is at protecting him from cyber threats. When he reads an alert about a new virus or vulnerability and then never hears about anyone in the company being affected by the threat, he learns to “tune out” the warnings.

The tremendous work performed by his IT department to patch thousands of PCs overnight, and the frantic effort to craft new filtering rules for the company firewall, are almost completely invisible to users outside IT or security. This becomes a problem when users learn (logically, but incorrectly) that “warning” plus “silence” equals a false alarm.

Bear in mind that survivorship bias is only one of several logical fallacies that come into play with the “warning desensitisation” problem. We also have to factor in the availability heuristic, selective perception, and several other contributors to users’ conduct.

Unconscious decision-making biases and logical errors affect everyone. As such, a large element of our role in security awareness is finding ways to short-circuit these inherent “wetware” vulnerabilities. These are preventable mistakes: the kind that leads very smart and well-educated people to make poor choices.

Desensitisation is certainly a problem and we need to actively combat it. That said, it’s my opinion that pushing fewer warnings is counterproductive (at least, in the cybersecurity world). If anything, I maintain that we need to push more alerts rather than fewer, specifically to address humans’ natural tendency to associate media silence with a lack of criticality.

First, we send the warning for a new cyber threat, then a follow-up explaining how much effort went into bracing the enterprise against the coming storm.

Finally, we send after-action review notes about people and places that fell victim to the threat so that users can learn to associate the failure to mitigate threats with real-world consequences. This one-two-three punch will help users associate our early warnings with actual consequences, so they’re more likely to pay attention to those warnings … and take them seriously.

[1] Reprinted in the Dallas Morning News on Sunday, 14th July 2019.
