Keil Hubert on managing human instinct in security


Imagine that you’re attending an industry conference. The organiser chose a casino with an attached conference centre for the event. You’re eating breakfast, chatting with your peers, waiting for the chime announcing the first session of the day. Suddenly, an excited voice carries over the morning chatter: “I just got two hundred eighty!” All eyes turn to the speaker; she looks feverish with excitement. Instantly, several conference attendees sprint for the room’s exits, eager to take advantage of the craps table while the dice are “hot.” They take out their cash and buy a pile of chips, forgetting they’re supposed to be in a scheduled session. Was it worth it? Were the dice rolling their way?

Of course, dice games do not work that way. The so-called “gambler’s fallacy” (or “Monte Carlo fallacy”) is a type of apophenia, the human tendency to recognize patterns where none exist. The woman’s announcement triggered a flaw in human pattern recognition: some people believe luck comes in “streaks” that can be exploited. Once a game (or a slot machine, or a roll of lottery tickets) starts paying out, the thinking goes, it will keep paying out for a while.
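If you want to watch the fallacy fall apart, a quick simulation makes the point. The sketch below is a minimal illustration of my own in Python (it simplifies each bet to an independent fair coin flip rather than real craps odds): it measures how often a bet wins immediately after a winning streak, and the answer stays stubbornly near 50 percent no matter how “hot” the table looked.

```python
import random

# Minimal sketch: model each bet as an independent fair coin flip
# (a simplification; real craps odds are slightly worse) and measure
# how often a bet placed right after a winning streak actually wins.
def win_rate_after_streak(trials: int = 1_000_000, streak: int = 3) -> float:
    wins_after_streak = 0
    bets_after_streak = 0
    run = 0  # current count of consecutive wins
    for _ in range(trials):
        won = random.random() < 0.5  # each outcome is independent
        if run >= streak:
            bets_after_streak += 1
            wins_after_streak += int(won)
        run = run + 1 if won else 0
    return wins_after_streak / bets_after_streak

if __name__ == "__main__":
    # Hovers around 0.5: the "hot" table has no memory of earlier rolls.
    print(f"Win rate after a 3-win streak: {win_rate_after_streak():.3f}")
```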

What makes this 2002 story interesting is that I knew the “two hundred eighty” lady. Ms. 280 was from my organisation, which is how I knew she was an avid bowler in her spare time. When she learned the casino had a 24/7 in-house bowling alley, she felt compelled to play a few frames while the rest of us were having breakfast. She didn’t win $280 in the casino … she bowled a 280 (out of a possible 300) with a wicked twist of a bowling ball. Ms. 280 was proud of her sporting accomplishment and wanted to share it with the rest of our delegation. That turned into a misunderstanding that led to a lot of lost wagers (and wages).


The gamblers also paid attention to only one factor – the shouted number – rather than the context of the moment. A non-gambler entering the room from the direction opposite the casino floor shouldn’t have led anyone to conclude that the speaker had just enjoyed a financial windfall.

The plot of William Gibson’s 2003 novel Pattern Recognition is based on this exact fallacy. The story’s protagonist gets involved in pursuing the creator of a series of mysterious video clips leaked onto the Internet to learn what they mean. Thomas Wagner wrote in his review: “The very randomness and ineffability of the clips flies in the face of our natural human tendency towards pattern recognition, you see, and even though Gibson’s depiction of the subculture that surrounds ‘following the footage’ seems a bit unrealistically large – it’s hard to buy the idea that some renegade filmmaker posting weird film clips on the internet would literally turn the world on its ear. …But the point is that we as people don’t like uncertainty, don’t like knowing that there’s something we can’t comprehend. And if we can’t fit something into an existing pattern, then by golly we’ll come up with one.” [1]

This fallacy is a known problem for security experts, especially those whose specialization involves hunting threats. We’re so eager to find a specific trend within a massive sea of data that we sometimes start to see patterns where none exist. This isn’t unique to security people; everyone is prone to it. It becomes an impediment to our roles, though, when we deploy resources (like time, personnel, or money) in pursuit of a quarry that isn’t truly there.

This fallacy is also a problem for security awareness professionals. A significant element of our role is to energize our people to take an active interest in what’s going on around them, so they’ll be more likely to spot a possible cyberattack and take appropriate countermeasures. This requires us to teach people how to spot deviations from the norm; that is, how to recognize what a system, a network, or an office looks, sounds, and feels like in its normal, uncompromised state. That way, when someone experiences something markedly different from the norm – like a PC running slower than usual, or a server requesting a manual re-authentication – they’ll recognize the behaviour as aberrant and as something to be reported ASAP.
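The same “learn the baseline, flag the deviation” logic applies whenever we automate the watching. The sketch below is a hypothetical illustration (Python, with made-up page-load times; it isn’t drawn from any real monitoring product): it learns what “normal” looks like from a small baseline sample and flags any reading that strays several standard deviations from it.

```python
from statistics import mean, stdev

# Hypothetical sketch: flag a reading that sits far outside the baseline.
def is_aberrant(baseline: list[float], observation: float,
                threshold: float = 3.0) -> bool:
    """Return True when `observation` is more than `threshold` standard
    deviations away from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# A week of "normal" page-load times (seconds), then one markedly slow one.
normal_times = [1.1, 0.9, 1.0, 1.2, 1.0, 0.95, 1.05]
print(is_aberrant(normal_times, 4.8))  # True: aberrant, worth reporting
```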

This is where apophenia can trip us up. We need to teach people both active and passive pattern recognition for large and complex systems (like an office network). At the same time, we need to teach people to be discerning and sceptical about the patterns they think they perceive within large amounts of random data. It’s difficult to pitch both concepts in the same training course without seriously confusing your audience.

This is one more reason why early field testing of new material is crucial before you commit to taking a training product live. What makes perfect sense to the designer might not register with a highly educated, highly intelligent person because of some obscure concept or factor that the designer knows (but forgot to include in the content).

The approach we’ve taken is the path of encouragement: we start by telling people to trust their instincts. Humans naturally adapt to the sensory inputs and experiences in their environment – mostly unconsciously. As people become familiar with their surroundings, they focus less attention on potentially alarming inputs and more on their work. This helps to “tune” their instincts, which then allows them to react viscerally to any unexpected deviation from what they’ve come to accept as normal.

We’re wired this way. Nigel Nicholson, writing in a 1998 Harvard Business Review article, stated: “In an uncertain world, [the people] who survived always had their emotional radar – call it instinct, if you will – turned on. And Stone Age people, at the mercy of wild predators or impending natural disasters, came to trust their instincts above all else. That reliance on instinct undoubtedly saved human lives, allowing those who possessed keen instincts to reproduce. For human beings, no less than for any other animal, emotions are the first screen to all information received.”

When something feels “off,” we want people to acknowledge and pay attention to the feeling. Rather than react out of fear or anxiety, though, we teach people to acknowledge the emotional prompt, set it aside, and examine the input rationally. What prompted the disquieting sensation? What changed in their environment?

The next step is to react to the emotional prompt deliberately rather than reflexively. We want people to examine their situation dispassionately; to let their conscious mind retake control from their unconscious mind, and thereby allow their reason and technical training to guide their response.

Remember that instincts can be honed through training and experience. This works the same in phish recognition as it does for, say, bowling. The more you practice and learn how the game works, the more parts of the process get delegated to your unconscious mind.

That has proven to be a difficult topic to teach: asking people to react primally yet respond intellectually. It isn’t too much to ask of a person. It is, however, a difficult practice to master because it flies in the face of conventional training, where most courses emphasize one approach or the other, not both. “When you feel uneasy, report the message as a phish” plays to the immediate emotional reaction. “Examine the message headers to see if the sending mail server matches the stated sender’s address” plays to the intellectual approach. Both are valid phishing defence tactics. Teaching people to “sense” the attacker’s “call to action” (the emotional attack in the phish) and then to step back and deconstruct that “call” is the hybrid approach we’d prefer people apply.
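For the intellectual half of that hybrid, here is roughly what the header check looks like when scripted. This is a minimal sketch of my own (Python’s standard email module, applied to a made-up message), and it only compares the visible From: domain against the Return-Path domain; legitimate bulk mail and mailing lists mismatch all the time, so treat a failed match as a prompt for closer scrutiny rather than a verdict.

```python
from email import message_from_string, utils

# Minimal sketch: compare the visible From: domain with the Return-Path
# domain. A mismatch is a prompt for scrutiny, not proof of a phish.
def sender_domains_match(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    _, from_addr = utils.parseaddr(msg.get("From", ""))
    _, return_addr = utils.parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rpartition("@")[2].lower()
    return_domain = return_addr.rpartition("@")[2].lower()
    return bool(from_domain) and from_domain == return_domain

# Made-up example message with the classic emotional "call to action."
raw = (
    "From: IT Support <helpdesk@example.com>\r\n"
    "Return-Path: <bounce@totally-not-example.biz>\r\n"
    "Subject: Urgent: re-authenticate now\r\n"
    "\r\n"
    "Click here immediately...\r\n"
)
print(sender_domains_match(raw))  # False: the headers don't line up
```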

To help people adapt to this approach, consider combining recurring themes in your mass-communications campaigns with triggers from your simulated phishing campaigns, reinforcing a react-then-think model in people’s responses. One concern is that this approach may teach specific and incorrect patterns – broader ones than intended – including extraneous indicators that could lead to suboptimal responses. The last thing we want is to create our own “I just got two hundred eighty!” trigger that causes confusion. It’ll be interesting to see how this approach plays out over time.


[1] Emphasis added.


