What can we learn from the WhatsApp breach? – TEISS: Cracking Cyber Security
Paul Farrington, EMEA CTO, Veracode, discusses WhatsApp’s handling of its vulnerability disclosure and what this breach says about the way organisations detect and disclose software vulnerabilities.
In May, WhatsApp revealed details of a vulnerability in its system that could have allowed hackers to gain access to users’ smartphones. WhatsApp is one of the most popular messaging tools in the world, with a sizeable 1.5 billion monthly users. It is favoured for its high level of security and privacy, as messages are encrypted end-to-end.
The good news for end users is that the vulnerability has a fix, and an updated version of the app has been made available as an extra precaution. However, the incident has underscored the importance of secure code, and this breach in particular says a lot about the way organisations more broadly detect and disclose software vulnerabilities.
In this instance the breach was caused by CVE-2019-3568, a buffer overflow flaw in WhatsApp's VoIP stack. What is important to note is that this isn't a new class of vulnerability.
In fact, according to Veracode’s report State of Software Security Volume 9, it is the 25th most common vulnerability, and is found in three percent of applications.
Although it may not be as prevalent as some other flaw categories (such as XSS or SQL injection), it is a highly exploitable flaw. Organisations should be well aware of it and have plans in place for addressing the vulnerability quickly.
Detection and resolution
Yet the data from the Veracode report, and this breach, reveal that buffer overflow flaws take a troubling amount of time to fix: it took organisations an average of 225 days to address 75 percent of them. Why so long? This is a highly exploitable flaw, and the damage it stands to cause is serious.
We know today’s cyber environment is incredibly complex. The sheer size of organisations’ digital footprint and technological infrastructure enables a wealth of opportunities for vulnerabilities to exist and persist.
Managing such challenges means that organisations are making daily trade-offs between security, practicality and speed. It is simply not possible to tackle everything at once, which makes a process for smart prioritisation critical.
Lessons from WhatsApp
When categorising a vulnerability, organisations most commonly look at severity first and foremost: the potential impact if the flaw were exploited, and how business-critical the application containing it is.
However, an additional dimension which should be taken into account is exploitability. This is where the WhatsApp incident response was left wanting.
Exploitability specifically addresses the likelihood that a flaw will be attacked, based on the ease with which exploits can be executed. Incorporating exploitability ratings alongside severity allows organisations to prioritise those vulnerabilities that are both high impact and easy to take advantage of.
Put simply, a high severity flaw with a very high exploitability score introduces a lot more risk than a high severity flaw with a very low exploitability score.
Four steps to lower cyber security risk
This categorisation is just the first step in a security response. As with anything in business, security has to have a dedicated incident process in place to maximise detection through to resolution, and ultimately limit the impact on the business. An effective process must go beyond pure categorisation and include these four steps:
- Consider all dimensions of risk: the sheer volume of open flaws within enterprise applications is too staggering to tackle at once. This means that organisations need to find effective ways to prioritise which flaws they fix first. While many organisations are doing a good job prioritising by flaw severity, they are not effectively considering other risk factors such as the criticality of the application or exploitability of flaws.
- Fixing flaws quickly matters: the speed at which organisations fix flaws they discover in their code directly mirrors the level of risk incurred by applications. The faster organisations close vulnerabilities, the less risk software poses over time.
- Collaboration between security and development is critical: the practice of security and development teams working in close collaboration to improve software security, known as DevSecOps, is proving its worth. Our research shows that the more often an organisation scans its applications for flaws each year, the faster security fixes are made. The frequent, incremental changes DevSecOps brings make it possible for these teams to fix flaws far faster than traditional development teams.
- Address all vulnerable open source and first-party components: enterprises still struggle with vulnerable open source components in their software. As organisations tackle bug-ridden components, they should consider not just the open flaws within open source libraries and frameworks, but also how those components are being used. Some component flaws may have mitigating factors if the components are not used in a way that exposes the flaw to exploitation.
The bottom line is that organisations face a huge volume of vulnerabilities in their software, and a shortage of skilled security and development staff to manage it.
However, consumers are increasingly aware, and less tolerant, of security incidents that affect their personal security. It is critical that businesses put the right processes in place to prioritise fixing the vulnerabilities that represent not just the highest severity, but the greatest exploitability.
More than this, they must enable development and security teams to work as efficiently as possible, identifying flaws early in the software development lifecycle and fixing them quickly.