Why 2015 won’t be like 2014—oh, wait
As we all know, 2014 was a banner year for security breaches. I won’t even list the victims, not least because that would make for a very long, boring blog post. Instead, let’s talk about how we can make 2015 the year we fix IT security.
This is, of course, against my own best interests. Anyone who built a presentation about security last year was faced with an overwhelming number of potential examples, in every industry and every part of the world. The difficulty was not in searching out an example, but in making sense of the sheer volume of breaches, vulnerabilities, patches, fixes, rumours, and commentary.
The part that is hardest to explain is that most of these breaches did not involve elite hackers wearing sunglasses in the dark, coding fiendishly advanced attacks that overwhelm companies’ carefully prepared defences. Unfortunately, most of the data breaches that made the news last year were preventable.
Heartbleed bleeds on
Let’s take the example of the Heartbleed OpenSSL bug. This was really the first superstar vulnerability, or “vuln” as they are known in the security community. Vulns are normally identified by CVE (Common Vulnerabilities and Exposures) identifiers—Heartbleed’s proper name is actually CVE-2014-0160. Of course, only security pros remember CVE IDs, and they certainly don’t make the mainstream news. Heartbleed was different because it had a catchy name, a website (heartbleed.com), and a logo.
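For readers who don’t follow vulnerability feeds every day, CVE identifiers have a simple fixed shape that can be matched mechanically. The helper below is a hypothetical illustration of that naming scheme, not part of any official tooling:

```python
import re

# CVE identifiers follow the pattern "CVE-YYYY-NNNN", where YYYY is the
# year the ID was assigned and the sequence number is four or more
# digits (the format was widened beyond four digits in 2014).
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def looks_like_cve(identifier: str) -> bool:
    """Return True if the string is shaped like a CVE identifier."""
    return bool(CVE_PATTERN.match(identifier))
```

So `looks_like_cve("CVE-2014-0160")` matches Heartbleed’s official ID, while a catchy name like `"Heartbleed"` does not; that gap between the machine-friendly ID and the memorable name is exactly why the branding mattered.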
Together, these factors meant that news about Heartbleed spread rapidly beyond the IT security community and into the mainstream press. It was arguably the most high-profile bug ever, and no IT professional could plausibly claim to be unaware of it.
The bug that would be christened Heartbleed was introduced into the OpenSSL library in 2012, where it lurked undiscovered until it was privately reported to the OpenSSL team on the 1st of April 2014. As is generally the case, a fix was available in pretty short order: the patched release, OpenSSL 1.0.1g, shipped on the 7th of April, the same day the bug was publicly disclosed.
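To make the patching step concrete: the affected releases were OpenSSL 1.0.1 through 1.0.1f, and 1.0.1g was the fix. A rough, hypothetical check of a version string against that range might look like the sketch below; note that it deliberately ignores beta builds and distribution backports (many Linux distributions patched 1.0.1e packages in place without changing the letter), which is what makes real-world detection harder than it looks.

```python
# Hypothetical helper: classify a reported OpenSSL version string
# against the Heartbleed (CVE-2014-0160) affected range.
# Versions 1.0.1 through 1.0.1f shipped with the bug; 1.0.1g and
# later are fixed; the 1.0.0 and 0.9.8 branches never contained it.
# Caveat: distro-backported fixes keep the old version string, so
# a real scanner must probe the heartbeat extension, not the banner.
def is_heartbleed_vulnerable(version: str) -> bool:
    if not version.startswith("1.0.1"):
        return False  # other branches were never affected
    suffix = version[len("1.0.1"):]
    if suffix == "":
        return True   # plain 1.0.1 shipped with the bug
    letter = suffix[0]
    # patch letters a-f are vulnerable; g was the fix release
    return letter.isalpha() and letter <= "f"
```

For example, `is_heartbleed_vulnerable("1.0.1f")` is true while `is_heartbleed_vulnerable("1.0.1g")` is false.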
You would think that would be the end of the story, right? Someone made a programming error, someone else found the issue, a patch was issued, all the users installed the patch, and everybody lived happily ever after.
Unfortunately, that’s not quite what happened.
Despite the unprecedentedly high profile of the Heartbleed bug, by May—a month after the patch was issued—there were still more than 300,000 publicly visible web servers vulnerable to Heartbleed.
By the end of June, that number had still not gone down significantly.
At the end of August, there was a high-profile security breach at Community Health Systems where the attack vector was shown to be Heartbleed.
But surely now, in January 2015, Heartbleed is just an obscure footnote in IT security history, right?
Unfortunately, not quite: it seems that there are a quarter of a million systems out there still vulnerable.
Note that all of these numbers count only systems visible on the public internet; systems behind firewalls or NAT, or otherwise hidden from a public scan, are not included. Another lesson from 2014’s security news is that perimeter defence alone is no longer enough: vulnerable systems that are not reachable from the public internet still need to be fixed, so that attackers who do breach the perimeter cannot roam freely around the corporate network.
How does this happen, and how can we fix it?
The problem, as we have seen with the Heartbleed example, is not the lack of a patch or fix for a particular issue. Nor is it a lack of awareness: Heartbleed may have been the first “celebrity bug” to hit the mainstream news consciousness, but others have followed, including Shellshock, POODLE, and Sandworm. There are also dedicated sources of security information that infosec professionals consult on a regular basis.
The problem is the mismatch of priorities between the two groups involved in solving it. The security team uses expertise, tools, and procedures to identify potential and actual vulnerabilities in the IT environment. When it comes to actually fixing those vulnerabilities, however, a different group takes over: IT operations, who are responsible for making changes to those systems. These two groups perceive risk in very different ways.
Security risk is the risk that attackers successfully exploit a vulnerability and penetrate the IT environment, stealing data, disrupting business, or otherwise causing trouble. The security team’s interest is in minimising the attack surface (the number of services exposed to attack) and minimising the window of vulnerability (the time during which a known vulnerability remains unpatched).
Operational risk, on the other hand, is the risk that IT systems become unable to support the business processes they were put in place to serve. The operations team’s interest is in avoiding downtime and major performance impacts. Unfortunately, many security fixes require reboots or other disruption to business operations, and all too often the immediate fear of disrupting the business outweighs the possible future impact of a security breach.
The SecOps gap
This mismatch between security risk and operational risk is what leads to the SecOps gap. The only way to prevent 2015 from bringing another crop of bad news about IT security is to close the gap by figuring out how to align security and operations priorities. At BMC, we are working to help operations teams communicate better with their colleagues in IT security.
These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.