The IT security news over the past year or so has been really bad. Sometimes it seemed that each day brought the story of yet another breach, every one bigger than the one before. According to a recent report, more than one billion records were exposed across more than 1,500 individual incidents.
It can be difficult to explain how this can happen. After all, most of these breaches exploit known vulnerabilities for which patches have been available for some time. People outside IT operations ask, what is the hold-up? The experience most people have of patching is on their personal computers, where the process nowadays is fairly streamlined and usually painless. You turn on background patching, and you’re done, right? So what’s so hard about patching servers in a timely manner? Surely, being so much more critical than individual laptops, the process for servers is even more streamlined?
Unfortunately not. Servers are more complicated beasts than laptops, and patching them is correspondingly a more complex and fraught process.
The difference is not so much technological. Server operating systems—Windows, UNIX, Linux—all have desktop variants (with the Mac OS representing desktop UNIX). The issue is what runs on those servers and how many different people have access to them.
An operating system patch breaking a desktop application is rare and noteworthy, and the manufacturer of either the OS or the application hurries to provide an update.
Server applications are more complex, more customised, and more rarefied than desktop applications, and are far more sensitive to their environment—and harder to update. An OS patch having some sort of unforeseen effect on the performance or functionality of an enterprise application is not at all unheard of.
Worse, many users rely on those applications in one way or another to do their jobs. Even the successful deployment of a patch generally requires that some services or even the entire operating system be restarted, which will prevent users from accessing their application. For this reason, all maintenance activity such as patching is scheduled months in advance. These maintenance windows are typically quite constrained as application owners strive for maximum availability of their applications. "Five nines" availability (99.999% uptime) allows for barely five minutes of downtime per year!
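The arithmetic behind those availability targets is worth making concrete. A short sketch (plain arithmetic, not tied to any particular SLA) shows how quickly the downtime budget shrinks with each extra "nine":

```python
# Downtime budget implied by an availability target, per (non-leap) year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def downtime_budget(availability: float) -> float:
    """Seconds of allowed downtime per year for a given availability."""
    return SECONDS_PER_YEAR * (1 - availability)

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    minutes = downtime_budget(availability) / 60
    print(f"{label} ({availability:.3%}): {minutes:.1f} minutes/year")
# three nines (99.900%): 525.6 minutes/year
# four nines (99.990%): 52.6 minutes/year
# five nines (99.999%): 5.3 minutes/year
```

At five nines, a single reboot for a patch can consume the entire year's budget, which is why every maintenance window is negotiated so carefully.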
This is why sysadmins take time to plan, test, and schedule any changes. Patching takes place on quite a deliberate scale to avoid breaching formal SLAs or users’ expectations in general.
In addition, the story of corporate IT is the story of a struggle between centralisation and decentralisation. Mainframes were centralised, but since minicomputers came on the scene, there has been a constant tug-of-war between departments wanting their own independent IT systems in order to be more agile, and central IT departments trying to maintain control. With virtualisation first and then cloud computing, this dynamic has accelerated enormously.
Anyone can create a server with a couple of clicks, which is great for agility. However, corporate IT teams were not trying to centralise control as part of some bureaucratic turf war. There are all sorts of considerations which need to apply as well as agility—notably, security and compliance. If the central IT team is not aware of all the little IT initiatives flourishing in every department, they are unable to secure them properly or verify that they are following any policy or legal requirements that might apply.
Into this already complex situation comes the dedicated Security team. They have their own tools and processes to audit the environment for vulnerabilities, but they don’t fix them directly; that level of access is reserved for the Operations team. Instead, they produce huge spreadsheets with thousands of lines in them, each one representing a vulnerability that has been identified.
The Operations team receives this data dump and has to process it somehow—all too often by having someone read the file directly and attempt to assign actions. This is time-consuming enough, but as we just explored, remediating detected vulnerabilities takes time, so the next huge spreadsheet may contain references to vulnerabilities whose remediation is already scheduled or maybe even under way, with a patch already deployed but waiting for a planned reboot.
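The triage step described above can be sketched in a few lines. This is a minimal illustration only: the field names (`cve`, `host`, `status`) and the sample CVE identifiers are hypothetical placeholders, since real scanners and ticketing systems each use their own schemas.

```python
# Sketch: split a new scan's findings into genuinely new vulnerabilities
# versus ones whose remediation is already scheduled or under way.

def triage(scan_findings, scheduled):
    """Return (new, already_in_flight) partitions of scan_findings."""
    in_flight = {(item["cve"], item["host"]) for item in scheduled}
    new, known = [], []
    for finding in scan_findings:
        key = (finding["cve"], finding["host"])
        (known if key in in_flight else new).append(finding)
    return new, known

findings = [
    {"cve": "CVE-2014-0160", "host": "web01"},
    {"cve": "CVE-2014-3566", "host": "db02"},
]
scheduled = [
    {"cve": "CVE-2014-0160", "host": "web01",
     "status": "patch deployed, awaiting planned reboot"},
]

new, known = triage(findings, scheduled)
print(f"{len(new)} new, {len(known)} already in flight")
# 1 new, 1 already in flight
```

In practice this matching is exactly what gets done by hand against the spreadsheet, line by line, which is why it is so slow and error-prone.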
This is the SecOps Gap
Security teams report issues and need them fixed fast, but Operations teams have their own constraints, and communication between the two teams is very difficult because of these mismatched priorities. In addition, the two teams' priorities are embodied in dedicated tools which also do not communicate directly across the Gap. At root, the SecOps Gap is the cause of almost all security breaches. The Hollywood image of elite hackers using advanced skills to breach carefully prepared defences is far from the truth of most security incidents. Usually, the cause of the breach is some weak point overlooked by the defenders and identified by opportunistic automated scans.
BMC and Qualys are here to help companies close the SecOps Gap. By integrating Qualys’ vulnerability assessment tool with BMC’s BladeLogic remediation capabilities, the SecOps Portal gives IT teams the visibility to understand their true security posture across their entire estate, and execute targeted remediation actions in a way that minimises the risk of breaking their commitments of performance and availability.
I will be speaking on a webinar with Jonathan Trull, CISO of Qualys, on Tuesday the 10th of March at 10am CST. Sign up here to attend live or access the recording. To find out more about the SecOps Portal, please visit bmc.com/SecOps.