
Configuring Patches By Hand Invites Hacks

You Think Your Patching Strategy Is Good Enough?

If your organization is breached because of a missing patch, it is not only embarrassing, it can get you fired. We see it in the news all the time. The fact is, there is a level of negligence in most hacks, since many of them can be prevented with a proactive, automated patching strategy. For example, the Verizon 2015 Data Breach Investigations Report observed that 99.9% of exploited Common Vulnerabilities and Exposures (CVEs) were compromised more than a year after the CVE was made public. Some confirmed breaches were associated with CVEs published as far back as 1999.

FIGURE 1: According to the Verizon 2015 Data Breach Investigations Report, the count of exploited CVEs in the 2014 calendar year, tallied by CVE publish date.

In today’s datacenter, automating patch management is a necessity. With security vulnerabilities coming out at record speed, the days of hand-jamming patches onto servers are gone. Manual patching is error-prone, exposes systems to unnecessary risk, and your organization simply cannot keep up forever.


A Day In The Life Of A Systems Administrator

When it comes to releasing code in your data center, a security patch is just like a code release (it may even be one). Many of your admins patch manually, and they do so constantly. Here is an example of their process (buckle up; it’s grueling):

They might start, for example, by downloading the latest Red Hat Package Manager (RPM) updates for their Linux servers. Then they go to each Windows server to scan for, download, and install patches. Most of your admins have this juggling down to a science. They manually remote into as many Windows servers as possible and check to see which patches are needed. As they are checking one server for patches, they are concurrently fixing a different one. Rinse and repeat. By the time they get to the last Windows server, it is time to go back to the first server, select the patches the scan results flagged, and begin installing, all while still hopping back and forth between Windows servers. Rinse and repeat again, until they make it to the last server. When your admins finally do make it to the last server, the Linux RPMs have finished downloading, so they fire off a handy-dandy, home-rolled, almost-reliable SSH script that pushes the RPMs to the servers and kicks off the installation.

Now it is time to run a vulnerability and patch scan to see whether all of the servers were actually patched. Usually, they can get an environment of a thousand servers done in a week or so. Hooray! Just in time for the next round of patching to begin.
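To make the pain concrete, here is a minimal sketch of the kind of home-rolled push script that passage describes. It assumes passwordless SSH as root and yum-based hosts; the hosts file, the RPM directory, and every path in it are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of an "almost reliable" home-rolled RPM push script.
Assumes passwordless SSH as root and yum-based hosts; the hosts file
and RPM directory below are hypothetical placeholders."""
import subprocess

HOSTS_FILE = "linux_hosts.txt"   # hypothetical: one hostname per line
RPM_DIR = "/var/patches/rpms"    # hypothetical: locally downloaded RPMs

def patch_host(host: str) -> bool:
    """Copy the RPMs to one host and install them, returning success."""
    try:
        # Push the downloaded RPMs to the remote host.
        subprocess.run(
            ["scp", "-r", RPM_DIR, f"root@{host}:/tmp/rpms"],
            check=True, timeout=600,
        )
        # Install everything we just copied.
        subprocess.run(
            ["ssh", f"root@{host}", "yum localinstall -y /tmp/rpms/*.rpm"],
            check=True, timeout=1800,
        )
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        # Typical of hand-rolled scripts: a failure just lands on a
        # "retry later" pile, which is where drift creeps in.
        return False

def main() -> None:
    with open(HOSTS_FILE) as fh:
        hosts = [line.strip() for line in fh if line.strip()]
    failed = [h for h in hosts if not patch_host(h)]
    print(f"patched {len(hosts) - len(failed)}/{len(hosts)} hosts")
    if failed:
        print("failed:", ", ".join(failed))

if __name__ == "__main__":
    main()
```

Notice that a single timeout or failed install silently lands on the retry pile; at fleet scale, that pile is where unpatched vulnerabilities live.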

If this sounds tiring and error-prone, it’s because it is. And it is unnecessary. A manual, clunky, and usually reactive patching strategy like this forces organizations to keep a dedicated patching team constantly hand-jamming patches, with ripple effects across everything else those people should be doing. That worked when environments were smaller ten years ago. Today, it is often impossible.


To Patch, Or Not To Patch

System administrators will inevitably fall short somewhere, which forces the CIO to make a choice:

  1. Accept out-of-date patches, but let the system administrators do all of their regularly assigned work.
  2. Stay up to date on patches, but accept a datacenter that never gets upgraded and software that never gets installed or configured.
  3. Hire additional staff, an expensive and often non-viable option.

If this sounds like the state of your organization, it is time to look at automating your patching process.


Facebook Never Gets Hacked. Take Small Steps To Emulate Them.

One day, Facebook could get hacked, but they haven’t yet, and that is notable. The well-known social media giant is a great example of automating patches. Facebook’s release cadence is extreme, and they are open about how aggressively they automate as much of their development and deployment environments as possible. While that pace may not be appropriate for your organization, it is a testament to a culture of proactive patching with reportability at the end of every patching cycle. That culture gives you a secure deployment pipeline that updates the systems in your data center frequently. There are many tools out there that can patch the major datacenter operating systems. At the center of the DevOps function are forward-leaning tools that can run a patch analysis across your entire network of Linux and Windows servers and download the necessary patches. The system can then push the patches to the servers during your maintenance window, either automatically or manually in bulk.
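As a rough illustration, the analyze-download-push loop such a tool performs might look like the sketch below. Everything in it is a hypothetical stand-in (the scanner, repo, and pusher objects, the window times); it shows the shape of the workflow, not any particular product’s API.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the analyze -> download -> push loop that a
patch-automation tool performs. The scanner, repo, and pusher objects
and the window times are hypothetical stand-ins, not a real API."""
import datetime

MAINTENANCE_START = datetime.time(1, 0)  # hypothetical 01:00-04:00 window
MAINTENANCE_END = datetime.time(4, 0)

def in_maintenance_window(now: datetime.datetime) -> bool:
    """True while the current time falls inside the maintenance window."""
    return MAINTENANCE_START <= now.time() <= MAINTENANCE_END

def run_cycle(inventory, scanner, repo, pusher) -> dict:
    """One patch cycle: scan every host, download what is missing once,
    then push in bulk during the window, collecting per-host results."""
    # 1. Patch analysis across the whole fleet, Linux and Windows alike.
    missing = {host: scanner.missing_patches(host) for host in inventory}

    # 2. Download each required patch once, centrally.
    for patches in missing.values():
        for patch in patches:
            repo.ensure_downloaded(patch)

    # 3. Push in bulk, but only inside the maintenance window.
    results = {}
    for host, patches in missing.items():
        if not patches:
            results[host] = "up to date"
        elif in_maintenance_window(datetime.datetime.now()):
            results[host] = pusher.apply(host, patches)  # "ok" or "failed"
        else:
            results[host] = "deferred"
    return results
```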

No one likes downtime, or bogging down a network for the sake of patching. In fact, companies sometimes avoid critical patching entirely if it will affect the end-user experience at all. That thinking is backwards: a data breach has a far larger negative impact on end-users than temporary downtime ever will. Instead, leverage these forward-leaning tools’ ability to “stage” patches destined for mission-critical systems, as in the sketch below.
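Staging can be as simple as a canary pass: patch a small slice of the fleet first, and only continue if every canary succeeds. A minimal sketch, reusing the hypothetical `pusher` from the loop above:

```python
def staged_rollout(hosts: list, patches: list, pusher) -> dict:
    """Sketch of "staging" patches: patch a small canary group first,
    and only continue to the rest of the fleet if every canary
    succeeds. The hypothetical pusher.apply returns "ok" or "failed"."""
    cutoff = max(1, len(hosts) // 10)  # ~10% of the fleet as canaries
    canaries, remainder = hosts[:cutoff], hosts[cutoff:]

    results = {host: pusher.apply(host, patches) for host in canaries}
    if any(status != "ok" for status in results.values()):
        # A canary failed: stop before the patch reaches the
        # mission-critical remainder, and report what was skipped.
        results.update({host: "skipped (canary failed)" for host in remainder})
        return results

    results.update({host: pusher.apply(host, patches) for host in remainder})
    return results
```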


Patch Smarter, Not Harder.

Your system administrators will give anything to have an automated patching solution, and it is up to you to provide a culture that lets the engineering teams move in that direction. One of the largest benefits to a CISO or manager is the improved reportability that sophisticated automation tools provide at the end of each patch cycle: a clear view of what failed, what succeeded, and exactly what was installed on their servers. Combine that with the consistency of automated patching, the time returned to systems administrators, and the sheer speed of patching, and you get unparalleled business value.
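For a flavor of what that end-of-cycle reportability amounts to, here is a tiny sketch that rolls the per-host results from the earlier sketches into the summary a CISO or manager would review:

```python
from collections import Counter

def summarize_cycle(results: dict) -> str:
    """Roll per-host results (as produced by the sketches above) into
    an end-of-cycle summary: counts per status, plus failed hosts."""
    tally = Counter(results.values())
    lines = [f"Patch cycle summary ({len(results)} hosts):"]
    for status, count in sorted(tally.items()):
        lines.append(f"  {status}: {count}")
    failed = sorted(h for h, status in results.items() if status == "failed")
    if failed:
        lines.append("  needs attention: " + ", ".join(failed))
    return "\n".join(lines)

# Example: print(summarize_cycle({"web01": "ok", "db01": "failed"}))
```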


Sources:

Paul, Ryan. “Exclusive: A Behind-the-Scenes Look at Facebook Release Engineering.” Ars Technica, 5 Apr. 2012. Web. 9 Nov. 2015.
Rossi, Chuck. “Release Engineering and Push Karma: Chuck Rossi.” Interview by Facebook Engineering. Facebook, 2009. Web. 5 Apr. 2012.
Verizon. 2015 Data Breach Investigations Report. Verizon Enterprise, 2015. Web. http://www.verizonenterprise.com/DBIR/2015/
