Why Cybersecurity Requires DevOps

Consider for a moment this data spanning seven years, showing that there are constants in cybersecurity trends. A 2008 report demonstrated that 62% of all data breaches were caused by a significant manual error or an internal mistake. A 2015 report found that over the past 11 years, 96% of all security incidents fell into nine basic patterns, and the top four of those patterns, accounting for about 90%, involve human error or misuse. Even though millions of security threats exist, this data gives us hints about where to focus.

FIGURE 1: Frequency of incident classification patterns across security incidents for 2013 and 2014.

Savvy CISOs are in the risk mitigation business. Process automation and collaboration are critical to risk mitigation because they address the priority patterns: a poor patching strategy, pushing bad code to production, and manual configuration mistakes. It is also vitally important to balance risk against meeting end-user expectations in production. Cue DevOps!

Coining the name “DevOps” is a stopgap way of dealing with a problem: the community doesn’t know exactly what to call this methodology otherwise.

Sure, it started as a solution to the Development and Operations collaboration dilemma. Simple. It rolls off the tongue. Fast-forward a few years, and the idea of DevOps is much more nebulous. Let’s reflect on the meta of the trend: DevOps is a new and necessary space at the center of engineering, and it is an incredible asset to cybersecurity as a blueprint for the collaborative culture and automation-centric responsibilities that modern security demands.


DevOps for Cybersecurity is Cultural and Functional

The model for why your cybersecurity requires DevOps is twofold: it is a cultural philosophy and a functional role. As a cultural philosophy, DevOps seeks to drive efficiencies, tear down silos, and elicit collaboration, which puts security teams more in tune with the business objective of delivering engineering results. As a functional role, the DevOps entity lives within an organization’s engineering department and comprises the people, policies, practices, and tools dedicated to improving the software development life cycle as a whole. From development to production, the role of DevOps is to bridge the gap between Development, Operations/Systems Management, Security, and other teams as well.

Silos tend to haunt successful organizations because the good ones can grow to enterprise size. At that scale, the smallest human error can affect dozens of systems and degrade product quality, and mistake-prone manual processes create an incredible security risk. Infrastructure has to scale, development and QA become more robust, continuous integration is implemented, policies and practices must be implemented and enforced, and the responsibilities of each team become more defined. With this growth, the overlap between the teams diminishes and the void between them increases.

FIGURE 2: In a startup environment (left) it is unavoidable that engineering teams will overlap because “everyone does everything.” In a startup, DevOps is more cultural and a mindset among teams. For the enterprise environment (right) DevOps can become a true, staffed function for the organization.


Development and Operations cannot support the new demands of facilitating, managing, and scaling enterprise continuous integration, deployment, source control, and effective collaboration without misallocating their own time and effort, disrupting their ability to do their own work. Security risk skyrockets.

Reasons that DevOps and Security Teams Need to be Connected

  1. DevOps helps bind the security and engineering departments together through collaboration, policies, and procedures. Developers need resources and information from operations, operations needs product and information from developers, and security teams need to cast a wider net on relevant data. In the end, DevOps is charged with ensuring all teams get what they need to be successful across the entire Software Development Life Cycle (SDLC) by implementing policies and procedures such as requiring proper notification of new resource requirements or configuration changes, providing a means to provision development or testing environments, and prohibiting production releases without acceptance testing. Additionally, DevOps makes sure product is not “thrown over the wall” to operations by mandating the collaboration necessary to keep every team in the same pool of knowledge.
  2. DevOps improves product quality and security concurrently. One of the mainstays of DevOps is a rapid feedback loop (FIGURE 3). Practices like continuous integration provide instant feedback on changes from development as they are made. If a piece of code or configuration is broken, or causes something else to break (which can introduce security vulnerabilities), the faster the problem is made known, the faster it can be fixed. Automated test runs, both pre- and post-deployment, are essential to this effort, and the tools controlling this automation provide visibility into the state of the product. In essence, DevOps owns the continuous integration/deployment realm, which is aimed at fast and effective quality checks during integration and deployment.
  3. DevOps ensures a streamlined and dynamic workflow by keeping the SDLC automation infrastructure up to date with the ever-changing needs of security, development, and operations. This is accomplished through innovation and staying current on technology. DevOps must create a pipeline for the product, and that pipeline needs to remain consistently functional yet easily updatable as product or infrastructure changes occur, which requires time and effort outside of development and operations.
  4. DevOps guides the engineering culture towards agility, innovation, and an automated approach to security. Someone has to assume a governing responsibility over the SDLC within engineering and prioritize secure, bug-free code. DevOps is the perfect entity to assume the role of “security champion,” given that it sits logically at the center of engineering. This creates a “single source of truth” for the policies, practices, and tools necessary to build and deliver a secure, quality product in an agile, innovative, and cohesive manner. DevOps falls in line with the team posture by accepting input from all other teams within engineering, sifting through the needs and requirements, and then making intelligent decisions. A few of these decisions include which tools to use, how the development pipeline will look (e.g., whether it includes security testing), and how to build a foundation upon which the SDLC for that particular engineering department will operate.
FIGURE 3: It is important to create a culture that fosters two things: making and learning from mistakes, and practice makes perfect. Only through this continuous effort can true mastery be achieved.
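The fail-fast feedback loop described in point 2 above can be sketched in a few lines. This is a toy illustration, not any particular CI tool’s API; the stage names and checks are invented stand-ins for real build, test, and scan steps:

```python
def run_pipeline(stages):
    """Run (name, check) pairs in order and stop at the first failure.

    Stopping early is the rapid feedback loop: the broken stage is
    reported the moment it breaks, not after a full release cycle.
    """
    for name, check in stages:
        if not check():
            return f"FAILED at {name}"
    return "pipeline passed"

# Hypothetical stages; in practice each check would shell out to a real tool.
stages = [
    ("unit-tests", lambda: True),
    ("security-scan", lambda: True),
    ("deploy-smoke-test", lambda: True),
]
print(run_pipeline(stages))  # -> pipeline passed
```

In a real pipeline the same shape holds: each gate either passes control to the next stage or surfaces the failure immediately to the team that introduced it.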

When DevOps and security teams collaborate, problems are addressed proactively at the root level, not retroactively. Security improves because monitoring tools are introduced and hardening guidance is built into the DevOps process. When your cybersecurity requires DevOps, you have a competitive advantage, because your organization’s code now tells a richer story: it works, it’s released fast, and it’s secure.



Kim, G. (2013, October 1). DevOps distilled: A new look at DevOps. Retrieved November 6, 2015, from http://www.ibm.com/developerworks/security/library/se-devops/index.html
Mueller, E., Wickett, J., Gaekwad, K., & Karayanev, P. (2010, August 2). What Is DevOps? Retrieved November 6, 2015, from http://theagileadmin.com/what-is-devops/
Schulman, J. (2015, August 17). Why Security Needs DevOps. Retrieved November 6, 2015, from https://www.jayschulman.com/why-security-needs-devops/
Verizon. (2015). Data Breach Investigations Report. Retrieved November 6, 2015, from http://www.verizonenterprise.com/DBIR/2015/

What is Agile?

Agile is a lot of things. In a nutshell, it is an alternative paradigm for software development. It values individuals and interactions, working software, customer collaboration, and response to change over strict processes, tools, comprehensive documentation, contract negotiation, and detailed planning.

Being Agile is about changing our thought process. We must rethink our priorities, how we handle change, stakeholder and developer interactions, teamwork, pride in our work, trust, how we measure success, and refinement. It is also about placing our main focus on the product itself and stakeholder happiness, and staying out of ruts through continuous improvement and innovation. Finally, it is about executing iterative development cycles to produce incremental pieces of functionality until the stakeholder’s vision has been brought to life.

Agile is a product development way of thinking that encompasses ‘The Twelve Principles of Agile Software’ in the Agile Manifesto:

We follow these principles:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity – the art of maximizing the amount of work not done – is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

– The Agile Manifesto

Wildcards and Automation Naming Conventions

Automation and configuration management tools are wonderful creatures. They come in many varieties, including BMC BladeLogic, Puppet, Salt, Chef, Ansible, IBM UrbanCode, and others. Implemented correctly, these tools can take days of manual effort down to minutes with a simple, wizard-like setup.

Splunk is delivered with the optional, and free, Deployment Server (see the Splunk documentation). The Splunk deployment server is a limited-use configuration management system that distributes application configuration across Splunk distributed architectures. Among other uses, we implement deployment servers to deploy input/output configurations to forwarders, props and transforms to Splunk indexers, and applications to Splunk search heads.
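On the client side, each forwarder or indexer polls the deployment server for its configuration. A minimal sketch of that client setup follows; the hostname is hypothetical, and 8089 is Splunk’s default management port:

```ini
# deploymentclient.conf on a forwarder -- hostname is hypothetical
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089
```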

Automation and configuration management tools also create the most wondrous of problems. These tools are neither benevolent nor malevolent. Automation implements instructions regardless of the quality of those mandates. (Go ahead and ask how I know that…)

One day I was deploying a Splunk environment in our lab and I did what any good computer guy does — I borrowed working configurations. (Don’t judge, how many of us thought to make the wheel round on our own?) I built a new Splunk Index Server and Splunk Search Head, and named them <prefix>SplunkIndex and <prefix>SplunkSearch. I installed Splunk, hooked up the Indexer, and then enabled the Deployment Server. I copied applications to the deployment-apps directory on the deployment server, and then reloaded the deploy-server.

My forwarders, indexer, and search head all received their application configurations and data started flowing into my new Splunk Instance. It was great — until it wasn’t.

After a few minutes my Splunk indexes stopped reporting any new events. The Splunk indexer was still online. The services were running on the indexer and on the forwarders. New apps were still being deployed successfully to the forwarders. I checked outputs.conf on the forwarders and even cycled those services, to no avail. On the indexer, “netstat -na | grep 8089” showed connections from the forwarders — for a while. Then the connections would go stale and the ephemeral ports froze. In splunkd.log I found references to frozen connections. The forwarders ceased transferring data to the indexer and declared the indexer frozen.

You win a brownie** if you know what was going on by this point in the story.

The key to this story is that the deployment server managed a base config application. In the name of automation, this base config deployed an outputs.conf to every server. However, the person I copied my configs from had the foresight to blacklist the Splunk Index server so it wouldn’t try to send outputs to itself (which can result in a really ugly loop). The configurations were fine until someone (ok, me) changed the naming convention by adding a prefix to splunkindex instead of a suffix (in my defense, it looked better in vCenter). The blacklist controlling which servers get the outputs.conf listed splunkindex*, which matches names that start with splunkindex. My prefixed indexer no longer matched the blacklist, so it received the outputs.conf and began forwarding to itself. If I had used a suffix, the indexer would have matched, wouldn’t have received the outputs.conf, and hence wouldn’t have entered the computer version of an endless self-hug.
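For reference, the deployment server’s whitelists and blacklists live in serverclass.conf. A minimal sketch follows; the server class and app names are hypothetical, but the wildcard is the one from the story:

```ini
# serverclass.conf on the deployment server -- stanza and app names
# are hypothetical; the wildcard is the one from the story
[serverClass:base_config]
whitelist.0 = *
# Matches hosts whose names START with "splunkindex": a suffix-named
# indexer (splunkindex01) is excluded, while a prefix-named one
# (labSplunkIndex) slips through and receives outputs.conf.
blacklist.0 = splunkindex*

[serverClass:base_config:app:base_outputs]
restartSplunkd = true
```

After changing the file, reloading the deploy-server (as in the story) pushes the updated server class membership out to the clients.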

But, I decided to get cute on my naming convention and was rewarded with a very nice learning opportunity.

The takeaway: be like Santa. Check your (white and black) lists twice before deploying applications to your environment.

** A figurative brownie in case you wondered.

The Legend of the Engineering Documentation Hunter

Ah, the engineer. The engineer is an elusive and majestic creature capable of amazing feats of creativity, ingenuity, and skill with the tap of a keyboard. Watching an engineer operate in their natural environment can be quite an impressive display.

The only problem with these majestic beasts is when we fail to document their movements. After all, how can we hope to harness their abilities if we know nothing of their characteristics, movements, and body of work? Luckily, there is an answer: documentation.

Having a strong, capable engineering team is a huge asset to any company. The amount of work and knowledge that comes out of a solid group is remarkable. The only drawback comes when these things are not documented. Instead of getting things done efficiently and effectively, too much time is used up trying to track down cause and solution. That’s what makes strong, reliable engineering documentation so important.

Return on Investment

We all know that the bottom line is everything. In today’s market, we all want to find a way to save cost while increasing performance. There are many ways that the ROI can be shown, but we will focus on two here:

  1. Technical support prevention. If an engineer or engineers complete a job, you want to know that someone can drive when the keys are handed over. That’s where the documentation comes in. Solid end-user documentation or technical data gives the customer somewhere to turn for help. As long as the procedures, processes, and systems are documented properly, everything is within reach and easily attainable without the added cost of pulling an engineer back in.
  2. Strong documentation makes more difficult tasks a one-person job. The last thing you want is too many people working on what you thought was a simple task. If documentation is in place and accessible, no extra hands or minds get involved. In less time than it would take to track down someone to answer a question, the individual performing the task can find the answers and keep moving forward, keeping the task in proper perspective, man-hour-wise.


If you didn’t say it, you didn’t do it

Another reason is even more tangible. There have been many important events and inventions in history, and there is a simple reason we know about them: these events were documented. If no one said it, it never happened.

The same can be said of technical work. Even if the best group of engineers in the country comes in and cranks out work for you, would it matter if you had no idea what they did? If they left and no one knew what they built or how to use it, it would be lost. Engineering documentation makes sure that there is visibility to the work and to the procedures and processes that are there for you to take advantage of that work.

What it looks like

Technical documentation can come in many forms. There are User Guides, Testing Procedures, Standard Operating Procedures, architecture documents, and many others. But you should know it when you see it. When the work is finished, you should recognize the associated documentation on sight. It should be consistent in look, feel, and voice. When you pick that document up, you should know what it is for and where it came from as soon as you look at it.

But most important is that you should be able to use it without help. Great documentation is designed to stand alone. It should not generate a call; it should prevent calls. It should not cause more confusion; it should answer the questions. It should be clear and precise: easily accessible, easy to navigate, and easy to digest and apply. If it is not all of these things, why use it in the first place?

Where this leaves you

There is no doubt that having an industry-leading set of engineers do work for you will send you to the front of your industry. But much like a Loch Ness Monster sighting, seeing is believing. Grainy footage shot through a screen door from two miles away is fine for building intrigue, but are you willing to bet your livelihood on it? Whether you are creating cutting-edge software, systems, or processes, or you are hunting legendary creatures, the proof is in the pudding. And in our industry, that means the most solid documentation in the business.

Saying you saw some amazing engineering feats is great. But having the proof, now that is what makes history.