What is Datacenter Automation?

A few years ago, I accepted the role of Automation Engineer here at Kinney Group. At the time, I had no idea this would be the title for what I loved to do and had been doing for most of my career. Before I continue, I want to help define what automation is all about. It is not about robotics or manufacturing automation. Automation, in this context anyway, covers a lot of different labels — Orchestration, DevOps, DevSecOps, CI/CD, Configuration as Code, Infrastructure as Code, and so on. All these descriptions seem to confuse people when I try to explain what it is I do. But simply put, automation takes a task or set of tasks and codifies them into a repeatable and consistent process to get the same results every time. 

Automation makes use of tons of different tools, like VMware vRealize Automation and Orchestration, Jenkins, Puppet, Ansible, Terraform, and so on, to get the desired deployment and configuration results for a given stack or piece of software. For most people, the installation wizard of their favorite program can be challenging enough. That is one reason I strive to make deployments and configurations as simple as possible.

Automating Redundant Tasks

Here’s an example of what I mean by automation: Several years ago, I was responsible for the support and maintenance of some Apache web servers for a national organization. Every time a patch was released, I had to go through the process of upgrading and applying those patches across all the systems. After working through the process by hand several times it became, well, boring, and I started looking for ways to make my life easier. For the most part, these servers all had the same setup, so I went to work building scripts that would shorten the deployment time and increase the reliability of each deployment. These were all custom tools designed for a specific environment.

In the end, this only benefited me. I would have been happy to share my scripts so others could enjoy the same satisfaction, but they were custom-built just for my environment. So that’s one example of automation.

Automating the Complex and Cumbersome

Later, after acquiring my new title (Automation Engineering Manager), I worked with a team of engineers to perform a similar process. We took a stack of written instructions that would take someone a week or more of manual steps and screen-watching to build out and configure eight servers in a virtual environment, and reduced it to roughly 10 minutes of data input plus about 20 minutes of computer time to accomplish more than 90% of the same results.

This solution was built with a combination of tools the customer already owned (VMware) and open source tools (Jenkins and Puppet). This, again, was a custom solution as a whole, but several of the smaller pieces are completely reusable on other projects.

That is automation. It’s what gets me out of bed in the morning: giving people back their time (and often their sanity) and providing a consistently repeatable process. Automating something as complex as a multi-server configuration isn’t easy, and it can force you to think (way) outside the box to reach the desired outcome, but it is so rewarding to see the look on a customer’s face when they realize how much you have just saved them.

So, the next time you catch yourself bored by repetitive work, remember that there is often a way to automate it, especially when computers are involved.

Want to know how Automation Services from Kinney Group can transform your workload?

We’d love to hear about your challenges and chat about how automation can put hours back in your work week. Fill out the short form below and we’ll be in touch!

VMware Orchestrator Tips: Stopping a Second Execution of a Workflow

Kinney Group’s own Troy Wiegand hosts this blog series, which will outline some simple tips for success with VMware vRealize Orchestrator. Join Troy as he applies his automation engineering expertise to shed some light on VMware! 

When you have a really important workflow to run in VMware vRealize Orchestrator, you don’t want another one running at the same time. As of right now, vRO doesn’t have an option for stopping that second workflow execution from happening simultaneously. Never fear, though—Kinney Group has you covered. 

The One and Only Workflow

Some workflows are just more impactful than others. They have the power to do serious damage to your environment—they could tear down your infrastructure, or they could build it up. Depending on what you’re orchestrating, running another workflow simultaneously to this crucial one could be problematic. Luckily, we can use a workaround to prevent any potential damage. When implemented, the workaround will look like this: 

Figure 1 – The workaround functions to stop a second execution while the first one is running

Configuring the Field

The ideal result of this workaround is that when you try to run a workflow for a second time while it’s already running, vRO will send an error message to prevent that redundancy. Let’s investigate how that alert happens by going to the Input Form. The text field that contains the error message is populated with some placeholder text to identify the error. Under “Constraints,” note that “This field cannot be empty!” is marked as “Yes” from the dropdown menu. This ensures that a user cannot submit the form unless the field has something in it. However, that’s only possible if the field itself is visible. Set the “Read-only” menu to “Yes” under the Appearance tab so it can’t be edited. 

Figure 2 – The configuration of the workaround, as seen through “Input Form”

The External source action is set to "/isWorkflowRunning," and it's configured from the variable "this_workflow." You can add this as a new variable and assign it the name of the workflow you wish to run without concurrent executions.

Figure 3 – The form to edit a new variable

By examining the script, we can see that we've entered our workflow ID. We've also configured the server to get executions. (Alternatively, you could skip to tokens = executions for a simpler function.) The flag below that function should be set to "false." We can then check the state of every token, and if it's "running," "waiting," or "waiting-signal," set the flag to "true." Those are the three states that indicate a workflow is in motion or in progress, so it's essential to identify them as triggers for our error message.

Figure 4 – The workaround script
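
The validation action itself runs as JavaScript inside vRO, as described above. For anyone who wants to experiment with the same "is it already running?" check from outside the tool, here is a minimal Python sketch against vRO's REST API. Treat it as a hedged illustration, not the script in Figure 4: the hostname, credentials, and workflow ID are placeholders, and the port and response handling are assumptions based on the vRO 7.x/8.x REST API.

import requests

VRO_HOST = "vro.example.com"        # placeholder hostname
WORKFLOW_ID = "your-workflow-id"    # same workflow ID used in the vRO script
BUSY_STATES = {"running", "waiting", "waiting-signal"}  # states treated as "in progress"

def is_workflow_running(host, workflow_id, auth):
    """Return True if any execution of the workflow is still in progress."""
    # vRO 7.x exposes the API on 8281; vRO 8 typically serves /vco/api on 443.
    url = f"https://{host}:8281/vco/api/workflows/{workflow_id}/executions"
    resp = requests.get(url, auth=auth, verify=False)  # lab setting; verify certs in production
    resp.raise_for_status()

    def states(obj):
        # The JSON layout differs between vRO versions, so walk the whole
        # document and collect every value stored under a "state" key.
        if isinstance(obj, dict):
            for key, value in obj.items():
                if key == "state" and isinstance(value, str):
                    yield value
                else:
                    yield from states(value)
        elif isinstance(obj, list):
            for item in obj:
                yield from states(item)

    return any(state in BUSY_STATES for state in states(resp.json()))

if __name__ == "__main__":
    running = is_workflow_running(VRO_HOST, WORKFLOW_ID, ("vro-user", "password"))
    print("Workflow already running!" if running else "Safe to launch.")

Inside vRO itself, the equivalent logic simply loops over the tokens returned for the workflow and checks each token's state, which is exactly what the workaround script in Figure 4 does.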

The Result: Your Workflow Runs Alone

This combination of settings makes the error field appear when you try to launch a workflow while another execution of it is still running. That second, interrupting execution is stopped before it can start.

We hope this tutorial has been helpful in streamlining your workflows and making the most of VMware Orchestrator! We’ve got more tips for vRO here. Fill out the form below to get in touch with automation experts like Troy:

VMware Orchestrator Tips: Dynamically Viewing Snapshots Based on Selected VMs

Kinney Group’s own Troy Wiegand hosts this video blog series, which will outline some helpful VMware Orchestrator tips. Join Troy as he applies his automation engineering expertise to shed some light on VMware! 

One of the impressive capabilities of VMware vRealize Orchestrator (vRO), a tool that creates workflows to automate IT operations, is the snapshot feature. Have you ever had to manipulate or manage multiple snapshots on different Virtual Machines (VMs), all at the same time? It may seem like the only option is to revert or delete multiple snapshots. Maybe this has led you to revert VMs to snapshots with the same name, or to delete more than you intended. Kinney Group engineers use this simple method to view various snapshots using selected VMs. 

The Secret Menu

Begin with an array input by clicking “Run.” Populate the array with your selected VMs using the dropdown list and click “apply.” When the window closes, hover your cursor over the field just below the list of VMs, and you’ll see a gray dropdown bar appear. This subtle menu feature identifies snapshots attached to each VM! Select one from the dropdown and click “Run.”  

Figure 1 – The gray menu appears as a dropdown when the user hovers over it

The resulting schema populates with a script that reveals the snapshots. From this script, users can delete or revert all snapshots.  

Return to the Input Form to see how we populated these VM names. The “Appearance” tab on the left indicates that the display type we used is a DropDown. Our value options came from an external source, the action specifying “get all snapshots of all VMs.” 

Figure 2 – select “DropDown” as the Display type in the Appearance tab and “external source” in the Values tab

Branching off with Trees

Snapshots are organized in trees. For example, you could perform actions x, y, and z, then take a snapshot to preserve the state of that VM. This is saved in a tree structure, with the snapshot capturing actions x, y, and z. A snapshot taken after performing another action will capture that updated state. Working with multiple snapshots allows you to track various changes in the environment and gives you the opportunity to revert to a previous state.

Figure 3 – The action exhausts all snapshot trees

Let’s revisit the action. We’ve fed in the VM array as an input. For each VM, we ask, “Do we have any snapshots?” If so, we go through the snapshot tree and run the function on all trees. To get all of our snapshots by name, we must walk the tree structure recursively. A function to do just that is included in the action; it loops until it exhausts all trees, after which the resulting snapshot names are captured in the array. A string array then returns the snapshots to our workflow execution input form.
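
The action runs inside vRO, but the traversal itself is easy to illustrate. Here is a minimal sketch of the same recursive walk using pyVmomi, VMware's Python SDK for vSphere. The function names are my own, and the sketch assumes you already have connected vim.VirtualMachine objects in hand; it is an illustration of the technique, not the KGI action.

from pyVmomi import vim  # VMware vSphere API bindings

def collect_snapshot_names(vm):
    """Recursively walk a VM's snapshot tree and return all snapshot names."""
    names = []

    def walk(tree_nodes):
        for node in tree_nodes:                    # node is a VirtualMachineSnapshotTree
            names.append(node.name)
            if node.childSnapshotList:             # recurse into child snapshots
                walk(node.childSnapshotList)

    if vm.snapshot is not None:                    # "Do we have any snapshots?"
        walk(vm.snapshot.rootSnapshotList)
    return names

def collect_for_vms(vms):
    """Build the string array handed back to the workflow's input form."""
    all_names = []
    for vm in vms:
        all_names.extend(collect_snapshot_names(vm))
    return sorted(all_names)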

What’s in a Name?

You may be asking: why are we looking at the name rather than the actual snapshot object? Despite sharing a name, the snapshots are many different objects, and the only way to perform a mass operation on our selected VMs based on one snapshot name would be to feed it a single snapshot. This workaround also allows you to vary behavior based on the names. For instance, if all of the VMs in question have a specified baseline snapshot and you wanted to revert them all to that standard, you could specify whether you wanted the first snapshot that fit the baseline or the last one. In the scheme of the workflow, the snapshots could go either way.

If your aim is to delete snapshots, you could ask the same question: why look at the name rather than the snapshot object? This method allows you to specify if you want to keep or delete multiple snapshots or only ones with certain properties. You may be in a situation where you’ve given multiple snapshots the same name, and you may only want the first few or most recent few snapshots. Pruning like this will save more space in the data store. (Note: This utility can also be combined with a dual list picker to select the VMs differently than in a standard array.)
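
As a rough illustration of that pruning idea, the sketch below (pyVmomi again, with a hypothetical function name) keeps only the newest snapshots matching a given name and deletes the rest. RemoveSnapshot_Task is the standard vSphere API call; the keep count and ordering are choices you would adjust to your own policy.

from pyVmomi import vim

def prune_snapshots_by_name(vm, name, keep=1):
    """Delete all snapshots on a VM matching `name`, keeping the `keep` newest."""
    matches = []

    def walk(tree_nodes):
        for node in tree_nodes:
            if node.name == name:
                # node.createTime tells us which duplicate is newest
                matches.append((node.createTime, node.snapshot))
            if node.childSnapshotList:
                walk(node.childSnapshotList)

    if vm.snapshot is None:
        return []

    walk(vm.snapshot.rootSnapshotList)
    matches.sort(key=lambda pair: pair[0])           # oldest first
    to_delete = matches[:-keep] if keep else matches

    tasks = []
    for _created, snapshot in to_delete:
        # RemoveSnapshot_Task returns a vSphere task; removeChildren=False
        # deletes only this snapshot and consolidates its data into the children.
        tasks.append(snapshot.RemoveSnapshot_Task(removeChildren=False))
    return tasks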

A Snapshot is Worth a Thousand Data Points

Applying this KGI utility for VMware Orchestrator in your own workflows makes snapshot management and manipulation a breeze!

We hope you got some inspiration for your own workflows. Stay tuned for more VMware Orchestrator tips! There’s plenty more to come from the blog and on our YouTube channel. In the meantime, fill out the form below to get in touch with automation experts like Troy:

VMware Orchestrator Tips: Using Dual List Input Fields to Select VMs

Kinney Group’s own Troy Wiegand hosts this video blog series, which will outline some simple tips for success with VMware vRealize Orchestrator. Join Troy as he applies his automation engineering expertise to shed some light on VMware! 

An exciting upcoming feature for VMware is the capability to link lists to variables in VMware vRealize Orchestrator (VRO), a solution that creates workflows to simplify IT operations through automation. As we wait eagerly for that tool to be made available, we have a Kinney Group workaround that I’d like to share. This solution provides a user-friendly interface for running multiple operations on our Virtual Machines (VMs) at the same time. 

Utilizing Dual Lists

Today, we’ll be using dual list input fields to select VMs in VMware. Starting within the workflow, go to “Edit” mode along the top banner and click “Run.” This will open the dual list field. As in other selection interfaces, use “Shift” to select a range or “Control”/“Command” to select nonconsecutive items. The vRO interface considers anything on the right to be selected; anything remaining on the left won’t be. By viewing the “Virtual Machines” page, you can then see that modifying the contents of each list in turn modifies the value for the input shown below.

Figure 1 – All VMs are selected (on the right) except the two in the left box

Reviewing Your Script

Viewing the details of the script can help illuminate what exactly you’re inputting into the form. This view can be found within the source value; you can click “Edit” on a value to get a bigger view. For each VM in the array, the script takes the name and inserts it into a sorted array. Make sure the script ends with return result so the input populates correctly in the dual list.

Top KGI Tip: enter your commands in a while loop to keep track of work within asynchronous tasks! 

Figure 2 – The script, including while loop and array
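
Outside of vRO, the same two ideas (building a sorted list of VM names and polling an asynchronous task in a while loop) look roughly like the pyVmomi sketch below. The hostname and credentials are placeholders, and this is a hedged illustration rather than the script shown in Figure 2.

import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def sorted_vm_names(content):
    """Return every VM name in the inventory, sorted, like the dual list source."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return sorted(vm.name for vm in view.view)
    finally:
        view.DestroyView()

def wait_for_task(task, poll_seconds=2):
    """Poll an asynchronous vSphere task in a while loop until it finishes."""
    # Call this on any task object, e.g. the result of a snapshot or clone operation.
    while task.info.state in (vim.TaskInfo.State.running, vim.TaskInfo.State.queued):
        time.sleep(poll_seconds)
    if task.info.state == vim.TaskInfo.State.error:
        raise task.info.error
    return task.info.result

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()        # lab only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com", user="user", pwd="password", sslContext=ctx)
    try:
        print("\n".join(sorted_vm_names(si.RetrieveContent())))
    finally:
        Disconnect(si)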

Input Form

Open the “Input Form” viewer. You’ll see we have one input. Instead of feeding a virtual machine array into this dual list, we’ll be feeding in a string of several VMs. To populate the Dual List pickers, go to “Values” on the right-hand side of the screen and select a source. I’ll be using the “External Source” field (shown here as the Default Value). Using a text field to view the input string, make sure your Value source is set to Compute a value and your operator is set to Concatenate in the “All VMs” field.  

Figure 3 – the “Values” field for in_vm_csv with desired settings

View your Script in Schema

Once the source is inputted, we can view the script in the Schema tab to undo the array formatting. Here you can choose either to use “allVms” or to select only certain VMs. Generally, we turn the visibility “off” for this field to ensure a cleaner visual experience. Save your changes before navigating to the “Run” tab.  

Figure 4 – a view of the script within the “Schema” tab

The Moment of Truth

In the “Run” tab, click on “Virtual Machines” and select which inputs to use—then run the script! Use the “Logs” tab to see how the script is populating. As a result, we can see that VRO found an object for the selected fields. This solution has many applications, but we most often use it when running multiple operations on our virtual machines at the same time.

Figure 5 – the resultant log, with the “Found” object highlighted

We hope you picked up some expertise for your VRO toolkit. Stay tuned for more VMware Orchestrator tips! In addition to handy automation workarounds like this one, you can expect posts on workflow execution and snapshot viewing in the future. In the meantime, fill out the form below to get in touch with automation experts like Troy:  

Contact us!

Automating STIG Compliance and Reporting with Puppet

As a long-time soldier on the Information Assurance battlefield, I am all too familiar with the burdens of security and compliance, not to mention the conflict they can present in maintaining system operations.  As a leading partner with Puppet, Kinney Group, Inc. has developed a solution to improve operational security posture while dramatically reducing the time and effort required to achieve compliance and produce required documentation.  Automating security compliance, exempting nodes from enforcing security policy as needed, and on-demand, up-to-date, customized STIG checklists... where has this been all my life?


Check your STIG Boxes with Puppet 

Puppet is a configuration management tool used to automate compliance with regulatory requirements.  In the Department of Defense (DoD), the Defense Information Systems Agency (DISA) provides the Security Technical Implementation Guides (STIGs) as the security standard.  Deployed systems are configured in compliance with the STIG, after which the system's Information Assurance Manager (IAM) must document the security posture of each system against this standard using the STIG checklist.  The STIG checklist is an eXtensible Markup Language (XML) document that records the state of each finding from the applicable STIG. It also includes any mitigation or additional details the approval authority might need to certify and accept the system for operations.  The checklist file is included in the accreditation package for security certification and approval to operate.

Typically, site system engineers and administrators perform the security hardening through some combination of manual and automated methods.  This can be a labor-intensive process, and the results are often inconsistent, depending on the level of expertise of the staff performing the task.  DISA's automation tools only audit the status of findings, meaning they stop short of making corrections.  In addition, the DISA-provided automation is incomplete, leaving around 30% of findings to be reviewed manually.

The second half of the security accreditation process is the required documentation. A Java-based graphical tool creates the STIG checklists.  The IAMs often rely on inputs from the administrator who performed the security hardening to complete the checklist.  Sites often resort to keeping a master spreadsheet of findings and mitigations, which they then copy and paste into checklists as needed.  It's also common simply to create a single checklist and copy it for each system to save time.  This process is very time-consuming, but taking shortcuts often results in inaccurate documentation and defeats the purpose of the accreditation process.


The New Solution for STIG Compliance 

There are several solutions in the marketplace that offer automated security hardening in compliance with the STIG.  While some of them can assess compliance and even remediate security findings, Puppet automation is uniquely suited to address both the compliance and the reporting use cases.  The KGI-developed STIG implementation for RHEL7 can configure a system to implement security as the STIG requires.  With proper Puppet server infrastructure, managed nodes not only apply the required security but maintain that security posture, automatically correcting the "configuration drift" that happens as systems change over time.

This is accomplished by deploying custom Puppet modules developed by KGI that apply configurations to meet STIG requirements. When these modules apply configuration and remediate drift, that activity is logged by Puppet. A key feature of the compliance Puppet modules we develop is that they allow exceptions: individual vulnerabilities can be skipped (or not) for a single node or a subset of nodes. Our modules are designed to log this exception information when Puppet runs as well. Another key component of the Puppet solution is PuppetDB, a database that stores the configuration state as reported by every managed node.  This database can be leveraged to dynamically build the STIG checklist files on demand, using near real-time data from the latest run of the Puppet agent.

The following is terminal output from a utility generating a STIG checklist file using a combination of outputs from the latest Puppet catalog and reports for the given node, as stored in puppetdb.  This report took about four seconds to generate and automatically populated nearly 250 security findings from the STIG.  Note that the output below has been truncated so we don’t have 250 open/closed/excluded records here:  


./checklist.py  rh7.example.com 

Locating certificates for puppetdb authentication 

Obtained latest report for node: rh7.example.com 

Lookup for catalog with uuid: 749fdd6e-9d8c-4f56-b8ca-d6d9a9995fd6 

Found catalog with uuid: 749fdd6e-9d8c-4f56-b8ca-d6d9a9995fd6 

Obtained OS facts for the node 

Obtained the list of assigned STIG classes 

Obtained networking facts for the node 

Result Summary: 

------------------------------ 

Vul-ID: V-204427 is closed 

Vul-ID: V-204424 is closed 

Vul-ID: V-204429 is excluded 

Vul-ID: V-228563 is closed 

Vul-ID: V-228564 is open 

Vul-ID: V-204617 is closed 

Vul-ID: V-204608 is open 

Vul-ID: V-204499 is closed 

Vul-ID: V-204579 is excluded 

Vul-ID: V-204418 is closed 

Vul-ID: V-204419 is open 

Vul-ID: V-204578 is closed 

Vul-ID: V-204470 is closed 

Vul-ID: V-204471 is open 

Vul-ID: V-204472 is open 

Vul-ID: V-204473 is open 

Vul-ID: V-204524 is closed 

Vul-ID: V-204525 is closed 

Vul-ID: V-204608 automation data is being overridden 

Applied overrides from custom YAML 

---------------------------------------- 

Checklist saved to: ~/checklists/rh7.example.com_2021-02-01.ckl 
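
The checklist utility shown above is KGI's own tooling, so the sketch below is not its source code. It is only a hedged illustration of how a script can pull the same raw material (the latest report and the compiled catalog for a node) out of PuppetDB's v4 query API using the Puppet SSL certificates, which matches the "Locating certificates for puppetdb authentication" step in the output. Hostnames and file paths are placeholders.

import json
import requests

PUPPETDB = "https://puppetdb.example.com:8081"            # default PuppetDB SSL port
CERT = ("/etc/puppetlabs/puppet/ssl/certs/client.pem",     # placeholder cert/key paths
        "/etc/puppetlabs/puppet/ssl/private_keys/client.pem")
CA = "/etc/puppetlabs/puppet/ssl/certs/ca.pem"

def latest_report(node):
    """Fetch the most recent Puppet report for a node from PuppetDB."""
    params = {
        "query": json.dumps(["=", "certname", node]),
        "order_by": json.dumps([{"field": "receive_time", "order": "desc"}]),
        "limit": 1,
    }
    resp = requests.get(f"{PUPPETDB}/pdb/query/v4/reports",
                        params=params, cert=CERT, verify=CA)
    resp.raise_for_status()
    reports = resp.json()
    return reports[0] if reports else None

def node_catalog(node):
    """Fetch the most recently compiled catalog for a node (assigned classes live here)."""
    resp = requests.get(f"{PUPPETDB}/pdb/query/v4/catalogs/{node}",
                        cert=CERT, verify=CA)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    node = "rh7.example.com"
    report = latest_report(node)
    catalog = node_catalog(node)
    print("Report status:", report["status"] if report else "no report found")
    print("Catalog uuid:", catalog.get("catalog_uuid"))

The open/closed/excluded decision for each finding then comes from comparing what is in that catalog and report, which the next section walks through.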


The KGI Puppet STIG Module in Action 

Here are some examples from the generated checklist file as seen in the DISA STIG Viewer application.  The first example is a STIG finding that is marked as Not Reviewed because the corresponding Puppet class is not assigned to the node.  The KGI Puppet STIG module is written in such a way that it is easy to exempt findings as needed for a single node or group of nodes.  This allows for flexibility to balance security compliance with operational requirements that may conflict with specific findings.  In this case, the node is exempt from enforcing the finding, and we can see that the comment reads "Puppet class not assigned."

Figure 1 – STIG finding marked as "Not Reviewed"; the comment reads "Puppet class not assigned"

This next example shows a finding that was determined to be closed.  A closed finding is one that has the corresponding Puppet class assigned in the catalog, but the latest report from the agent has determined that no resources were required from the assigned class.  The status is "Not a Finding," and the comment contains the note "Enforced by Puppet class assignment."

Figure 2 – STIG finding marked as closed; the comment reads "Enforced by Puppet class assignment"

An open finding will have more detail as to why it is open.  From a Puppet perspective, these are the resources that needed to be applied during the last Puppet run.  In this case, the system did not detect a virus scanner, so a "notify" resource was applied.  The complete Puppet resource information is populated in Finding Details, and the comments contain the message "Identified by Puppet class assignment."

Figure 3 – A "notify" resource was applied because the system did not detect a virus scanner; the comment reads "Identified by Puppet class assignment"
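
Putting the three cases above together: a finding is Not Reviewed when its Puppet class isn't assigned, closed ("Not a Finding") when the class is assigned and the last run needed no corrective resources, and Open when corrective resources fired. Here is a simplified sketch of that decision using made-up class names and data structures, not KGI's actual code:

def classify_finding(vul_id, finding_class, assigned_classes, corrective_events):
    """Classify one STIG finding from Puppet catalog and report data.

    assigned_classes:   set of Puppet class names present in the node's catalog
    corrective_events:  dict mapping class name -> list of resources Puppet
                        had to apply or correct on the last run (from the report)
    """
    if finding_class not in assigned_classes:
        # Class not assigned: the node is exempt, so the finding is left for review.
        return vul_id, "Not Reviewed", "Puppet class not assigned"
    if corrective_events.get(finding_class):
        # Puppet had to fix (or notify about) something: the finding is still open.
        return vul_id, "Open", "Identified by Puppet class assignment"
    # Class assigned and nothing needed correcting: the control is enforced.
    return vul_id, "Not a Finding", "Enforced by Puppet class assignment"

# Example with hypothetical data mirroring the report output above:
assigned = {"stig::rhel7::v204427", "stig::rhel7::v228564"}
events = {"stig::rhel7::v228564": ["Notify[virus scanner not detected]"]}
print(classify_finding("V-204427", "stig::rhel7::v204427", assigned, events))  # closed
print(classify_finding("V-228564", "stig::rhel7::v228564", assigned, events))  # open
print(classify_finding("V-204429", "stig::rhel7::v204429", assigned, events))  # not reviewed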

This solution will save you hours in producing STIG checklists for the security accreditation process (and it's super cool).  But what if you're mitigating findings outside of the standard STIG-prescribed methods?  Do you need to go into the checklist file(s) and update them with actions local to your site?  If you were paying attention to the last couple of lines of the report, you may have noticed that "automation data is being overridden."  The reporting tool allows the automated finding data from Puppet to be augmented as needed from a source outside of Puppet.  A previous site where I worked used VMware LogInsight (you could also use Splunk) to forward log events instead of the STIG-prescribed "rsyslog."  Because of this, we would mark the related findings as closed, since we were implementing the security requirement with a method other than the default assumed by the STIG.


Stay Tuned!

In an upcoming blog post, we’ll cover the integration of the Puppet security solution with Splunk to provide visibility and analysis of the secure Puppet environment. 

For more information or for help getting these kinds of results at your site, fill out our form below: 

Using Puppet Trusted Facts Part 2: Improving Security


In Part 1 of this two-part blog series, I covered the basics of Puppet facts. I defined and provided examples of various types of facts in Puppet, including core, custom, and external facts. These fact types are deployed in a similar way, and because they are the most commonly used, we sometimes refer to them as "normal facts." That brings us to the main topic of Part 2: Puppet trusted facts. We'll examine what trusted facts are and how to use them to create higher levels of security for sensitive data held in Puppet.

Introducing Puppet Trusted Facts

There is a lot of capability packed into each of the normal fact types, and they can be a very powerful tool in your Puppet implementation. However, there is one thing about all of these fact types that could present a bit of a security concern: they are all self-reported by the node, which means that there is no guarantee of their accuracy.

Let’s consider our previous example of a custom fact for a site identifier. The Puppet master depends on the node to report that information accurately so that it can send the appropriate configuration information back to the node. A site identifier might be used to determine what secrets or sensitive information should be deployed to a node.

By nature of how they are deployed, custom and external facts could be manipulated, maliciously or otherwise, providing a potential opportunity to compromise sensitive information or worse. Trusted facts can be used to address this security issue.

How to Use Trusted Facts

Trusted facts are embedded in the certificate that is used to secure the connection between the node and the Puppet master. This implies that the certificate authority has checked and approved these facts and will prevent them from being overridden manually. We now have a method available to ensure that the fact being sent to the master is accurate because trusted facts are immutable.

Trusted facts are actually keys available in a special hash called a $trusted hash. Custom information is embedded in another hash that is nested in the $trusted hash called extensions.

The $trusted hash might look something like the following example:

{
     'authenticated' => 'remote',
     'certname'      => 'appserver01.sample.com',
     'domain'        => 'sample.com',
     'extensions'    => {
         pp_site   => 'datacenter013',
         pp_region => 'alpha',
         pp_env    => 'production',
     },
     'hostname'      => 'appserver01'
}

In the example above, we have extensions for site, region, and environment, but how do we get that information into the certificate? When the connection request is made from the node to the Puppet master, these extensions must be included in the request. From a Linux node, the command to download the Puppet agent software from the master and embed the extensions into the certificate would look like the following:

curl -k https://puppetserver:8140/packages/current/install.bash | sudo bash -s agent:certname=appserver01.sample.com extension_requests:pp_site=datacenter013 extension_requests:pp_region=alpha extension_requests:pp_env=production

Once the node's request has been accepted by the master, the extension data elements are stored in a file on the node called csr_attributes.yaml. This file can typically be found at /etc/puppetlabs/puppet/ on Linux machines and C:\ProgramData\PuppetLabs\puppet\etc on Windows-based machines. The file will contain an extension_requests section that looks similar to the following:

custom_attributes:
  2.1.243.443568.1.9.7: 343gtrbhryts87739380kdjfjf6376hd
extension_requests:
  pp_site: datacenter013
  pp_region: alpha
  pp_env: production

Another, more manual, method you can use to deploy trusted facts is to have your csr_attributes.yaml file in place on the node prior to making and approving the certificate request. Since trusted facts are immutable once the certificate request is signed, any desired data must be present before the Puppet agent requests its certificate for the first time.

Trusted Facts as a Security Measure

Now that we have trusted facts implemented, we can easily access them in our Hiera hierarchies and in our Puppet code. Using the $trusted hash, we are able to access them just like we would any other fact in Puppet. Trusted facts might take a little more effort to deploy, because you either need to do it manually or have an automated provisioning process in place that can pre-stage your csr_attributes.yaml file or run the curl command to download and install the Puppet agent from the master.

Since trusted facts cannot be modified during the lifecycle of a node, there are special use cases where they are much more useful than normal fact types. In a Puppet environment where additional layers of security are needed to meet security requirements or ensure the integrity of your systems, trusted facts can be a very useful tool.

Trusted facts may be worth considering in the following use cases:

  1. Preventing the inadvertent application of non-production configuration to production nodes.
  2. Preventing secure data or information from being exposed to the wrong users.
  3. Providing an additional layer of security for passing security audits.

Kinney Group was recently named the recipient of Puppet’s Channel Partner of the Year award for Puppet Government Partner of the Year for 2018. For more information about Puppet IT Automation services offered by Kinney Group, contact us here.

Using Puppet Trusted Facts Part 1: An Intro to Puppet Facts


In this two-part blog series, I am ultimately going to address using Puppet trusted facts: what they are, how to use them, and most importantly, how to use them as an added security measure. However, before I get to trusted facts in Part 2, I’d like to make sure we’ve covered the basics here in Part 1: So first, what are Puppet facts?

The Basics: Puppet Facts

Facts in Puppet are nothing new. Facts are information gathered about a node by a tool called Facter. Facts are deployed as pre-set variables that can be used anywhere in your Puppet code. Facter is installed automatically with the Puppet agent software.

When a Puppet agent run takes place on a node, the first thing that happens is that Facter gathers up information about the node and sends that information to the Puppet master server. The Puppet master then uses that information to determine how the node should be configured and sends configuration information back to the node. The Puppet agent uses that information to apply the desired configuration to the node.

Some examples of core facts that are generated by Facter by default are:

  • Operating System
  • Kernel
  • IP Address
  • FQDN
  • Hostname

Typically, facts—once they are sent to the Puppet master—are stored in the PuppetDB (when in use), which means that the Puppet master actually has a detailed inventory of information about your infrastructure. PuppetDB’s API provides a powerful way to share that information with other systems.
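
For example, PuppetDB's v4 query API can hand that inventory to other systems over HTTP. The snippet below is a minimal, hedged sketch; the hostname and certificate paths are placeholders you would replace with your own.

import requests

# Placeholders: point these at your PuppetDB host and Puppet SSL files.
PUPPETDB = "https://puppetdb.example.com:8081"
CERT = ("/etc/puppetlabs/puppet/ssl/certs/client.pem",
        "/etc/puppetlabs/puppet/ssl/private_keys/client.pem")
CA = "/etc/puppetlabs/puppet/ssl/certs/ca.pem"

# Ask PuppetDB for every fact Facter reported for one node.
resp = requests.get(f"{PUPPETDB}/pdb/query/v4/nodes/appserver01.sample.com/facts",
                    cert=CERT, verify=CA)
resp.raise_for_status()
for fact in resp.json():
    print(fact["name"], "=", fact["value"])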

What makes Facter even more powerful is that you can also create your own Facter facts called custom facts or external facts. These facts are either deployed within your Puppet modules, generated via a script, or embedded in designated files on your Puppet nodes.

Puppet Custom and External Facts

Custom and external facts give you the ability to attach your own metadata to a node so that you can use them in your Puppet code. One common example would be a custom fact for a site identifier that indicates where a node is deployed in the data center. This fact could be generated in a couple of ways: either by a custom script or a flat file deployed to a designated facts directory on the node.
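
As an illustration, an external fact for that site identifier can be nothing more than a small executable dropped into Facter's facts.d directory that prints key=value pairs. The script below is a hedged example; the hostname-based site lookup is an invented convention, not a prescribed implementation.

#!/usr/bin/env python3
# Example external fact: /etc/puppetlabs/facter/facts.d/site_id.py (make it executable).
# Facter runs every executable in facts.d and parses key=value lines from stdout.
import socket

def site_from_hostname(hostname):
    """Invented convention: host names look like web01.datacenter013.example.com."""
    parts = hostname.split(".")
    return parts[1] if len(parts) > 2 else "unknown"

print(f"site_id={site_from_hostname(socket.getfqdn())}")

With that file in place, Facter reports a site_id fact on the next run, and it can be used in Puppet code or Hiera like any other fact.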

Most facts can be changed during the lifecycle of a node when the characteristics of a node are changed. For example, if a machine’s operating system is updated, that information is automatically updated by Facter and sent back to the master on the next Puppet agent run.

Fact Types in Puppet

Core, custom, and external facts are deployed in a similar way even though they are generated slightly differently. We sometimes refer to these as normal facts because they are the most commonly used.

  • Core facts: Built-in facts that ship with Facter.
  • Custom facts: Require Ruby code within your Puppet module to produce a value.
  • External facts: Generated by either pre-defined static data on the node or the result of running an executable script or program.

Now that we've covered the basics with regard to Puppet facts, we're prepared to pick back up in Part 2 of this series to cover Puppet trusted facts. More specifically, I will address how trusted facts in Puppet can add additional layers of security for meeting security requirements or ensuring the integrity of your systems.

And with that, we can move on to Part 2 in this series: Using Puppet Trusted Facts: Improving Security.

Kinney Group Named Puppet’s Government Partner of the Year 2018


INDIANAPOLIS, IN – February 14th, 2019 – Kinney Group today announced it has been named the recipient of Puppet’s Channel Partner of the Year award for Puppet Government Partner of the Year for 2018. In receiving this award, Kinney Group was recognized for being a top performing partner in revenue, solutions, and field engagement, as well as for making continuous contributions to drive customer success with automation.


"We are honored and humbled to be recognized as Puppet Government Partner of the Year," said President and CEO Jim Kinney. "We continue to view Puppet as the best platform in the market for helping Government customers harness the power of automation to address security requirements, enable digital transformation, and save millions in funding each year by eliminating manual processes."

The annual Puppet Channel Partner of the Year awards honor Puppet’s channel ecosystem for delivering customer excellence and innovative solutions. This year’s award winners also demonstrated exemplary performance in the implementation of Puppet technology. The program recognized thirteen partners globally in seven categories.

“Puppet is dedicated to building solutions that allow customers to automatically deliver and operate all of their software across their entire lifecycle in any environment,” said John Schwan, vice president of global partner sales, Puppet. “Key to this success is our customer-centric partners. We congratulate Kinney Group on its Puppet Channel Partner of the Year Award and applaud its ongoing commitment to drive enterprises forward.”

Shawn Hall, Director of the Next Generation Data Center team at Kinney Group, offered this:

"Puppet as a platform provides tremendous value to our Government customers every day, especially in the areas of security and compliance. With Puppet's newest capabilities and integrations with tools like Splunk, we are able to utilize Puppet to deliver on even more compelling use cases. Puppet is a leader in their space, and we are excited to continue this great partnership as we do great things for our customers with Puppet."

About Kinney Group

Kinney Group is a solutions-oriented professional services consulting firm specializing in automation and analytics to harness the power of IT in the cloud to improve lives. Security is in Kinney Group’s DNA, enabling the company to integrate the most advanced automation, analytics, and infrastructure technologies as an optimized solution powering IT-driven mission and business processes in the cloud for federal agencies and Fortune 1000 companies. We are an elite team with a unique combination of credentials for strict security environments who serve our customers with an unexpected experience. We specialize in Splunk, AppDynamics, Puppet, and VMware to serve our customers as they journey through digital transformation. Learn more at kinneygroup.com.

The Cybersecurity Roadmap

Improving your cybersecurity posture is a journey. Don’t be frozen by where to start. Transcend the traditional security monitor and utilize real-time, all-time analytics for security.

Cybersecurity Roadmap

To download a copy of The Cybersecurity Roadmap, please click on the image.

Configuring Patches By Hand Invites Hacks

You Think Your Patching Strategy Is Good Enough?

If your organization is breached because of a missing patch, it is not only embarrassing, it can get you fired. We see it all the time in the news. The fact is, there is a level of negligence with most hacks, since many of them can be prevented with a proactive, automated patching strategy. For example, a 2015 report observed that 99.9% of exploited Common Vulnerabilities and Exposures (CVEs) had been compromised more than a year after the CVE was made public. Some confirmed breaches were associated with CVEs that were published in 1999.

FIGURE 1: According to the Verizon 2015 Data Breach Investigations Report, this is the count of exploited CVEs in the 2014 calendar year, tallied by the CVE publish date.

In today’s datacenter, automating patch management is a necessity. With security vulnerabilities coming out at record speed, the days of hand-jamming patches onto servers are gone. Manual patching is error-prone, it exposes systems to unnecessary risk, and your organization simply cannot keep up forever.


A Day In The Life Of A Systems Administrator

A security patch is just like a code release (it could even be a code release) when it comes to rolling out changes in your data center. Many of your admins patch manually, and they do so constantly. Here is an example of their process (buckle up; it’s grueling):

They might start, for example, by downloading the latest Red Hat Package Manager (RPM) updates for their Linux servers. Then, they go to each Windows server for scanning, downloading, and installing patches. Most of your admins have this juggling down to a science. They manually remote into as many Windows servers as possible and check to see which patches are needed. As they are checking for patches, they are concurrently fixing a different server. Rinse and repeat. By the time they get to the last Windows server, it is time to go back to the first server and select the patches they want to install, as highlighted by the scan results. They would then begin installing. Keep in mind, they are still hopping back and forth between Windows servers. Rinsing and repeating, again, until they make it to the last server. When your admins finally do make it to the last server, the Linux RPMs would be finished. They probably fire off a handy-dandy, home-rolled, almost-reliable SSH script that pushes the RPMs to the servers and kicks off the installation.

Now it would be time for them to run a vulnerability and patch scan to see if all of their servers were actually patched. Usually, they could have an environment of a thousand servers done in a week or so. Hooray! Just in time for the next round of patching to begin.

If this sounds tiring and error-prone, it’s because it is. And it is unnecessary. With a manual, clunky, and usually reactive patching strategy like this, organizations need a dedicated patching team to constantly hand-jam patches, which causes a ripple effect across the rest of their work. That approach worked when environments were smaller 10 years ago; today, it is often impossible.


To Patch, Or Not To Patch

System Administrators will inevitably fall short somewhere, which forces the CIO to make a choice:

  1. The first option is to have out-of-date patches, but let the system administrators do all of their regularly assigned work.
  2. The second option is to have up-to-date patches and deal with a datacenter that never gets upgraded and systems or software that never get installed or configured.
  3. The third option is to add additional labor, an expensive and often non-viable option.

If this sounds like the state of your organization, it is time to look at automating your patching process.


Facebook Never Gets Hacked. Take Small Steps To Emulate Them.

One day, Facebook could get hacked, but they haven’t yet, and that is notable. The well-known social media giant is a great example of automating patches. Facebook’s release cadence is extreme, and they are open about how aggressively they automate as much of their dev and deploy environments as possible. While that pace may not be appropriate for your organization, it is a testament to a culture of proactive patching with reportability at the end of each patching cycle. This allows you to have a secure deployment pipeline updating the systems in your data center frequently. There are many tools out there that allow patching of the major datacenter operating systems. At the center of the DevOps function are forward-leaning tools that can run a patch analysis on an entire network of Linux and Windows servers and download the necessary patches. The system can then push the patches (either automatically or manually in bulk) to the servers during your maintenance window.

No one likes downtime or bogging down a network for the sake of patching. In fact, companies sometimes avoid critical patching entirely if it will affect the end-user experience at all. This is the wrong way to think about it: a data breach has a far larger negative impact on the end user than temporary downtime ever will. Leverage these forward-leaning tools’ ability to “stage” patches for mission-critical systems.


Patch Smarter, Not Harder.

Your System Administrators would give anything to have an automated patching solution, and you have to provide the culture that lets the engineering teams move in that direction. One of the largest benefits to a CISO or manager is the improved reportability at the end of the patch cycle, thanks to the sophistication of the automation tools: a clear view of what has failed, what has succeeded, and exactly what has been installed on their servers. Combine this with the consistency of patching, the time returned to Systems Administrators, and the speed of patching, and you achieve unparalleled business value.

Click here to download and save this blog post as a PDF.

Sources:

Paul, Ryan. “Exclusive: A Behind-the-scenes Look at Facebook Release Engineering.” Ars Technica. 5 Apr. 2012. Web. 9 Nov. 2015.
Rossi, Chuck. “Release engineering and push karma: Chuck Rossi.” Interview by Facebook Engineering. Facebook. Facebook, 2009. Web. 5 Apr. 2012.
Verizon (2015). Data Breach Investigations Report. Retrieved from: http://www.verizonenterprise.com/DBIR/2015/