Automating STIG Compliance and Reporting with Puppet

As a long-time soldier on the Information Assurance battlefield, I am all too familiar with the burdens of security and compliance, not to mention the conflict they can present in maintaining system operations.  As a leading Puppet partner, Kinney Group, Inc. has developed a solution to improve operational security posture while dramatically reducing the time and effort required to achieve compliance and produce the required documentation.  Automated security compliance, the ability to exempt nodes from enforcing security policy as needed, and on-demand, up-to-date, customized STIG checklists: where has this been all my life? 


Check your STIG Boxes with Puppet 

Puppet is a configuration management tool used to automate compliance with regulatory requirements.  In the Department of Defense (DoD), the Defense Information Systems Agency (DISA) provides the Security Technical Implementation Guides (STIGs) as the security standard.  Deployed systems are configured in compliance with the STIG, after which the system's Information Assurance Manager (IAM) must document the security posture of each system against this standard using the STIG checklist.  The STIG checklist is an eXtensible Markup Language (XML) document that records the state of each finding from the applicable STIG.  It also includes any mitigation or additional details the approval authority might need to certify and accept the system for operations.  The checklist file is included in the accreditation package for security certification and approval to operate. 
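For context, each finding in a checklist (.ckl) file is stored as a VULN element carrying the vulnerability ID, its status, and any notes. The fragment below is an illustrative, hand-trimmed sketch of that structure; the ID and comment text are examples, not taken from a real checklist:

```xml
<!-- Illustrative fragment of a single finding in a .ckl checklist file -->
<VULN>
  <STIG_DATA>
    <VULN_ATTRIBUTE>Vuln_Num</VULN_ATTRIBUTE>
    <ATTRIBUTE_DATA>V-204427</ATTRIBUTE_DATA>
  </STIG_DATA>
  <STATUS>NotAFinding</STATUS>
  <FINDING_DETAILS></FINDING_DETAILS>
  <COMMENTS>Mitigation or notes for the approval authority go here.</COMMENTS>
</VULN>
```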

Typically, site system engineers and administrators perform the security hardening through some combination of manual and automated methods.  This can be a labor-intensive process, and the results are often inconsistent depending on the level of expertise of the staff performing the task.  DISA's automation tools only audit the status of findings; they stop short of making corrections.  In addition, the DISA-provided automation is incomplete, leaving around 30% of findings to be reviewed manually. 

The second half of the security accreditation process is the required documentation.  A Java-based graphical tool creates the STIG checklists.  The IAMs often rely on inputs from the administrator that performed the security hardening to complete the checklist.  Sites often resort to keeping a master spreadsheet of findings and mitigations, which they then copy and paste into checklists as needed.  It's also common simply to create a single checklist and copy it for each system to save time.  This process is very time-consuming, and taking shortcuts often results in inaccurate documentation and defeats the purpose of the accreditation process. 


The New Solution for STIG Compliance 

There are several solutions in the marketplace that offer automated security hardening in compliance with the STIG.  While some of these tools can assess compliance and even remediate security findings, Puppet automation is uniquely suited to address both the compliance and the reporting use cases.  The KGI-developed STIG implementation for RHEL7 can configure a system to implement security as the STIG requires.  With proper Puppet server infrastructure, managed nodes not only apply the required security but maintain that security posture, automatically correcting the "configuration drift" that happens as systems customarily change over time. 

This is accomplished by deploying custom Puppet modules developed by KGI that apply configurations to meet STIG requirements.  When these modules apply configuration and remediate drift, that activity is logged by Puppet.  A key feature of the compliance Puppet modules we develop is allowing exceptions, where individual vulnerabilities can be skipped (or not) for a single node or a subset of nodes.  Our modules are designed to log this exception information when Puppet runs as well.  Another key component of the Puppet solution is "puppetdb," a database that stores the configuration state as reported by every managed node.  This database can be leveraged to dynamically build the STIG checklist files on demand, using near real-time data from the latest run of the Puppet agent. 
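As a rough sketch of the exception pattern described above (the class and parameter names here are illustrative, not the actual KGI module interface), each finding can be modeled as a small Puppet class whose enforcement is toggled per node or node group:

```puppet
# Illustrative only -- not the actual KGI module interface.
# One class per STIG finding; $enforced can be overridden from
# Hiera for a single node or a group of nodes to record an exception.
class stig_rhel7::v204427 (
  Boolean $enforced = true,
) {
  if $enforced {
    # ...resources that configure the system per the finding...
  } else {
    # Surfaces the exception in the agent report, so the checklist
    # builder can mark this finding as excluded for this node.
    notify { 'V-204427 excluded by exception': }
  }
}
```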

The following is terminal output from a utility generating a STIG checklist file using a combination of outputs from the latest Puppet catalog and reports for the given node, as stored in puppetdb.  This report took about four seconds to generate and automatically populated nearly 250 security findings from the STIG.  Note that the output below has been truncated rather than listing all 250 open/closed/excluded records: 



Locating certificates for puppetdb authentication
Obtained latest report for node:
Lookup for catalog with uuid: 749fdd6e-9d8c-4f56-b8ca-d6d9a9995fd6
Found catalog with uuid: 749fdd6e-9d8c-4f56-b8ca-d6d9a9995fd6
Obtained OS facts for the node
Obtained the list of assigned STIG classes
Obtained networking facts for the node

Result Summary:

Vul-ID: V-204427 is closed
Vul-ID: V-204424 is closed
Vul-ID: V-204429 is excluded
Vul-ID: V-228563 is closed
Vul-ID: V-228564 is open
Vul-ID: V-204617 is closed
Vul-ID: V-204608 is open
Vul-ID: V-204499 is closed
Vul-ID: V-204579 is excluded
Vul-ID: V-204418 is closed
Vul-ID: V-204419 is open
Vul-ID: V-204578 is closed
Vul-ID: V-204470 is closed
Vul-ID: V-204471 is open
Vul-ID: V-204472 is open
Vul-ID: V-204473 is open
Vul-ID: V-204524 is closed
Vul-ID: V-204525 is closed

Vul-ID: V-204608 automation data is being overridden
Applied overrides from custom YAML

Checklist saved to: ~/checklists/rh7.example.com_2021-02-01.ckl


The KGI Puppet STIG Module in Action 

Here are some examples from the generated checklist file as seen from the DISA STIG Viewer application.  The first example is a STIG finding that is marked as Not Reviewed because the corresponding Puppet class is not assigned to the node.  The KGI Puppet STIG module is written in such a way that it is easy to exempt findings as needed for a single node or group of nodes.  This allows for flexibility to balance security compliance with operational requirements that may conflict with specific findings.  In this case, the node is exempt from enforcing the finding, and we can see that the comment reads “Puppet class not assigned.” 

Figure 1 – STIG finding marked as “Not Reviewed” – comment reads “Puppet class not assigned”

This next example shows a finding that was determined to be closed.  A closed finding is one that has the corresponding Puppet class assigned in the catalog, but the latest report from the agent has determined that no resources were required from the assigned class.  The status is “Not a Finding,” and the comment contains the note “Enforced by Puppet class assignment.” 

Figure 2 – STIG finding marked as closed – comment reads “Enforced by Puppet class assignment”

An open finding will have more detail as to why it is open.  From a Puppet perspective, these are the resources that needed to be applied during the last Puppet run.  In this case, the system did not detect a virus scanner, so a “notify” resource was applied.  The complete Puppet resource information is populated in Finding Details, and the comments contain the message “Identified by Puppet class assignment.” 

Figure 3 – A “notify” resource was applied when the system did not detect a virus scanner – comment reads “Identified by Puppet class assignment”

This solution will save you hours in producing STIG checklists for the security accreditation process (and it’s super cool).  But what if you’re mitigating findings outside of the standard STIG-prescribed methods?  Do you need to go into the checklist file(s) and update them with actions local to your site?  If you were paying attention to the last couple of lines of the report, you may have noticed that “automation data is being overridden.”  The reporting tool allows the automated finding data from Puppet to be augmented as needed from a source outside of Puppet.  A previous site where I worked used VMware LogInsight (you could also use Splunk) to forward log events instead of the STIG-prescribed “rsyslog.”  Because of this, we would mark the related findings as closed because we were implementing the security requirement with a method other than the default assumed by the STIG. 
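The shape of such an override source will vary by site; as a hypothetical example, a YAML file keyed by vulnerability ID might carry the local status and justification:

```yaml
# Hypothetical override file -- keys and layout are illustrative.
overrides:
  V-204608:
    status: NotAFinding
    comments: >-
      Log forwarding is implemented with VMware LogInsight rather than
      rsyslog; the security requirement is met by an alternate method.
```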


Stay Tuned!

In an upcoming blog post, we’ll cover the integration of the Puppet security solution with Splunk to provide visibility and analysis of the secure Puppet environment. 

For more information or for help getting these kinds of results at your site, fill out our form below: 


Kinney Group presents at Puppet Camp Federal Government

Puppet Camp is here! We’re happy to be a part of this conversation with the Federal Government community. Tune in on Thursday, October 22 to hear an incredible list of presenters, including our very own Jim Kinney.



Sponsored by Carahsoft, Kinney Group and Norseman Defense Technologies, this free event focuses on exploring the unique challenges faced by those working in the federal sphere, while offering some practical solutions to make your day jobs easier. From best practices around security and compliance, to automating mundane tasks, you’ll hear the following talks:


  • Lessons learned from a decade of federal compliance automation with Puppet – Trevor Vaughn, Onyx Point
  • The Best Way to Secure Windows – Bryan Belanger, Fervid
  • Puppet Foundations and Futures – Jed Gresham and Stephen Potter, Puppet
  • And more!




Puppet CTO Abby Kearns will lead the keynote on how Puppet is helping federal customers achieve greater efficiency and improve security and compliance.

Want to learn more about the event? Click here to register!



The Kinney Way

The Puppet Enterprise and Puppet open source platforms are recognized as the market leaders for data center automation. The Kinney Group automation team has extensive experience with these platforms, and we possess the highest Puppet certifications available. Our services for Puppet include:

  • Puppet platform design, build, and implementation
  • Development of Puppet modules
  • Integration with VMware-based and other third-party orchestration technologies
  • Enablement of DevSecOps and CI/CD development environments
  • Automated configuration management and security control

With experience in deployments of all sizes in both Commercial and Public Sector environments, Kinney Group has extensive field experience delivering results with these platforms. Coupled with the deepest bench of senior Splunk architects and automation professionals in the world, and the most advanced platform certifications available, Kinney Group is uniquely positioned to make your data dreams come true.


Using Puppet Trusted Facts Part 2: Improving Security


In Part 1 of this two-part blog series, I covered the basics of Puppet facts. I defined and provided examples of various types of facts in Puppet, including core, custom, and external facts. Deployed in a similar way, we sometimes refer to these facts addressed in Part 1 as “normal facts,” because they are the most commonly used. That now brings us to the main topic of Part 2: Puppet trusted facts. We’ll examine what trusted facts are and how to use them to create higher levels of security for sensitive data held in Puppet.

Introducing Puppet Trusted Facts

There is a lot of capability packed into each of the normal fact types, and they can be a very powerful tool in your Puppet implementation. However, there is one thing about all of these fact types that could present a bit of a security concern: they are all self-reported by the node, which means that there is no guarantee of their accuracy.

Let’s consider our previous example of a custom fact for a site identifier. The Puppet master depends on the node to report that information accurately so that it can send the appropriate configuration information back to the node. A site identifier might be used to determine what secrets or sensitive information should be deployed to a node.

By nature of how they are deployed, custom and external facts could be manipulated, maliciously or otherwise, providing a potential opportunity to compromise sensitive information or worse. Trusted facts can be used to address this security issue.

How to Use Trusted Facts

Trusted facts are embedded in the certificate that is used to secure the connection between the node and the Puppet master. This implies that the certificate authority has checked and approved these facts and will prevent them from being overridden manually. We now have a method available to ensure that the fact being sent to the master is accurate because trusted facts are immutable.




Trusted facts are actually keys available in a special hash called a $trusted hash. Custom information is embedded in another hash that is nested in the $trusted hash called extensions.

The $trusted hash might look something like the following example:

     {
       ‘authenticated’ => ‘remote’,
       ‘certname’      => ‘’,
       ‘domain’        => ‘’,
       ‘extensions’    => {
         pp_site   => ‘datacenter013’,
         pp_region => ‘alpha’,
         pp_env    => ‘production’,
       },
       ‘hostname’      => ‘appserver01’,
     }

In the example above, we have an extension for site, region, and environment, but how do we get that information into the certificate? When the connection request is made from the node to the Puppet master, these extensions must be included in the request. From a Linux node, the command to download the Puppet agent software from the master and embed the extensions into the certificate would look like the following:

curl -k https://puppetserver:8140/packages/current/install.bash | sudo bash -s extension_requests:pp_site=datacenter013 extension_requests:pp_region=alpha extension_requests:pp_env=production

Once the node's request to the master has been accepted, the extension data elements are stored on the node in a file called csr_attributes.yaml. This file can typically be found at /etc/puppetlabs/puppet/ on Linux machines and C:\ProgramData\PuppetLabs\puppet\etc on Windows-based machines. The file itself will contain an extension_requests section that will look similar to the following:

custom_attributes:
  1.2.840.113549.1.9.7: 343gtrbhryts87739380kdjfjf6376hd
extension_requests:
  pp_site: datacenter013
  pp_region: alpha
  pp_env: production

Another, more manual, method you can use to deploy trusted facts is to have your csr_attributes.yaml file in place on the node prior to making and approving the node request. Since trusted facts are immutable once the certificate request is signed, any desired data must be present before the Puppet agent makes its certificate request for the first time.

Trusted Facts as a Security Measure

Now that we have trusted facts implemented, we can easily access them in our Hiera hierarchies and in our Puppet code. Using the $trusted hash, we are able to access them just like we would any other fact in Puppet. Trusted facts may take a little more effort to deploy: you either need to place the data manually or have an automated provisioning process in place that can pre-stage your csr_attributes.yaml file or execute the Puppet agent install by running the curl command to download the software from the master.
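For example (the extension names follow the earlier $trusted example; the Hiera hierarchy entry is illustrative), trusted data can drive both Hiera lookups and conditional logic in a manifest:

```puppet
# hiera.yaml hierarchy entry (illustrative) keyed on a trusted extension:
#   - name: "Per-site data"
#     path: "site/%{trusted.extensions.pp_site}.yaml"

# In a manifest, trusted facts read like any other variable:
if $trusted['extensions']['pp_env'] == 'production' {
  # ...apply production-only configuration here...
  notice("${trusted['certname']} is a production node in site ${trusted['extensions']['pp_site']}")
}
```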

Since trusted facts cannot be modified during the lifecycle of a node, there are special use cases where they are much more useful than normal fact types. In a Puppet environment where additional layers of security are helpful for meeting security requirements or ensuring the integrity of your systems, trusted facts can be a very useful tool.

Trusted facts may be worth considering in the following use cases:

  1. Preventing the inadvertent application of non-production configuration to production nodes.
  2. Preventing secure data or information from being exposed to the wrong users.
  3. Providing an additional layer of security for passing security audits.

Kinney Group was recently named the recipient of Puppet’s Channel Partner of the Year award for Puppet Government Partner of the Year for 2018. For more information about Puppet IT Automation services offered by Kinney Group, contact us here.


Using Puppet Trusted Facts Part 1: An Intro to Puppet Facts


In this two-part blog series, I am ultimately going to address using Puppet trusted facts: what they are, how to use them, and most importantly, how to use them as an added security measure. However, before I get to trusted facts in Part 2, I’d like to make sure we’ve covered the basics here in Part 1: So first, what are Puppet facts?

The Basics: Puppet Facts

Facts in Puppet are nothing new. Facts are information gathered about a node by a tool called Facter. Facts are deployed as pre-set variables that can be used anywhere in your Puppet code. Facter is installed automatically with the Puppet agent software.

When a Puppet agent run takes place on a node, the first thing that happens is that Facter gathers up information about the node and sends that information to the Puppet master server. The Puppet master then uses that information to determine how the node should be configured and sends configuration information back to the node. The Puppet agent uses that information to apply the desired configuration to the node.

Some examples of core facts that are generated by Facter by default are:

  • Operating System
  • Kernel
  • IP Address
  • FQDN
  • Hostname

Typically, facts—once they are sent to the Puppet master—are stored in the PuppetDB (when in use), which means that the Puppet master actually has a detailed inventory of information about your infrastructure. PuppetDB’s API provides a powerful way to share that information with other systems.

What makes Facter even more powerful is that you can also create your own Facter facts called custom facts or external facts. These facts are either deployed within your Puppet modules, generated via a script, or embedded in designated files on your Puppet nodes.




Puppet Custom and External Facts

Custom and external facts give you the ability to attach your own metadata to a node so that you can use them in your Puppet code. One common example would be a custom fact for a site identifier that indicates where a node is deployed in the data center. This fact could be generated in a couple of ways: either by a custom script or a flat file deployed to a designated facts directory on the node.
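As a minimal sketch of the flat-file approach (the fact name and value are illustrative; the directory is the standard facts.d location on Linux), an external fact is just a file on the node, and it is then usable like any other fact:

```puppet
# External fact file on the node (standard facts.d location on Linux):
#   /opt/puppetlabs/facter/facts.d/site.txt
# containing the single line:
#   site_id=datacenter013

# The fact is then available in Puppet code like any other fact:
notify { "This node reports site identifier ${facts['site_id']}": }
```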

Most facts can be changed during the lifecycle of a node when the characteristics of a node are changed. For example, if a machine’s operating system is updated, that information is automatically updated by Facter and sent back to the master on the next Puppet agent run.

Fact Types in Puppet

Core, custom, and external facts are deployed in a similar way even though they are generated slightly differently. We sometimes refer to these as normal facts because they are the most commonly used.

  • Core facts: Built-in facts that ship with Facter.
  • Custom facts: Require Ruby code within your Puppet module to produce a value.
  • External facts: Generated by either pre-defined static data on the node or the result of running an executable script or program.

Now that we’ve covered the basics of Puppet facts, we’re prepared to pick back up in Part 2 of this series to cover Puppet trusted facts. More specifically, I will address how trusted facts in Puppet can add additional layers of security for meeting security requirements or ensuring the integrity of your systems.

And with that, we can move on to Part 2 in this series: Using Puppet Trusted Facts: Improving Security.


Kinney Group Named Puppet’s Government Partner of the Year 2018


INDIANAPOLIS, IN – February 14th, 2019 – Kinney Group today announced it has been named the recipient of Puppet’s Channel Partner of the Year award for Puppet Government Partner of the Year for 2018. In receiving this award, Kinney Group was recognized for being a top performing partner in revenue, solutions, and field engagement, as well as for making continuous contributions to drive customer success with automation.




“We are honored and humbled to be recognized as Puppet Government Partner of the Year,” said President and CEO Jim Kinney. “We continue to view Puppet as the best platform in the market for helping Government customers harness the power of automation to address security requirements, enable digital transformation, and save millions in funding each year by eliminating manual processes.”




The annual Puppet Channel Partner of the Year awards honor Puppet’s channel ecosystem for delivering customer excellence and innovative solutions. This year’s award winners also demonstrated exemplary performance in the implementation of Puppet technology. The program recognized thirteen partners globally in seven categories.

“Puppet is dedicated to building solutions that allow customers to automatically deliver and operate all of their software across their entire lifecycle in any environment,” said John Schwan, vice president of global partner sales, Puppet. “Key to this success is our customer-centric partners. We congratulate Kinney Group on its Puppet Channel Partner of the Year Award and applaud its ongoing commitment to drive enterprises forward.”

Shawn Hall, Director of the Next Generation Data Center team at Kinney Group, offered this:

“Puppet as a platform provides tremendous value to our Government customers every day, especially in the areas of security and compliance. With Puppet’s newest capabilities and integrations with tools like Splunk, we are able to utilize Puppet to deliver on even more compelling use cases. Puppet is a leader in their space, and we are excited to continue this great partnership as we do great things for our customers with Puppet.”

About Kinney Group

Kinney Group is a solutions-oriented professional services consulting firm specializing in automation and analytics to harness the power of IT in the cloud to improve lives. Security is in Kinney Group’s DNA, enabling the company to integrate the most advanced automation, analytics, and infrastructure technologies as an optimized solution powering IT-driven mission and business processes in the cloud for federal agencies and Fortune 1000 companies. We are an elite team with a unique combination of credentials for strict security environments who serve our customers with an unexpected experience. We specialize in Splunk, AppDynamics, Puppet, and VMware to serve our customers as they journey through digital transformation. Learn more at
