Meet Atlas STIG Compliance

Like you, cyber criminals and bad actors are hard at work developing their technology, and they’re laser-focused on discovering new ways to infiltrate and exploit your organization. Modern operating systems (OS) and commercial off-the-shelf (COTS) networking equipment have come a long way in terms of security, but none are secure out of the box. And especially not to the exacting standards maintained by Department of Defense (DoD) commands, agencies, and contractors.

Are you prepared to defend your organization?

DISA’s configuration standards — known as Security Technical Implementation Guides, or STIGs — provide a way to make systems built from COTS equipment and operating systems far more secure. But applying them is a time-consuming, incredibly manual process. Hardening, documentation, and monitoring take countless hours, making the effort not only tedious but costly as well.

In this article, we’ll introduce you to the Atlas STIG Compliance application, which makes monitoring, reporting, and documentation push-button simple, reducing what used to be days of work to just minutes. Atlas could save you thousands of hours of effort and millions in STIG-compliance related costs.

What is Atlas STIG Compliance?

Atlas STIG Compliance is designed to help you collect, analyze, and interact with your compliance data all from within Splunk. It provides near real-time visibility into the status of your compliance documentation, powerful automation tools for documentation and checklists, and the ability to manage your STIG checklists, Security Content Automation Protocol (SCAP) scan outputs, and user-generated data.

The benefits of utilizing Splunk & automation

While there are other software-based solutions for automating STIG compliance, none provide the deep Splunk integration or the transformative automation capabilities of Atlas STIG Compliance. Built on industry-leading technology already in use by DoD agencies and more than 85% of Fortune 500 organizations, Atlas STIG Compliance helps you STIG faster, get secure sooner, pass your audits, and save millions in the process.

With Atlas STIG Compliance, you can:

Collect STIG compliance data from multiple sources to get a real-time view of your STIG compliance posture within Splunk.

Within Atlas STIG Compliance, you can create systems that let you view the status of your infrastructure in easy-to-create, easy-to-read dashboards.

Manage compliance documentation within the Splunk UI and export it in DISA STIG Viewer format.

Managing compliance documentation is an incredibly tedious, mind-numbing process. Due to the nature of the work and the ever-changing requirements, it’s easy to miss items that could cost your organization during an audit or, even worse, a cyberattack. Atlas STIG Compliance allows you to bulk-update STIG checklists within the Splunk UI, reducing manual editing, and to continuously collect STIG compliance data from multiple sources, whether you’re preparing for an audit or maintaining continuous compliance monitoring.

Automate STIG remediation with included automation modules.

Don’t risk non-compliance. Utilizing industry-standard automation technology from Puppet, Atlas customers can enforce automated remediation and checklist generation via included Puppet modules, shaving thousands of hours of remediation and documentation effort — and all the costs associated with those efforts.


The stakes are high

With Atlas STIG Compliance, you never have to risk non-compliance (or getting shut down… or losing funding…) again. Leading the way into the future requires being on the front-end of adapting game-changing technology. Automation is more than a buzzword — it’s a necessity to maintain your sanity, your budget, and your mission in an increasingly complex compliance landscape.

Ready to learn more? We’d love to give you a 1:1 introduction to Atlas, or get you started with a free 30-day trial so you can put Atlas STIG Compliance to the test in your own environment.

Meet Atlas Scheduling Inspector

Search is at the heart of a great Splunk experience, but poorly configured searches could be giving you inaccurate results, wasting system resources, or both. This is precisely why we built Scheduling Inspector for Atlas. In this article, we’ll take a look at the problems that lead to gaps, wasted time, and orphaned searches in Splunk, and how Scheduling Inspector can help you solve them instantly. The end result? Finely tuned searches displaying results you can trust.

What is Scheduling Inspector?

Scheduling Inspector ensures your Splunk searches meet best practices by investigating your alerts and scheduled searches for common errors around time spans and ownership. Scheduled searches can be misconfigured so that the time span and schedule differ, leading either to missed alerts and events or to wasteful searches that overtax your system with overlapping time spans.

The benefits of inspecting scheduled searches

Working on the fly with search, it’s easy to fall out of alignment with best practices. Revisiting scheduled searches and inspecting them — especially searches providing mission critical information — will ensure you’re working with the most reliable data available.

With Atlas Scheduling Inspector, you can:

Identify search coverage gaps by revealing misconfigured scheduled searches that miss data because of a mismatch between schedule and time range.

For example, a search scheduled to run every 15 minutes that only looks at the past 5 minutes of data will miss 10 minutes of data every time it runs. If this search is looking for critical errors or other notable events, it will miss any that fall within this gap.

Find wasteful time windows and eliminate them with powerful automation capabilities.

Imagine a search scheduled to run every 15 minutes which looks at the past 60 minutes of data — this search will look at the same “bucket” of events multiple times, wasting CPU resources and taking up valuable search slots.

Scheduling Inspector identifies orphaned searches and allows you to utilize powerful automations to reassign them to active Splunk owners or delete them.

Orphaned searches — created by accounts that no longer exist, and which Splunk doesn’t run until their ownership is reassigned — could lead to missing alerts or broken dashboards.
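The gap and overlap arithmetic behind these checks is simple to picture. Below is a hypothetical Python sketch of the comparison Scheduling Inspector automates; the `coverage` function and its messages are our own illustration, not Atlas's implementation:

```python
def coverage(interval_min: int, lookback_min: int) -> str:
    """Compare how often a search runs with how far back it looks."""
    if lookback_min < interval_min:
        # Gap: some minutes are never searched by any run.
        return f"gap: {interval_min - lookback_min} min of data missed per run"
    if lookback_min > interval_min:
        # Overlap: the same events are searched by multiple runs.
        return f"overlap: {lookback_min - interval_min} min re-searched per run"
    return "aligned: schedule and time range match"

# The two examples from the text:
print(coverage(15, 5))   # gap: 10 min of data missed per run
print(coverage(15, 60))  # overlap: 45 min re-searched per run
```

Scheduling Inspector surfaces both cases across every scheduled search at once, which is what makes it so much faster than auditing each search by hand.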


Atlas Scheduling Inspector inspects your search configurations — including time spans and ownership — to ensure they meet best practices. Doing this work manually could take hours or days, and you could still easily miss the gaps and wasteful time windows that Scheduling Inspector quickly and effortlessly brings forward.

You don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate.

Cue Atlas Assessment: Instantly see where your Splunk environment is excelling and where there’s room for improvement. From download to results, the whole process takes less than 30 minutes.


Meet Atlas App Awareness

Over time the installation, upkeep, and management of applications in your Splunk environment can become increasingly difficult. The more apps you install and use, the more you have to keep track of. What apps are installed? Are the versions of my apps the same throughout my environment? Is anyone even using these apps? While there’s not an easy way to answer these questions from within Splunk natively, App Awareness in the Atlas Platform allows you to get these answers and more, all from one easy-to-use interface.

Better Management of Splunk Apps

App Awareness in Atlas allows admins to see which Splunk apps are deployed in your Splunk environment, how they are being used, and who is using them. App Awareness also identifies inconsistencies in your Splunk apps by displaying differences in app versions or in default/local Knowledge Objects distributed across your Splunk deployment.

App Awareness is useful for helping to identify changes made to Splunk Apps over time, and understanding if any customizations or other preparations are needed before migrating to other servers, data centers, or Splunk Cloud (App Awareness is a companion application to the Atlas Splunk Migration Helper application).

Better Management of Splunk Knowledge Objects

App Awareness also provides expanded information for each installed application, including a list of all known knowledge objects for that app: their type, owner, who can see them, where they are stored, and the last time each was updated.

Tracking local knowledge objects is notoriously difficult without a tool like App Awareness. This information is crucial for troubleshooting problematic apps, planning a migration to other Splunk servers, or considering a move from Splunk On-premise to Splunk Cloud.

Now that you know how the Atlas App Awareness application can take the guesswork out of your Splunk migration and application management, why not check out our Atlas documentation for a closer look, or schedule a 1:1 discovery session to answer any questions you may have?

Meet Atlas Migration Helper

Whether you’re moving to Splunk Cloud or migrating your Splunk instance from one server, system, architecture, or filesystem to another, there are a lot of factors to consider before making the move. What apps need to move to the new location (and are they even compatible)? Do I have all of my datasets and forwarders? As if that weren’t enough, what about local knowledge objects?

Migrating Splunk isn’t for the faint of heart. But you’re in luck! The Atlas Platform for Splunk takes the pain and guesswork out of migration, and provides you with a step-by-step plan for moving your instance.

When Should I Migrate?

The reasons for migration are as varied as the organizations that use Splunk. But some reasons for moving from Point A to Point B might include:

  • You want to move Splunk to a new or different file system
  • You’re upgrading from a 32-bit to 64-bit architecture for performance gains
  • You need to switch operating systems (from Windows to Unix, for example)
  • You’re upgrading infrastructure components or retiring hardware
  • You’ve decided to move to Splunk Cloud (or AWS or Azure)

Considerations When Migrating Splunk

Atlas’s Splunk Migration Helper is a powerful element geared toward helping Splunk owners move their Splunk environment with precision, reporting, and speed. Migration Helper contains everything you need to identify what is useful and necessary to move, empowering you to enter your new Splunk environment worry-free.

Some of the ways Atlas Migration Helper guides and supports the migration process:

  • Easily identify Splunk Applications that should be moved and select them for migration
  • Identify what data ingests are being utilized by users and apps and select them for migration
  • Identify Forwarders required to support selected data ingests and applications and select them for migration
  • Analyze your Splunk Environment for issues to resolve before migration to ensure stability
  • Track the Migration using automated dashboarding that reports current status

All of these features and more are summarized with top level KPIs for simplified tracking.

Now that you know how the Atlas Migration Helper application can take the guesswork out of your Splunk migration, why not check out our Atlas documentation for a closer look, or schedule a 1:1 discovery session to answer any questions you may have?

Meet Atlas Monitor

Creating incredible results and outcomes in Splunk requires what we call “data certainty.” Meaning, you know you have all the data you need, and you have a way to be alerted when you don’t (due to a data source going “offline,” for example).

Atlas Monitor provides unparalleled visibility into your Splunk data, along with powerful alerting capabilities. Dashboards, alerts, visualizations, and Enterprise Security all rely on a constant, reliable feed of data flowing into Splunk. Without proactive measures to monitor and alert, these data streams can fail, causing inaccurate reporting and cascading failures.

And bad data is worse than having no data at all.

Track and alert on data failures

Splunk admins can utilize Atlas’s simple interface to create “Monitors,” which track and alert on data ingest failures, preventing errors and increasing reliability. Monitors efficiently utilize Splunk resources to do more with less, while providing highly-detailed reporting (without adding complexity).

Further, admins are able to create Monitor Groups within the application to consolidate reporting, and can leverage lookup tables and custom searches to make effective use of Change Management Knowledge Objects. With Atlas Monitor, admins have a powerful tool that will increase data flow stability and awareness.

Monitor Capabilities

  • At-a-glance summaries to empower creators to quickly assess data flow health with custom thresholding
  • Group related data flows together for visualizations and simplified reporting
  • Leverage metric indexes and enhanced searching to reduce resource utilization
  • Report on outages to enable historical tracking of downtime
  • Automatically send alerts by email when Monitors breach thresholds
  • Integrate lookup tables to incorporate CMDB and assets & identities files
  • Create custom searches to monitor unique data sets utilizing advanced base searches
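The core thresholding idea in the list above, alerting when a data stream has been silent longer than allowed, can be sketched in a few lines. This is a hypothetical illustration, not Atlas Monitor's implementation; the stream names and thresholds below are invented for the example:

```python
import time

def breached_monitors(last_seen: dict, allowed_silence: dict, now: float) -> list:
    """Return data streams whose time since last event exceeds their threshold."""
    return sorted(stream for stream, ts in last_seen.items()
                  if now - ts > allowed_silence[stream])

now = time.time()
last_seen = {"firewall": now - 30, "web_access": now - 900}  # seconds since last event
allowed_silence = {"firewall": 300, "web_access": 600}       # per-stream threshold (s)

print(breached_monitors(last_seen, allowed_silence, now))  # ['web_access']
```

A real Monitor adds the reporting, grouping, and email alerting described above on top of this basic check.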

Now that you know how the Monitor application can take the guesswork out of your Splunk data, why not check out our Atlas documentation for a closer look, or schedule a 1:1 discovery session to answer any questions you may have?

Defining Data Sprawl in Splunk: Why it Matters, and What it’s Costing You

“Data Sprawl” isn’t really a technical term you’ll find in the Splexicon (Splunk’s glossary). Here at Kinney Group, however, we’ve been around Splunk long enough to identify and define this concept as a real problem in many Splunk environments.

What exactly is Data Sprawl? It’s not one single thing you can point to; rather, it’s a combination of symptoms that generally contribute to poorly performing and difficult-to-manage Splunk implementations. Let’s take a look at each of the three symptoms we use to define Data Sprawl, and break down the impacts to your organization:

  1. Ingesting unused or unneeded data in Splunk
  2. No understanding of why certain data is being collected by Splunk
  3. No visibility into how data is being utilized by Splunk

Ingesting unused or unneeded data in Splunk

When you ingest data you don’t need into Splunk, the obvious impact is on your license usage (if your Splunk license is ingest-based). This may not be terribly concerning if you aren’t pushing your ingest limits, but there are other impacts lurking behind the scenes.

For starters, your Splunk admins could be wasting time managing this data. They may or may not know why the data is being brought into Splunk, but it’s their responsibility to ensure this happens reliably. This is valuable time your Splunk admins could be using to achieve high-value outcomes for your organization rather than fighting fires with data you may not be using.

Additionally, you may be paying for data ingest you don’t need. If you’re still on Splunk’s ingest-based pricing model, and you’re ingesting data you don’t use, there’s a good chance you could lower Splunk license costs by reducing your ingest cap. In many cases, we find that customers carry license sizes higher than they need in order to plan for future growth.

We commonly run into scenarios where data was being brought in for a specific purpose at one point in the past, but is no longer needed. The problem is that no one knows why it’s there, and they’re unsure of the consequences of not bringing this data into Splunk. Having knowledge and understanding of these facts provides control of the Splunk environment, and empowers educated decisions.


No understanding of why certain data is being collected by Splunk

Another common symptom of Data Sprawl is a lack of understanding around why certain data is being collected by Splunk in your environment. Having the ability to store and manage custom metadata about your index and sourcetype pairs — in a sane and logical way — is not a feature that Splunk gives you natively. Without this knowledge, your Splunk administrators may struggle to prioritize how they triage data issues when they arise. Additionally, they may not understand the impact to the organization if the data stops coming into Splunk.

The key is to empower your Splunk admins and users with the information they need to appropriately make decisions about their Splunk environment. This is much more difficult when we don’t understand why the data is there, who is using it, how frequently it is being used, and how it is being used. (We’ll cover that in more detail later.)

This becomes an even bigger issue with Splunk environments that have scaled fast. As time passes, it becomes easier to lose the context, purpose, and value the data is bringing to your Splunk mission.

Let’s consider a common example we encounter at Kinney Group.

Many organizations must adhere to compliance requirements related to data retention. These requirements may dictate collecting specific logs and retaining them for a period of time. This means that many organizations have audit data coming into Splunk regularly, but that data rarely gets used in searches or dashboards. It’s simply there to meet a compliance requirement.

Understanding the “why” is key for Splunk admins because that data is critical, but the importance of the data to end users is likely minimal.

(If this sounds like your situation, it might be time to consider putting that compliance data to work for you. See how we’re helping customers do this with their compliance data today with Atlas.)

The Atlas Data Management application allows you to add “Data Definitions,” providing clear understanding of what data is doing in your environment.

No visibility into how data is being utilized by Splunk

You’ve spent a lot of time and energy getting your data into Splunk, but now you don’t really know much about how it’s being used. This is another common symptom of Data Sprawl. Important decisions about how you spend your time managing Splunk are often based on who screams the loudest when a report doesn’t work. But do your Splunk admins really have the information they need to put their focus in the right place? When they know how often a sourcetype appears in a dashboard or a scheduled search, they have a much clearer picture of how data is being consumed.

Actively monitoring how data is utilized within Splunk is extremely important because it lets you effectively support your existing users and bring light to what Splunk calls “dark data” in your environment. Dark data is all of the unused, unknown, and untapped data generated by an organization: data that could be a tremendous asset if the organization knew it existed.

Kinney Group’s Atlas platform includes Data Utilization — an application designed to show you exactly what data you’re bringing in, how much of your license that data is using, and if it’s being utilized by your users and admins.


Most organizations may not realize that Data Sprawl is impacting their Splunk environment because it doesn’t usually appear until something bad has happened. While not all symptoms of Data Sprawl are necessarily urgent, they can be indicators that a Splunk environment is growing out of control. If these symptoms go unchecked over a period of time they could lead to bigger, more costly problems down the line.

Knowledge is power when it comes to managing your Splunk environment effectively. Kinney Group has years of experience helping customers keep Data Sprawl in check. In fact, we developed the Atlas platform for just this purpose. Atlas applications are purpose-built to keep Data Sprawl at bay (and a host of other admin headaches) by empowering Splunk admins with the tools they need.

Click here to learn more about the Atlas platform, to get a video preview, schedule a demo, or for a free 30-day trial of the platform.


Bridging the Splunk Usability Gap to Achieve Greater Adoption and Expansion

Splunk, the amazing “Data to everything” platform, provides some of the best tools and abilities available to really control, analyze, and take advantage of big data. But you don’t build such a powerful and expansive platform over a decade without it being a bit technical, and even difficult, to fully utilize.

This technical hurdle — that we lovingly call the “Usability Gap” — can stop Splunk adoption in its tracks or stall an existing deployment to its ruin. By clearing the Usability Gap, however, a Splunk environment can prosper and deliver a fantastic return on your investment.

So it raises the question — “What is the Usability Gap, and how do I get across?”

How to Recognize the Gap

What exactly makes up the steep cliff sides of the “Usability Gap”? These symptoms can manifest in any Splunk deployment or client ecosystem, and they’re caused just as much by human elements as by technical blockers.

The key to any good Splunk deployment is a properly focused admin. Many admins or admin teams were handed Splunk as an additional responsibility instead of a planned and scoped aspect of their job. This disconnect can lead to under-certified admins who lack the time and experience needed to quickly solve issues and incoming requests from Splunk users.

Splunk users can also be underequipped and undertrained. While formal training is available through Splunk Fundamentals certification and other online courses, those options may not meet users where they are, and they lack the benefits of in-person training with real, actionable data. These issues can be big blockers to learning Splunk and increase the time it takes for users to become confident with the system.

If you’re still not sure if you have a Usability Gap issue, check the activity found on the system itself. If your Splunk search heads are getting little action from users and admins, you know for a fact that something is coming between your users and their Splunk goals.

What a Gap Means for You

What are the consequences of a Usability Gap? They are wide ranging and impactful.

With a lack of focus and experience, admins are going to be severely hampered in achieving outcomes with Splunk. When technical issues arise with the complex Splunk ecosystem, or a unique data set requires attention, admins will have to carve out time to not only work on the issue at hand but learn Splunk on-the-fly as well. Without the proper support, progress slows and a lack of Splunk best practices is to be expected in these deployments.

Users without a watchful or knowledgeable eye will be left to their own devices. This can lead to poorly created searches and dashboards, bad RBAC implementation (if implemented at all), or worse — no movement at all. Without a guiding hand and training, the technical nature of Splunk will eventually cause users to misconfigure or slow down the platform, or simply never adopt such an imposing tool. Together, these issues can lead to a peculiar outcome where Splunk is labeled an “IT tool for IT people.” This is far from the truth, but if users are not properly trained and admins don’t have time to be proactive, only the technically savvy or previously experienced will be able to utilize the investment. While some outcomes will be achieved, many organizations will realize their significant investment isn’t aligned with their outcomes and will drop Splunk altogether, putting all the effort and time invested to waste.


Mind the (Usability) Gap

Fortunately, there’s an easy answer for solving these problems and bridging the Usability Gap in your environment — the Atlas™ Platform for Splunk. Atlas is geared towards increasing and speeding up Splunk adoption and enabling Splunk admins to do more with their investment. Let’s look at the elements of Atlas that help bridge the Usability Gap!

The Atlas Application Suite, which is a collection of applications and elements that reside on the search head, helps admins improve their deployment, and zero in on giving users a head start with achieving outcomes in Splunk. One such application is the Atlas Search Library.

Search Library gives users an expandable list of Splunk searches that are properly described and tagged for discoverability and learning. Using the Search Library, a Splunk User can create a library of knowledge and outcomes when it comes to the complex nature of Splunk’s Search Processing Language. This greatly accelerates skill sharing and education around SPL — one of Splunk’s biggest roadblocks.

Another element is the Atlas Request Manager. This application greatly increases the usability of Splunk by quickly linking admins and users with a request system built into the fabric of Splunk itself. Admins no longer need to spend time integrating other solutions, and users receive a robust system for requesting help with creating dashboards, writing Splunk searches, onboarding data, and more — all within Splunk!

Adding a data request is quick and painless thanks to Atlas Request Manager

Last, but certainly not least in bridging the Usability Gap, is Atlas Expertise on Demand. Expertise on Demand (EOD) is a lifeline to Kinney Group’s bench of trusted, Splunk-certified professionals when you need them most. EOD provides help and guidance for achieving outcomes in Splunk, and can lead the charge in educating your admins and users about all things Splunk. With EOD, your admins and users have all the help they need to maximize their Splunk investment.

Wrapping up

The Usability Gap is too big a problem to ignore. Frustrated users, overtaxed Splunk admins, and a clear lack of outcomes await any Splunk team that ignores the clear symptoms and issues presented by the Usability Gap. Hope is not lost, however! The Atlas platform is purpose-built to help you get over the hurdles of adopting and expanding Splunk. With incredible tooling to simplify searches, close SPL knowledge gaps, and manage requests, not to mention Expertise on Demand, Atlas provides the support admins need and gives Splunk users the attention they deserve for education and meeting their Splunk goals!

This just scratches the surface of what Atlas can do for your Splunk journey, so read more about our incredible platform and discover what you are missing!


Meet Atlas ES Helper



ES Helper is the purpose-built tool for getting Splunk Enterprise Security over the hump and actionable for Security Teams. Enterprise Security is a complex tool that takes in all kinds of data to create its visuals and track its notable events, but due to its complexity it can be off-putting to new or inexperienced Splunk Admins. ES Helper is here to bridge the gap and help them get a head start on utilizing an amazing security tool.

Where are we at?

Every action plan needs a starting place, and ES Helper figures that out for you with automation. Using a set of complex searches, this Atlas Element analyzes your ES deployment and produces the ES Utilization Score.


This score immediately tells Splunk and ES Owners where they ‘are’ with their deployment and how much room they have to grow. It’s supremely beneficial for tracking the platform’s growth, analyzing how well you’re utilizing your security investment, and distilling the complex workflows of ES into a digestible format.


What’s next?

After gaining perspective on the status of your Enterprise Security deployment with the ES Utilization Score, the next logical question is how to improve it. ES Helper is right there with you with the ES Datamodel Report. This report shows how much data is being ingested into Enterprise Security and, furthermore, layers a priority lens over it for context.

The Priority labels are derived from Team Atlas’s investigation into how Splunk ES utilizes the data, the importance of the outcomes tied to that data, and how much bang for the data buck each data point gives Security Analysts. Using this Priority, along with the analysis of how full the datamodels are, Splunk Admins can quickly identify which datamodel should be buffed up with more data to improve data coverage in Enterprise Security.

Lucky for the Admins, selecting the datamodel in Atlas ES Helper quickly identifies any recommended sourcetypes to fill out the datamodel with actionable data.

This workflow enables Admins to go from zero to hero with ES with a clear line of sight on next steps for improving their security monitoring and posture!

What’s Changed?

After updating a datamodel with a whole slew of additional data sources, an Admin may ask what impact they actually had. With ES Helper, Admins can utilize our analysis to get quick results on which dashboards and searches changed, enabling a quick validation check and a reward for hard work!


ES Helper speeds up the slow, technical process of improving an Enterprise Security deployment. By fast-tracking a Splunk Admin’s ability to analyze their environment, identify new data sources, and track changes, ES Helper lets Admins quickly improve their Security CIM coverage and track that improvement. This effort is amplified further by bringing Expertise on Demand into the mix, enabling Admins to meet their security needs ahead of schedule!


Meet Atlas Data Utilization


Data Utilization is an excellent companion Element to Data Management. While Data Management is focused on tracking ingests with metadata and awareness alerts, Data Utilization is centered on using automation to help Admins and Users track how Users, Scheduled Searches, and Dashboards are utilizing the data being ingested into Splunk.

How is this being used again?

Data Utilization helps Admins quickly identify how data is being used by users across their environment. By tracking how ad-hoc and scheduled searches query all your data, Data Utilization can highlight active data streams. Furthermore, it examines recently used dashboards and what data each dashboard load utilizes. All of this comes together into an easy-to-understand report.

Admins can change the filter for the search, splitting the data by index for high-level investigations, by index-sourcetype for normal baselines, or by index-sourcetype-source to identify individual data points that slipped through the cracks. Admins can select any one of these findings to learn more about its utilization.
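The three filter levels amount to grouping usage records by progressively more fields. Here is a hypothetical Python sketch of that idea; the sample records and field names are illustrative, not Data Utilization's actual data model:

```python
from collections import Counter

# Invented sample usage records for illustration.
events = [
    {"index": "web", "sourcetype": "access", "source": "/var/log/a.log"},
    {"index": "web", "sourcetype": "access", "source": "/var/log/b.log"},
    {"index": "web", "sourcetype": "error",  "source": "/var/log/err.log"},
]

GRANULARITIES = {
    "index": ("index",),
    "index-sourcetype": ("index", "sourcetype"),
    "index-sourcetype-source": ("index", "sourcetype", "source"),
}

def usage_by(granularity: str) -> Counter:
    """Count events grouped at the chosen filter level."""
    fields = GRANULARITIES[granularity]
    return Counter(tuple(e[f] for f in fields) for e in events)

print(usage_by("index"))             # one high-level bucket covering all 3 events
print(usage_by("index-sourcetype"))  # separate buckets for access vs. error
```

The coarsest grouping answers "which indexes are used at all," while the finest surfaces individual sources that nothing is actually searching.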



Using Data Utilization, Admins can quickly identify who is searching a sourcetype, using what scheduled searches, and on what dashboards, and when! Admins can also inspect the SPL associated with each of these three options!

Make way for the new!

Data Utilization also offers a powerful perspective for Splunk Owners. By analyzing how data is being utilized, Admins can quickly identify deprecated data streams that could be removed from Splunk. The benefits are evident: removal can make room for ingests serving more important use cases, or bring a deployment back below its license level, reducing Splunk operating costs. Another benefit is the reduction in technical debt, as Splunk Admins can now focus on the data streams that matter to their users!


Data Utilization is a powerful tool, enabling Splunk Admins to quickly come to terms with how their environment is being used by both Users and Scheduled Searches, while empowering Admins to jumpstart discussions about prioritizing data streams. With Data Utilization, Admins can more easily reduce license utilization while increasing visibility.
