Installing Splunk

Getting started with Splunk is easy and straightforward (mostly) — especially if you’ve already made your architecture decisions. For the purpose of this tutorial, we’ll assume you’ve already checked Splunk’s documentation on system requirements. It’ll also be helpful to keep the full Splunk installation manual handy.

Note: If you’re using AWS for your Splunk deployment, Splunk offers a Splunk Enterprise Amazon Machine Image (AMI) that installs to AWS with one click. There are also containerized options for running Splunk on Docker and Kubernetes.

Let’s dive into installing Splunk Enterprise.

Installing Splunk Enterprise on Linux

You can download Splunk Enterprise for Linux from the Splunk website (you’ll need a free account).

Once you select your operating system from the tabs and choose your preferred package format (.deb, .tgz, or .rpm), you can simply click to download the file. After you click, you’ll also be directed to a page with instructions for downloading directly from the command line using wget (the filename below will differ depending on the version available when you download):

wget -O splunk-9.0.0-6818ac46f2ec-linux-2.6-x86_64.rpm https://download.splunk.com/products/splunk/releases/9.0.0/linux/splunk-9.0.0-6818ac46f2ec-linux-2.6-x86_64.rpm

Why doesn’t Splunk put this on the page where you choose your download? A great question. Nobody knows. Maybe Buttercup? We’ll have to ask them next year at .conf23.

Once the .rpm has downloaded successfully, you can install it with this command:

rpm -i splunk-9.0.0-6818ac46f2ec-linux-2.6-x86_64.rpm

(Again, your file name may be different depending on the available version at the time of download.)

User Settings

First, we’ll want to make sure we can run Splunk as the splunk user — the install should have created that user and group, but you can verify with this command:

cut -d: -f1 /etc/passwd

This will display a list of local users. If you don’t see splunk in the list, create the group and user (as root) with the following:

groupadd splunk
useradd -g splunk splunk
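
While you’re at it, it’s worth making sure the Splunk installation directory is owned by that user, since running splunkd as root and switching to the splunk user later is a common source of the file ownership headaches mentioned below. A quick example, assuming the default /opt/splunk install path:

# Give the splunk user ownership of the installation directory
# (adjust the path if you installed somewhere other than /opt/splunk)
chown -R splunk:splunk /opt/splunk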

ulimits Settings

There are limits on the Linux platform known as ulimits that impact maximum file size, number of open files, user processes, and data segment sizes. On the command line, type:

ulimit -a

This will present a list of limits that you can verify against your settings. Need to raise your limits to meet or exceed Splunk’s recommendations? On systemd-based distributions, edit the /etc/systemd/system.conf file and adjust the following settings:

[Manager]
DefaultLimitFSIZE=-1
DefaultLimitNOFILE=64000
DefaultLimitNPROC=16000
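
One thing to keep in mind (a minimal sketch, assuming a systemd-based distro and commands run as root): the new defaults only take effect once the service manager re-reads its configuration and Splunk is restarted; a full reboot also works.

# Re-execute systemd so it picks up the new DefaultLimit* values (or reboot),
# then restart Splunk and confirm the limits the running splunkd process received
systemctl daemon-reexec
su - splunk -c "/opt/splunk/bin/splunk restart"
cat /proc/$(pgrep -o splunkd)/limits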

I like big pages and I cannot lie…

Some Linux distros enable the transparent huge pages (THP) feature by default. Splunk recommends disabling it because of the performance hit it causes in Splunk Enterprise (30% or more). The exact procedure varies by Linux distribution and version.
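
As a reference, here’s a minimal sketch of what that typically looks like on RHEL/CentOS-style systems. The sysfs paths below are the common ones but can vary by distro and kernel, and the echo commands do not persist across reboots; persist the change with a boot-time mechanism such as a systemd unit, a tuned profile, or the transparent_hugepage=never kernel parameter.

# Check the current setting; the active value is shown in brackets, e.g. [always] madvise never
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

# Disable THP for the running kernel (run as root)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag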

Starting Splunk on Linux

Once you’ve installed and tweaked your settings, you’re ready to fire Splunk up for the first time! First, make sure you’re operating as the Splunk user:

su - splunk

Then, from the /opt/splunk/bin directory, type the following:

./splunk start

Want to skip the license agreement prompt? You can also start Splunk by typing ./splunk start --accept-license to get to the good stuff without all the bothersome “reading” the kids are into these days.

Start on Reboot

Out of the box, Splunk doesn’t start when the server is rebooted. You can, however, have Splunk create a script that will enable this functionality by executing an “enable boot-start” command:

[root@ip-172-31-28-164 ~]# cd /opt/splunk/bin
[root@ip-172-31-28-164 bin]# ./splunk enable boot-start -user splunk
Init script installed at /etc/init.d/splunk.
Init script is configured to run at boot.

You’ll want to edit the /etc/init.d/splunk file and add USER=splunk after the RETVAL entry:

#!/bin/sh
#
# /etc/init.d/splunk
# init script for Splunk.
# generated by 'splunk enable boot-start'.
#
# chkconfig: 2345 90 60
# description: Splunk indexer service
#
RETVAL=0
USER=splunk

. /etc/init.d/functions
…

It’s important to specify -user splunk when you execute the enable boot-start command and to make this change to /etc/init.d/splunk, or you’ll end up with file ownership headaches.

Stopping Splunk on Linux

Best practices dictate that you should stop Splunk from the command line before rebooting the server:

/opt/splunk/bin/splunk stop

Ready to Learn More?

Installing Splunk, of course, is just the beginning! Ready to learn more about getting the most from Splunk? Check out other entries in our Splunk 101 content. Want to take Splunk to the next level in your organization but need some help? We’d love to chat!

Architecting Splunk Primer

If you’re just starting out with Splunk, you most likely won’t be expected to architect or implement your Splunk environment from scratch. (That type of project is usually — and highly recommended to be — led by or assisted by Splunk-certified professionals.) That said, maybe you’re trying to spin up a Splunk sandbox, joining an existing team and needing to come up the curve, or looking to improve your existing architecture.

Regardless of your situation, there are a few considerations when taking a look at your Splunk environment’s architecture:

Splunk On-prem vs. Splunk Cloud

While on-prem deployments of Splunk come with a variety of infrastructure considerations, Splunk Cloud presents some compelling benefits — simply forward your data to Splunk Cloud, and it will “automagically” ensure you have the resources needed to handle the data while managing it securely and efficiently.

Splunk Cloud also offers Workload Pricing as an alternative to ingest-based pricing, meaning you can ingest all the data you want and pay only for what you actively use (workload).


What’s the best choice for you?

Do you prefer to have Splunk running locally and have control over your hardware and infrastructure components? Or do you prefer to let a third-party manage the infrastructure and only concern yourself with the results you’re getting from the data? (That’s not a trick question, by the way — there are pros and cons with each approach that are entirely dependent on your organization’s unique needs and requirements.)

Splunk Validated Architectures

If you choose an on-prem approach for your Splunk deployment, there are a variety of solutions that can help you get started. One such solution is leaning on Splunk’s catalog of Splunk Validated Architectures (SVAs).

Splunk’s product documentation is excellent, but there are gaps relative to architecture, best practices, and — frankly — what works. And that makes sense: everyone has different needs, so documentation couldn’t realistically cover every possible scenario. SVAs provide standardized, Splunk-vetted “blueprints” for deployment that you can leverage.
Check out the “Splunk Validated Architecture” white paper from Splunk for more information.

Of course, SVAs are just a starting point. Kinney Group’s team of Splunk-certified experts would love to help you figure out what would work best for your specific needs.

Use Cases

Determining your approach to architecture has a lot to do with the data you need to bring in. If you find yourself stuck on architecture design, it may be helpful to start with your use case and work out from there.

If your primary use case is compliance, for example, you’ll need an architecture and environment that retains ingested data for a required period, keeps it accessible for another, and makes audits as easy and pain-free as possible. If you’re a system administrator, you’d be bringing in different data sets and have different expectations of how to work with that data. Security your main focus? Insider threats? Application management? You’d have an entirely different set of expectations and needs.

We recommend taking a look at Splunk’s Use Case Definitions and Use Case video library for more details (particularly helpful for beginner and intermediate Splunk users).

Kinney Group Reference Designs

Whatever your use case and needs, the bottom line is that there’s not a “push-button” type solution for Splunk architecture available from Splunk directly. And Splunk Validated Architectures, while a great starting point, don’t always utilize the most modern techniques and available infrastructure.

Kinney Group is leading the way with Reference Designs for Splunk that take the fundamentals and best practices of Splunk’s Validated Architectures and modernize them for incredible performance gains. Our FlashStack and MSP Reference Designs, for example, provide a 10x boost in performance while utilizing 75% fewer physical indexers.

We’ve published four white papers to date that provide an understanding of our approach and associated benefits — all of which can be downloaded from our website — that are worth a look as you consider your next steps for planning your environment.

DIY vs Professional Services vs MSP…

While it’s possible to architect a ground-up solution yourself (if you have the right team in place), you may be better served to engage with Splunk architecture experts that know the right questions to ask, the best way to meet your unique needs, and have the expertise to mitigate risk and create opportunities for success with the platform.

One word of caution, however — traditional professional service providers tend to “blow in and blow out.” They may answer the mail for the immediate need, but often leave the internal team without the tools and knowledge they need to be successful and enjoy continued success.

With nearly 700 Splunk engagements under our belt, we’ve learned a lot about providing incredible solutions that are sustainable. Our approach is to empower the Splunk Creators who will be tasked with making the environment produce results by bringing them alongside each step of the journey, providing knowledge transfer throughout the process, and leaving them with what they’ll need to be successful long after our engagement has ended.

We’d love the opportunity to talk to you about your Splunk environment and architecture needs. Click here to schedule a quick meeting with a member of our team.

Three Fundamental, Actionable Steps to Improve Cybersecurity

The tragic events unfolding in Ukraine are a stark reminder there are entities in the world that wish to do harm to our country’s business and public institutions.

Cyber warfare has been a fact of life for some time now, and the war in Ukraine has returned this fact of life to the headlines. Combatting cyber warfare with cybersecurity best practices is, perhaps, now back to being top-of-mind for leaders of all organizations.

Addressing cybersecurity issues in a manner which measurably enables protection can be a daunting task. As the ancient proverb says, “the journey of a thousand miles begins with a single step.”

Organizations can immediately (and dramatically) improve their overall cybersecurity posture by pursuing these three fundamentals:

Prioritize Protecting Your Most Important Assets

Cyber attackers today are using sophisticated strategies and tactics that employ artificial intelligence, optimized attack algorithms, and automation techniques that enable attacks at scale. Given this reality, it is mathematically impossible to effectively defend all points of entry vulnerable to cyber attacks.

A simple step organizations can take immediately is to identify critical applications and data stores to quickly get an understanding of the adjacent points of entry an attacker can exploit. The majority of all security-related activities should be targeted at protecting the most valuable assets. Simply put, organizations should prioritize vigilant protection of their “crown jewels.”

While this sounds obvious, most organizations we work with cannot quickly identify those digital assets that should be defended as a priority. Leaders that pursue this simple step will quickly improve their overall security posture.

Security Harden Your Critical Applications and Systems

Hardening critical software and systems is a fundamental the U.S. defense and intelligence ecosystem has practiced for years. Security hardening software application stacks and the associated systems and infrastructure provides basic hygiene for effective cyber defense.

At first blush, this might seem daunting for organizations that are not familiar with security hardening practices. This is a reasonable concern given that most organizations have no visibility into the steps that U.S. security, defense and intelligence agencies take to secure their most prized digital assets.

The Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) are a great place to start. DISA STIGs provide a fundamentally sound framework for executing system security hardening immediately. They are the foundational guidelines that U.S. defense agencies use today, and the current STIG guidelines are publicly available online.
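
If you want a concrete starting point, the freely available SCAP content that accompanies the STIGs lets you scan a system against the applicable profile. The sketch below assumes a RHEL 8 host using the OpenSCAP scanner and SCAP Security Guide packages, run as root; the datastream path and profile ID will vary by distribution and content version:

# Install the scanner and content, then evaluate the host against the
# DISA STIG profile and write an HTML report you can review and track
dnf install -y openscap-scanner scap-security-guide
oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_stig \
  --report stig-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml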

Consistently executing basic security hygiene is something all organizations should pursue immediately. Just as we do when protecting our own personal health, it is a fundamental worth practicing every day.

Remove Human Error Risk Through Automation

Human error remains the #1 cause of security vulnerabilities. Today’s systems and application stacks are simply too complex to keep relying on manual processes for deployment, patching, and change management while still expecting to mitigate human error.

Organizations that identify their critical digital assets and systems, and then apply basic security hardening hygiene, must absolutely do so in an automated fashion. Automating the deployment of secured software dramatically reduces human error as a source of security vulnerabilities.

Software deployment automation should be a fundamental starting point for all organizations. Automation of change management, threat response, and vulnerability remediation should also be pursued. As with most things, the “first step” is always the best place to start, and automating software deployment is a fundamentally sound first step.

Takeaways

Pursuit of a comprehensive and contemporary cybersecurity strategy may incorporate many elements such as zero trust, secure access service edge (SASE), frameworks such as MITRE ATT&CK, security orchestration, automation, and response (SOAR), encryption, and network microsegmentation, among numerous other technologies and techniques. While building a modern cybersecurity capability may appear daunting for many organizations, a sound cybersecurity protection foundation can be quickly achieved by any organization pursuing the three fundamental strategies discussed above.

Don’t wait to start the cybersecurity journey — it begins with the first steps of prioritization, security hardening, and automation. We believe all organizations can and should begin their cybersecurity journey by addressing these fundamentals as a priority. From a risk mitigation perspective, pursuit of these three fundamental strategies will yield measurable positive impacts on risk reduction. With a foundation of fundamental protections in place, organizations can then continue their journey to weave more advanced technologies and techniques into their cybersecurity strategy.

The people that depend on your organization being secure are relying on leaders to act. Pursuit of security basic hygiene fundamentals is a great place to start.

Defining Data Sprawl in Splunk: Why it Matters, and What it’s Costing You

“Data Sprawl” isn’t really a technical term you’ll find in the Splexicon (Splunk’s glossary). Here at Kinney Group, however, we’ve been around Splunk long enough to identify and define this concept as a real problem in many Splunk environments.

What exactly is Data Sprawl? It’s not one single thing you can point to, but rather a combination of symptoms that generally contribute to poorly performing and difficult-to-manage Splunk implementations. Let’s take a look at each of the three symptoms we use to define Data Sprawl, and break down the impacts to your organization:

  1. Ingesting unused or unneeded data in Splunk
  2. No understanding of why certain data is being collected by Splunk
  3. No visibility into how data is being utilized by Splunk

Ingesting unused or unneeded data in Splunk

When you ingest data you don’t need into Splunk, the obvious impact is on your license usage (if your Splunk license is ingest-based). This may not be terribly concerning if you aren’t pushing your ingest limits, but there are other impacts lurking behind the scenes.

For starters, your Splunk admins could be wasting time managing this data. They may or may not know why the data is being brought into Splunk, but it’s their responsibility to ensure this happens reliably. This is valuable time your Splunk admins could be using to achieve high-value outcomes for your organization rather than fighting fires with data you may not be using.

Additionally, you may be paying for data ingest you don’t need. If you’re still on Splunk’s ingest-based pricing model and you’re ingesting data you don’t use, there’s a good chance you could lower your Splunk license costs by reducing your ingest cap. In many cases, we find that customers carry larger licenses than they currently need in order to plan for future growth.

We commonly run into scenarios where data was being brought in for a specific purpose at one point in the past, but is no longer needed. The problem is that no one knows why it’s there, and they’re unsure of the consequences of not bringing this data into Splunk. Having knowledge and understanding of these facts provides control of the Splunk environment, and empowers educated decisions.
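
One way to ground that conversation in numbers is Splunk’s own license usage log, which records how much each sourcetype contributes to daily ingest. A hedged example, run from $SPLUNK_HOME/bin on the license manager or a search head that can see index=_internal (the CLI will prompt for credentials); you can also paste the quoted SPL into the search bar:

# Top 20 sourcetypes by license usage over the last 30 days
./splunk search 'index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
  | stats sum(b) as bytes by st
  | eval GB=round(bytes/1024/1024/1024,2)
  | sort - GB
  | head 20'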

No understanding of why certain data is being collected by Splunk

Another common symptom of Data Sprawl is a lack of understanding around why certain data is being collected by Splunk in your environment. Having the ability to store and manage custom metadata about your index and sourcetype pairs — in a sane and logical way — is not a feature that Splunk gives you natively. Without this knowledge, your Splunk administrators may struggle to prioritize how they triage data issues when they arise. Additionally, they may not understand the impact to the organization if the data stops coming in to Splunk.

The key is to empower your Splunk admins and users with the information they need to appropriately make decisions about their Splunk environment. This is much more difficult when we don’t understand why the data is there, who is using it, how frequently it is being used, and how it is being used. (We’ll cover that in more detail later.)

This becomes an even bigger issue with Splunk environments that have scaled fast. As time passes, it becomes easier to lose the context, purpose, and value the data is bringing to your Splunk mission.

Let’s consider a common example we encounter at Kinney Group.

Many organizations must adhere to compliance requirements related to data retention. These requirements may dictate the collection of specific logs and retaining them for a period of time. This means that many organizations have audit data coming in to Splunk regularly, but that data rarely gets used in searches or dashboards. It’s simply there to meet a compliance requirement.

Understanding the “why” is key for Splunk admins because that data is critical, but the importance of the data to end users is likely minimal.

(If this sounds like your situation, it might be time to consider putting that compliance data to work for you. See how we’re helping customers do this with their compliance data today with Atlas.)

The Atlas Data Management application allows you to add “Data Definitions,” providing clear understanding of what data is doing in your environment.

No visibility into how data is being utilized by Splunk

You’ve spent a lot of time and energy getting your data into Splunk but now you don’t really know a lot about how it’s being used. This is another common symptom of Data Sprawl. Making important decisions about how you spend your time managing Splunk is often based on who screams the loudest when a report doesn’t work. But do your Splunk admins really have the information they need to put their focus in the right place? When they know how often a sourcetype appears in a dashboard or a scheduled search, they have a much clearer picture about how data is being consumed.
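
Even without purpose-built tooling, Splunk’s audit index gives you a baseline view of who is actually running what. A hedged example using the standard audittrail fields, counting completed runs per saved search over the last week:

# Which saved/scheduled searches ran in the last 7 days, and how many users ran them
# (ad hoc searches have an empty savedsearch_name and are filtered out here)
./splunk search 'index=_audit action=search info=completed savedsearch_name!="" earliest=-7d
  | stats count as runs dc(user) as distinct_users by savedsearch_name
  | sort - runs'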

Actively monitoring how data is utilized within Splunk is extremely important: it helps you understand how to effectively support your existing users and sheds light on what Splunk calls “dark data” in your environment. Dark data is all of the unused, unknown, and untapped data generated by an organization that could be a tremendous asset if the organization knew it existed.

Kinney Group’s Atlas platform includes Data Utilization — an application designed to show you exactly what data you’re bringing in, how much of your license that data is using, and if it’s being utilized by your users and admins.

Conclusion

Most organizations may not realize that Data Sprawl is impacting their Splunk environment because it doesn’t usually appear until something bad has happened. While not all symptoms of Data Sprawl are necessarily urgent, they can be indicators that a Splunk environment is growing out of control. If these symptoms go unchecked over a period of time they could lead to bigger, more costly problems down the line.

Knowledge is power when it comes to managing your Splunk environment effectively. Kinney Group has years of experience helping customers keep Data Sprawl in check. In fact, we developed the Atlas platform for just this purpose. Atlas applications are purpose-built to keep Data Sprawl at bay (and a host of other admin headaches) by empowering Splunk admins with the tools they need.

Click here to learn more about the Atlas platform, to get a video preview, schedule a demo, or for a free 30-day trial of the platform.

Bridging the Splunk Usability Gap to Achieve Greater Adoption and Expansion

Splunk, the amazing “Data-to-Everything” platform, provides some of the best tools and abilities available to really control, analyze, and take advantage of big data. But you don’t build such a powerful and expansive platform over a decade without it being a bit technical, and even difficult, to fully utilize.

This technical hurdle — that we lovingly call the “Usability Gap” — can stop Splunk adoption in its tracks or stall an existing deployment to its ruin. By clearing the Usability Gap, however, a Splunk environment can prosper and deliver a fantastic return on your investment.

So it raises the question: “What is the Usability Gap, and how do I get across it?”

How to Recognize the Gap

What exactly makes up the steep cliff sides of the “Usability Gap?” These symptoms can manifest in any Splunk deployment or client ecosystem, and they are caused just as much by human elements as by technical blockers.

The key to any good Splunk deployment is a properly focused admin. Many admins or admin teams were handed Splunk as an additional responsibility instead of a planned and scoped aspect of their job. This disconnect can lead to under-certified admins who lack the time and experience needed to quickly solve issues and incoming requests from Splunk users.

Splunk users can also be underequipped and undertrained. While formal training is available, such as the Splunk Fundamentals courses and other online options, it may not meet users where they are, and those solutions lack the benefits of in-person training with real, actionable data. These issues can be big blockers for learning Splunk and increase the time it takes for users to become confident with the system.

If you’re still not sure if you have a Usability Gap issue, check the activity found on the system itself. If your Splunk search heads are getting little action from users and admins, you know for a fact that something is coming between your users and their Splunk goals.

What a Gap Means for You

What are the consequences of a Usability Gap? They are wide ranging and impactful.

With a lack of focus and experience, admins are going to be severely hampered in achieving outcomes with Splunk. When technical issues arise with the complex Splunk ecosystem, or a unique data set requires attention, admins will have to carve out time to not only work on the issue at hand but learn Splunk on-the-fly as well. Without the proper support, progress slows and a lack of Splunk best practices is to be expected in these deployments.

Users without a watchful or knowledgeable eye will be left to their own devices. This can lead to poorly created searches and dashboards, bad RBAC implementation (if implemented at all), or worse — no movement at all. Without a guiding hand and training, the technical nature of Splunk will eventually cause users to misconfigure or slow down the platform, or simply not adopt such an imposing tool. Together, these issues can lead to a peculiar outcome where Splunk is labeled an “IT tool for IT people.” That is far from the truth, but if users are not properly trained and admins don’t have time to be proactive, only the technically savvy or previously experienced will be able to utilize the investment. While some outcomes will be achieved, many organizations will realize their significant investment isn’t aligned with their outcomes and will drop Splunk altogether, putting all the effort and time invested to waste.

Mind the (Usability) Gap

Fortunately, there’s an easy answer for solving these problems and bridging the Usability Gap in your environment — the Atlas™ Platform for Splunk. Atlas is geared towards increasing and speeding up Splunk adoption and enabling Splunk admins to do more with their investment. Let’s look at the elements of Atlas that help bridge the Usability Gap!

The Atlas Application Suite, which is a collection of applications and elements that reside on the search head, helps admins improve their deployment, and zero in on giving users a head start with achieving outcomes in Splunk. One such application is the Atlas Search Library.

Search Library gives users an expandable list of Splunk searches that are properly described and tagged for discoverability and learning. Using the Search Library, a Splunk User can create a library of knowledge and outcomes when it comes to the complex nature of Splunk’s Search Processing Language. This greatly accelerates skill sharing and education around SPL — one of Splunk’s biggest roadblocks.

Another element is the Atlas Request Manager. This application greatly increases the usability of Splunk by quickly linking admins and users with a request system built into the fabric of Splunk itself. Admins no longer need to spend time integrating other solutions, and users receive a robust system for requesting help with creating dashboards, building Splunk searches, onboarding data, and more — all within Splunk!

Adding a data request is quick and painless thanks to Atlas Request Manager

Last, but certainly not least in bridging the Usability Gap, is Atlas Expertise on Demand. Expertise on Demand (EOD) is a lifeline to Kinney Group’s bench of trusted, Splunk-certified professionals when you need them most. EOD provides help and guidance for achieving outcomes in Splunk, and can lead the charge in educating your admins and users about all things Splunk. With EOD, your admins and users have all the help they need to maximize their Splunk investment.

Wrapping up

The Usability Gap is too big a problem to ignore. Frustrated users, overtaxed Splunk admins, and a clear lack of outcomes await any Splunk team that ignores the clear symptoms and issues presented by the Usability Gap. Hope is not lost, however! The Atlas platform is purpose-built to help you get over the hurdles of adopting and expanding Splunk. With incredible tooling to simplify searches, close SPL gaps, and manage requests, not to mention Expertise on Demand, Atlas gives admins the support they need and Splunk users the attention they deserve for education and meeting their Splunk goals!

This just scratches the surface of what Atlas can do for your Splunk journey, so read more about our incredible platform and discover what you are missing!

Preparing for Splunk Certifications

When it comes to preparing for Splunk Certification exams, there are two questions I see in the Splunk community that this post will address:

  1. “I’m going to take the ____ certification test. How should I study?”
  2. “What is the ‘secret’ to passing the cert exams?”

In this post, we’ll recommend study techniques and share the “secret” to passing Splunk certifications… and, along the way, you’ll get better at using Splunk.

Note: This information is current as of March 2021. Please check the Splunk Training website for potential changes.

Step 1: Determine Splunk Certification Course Prerequisites

First, review the requirements for the certification. Namely, do you have to take any Splunk Education courses? I recommend the education courses for all certifications, but I understand if experienced Splunkers want to focus their education budgets on new topics or advanced classes.

Head to Splunk’s Training and Certification Page and select Certification Tracks on the left menu. The details for each certification list whether the classes are required or strongly recommended (coursework will increase your understanding of the concepts and make a pass more likely).

For example, select Splunk Enterprise Certified Admin to open the details and then select the top link. In the description, it states: “The prerequisite courses listed below are highly recommended, but not required for candidates to register for the certification exam.” Ergo, you do not have to take the classes (though you probably should).  

The Splunk Enterprise Certified Architect track lists the prerequisite courses up through the Data and System Admin courses as not required. This means the only courses required for Certified Architect are Troubleshooting Splunk Enterprise, Splunk Enterprise Cluster Administration, Architecting Splunk Enterprise Deployments, and the Splunk Enterprise Practical Lab.

Step 2: Determine Required Splunk Certifications

The same website, Splunk’s Training and Certification Page, will also list any certification prerequisites for the certification you wish to take. For example, to obtain Splunk Enterprise Certified Architect, you must be a current Splunk Enterprise Certified Admin and a current Splunk Core Certified Power User.

To find which certifications are prerequisites for the cert you wish to take, on Splunk’s Training and Certification Page, click on Certification Track and then navigate to the particular certification you want to review.

Step 3: Review What Topics the Exams Cover

One of the most common questions I see and hear is, “What is on the Test?” Fortunately, Splunk publishes an exam blueprint for each of its certification tests. Splunk’s Training site lists these blueprints in the Splunk Certification Exams Study Guide, along with sample questions for most of the tests.

Let’s investigate the Splunk Core Certified Power User:

Splunk’s Test Blueprint states that this is a 57-minute, 65-question assessment evaluating field aliases, calculated fields, creating tags, event types, macros, creating workflow actions, data models, and CIM. Whew, so it spells out the main topics and explains them in more detail before giving out the critical information: exactly what topics are on the exam and the percentage of those topics on the typical exam.

We learn from the document that 5% of the exam deals with the topic “Using Transforming Commands for Visualizations,” and the blueprint further lists its two specific elements.

The topic “Filtering and Formatting Results” makes up 10% and has these elements (see the short practice example after this list):

  • Using the eval command.
  • Using search and where commands to filter results.
  • Using the fillnull command.
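
To make those concrete, here’s a hedged practice example that exercises all three commands at once against Splunk’s internal metrics data. Run it from $SPLUNK_HOME/bin on a local test instance; the field names come from the standard per_sourcetype_thruput metrics events:

# eval computes MB from KB, fillnull backfills events missing the field,
# and where filters the results -- three blueprint elements in one search
./splunk search 'index=_internal source=*metrics.log group=per_sourcetype_thruput
  | eval mb=round(kb/1024,2)
  | fillnull value=0 mb
  | where mb > 0
  | stats sum(mb) as total_mb by series
  | sort - total_mb'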

The exam continues by listing out the ten topics of the exam and their elements. If a candidate is going to pass this exam, they should be knowledgeable on the topics listed. Bonus: if the candidate is good with these topics, they likely can perform the job as a Splunk Power User/Knowledge Manager.

Step 4: Review Material, Focusing on Unfamiliar Topics

In Step 3, we found what topics are on the different exams. Now comes the big question: how do I prepare for the exams?

1. Gather your study material: 

If you took the Splunk Education classes, get the class docs. They’re great at taking cumbersome topics and presenting them in an accessible way.

Splunk Docs has exhaustive details on the variety of exam topics.

2. Practice on Splunk Instance(s):

We can read until we’re bleary-eyed, and that may be enough for you, but I find people learn better using a combination of reading and practice. If you have a laptop or desktop (Windows, Linux, or Mac), you can download Splunk—for free—install it on your system, and use that for practice. The free install works great for User, Power User, Admin, and Advanced Power User. For ITSI or ES, the best approach is to use a dev instance (if you are lucky enough to have access to one) or the free trials from Splunk Cloud. Other exams work best in a private cloud or container system (after all, it’s hard to learn how to use a cluster if you don’t have a cluster).

Back to our example for Splunk Core Power User: 

Grab the Fundamentals 1 and Fundamentals 2 course material, have a Splunk instance installed, and open a web browser. Then, go through the exam blueprint one topic at a time. In this example, we’ll look at “Describe, create, and use field aliases.” The Fundamentals 2 course material explains what a field alias is and provides examples of its use. You can also supplement that material with the Splunk Knowledge Manager Manual section on Field Aliases. Run through creating field aliases in your Splunk instance until you have the topic down.
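
For instance, a field alias is ultimately just a props.conf knowledge object, so you can create one from the command line and then confirm it in search. A hedged sketch follows; the sourcetype and field names are illustrative, so swap in whatever your test data uses, and note that you can accomplish the same thing through Settings in Splunk Web:

# Append a field alias to a local props.conf (here: alias clientip to src_ip
# for the access_combined sourcetype), then restart Splunk to load it
cat >> $SPLUNK_HOME/etc/apps/search/local/props.conf <<'EOF'
[access_combined]
FIELDALIAS-client_ip = clientip AS src_ip
EOF
$SPLUNK_HOME/bin/splunk restart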

Then you can move on to the next section, find the relevant course material/documentation, and practice.

The Non-Step: Or, The Elephant in the Phone Booth

I need to address a question that gets asked far too often…

Q: “Dumps. Where do we find them?”

A: “Don’t do that.” (though sometimes the language is much more colorful)

Q: “Why not?”

Answer 1: Splunk Certification strictly prohibits using dumps, and their use is grounds for being banned from taking Splunk certs. That’d be a bad trade for someone making Splunk their focus: limiting their career by never earning any certifications.

Answer 2: The goal of certification is to prove the ability to use the product, not the ability to memorize test questions. If you tell an employer that you have the Power User Cert, it comes with a promise that you have the skills. Don’t be the person faking it. 

The Cert Secret

Finally, the “secret” method for passing Splunk certs: find the topics and study those. Sometimes the best secrets are the obvious ones.

Best of luck in your testing!

VMware Orchestrator Tips: Stopping a Second Execution of a Workflow

Kinney Group’s own Troy Wiegand hosts this blog series, which will outline some simple tips for success with VMware vRealize Orchestrator. Join Troy as he applies his automation engineering expertise to shed some light on VMware! 

When you have a really important workflow to run in VMware vRealize Orchestrator, you don’t want another one running at the same time. As of right now, vRO doesn’t have an option for stopping that second workflow execution from happening simultaneously. Never fear, though—Kinney Group has you covered. 

The One and Only Workflow

Some workflows are just more impactful than others. They have the power to do serious damage to your environment—they could tear down your infrastructure, or they could build it up. Depending on what you’re orchestrating, running a second execution alongside this crucial workflow could be problematic. Luckily, we can use a workaround to prevent any potential damage. When implemented, the workaround will look like this:

Figure 1 – The workaround functions to stop a second execution while the first one is running

Configuring the Field

The ideal result of this workaround is that when you try to run a workflow for a second time while it’s already running, vRO will send an error message to prevent that redundancy. Let’s investigate how that alert happens by going to the Input Form. The text field that contains the error message is populated with some placeholder text to identify the error. Under “Constraints,” note that “This field cannot be empty!” is marked as “Yes” from the dropdown menu. This ensures that a user cannot submit the form unless the field has something in it. However, that’s only possible if the field itself is visible. Set the “Read-only” menu to “Yes” under the Appearance tab so it can’t be edited. 

Figure 2 – The configuration of the workaround, as seen through “Input Form”

The External source action is set to “/isWorkflowRunning,” and it’s configured from the variable “this_workflow.” You can add this as a new variable and assign the same value as the name of the workflow you wish to run without concurrencies.  

Figure 3 – The form to edit a new variable

By examining the script, we can see that we’ve input our workflow ID. We’ve also configured the server to get executions. (Alternatively, you could skip to tokens = executions for a simpler function.) The flag below that function should be set to “false.” We can then check the state of every token, and if it’s “running,” “waiting,” or “waiting-signal,” set the flag to “true.” Those are the three states that indicate a workflow is in motion or in progress, so it’s essential to identify them as triggers for our error message.

Figure 4 – The workaround script

The Result: Your Workflow Runs Alone

This combination of settings allows the field to appear when you try to run a workflow while another execution of it is currently running. This way, when you try to run an interrupting workflow, it’ll be stopped.  

We hope this tutorial has been helpful in streamlining your workflows and making the most of VMware Orchestrator! We’ve got more tips for vRO here. Fill out the form below to get in touch with automation experts like Troy:

2020: The Year in Kinney Group

Although 2020 was fraught with hardship, uncertainty, and unrest, we can all take pride in moments of success despite adversity. Like so many other businesses, Kinney Group transitioned to a remote workforce in mid-March. This change brought some challenges for sure, but it also revealed the resilience and persistence of our Choppers. Now that last year has come to a close, Kinney Group is busy with our 2021 Kickoff, starting the year off right with setting goals and celebrating our successes. We’re taking a moment to look back at some of the 2020 wins here at Kinney Group:

Company

In 2020, our content team published 90 blog posts.

Engineers undertook 120 new engagements/projects in 2020.

Over the course of the year, we held 15 webinars with 370 attendees.

Our audience engagement on LinkedIn grew by 563% (follow us!).

Our incredible team of engineers spent 51,728.75 hours on engagements, delivering exceptional solutions, services, and results for our customers.

We launched 1 incredible new platform for Splunk: Atlas.

Colleagues

28% of our colleagues joined after March 12, 2020, meaning that our Work From Anywhere (WFA) policy is the norm for almost one-third of the company.

Our internal IT department resolved 449 tickets for colleagues over the course of the year.

We welcomed 33 new colleagues and offered 32 promotions.

The average tenure for KGI colleagues is 2.75 years, which exceeds the average tenure of tech companies like Apple and Google by nearly a year.

Altogether, colleagues completed 385 assignments on Lessonly.

Over 200 devices were “delivered” to colleagues working from anywhere.

Culture

Colleagues recognized each other’s work with 377 culture coin nominations.

100+ songs were featured over the course of the year on our Kinney Tunes colleague playlist.

We ordered 150 #hoodies for our Atlas launch in November.

Our 2021 Kickoff boasted 10 sessions hosted by colleagues, for colleagues, including bread-baking, “For Bees’ Sake,” and a virtual fitness challenge.

But Most Importantly…

We are One Team, and we can’t wait to see what 2021 has in store!

As the year progresses, make sure to follow us on Facebook, LinkedIn, and Twitter to stay tuned. We’ll be updating the blog regularly with Splunk tips and tutorials, Atlas announcements, insights into Kinney Group culture, and more! Special thanks to Joi Redmon, John Stansell, Christina Watts, Cory Brown, Alex Pallotta, Wes Comer, Brock Trusty, and Zach Vasseur for their help in gathering data for this report.

Meet Atlas’s Scheduling Assistant

Searches are at the heart of Splunk. They power the insights that turn data into business value—and Atlas has plenty of them collected in the Search Library. Simple dashboards and ad-hoc searches, though, are only the first step: the real magic happens with the Splunk scheduler. However, as Splunkers will know, it’s all too easy to bog down an environment with poorly-planned search schedules, redundancies, and heavy jobs. Soon, this leads to skipped jobs, inaccurate results, and a slow and frustrating user experience.

Atlas has a first-of-its-kind solution.

The Scheduling Assistant application provides a real-time health check on the use of Splunk’s scheduler and scheduled searches. In addition, it includes a built-in mechanism to fix any issues it finds. Atlas’s powerful Scheduling Assistant ensures that your scheduled searches in Splunk are running efficiently by providing the visibility you need to make the most of your data resources.

Scheduler Activity

In Atlas’s Scheduling Assistant, you’ll find the Scheduler Activity resource. The Scheduler Activity tab is your starting point for assessing how efficiently your environment is currently executing scheduled Splunk searches. The Scheduler Health Snapshot section then offers a health score based largely on historical findings like skipped ratio and search latency, as well as a forward-looking glimpse at future schedule concurrency.

Figure 1 – Scheduled Activity tab in Splunk

Below the Health Snapshot, the Concurrency Investigation section lets users view and sort their scheduled searches with a helpful translation of the scheduled run times. These dashboards display Atlas’s computed concurrency limits for a Splunk environment, which dictate the maximum number of searches that can be run at any given time.

These real-time insights inform how users can schedule searches for the fastest, most efficient results.

Figure 2 – Concurrency Investigation tab in Scheduling Assistant

Figure 3 – Scheduling Assistant preview for Splunk

Next up is Historical Performance, which interprets how scheduled searches are running. This dashboard and graph display average CPU and physical memory used, along with search metrics such as run time and latency.

Figure 4 – Historical performance of scheduled searches in Splunk

After Historical Performance, the Scheduled Search Inventory section provides details on all manually scheduled searches. It also allows users to quickly drill down to the Scheduling Assistant tool for any given search.

Figure 5 – Search Inventory of all searches in Splunk

Scheduling Assistant

The Scheduling Assistant dashboard allows users to select a single scheduled search to investigate and modify.

Figure 6 – Snapshot of Scheduling Assistant dashboard

Figure 7 – Key metrics on search activity in Splunk

This section provides key metrics for the search’s activity to highlight any issues. Atlas users can experiment by changing the selected search’s scheduling setting. By editing the Cron setting and submitting a preview, users can compare the Concurrent Scheduling and Limit Breach Ratio to see if their tested Cron setting improves overall outcomes.

If the modified schedule is satisfactory, the user can then save changes and update the saved search—all within the Atlas platform.

Cron Helper

Splunk uses Cron expressions to define schedules, and Atlas’s Cron Helper tab provides a quick and easy way to test them. Not only does this tool enable fast, direct translations, it also acts as a learning tool for those new to Cron.

The syntax key below the Cron bar displays the definitions of each character, allowing users to try their hand at creating and interpreting their own Cron expressions.
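
For reference, Splunk’s scheduler uses standard five-field cron expressions (minute, hour, day of month, month, day of week). A few examples of the kind of strings you can paste into the Cron Helper to see how it translates them:

# minute  hour  day-of-month  month  day-of-week
*/15 * * * *     # every 15 minutes
0 6 * * 1-5      # 6:00 AM, Monday through Friday
30 2 1 * *       # 2:30 AM on the first day of every month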

Figure 8 – Preview of Atlas Cron Helper

Scheduler Information

The Scheduler Information dashboard is a knowledge base for the complex definitions and functions that power Splunk’s scheduled searches. The environment’s limits.conf is presented for review, and the current statistics on concurrency limits are provided for clarity.

These relatively static values are vital to understanding the scheduler and taking full advantage of its potential.

Figure 9 – Preview of Scheduler Information dashboard

In Conclusion

Powered by these four revolutionary features, Atlas’s Scheduling Assistant provides unprecedented insight into Splunk searches. The power to survey, schedule, and change searches is in the user’s hands, saving your team time and resources.

There’s more to come from Atlas! Stay informed by filling out the form below for more information from KGI.

Contact Us!

Meet Atlas’s Search Library

One key pain point for Splunk admins and users is the inability to track, store, and view searches in one place. On top of keeping tabs on a dizzying number of searches, users must write queries in Splunk Processing Language (SPL), which is complex and difficult to learn. Writing efficient searches in SPL takes abundant time and resources that many teams can’t afford to spare. Coordinating searches between users and admins eats up further time and can produce confusion for any team—and that’s not to mention the major obstacles that slow or failed searches can introduce.

Optimizing and keeping track of searches is just one of the issues facing IT teams today—thankfully, we’ve got a solution. Atlas, a platform developed by Kinney Group to help users navigate Splunk, includes a comprehensive and customizable Search Library to aid users in creating and using searches.  

Figure 1 – The Search Library icon from the Atlas Core homepage

The Atlas Search Library

Collected Searches

The Search Library contains a collection of helpful, accessible searches pre-built by KGI engineers. Users also have the ability to save their own custom searches, which can be edited or deleted at any time. These are listed by name and use case, making it easy to identify the purpose of each search. All searches in the library include expandable metadata so that users can see additional information, including the SPL query, within the table. This insight into the SPL enables faster, easier education for those looking to write their own queries. Users can also filter searches to quickly and easily find all applicable listings, giving users and admins an unprecedented degree of visibility.  

Figure 2 – Atlas’s Search Library tab 

Using the Searches

Performing one of these searches couldn’t be easier. Clicking “Launch Search” will open a separate tab where you can view details of the search’s results and tweak the SPL query—all without changing the originally saved search. This capability enables those without knowledge of SPL to learn and use powerful, intricate searches.

Figure 3 – The launched search, open in a separate tab

Search Activity

The Search Library component also includes a Search Activity tab, which can be used to monitor which searches are run when, how frequently, and by whom. Having this visibility on one page allows users to see redundancies and overall usage of a search. The Search Activity tab includes the same level of detail as the Search Library, meaning users can dive into the specifics of each search. The tab is also filterable so users can identify exactly which searches they’re shown. You can also add any search in the Search Activity tab to the Search Library, making it easier than ever to keep track of what you need in Splunk.  

Figure 4 – The Search Activity tab of the Search Library

Conclusion

Any user is liable to hit a few roadblocks on their Splunk journey. With Atlas’s Search Library application, your team can be sure that searches won’t be one of them.  

The Search Library is only one of Atlas’s innovative features, and we’re looking forward to sharing so much more from the platform with you. If you’re eager to learn more about Atlas in the meantime, fill out the form below.

Schedule a Meeting