Preparing for Splunk Certifications

When it comes to preparing for Splunk Certification exams, there are two questions I see again and again in the Splunk community, and this post will address both:

  1. “I’m going to take the ____ certification test. How should I study?”
  2. “What is the ‘secret’ to passing the cert exams?”

In this post, we’ll recommend study techniques and share the “secret” to passing Splunk certifications… and, along the way, you’ll get better at using Splunk.

Note: This information is current as of March 2021. Please check the Splunk Training website for potential changes.

Step 1: Determine Splunk Certification Course Prerequisites

First, review the requirements for the certification. Namely, do you have to take any Splunk Education courses? I recommend the education courses for all certifications, but I understand if experienced Splunkers want to focus their education budgets on new topics or advanced classes.

Head to Splunk’s Training and Certification Page and select Certification Tracks on the left menu. The details for each certification list whether the classes are required or strongly recommended (coursework will increase your understanding of the concepts and make a pass more likely).

For example, select Splunk Enterprise Certified Admin to open the details and then select the top link. In the description, it states: “The prerequisite courses listed below are highly recommended, but not required for candidates to register for the certification exam.” Ergo, you do not have to take the classes (though you probably should).  

The Splunk Enterprise Certified Architect lists that the prerequisite courses through the Data and System Admin courses are not required. This means the only courses required for Certified Architect are: Troubleshooting Splunk Enterprise, Splunk Enterprise Cluster Administration, Architecting Splunk Enterprise Deployments, and the Splunk Enterprise Practical Lab.

Step 2: Determine Required Splunk Certifications

The same website, Splunk’s Training and Certification Page, also lists any certification requirements for the certification you wish to take. For example, to obtain Splunk Enterprise Certified Architect, you must be a current Splunk Enterprise Certified Admin and a current Splunk Core Certified Power User.

To find which certifications are prerequisites for the cert you wish to take, go to Splunk’s Training and Certification Page, click Certification Tracks, and then navigate to the particular certification you want to review.

Step 3: Review What Topics the Exams Cover

One of the most common questions I see and hear is, “What is on the Test?” Fortunately, Splunk publishes an exam blueprint for each of its certification tests. Splunk’s Training site lists these blueprints in the Splunk Certification Exams Study Guide, along with sample questions for most of the tests.

Let’s investigate the Splunk Core Certified Power User:

Splunk’s test blueprint states that this is a 57-minute, 65-question assessment evaluating field aliases, calculated fields, creating tags, event types, macros, creating workflow actions, data models, and CIM. It spells out the main topics, explains them in more detail, and then gives the critical information: exactly which topics are on the exam and what percentage of the typical exam each topic makes up.

We learn from the document that 5% of the exam deals with the topic “Using Transforming Commands for Visualizations,” which the blueprint breaks down into two elements.

The topic “Filtering and Formatting Results” makes up 10% and has elements:

  • Using the eval command.
  • Using search and where commands to filter results.
  • Using the fillnull command.
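As a quick illustration of what those elements look like in practice, here is a minimal SPL sketch; the index, sourcetype, and field names are made up:

index=web sourcetype=access_combined
| eval response_time_sec = response_time_ms / 1000
| where status >= 500
| fillnull value=0 response_time_sec

Here, eval creates a calculated field, where filters the results, and fillnull replaces missing values: exactly the elements the blueprint calls out.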

The blueprint continues by listing all ten exam topics and their elements. If a candidate is going to pass this exam, they should be knowledgeable on the topics listed. Bonus: if the candidate is good with these topics, they can likely perform the job of a Splunk Power User/Knowledge Manager.

Step 4: Review Material, Focusing on Unfamiliar Topics

In Step 3, we found what topics are on the different exams. Now comes the big question: how do I prepare for the exams?

1. Gather your study material: 

If you took the Splunk Education classes, get the class docs. They are great at taking cumbersome topics and presenting them in an accessible way.

Splunk Docs has exhaustive details on the variety of exam topics.

2. Practice on Splunk Instance(s):

We can read until we’re bleary-eyed, and that may be enough for you, but I find people learn better using a combination of reading and practice. If you have a laptop/desktop (Windows, Linux, or Mac), you can download Splunk—for free—install it on your system, and use that for practice. The free install works great for User, Power User, Admin, and Advanced Power User. For ITSI or ES, the best approach is to use a dev instance (if you are lucky enough to have access to one) or the free trials from Splunk Cloud. Other exams work best in a private cloud or container system (after all, it’s hard to learn how to use a cluster if you don’t have a cluster).

Back to our example for Splunk Core Power User: 

Grab the Fundamentals 1 and Fundamentals 2 course material, have a Splunk instance installed, and open a web browser. Then, go through the exam blueprint one topic at a time. In this example, we’ll look at “Describe, create, and use field aliases.” The Fundamentals 2 course material explains what a field alias is and provides examples of its use. You can also supplement that material with the Splunk Knowledge Manager Manual section on Field Aliases. Run through creating field aliases in your Splunk instance until you have the topic down.
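If you want to see what that GUI work creates under the hood, a field alias ultimately lands in props.conf. Here is a minimal sketch, assuming a hypothetical sourcetype and field names:

# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf
[access_combined]
FIELDALIAS-client = clientip AS src_ip

Create the same alias in the UI (Settings > Fields > Field aliases), then compare it with the stanza Splunk writes so you understand how the knowledge object is stored.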

Then you can move on to the next section, find the relevant course material/documentation, and practice.

The Non-Step: Or, The Elephant in the Phone Booth

I need to address a question that gets asked far too often…

Q: “Dumps. Where do we find them?”

A: “Don’t do that.” (though sometimes the language is much more colorful)

Q: “Why not?”

Answer 1: Splunk Certification strictly prohibits the use of dumps, and using them is grounds for being banned from taking Splunk certs. For someone making Splunk their focus, that would mean limiting their career by never being able to earn any certifications.

Answer 2: The goal of certification is to prove the ability to use the product, not the ability to memorize test questions. If you tell an employer that you have the Power User Cert, it comes with a promise that you have the skills. Don’t be the person faking it. 

The Cert Secret

Finally, the “secret” method for passing Splunk certs: find the topics and study those. Sometimes the best secrets are the obvious ones.

Best of luck in your testing!

Data Model Mapping in Splunk for CIM Compliance

Join Kinney Group Splunk Enterprise Security Certified Admin Hailie Shaw as she walks through the process of data model mapping in Splunk for CIM compliance. Catch the video tutorial on our YouTube channel here.

Note: the data visible in this video and blog post are part of a test environment and represent no real insights. 

Starting Out with Data Model Mapping

As Splunk engineers, we constantly deal with the question: how do I make my data, events, and sourcetypes Common Information Model (CIM) compliant? This is especially crucial for Enterprise Security use cases, where data needs to map to a CIM-compliant data model. When we search the environment before achieving CIM compliance, our query returns no results. This is what we aim to change with data model mapping.

Figure 1 – Pre-data mapping, the search returns no results
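For reference, the kind of query that comes up empty before mapping is typically a tstats search against the target data model. A minimal sketch, assuming the Authentication data model (substitute whichever model you plan to map to):

| tstats count from datamodel=Authentication by sourcetype

Before the add-on is built, this search returns nothing for your sourcetype; once the mapping is in place, your events should show up in the counts.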

Begin by downloading the Splunk Add-on Builder app, which you’ll use later. Then transition to Splunk Docs: the left panel of the data model documentation lists all data models, and for each one, Splunk Docs lists all of the fields users can map to. Reviewing these fields should be your first step, so you can choose the data model that matches your environment’s existing fields as closely as possible.

Creating the Add-on

Turn to Splunk’s Search & Reporting app and navigate to the data that you’re looking to map; the fields will populate on the left side of the screen. Pay special attention to the sourcetype that you wish to match; you’ll need to supply this to the Splunk Add-on Builder app that will map the fields for CIM compliance. Within Splunk Add-on Builder, select “New Add-on.” The default settings will be sufficient, with the exception of the required add-on name. Click “Create.”

Figure 2 – The form in this window creates the Add-on

Click Manage Source Types on the top banner menu. Click “Add,” and then “Import from Splunk” from the dropdown menu. Select your sourcetype, which should populate within the menu after you import data from Splunk. Click Save, and the events will be uploaded. Next, click Map to Data Models on the top banner menu.  

Figure 3 – Import data by selecting the sourcetype

Mapping the Data Model

From here, select “New Data Model Mapping.” You’ll be prompted to enter a name for the event type and select the sourcetype you’re using. The search will populate below, automatically formatted with the correct name. Click “Save.” The resultant data model will include, on the left-hand side of the screen, a list of the event type fields. Hovering over each entry within this list will reveal a preview of the data that exists within the field. 

Essentially, the app takes the fields from your initial data and transfers them onto the data model. On the right of the screen is where you’ll select a data model for the fields to map to. Each data model is filtered through Splunk’s CIM, and you can select which is most appropriate based on the Splunk documentation with which we began.  

Figure 4 – The empty Add-on Builder field before data mapping

When you select a data model, the Add-on Builder will provide supported fields, which you can cross-reference with Splunk Docs; the app is a field-for-field match. This step will give you a list of CIM-approved fields on the right to complement the original fields on the left. To map them together, click “New Knowledge Object” and select “FIELDALIAS.” This is where you’ll need to select which data model field most closely matches the initial event type or expression.

Once you’ve made a match, select OK, and the app will provide a field alias. Repeat this process for each field you wish to include. Once you’re satisfied, click “Done.” As you can see, the data has now populated with the sourcetype listed. 

Figure 5 – Match the original entry field to its CIM compliant counterpart using “FIELDALIAS”

Validate and Package 

It’s important to validate your package to ensure that it follows best practices. To do so, click “Validate & Package” from the top banner menu, then click the green “Validate” button. When the validation reaches 100%, you can download the package. This page will also generate an Overall Health Report detailing which elements, if any, should be addressed. Once the package is downloaded, change the file extension to .zip. Double-clicking the file will extract it, and you can open it to view details of the sourcetype and events within.

Figure 6 – The overall health report indicates that the validated data package is ready to be downloaded

Return to the Splunk instance through Search and Reporting and run the data model command again as a search. Now, your events will populate in the data model! You can also view the CIM compliant aliases in Settings > All Configurations.  

 

Compliance, Achieved

The resultant data model is good to go—CIM compliant and ready to be exported. Check out more Splunk tips on our blog and our YouTube channel. To get in touch with engineers like Hailie, fill out the form below: 

 

The New Standard: KGI’s Managed Services for Splunk

When the KGI team announced our Managed Services for Splunk offering, we made a pretty heady statement: we are launching the new standard for a managed service capability for customers using Splunk.

You’ve probably heard companies make claims like “the new standard” before. However, in our experience, backing up that claim is a different story. We see the KGI team as defining a new standard for Splunk managed services in three compelling ways.

1. Differentiated Experience and Expertise

The KGI team has worked with the Splunk platform since 2013, and over that time we have delivered over 600 engineering services engagements tied to the Splunk Enterprise and Splunk Cloud platforms. These engagements have covered projects big and small for both commercial companies and U.S. public sector organizations.

KGI is a Splunk Elite Services Partner and has been twice recognized as Splunk Public Sector Services Partner of the Year. Our engineering services for Splunk are not limited to security and IT operations use cases – we develop business analytics applications on the Splunk platform that perform a variety of functions.

Specific to security – which is Splunk’s #1 use case – the KGI team offers a unique perspective. Since the company’s origins in 2006, we have supported US Federal defense and intelligence activities that demand stringent approaches to cybersecurity. We apply 15+ years of know-how gained by supporting classified networks in everything we do. A traditional commercial-oriented managed service provider simply will not have this perspective. Your Splunk environment should be secure, and we know how to get it there and keep it there.

 

2. The Atlas Platform Combined with Expertise on Demand

In November 2020, KGI launched Atlas, the first platform of its kind purpose-built to help organizations achieve exceptional results from their investments in Splunk. Managing a Splunk environment is table stakes for a managed services provider; enabling an organization’s valued colleagues to achieve great things with Splunk is the new standard.

The Atlas platform works with any Splunk environment, whether it’s Splunk Enterprise running on-prem or Splunk Cloud. Atlas addresses the total ecosystem of capabilities that are needed to gain success with Splunk. This includes functions that do not necessarily exist within Splunk technologies, yet are necessary when operating Splunk at scale.

When it comes to managed services, we don’t stop with Atlas – we complement the platform’s capabilities with our Expertise on Demand (EOD) offering. EOD gives end-users immediate access to Splunk experts to assist them in the creation of comprehensive dashboards and reports. With EOD, there are no more struggles figuring out SPL searches or data correlations. Expert help is as close as a phone call or chat session.

And Atlas helps customers save money – big money – with their Splunk investment. From license optimization to the reduction of Splunk’s “soft” costs, pairing Atlas with EOD ensures that an organization’s investment in Splunk is the best it can be.

 

3. Passionately Focused on Business Value from Splunk: the KGI Delivery Framework

Organizations invest in Splunk to yield business outcomes and expect specific returns on their investments (ROIs). The KGI Delivery Framework focuses our teams on delivering three ROIs – mission, financial, and human – within every Splunk project that we tackle.

Delivery of technical outcomes should be expected from any managed services provider – and we deliver Splunk technical excellence in everything we do.  Delivery of business results specific to an organization is a key differentiator of KGI’s managed services offering for Splunk. The KGI Delivery Framework is our proprietary delivery methodology, which ensures organizations receive optimal returns on their investments in Splunk. Kinney Group’s managed services offering ensures that business outcomes are every bit as important as technical outcomes.

If organizations cannot achieve business outcomes from the money they’ve invested in Splunk – what’s the point?

 

The Kinney Group Difference

Deep and differentiated experience with Splunk. Leverage of the Atlas platform and EOD for driving customer success. A focus on business outcomes. All three of these attributes of KGI’s Managed Services for Splunk offering are elements of the new standard we’ve created.

Beyond the outstanding capabilities outlined above, we haven’t yet touched on world-class security MDR support, cyber terrain maps for identifying exploitable vulnerabilities, or specific vertical market solutions via Atlas.

The new standard in managed services for Splunk is here. It’s not a tagline, it’s the new journey that customers embark upon as they pursue outstanding results with Splunk.

Want to learn more about Kinney Group’s Managed Services offering? Fill out the form below!

Splunk 101: Data Parsing

 

When users import a data file into Splunk, they’re faced with a dense, confusing block of characters in the data preview. What you really need is to make your data more understandable and more accessible. That’s where data parsing and event breaking come in.

In this brief video tutorial, TechOps Analyst Hailie Shaw walks you through an easy way to optimize and configure event breaking. Spoiler alert: it boils down to two settings in props.conf.

This function within Splunk can greatly simplify the way you interact with your data. After learning it, you’ll get the insights you need, faster.

Event Breaking

When importing data into your Splunk instance, you’ll want to be able to separate it into events, which makes the data legible and easy to interpret. Because the imported data file isn’t pre-separated, event breaking is an essential skill. The most important piece of separating data is the line breaker within the event boundary settings, which determines how Splunk groups and separates events.

LINE_BREAKER tells Splunk where to split incoming data into separate lines and events. The default is ([\r\n]+): a capture group, wrapped in parentheses, matching any sequence of newlines and carriage returns.

SHOULD_LINEMERGE controls whether Splunk merges separate lines back into multi-line events. The default value is true, but as a best practice it should be set to false whenever you supply your own LINE_BREAKER regex; the regex gives Splunk an explicit guide for breaking up lines and prevents them from merging.
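Putting the two settings together, a props.conf stanza for this scenario might look like the sketch below; the sourcetype name is a placeholder, and the regex assumes events that each begin with an <event> tag, as described next:

# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<event>

Only the text captured inside the parentheses is discarded, so each new event starts cleanly at the <event> tag.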

When you import the data file, the term “<event>” appears several times within the block of data, which Splunk initially treats as one large event. By pasting the original block of data into regex101.com, you can build a pattern that identifies where each <event> should begin a new line. Then enter the resulting regex into props.conf, which tells Splunk where to break the data into the desired events.

After you select regex as the event-breaking policy and enter the pattern from regex101.com, the data preview displays your data separated into events (each beginning with “<event>”). The resulting props.conf entry covers both settings described above. This data preview can now be saved as a new sourcetype.

Learn More!

Learning event breaking can help make your data more organized and legible for members of your team. Kinney Group’s Expertise on Demand service offers a wealth of Splunk tips and knowledge, tailored by analysts like Hailie to your team’s specific needs. Check out the blog for additional tutorials from our engineers. To learn more about Expertise on Demand, fill out the form below!

Contact Us!

The Ten Command(ment)s of Splunk

There are many dos and don’ts when it comes to Splunk. In our time supporting Splunk customers through Expertise on Demand, Team Tech Ops has seen the good, the bad, and the ugly situations customers can fall into with Splunk.

We’re happy to present the Tech Ops Ten Command(ment)s of Splunk best practices.

1) Thou shalt NEVER search index=*

This one is pretty self-explanatory. Splunk has A LOT of data.

Figure 1: "Splunk Slaps" meme
Figure 1: “Splunk Slaps” meme

In most cases, that means hundreds of gigabytes, maybe even terabytes of data. I’m sure you’ve tried running a search that looks at one index across millions of events and found that it took a very long time to complete.

Now, imagine that across all of your indexes. Not many Splunkers will ever see that full picture, because the search simply will not complete (unless you have a tiny environment or use something like tstats).

Searching “index=*” goes into what I like to call the worst practices box.
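To make the contrast concrete, here is a hedged before-and-after sketch (the index and sourcetype names are invented).

Worst practice:

index=* error

Better:

index=web sourcetype=access_combined error

Scoping the search to a specific index (and, ideally, a sourcetype and a sensible time range) lets Splunk skip every bucket that can’t possibly contain your answer.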

2) Thou shalt remove real-time & All Time as an option for basic users

Right in line with never searching every index humanly possible, we also want to avoid looking at every event (that exists or ever will exist).

Running a search in real-time or across all-time causes a resource strain on the environment and may even cause disruption for your fellow Splunk users.

3) Thou shalt not ingest data into the main index

The main index is the default index for Splunk Enterprise. If you don’t specify an index for your inputs, all of your data will land in main.

As a best practice, never send data to the main index. Ever. If you thought it was hard to find data when it’s nicely organized into indexes and sourcetypes, try finding anything when it’s completely jumbled up in one place.
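The fix is to name an index on every input. A minimal inputs.conf sketch, with a hypothetical path, index, and sourcetype:

# $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf
[monitor:///var/log/myapp/app.log]
index = app_logs
sourcetype = myapp:log

(The index itself still has to exist: create it in the UI or in indexes.conf before pointing inputs at it.)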

4) Thou shalt leave on ALL search formatting settings

This one is a “You vs. The Guy she tells you not to worry about” situation.

You:

Figure 2 – Splunk without search formatting settings

The Guy she tells you not to worry about:

Figure 3 – Splunk with ALL search formatting settings

5) Thou shalt view the monitoring console before requesting a performance dashboard be built

Most of the information is already there; you don’t need to reinvent the wheel, or in this case… the monitoring console.

Figure 4 – Splunk monitoring console meme

6) Thou shalt look for an add-on first before onboarding new data

If you’re onboarding data, there is probably an app or add-on that can help. You’re going to save a lot of time (and aspirin) by utilizing one of Splunk’s apps or add-ons. Note: this does not apply if you’re ingesting something completely unique to you or your company.

Figure 5 – Save time with a Splunk app or add-on

7) Thou shalt follow correct directory precedence

NEVER save a .conf file in /default. It’s that simple. Just don’t do it.
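In practice, that means your changes belong in a local directory. A quick sketch of the distinction, with a placeholder app name:

$SPLUNK_HOME/etc/apps/<your_app>/default/props.conf    <- shipped with the app; never edit
$SPLUNK_HOME/etc/apps/<your_app>/local/props.conf      <- your changes go here

Settings in local take precedence over default, and they survive app upgrades, which overwrite the default directory.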

Figure 6 – Don’t use .conf files in /default in Splunk

8) Thou shalt have all instances of Splunk on a supported version

If you wouldn’t allow an unsupported version of Windows Server in your environment, then why would you allow an unsupported version of Splunk in?

Figure 7 – Update your Splunk version

9) Thou shalt use forwarder management

Think smarter, not harder. Forwarder management makes it easier to keep all your forwarders buttoned up and working properly. The alternative is to make changes and updates manually and individually, and depending on how many clients you have… that might take a while. Splunk’s native forwarder management tool is cool, but Kinney Group’s Forwarder Awareness application (through Atlas) is cooler. Check out this incredible tool that will save you a TON of time in Splunk.

Figure 8 – Utilize forwarder awareness so you don’t look like this guy

10) Thou shalt not use join/subsearches unless absolutely necessary

I want to start off by saying that subsearches aren’t bad; they’re just not as efficient as other solutions. There’s more to come on this rule, but for now, trust our advice and avoid them whenever you can.
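As a taste of what “more efficient” usually looks like, here is a hedged sketch with invented index and field names. An enrichment done with join:

index=web
| join user [ search index=auth | fields user, department ]

can often be rewritten as a single stats pass over both indexes:

(index=web) OR (index=auth)
| stats values(department) AS department BY user

The stats version avoids the subsearch’s result limits and tends to scale far better on large datasets.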

Conclusion

Your data is important, so how you work with it in Splunk makes all the difference in the value you’ll get out of the platform. The Tech Ops team has worked with hundreds of Splunk customers, and from our experience, these tips are a great place to start in adopting Splunk best practices. If you’d like to work directly with us, the experts, please fill out the form below!

Meet Atlas’s Scheduling Assistant

Searches are at the heart of Splunk. They power the insights that turn data into business value—and Atlas has plenty of them collected in the Search Library. Simple dashboards and ad-hoc searches, though, are only the first step: the real magic happens with the Splunk scheduler. However, as Splunkers will know, it’s all too easy to bog down an environment with poorly-planned search schedules, redundancies, and heavy jobs. Soon, this leads to skipped jobs, inaccurate results, and a slow and frustrating user experience.

Atlas has a first-of-its-kind solution.

The Scheduling Assistant application provides a real-time health check on the use of Splunk’s scheduler and scheduled searches. In addition, it includes a built-in mechanism to fix any issues it finds. Atlas’s powerful Scheduling Assistant ensures that your scheduled searches in Splunk are running efficiently by providing the visibility you need to make the most of your data resources.

Scheduler Activity

In Atlas’s Scheduling Assistant, you’ll find the Scheduler Activity resource. The Scheduler Activity tab is your starting point for assessing how efficiently your environment is currently executing scheduled Splunk searches. Then, the Scheduler Health Snapshot section offers a health score based largely on historic findings like skipped ratio and search latency, as well as a glimpse forward at future schedule concurrency.

Figure 1 – Scheduled Activity tab in Splunk

Below the Health Snapshot, the Concurrency Investigation section lets users view and sort their scheduled searches with a helpful translation of the scheduled run times. These dashboards display Atlas’s computed concurrency limits for a Splunk environment, which dictate the maximum number of searches that can be run at any given time.

These real-time insights inform how users can schedule searches for the fastest, most efficient results.

Figure 2 – Concurrency Investigation tab in Scheduling Assistant
Figure 3 – Scheduling Assistant preview for Splunk

Next up is Historical Performance, which shows how scheduled searches have been running. This dashboard and graph display average CPU and physical memory used, along with search metrics such as run time and latency.

Figure 4 – Historical performance of scheduled searches in Splunk

After Historical Performance, the Scheduled Search Inventory section provides details on all manually scheduled searches. It also allows users to quickly drill down to the Scheduling Assistant tool for any given search.

Figure 5 – Search Inventory of all searches in Splunk

Scheduling Assistant

The Scheduling Assistant dashboard allows users to select a single scheduled search to investigate and modify.

Figure 6 – Snapshot of Scheduling Assistant dashboard
Figure 7 – Key metrics on search activity in Splunk

This section provides key metrics for the search’s activity to highlight any issues. Atlas users can experiment by changing the selected search’s scheduling setting. By editing the Cron setting and submitting a preview, users can compare the Concurrent Scheduling and Limit Breech Ratio to see if their tested Cron setting improves overall outcomes.

If the modified schedule is satisfactory, the user can then save changes and update the saved search—all within the Atlas platform.

Cron Helper

Splunk uses Cron expressions to define schedules, and Atlas’s Cron Helper tab provides a quick and easy way to test them. Not only does this tool enable fast, direct translations, it also acts as a learning tool for those new to Cron.

The syntax key below the Cron bar displays the definitions of each character, allowing users to try their hand at creating and interpreting their own Cron expressions.
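For example, here are a few common expressions in the standard five-field cron syntax (minute, hour, day of month, month, day of week) of the kind the Cron Helper translates:

*/15 * * * *     every 15 minutes
0 6 * * 1-5      at 6:00 AM, Monday through Friday
30 2 1 * *       at 2:30 AM on the first day of every month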

Figure 8 – Preview of Atlas Cron Helper

Scheduler Information

The Scheduler Information dashboard is a knowledge base for the complex definitions and functions that power Splunk’s scheduled searches. The environment’s limits.conf is presented for review, and the current statistics on concurrency limits are provided for clarity.

These relatively static values are vital to understanding the scheduler and taking full advantage of its potential.

Figure 9 – Preview of Scheduler Information dashboard

In Conclusion

Powered by these four revolutionary features, Atlas’s Scheduling Assistant provides unprecedented insight into Splunk searches. The power to survey, schedule, and change searches is in the user’s hands, saving your team time and resources.

There’s more to come from Atlas! Stay informed by filling out the form below for more information from KGI.

Contact Us!

Meet Atlas’s Search Library

One key pain point for Splunk admins and users is the inability to track, store, and view searches in one place. On top of keeping tabs on a dizzying number of searches, users must write queries in Splunk Processing Language (SPL), which is complex and difficult to learn. Writing efficient searches in SPL takes abundant time and resources that many teams can’t afford to spare. Coordinating searches between users and admins eats up further time and can produce confusion for any team—and that’s not to mention the major obstacles that slow or failed searches can introduce.

Optimizing and keeping track of searches is just one of the issues facing IT teams today—thankfully, we’ve got a solution. Atlas, a platform developed by Kinney Group to help users navigate Splunk, includes a comprehensive and customizable Search Library to aid users in creating and using searches.  

Figure 1 – The Search Library icon from the Atlas Core homepage

The Atlas Search Library

Collected Searches

The Search Library contains a collection of helpful, accessible searches pre-built by KGI engineers. Users also have the ability to save their own custom searches, which can be edited or deleted at any time. These are listed by name and use case, making it easy to identify the purpose of each search. All searches in the library include expandable metadata so that users can see additional information, including the SPL query, within the table. This insight into the SPL enables faster, easier education for those looking to write their own queries. Users can also filter searches to quickly and easily find all applicable listings, giving users and admins an unprecedented degree of visibility.  

Figure 2 – Atlas’s Search Library tab 

Using the Searches

Performing one of these searches couldn’t be easier. Clicking “Launch Search” will open a separate tab where you can view details of the search’s results and tweak the SPL query—all without changing the originally saved search. This capability enables those without a knowledge of SPL to learn and use powerful, intricate searches.  

Figure 3 – The launched search, open in a separate tab

Search Activity

The Search Library component also includes a Search Activity tab, which can be used to monitor which searches are run when, how frequently, and by whom. Having this visibility on one page allows users to see redundancies and overall usage of a search. The Search Activity tab includes the same level of detail as the Search Library, meaning users can dive into the specifics of each search. The tab is also filterable so users can identify exactly which searches they’re shown. You can also add any search in the Search Activity tab to the Search Library, making it easier than ever to keep track of what you need in Splunk.  

Figure 4 – The Search Activity tab of the Search Library

Conclusion

Any user is liable to hit a few roadblocks on their Splunk journey. With Atlas’s Search Library application, your team can be sure that searches won’t be one of them.  

The Search Library is only one of Atlas’s innovative features, and we’re looking forward to sharing so much more from the platform with you. If you’re eager to learn more about Atlas in the meantime, fill out the form below.

Schedule a Meeting

Clearing the Air: Apps vs Add-ons in Splunk

When talking about apps that we need to bring into Splunk, the conversation can get very confusing, very quickly. This is because apps serve different purposes and come from different sources.

Let’s look at AWS data for example. If I do a cursory search on Splunkbase, the center for Splunk’s Apps and Add-ons, for an app to bring in my data, I might find the following results:

  • Splunk App for AWS
  • Splunk Add-on for Amazon Web Services
  • Splunk Add-on for Amazon Kinesis Firehose

 

Figure 1: Search results in Splunkbase

These are just a few of the 38 results that pop up. Of those 38, which do you choose?

There are a number of similarly named apps built around the same data. Without doing extensive research before your search, you probably couldn’t clearly identify when each app needs to be used. Which app is the best fit for the AWS data I’m consuming? How many users have installed this app? What are the users saying about this app?

 

The Tricky Part

There is a lot to decipher when choosing which tool to utilize, and it gets even trickier: Splunk provides both apps and add-ons, each built to enhance and extend the value of the Splunk platform. Although the two have very different functions, both are listed the same way in your Splunkbase results: everything comes up as an “app.”

This can make the process of identifying the correct app or add-on extremely difficult for users within Splunk. That’s why the Tech Ops team has some tips that should make the choice clear.

 

Apps vs Add-ons: The Difference

Let’s see if we can make this easier to decipher in the future. First, we’ll break down the different types of “apps”:

Add-on (TA)

These are the bread and butter of bringing in data from your machines. Add-ons are built with props, transforms, inputs, and various other configuration files to ensure that the data sources being ingested are parsed, extracted, and indexed correctly.

App

In most cases, an app brings in knowledge objects for the user to utilize, such as dashboards, alerts, reports, and macros. It uses the data brought in via the add-on to populate those knowledge objects.

To take full advantage of the data we’re bringing in, we generally want to use add-ons and apps in tandem. While neither is required to bring in your data, they certainly make it much easier. Start with your add-ons to bring your data in from machines. Then utilize your apps to do the heavy lifting of visualizing and analyzing that data.
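As a rough, illustrative sketch of how the two typically differ on disk (real products vary):

Splunk_TA_example/default/
    inputs.conf, props.conf, transforms.conf                          add-on: data collection and parsing
example_app/default/
    data/ui/views/ (dashboards), savedsearches.conf, macros.conf      app: knowledge objects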

Tips from Team Tech Ops

Here at Kinney Group, the Tech Ops team is dedicated to helping customers fix any issue they face with Splunk (really, we mean anything) through our Expertise on Demand offering on the Atlas Platform. We work with different apps and add-ons all day, every day, and constantly recommend the best of these products to our customers. If you want to see the full picture of Splunk, all while snagging our best-practice help and guidance, fill out the form below to talk with one of our Splunkers.

A Lesson on Splunk Field Extractions and Rex and Erex Commands

With Splunk, getting data in is hard enough. After uploading a CSV, monitoring a log file, or forwarding data for indexing, more often than not the data does not look as expected. These large blocks of unseparated data are hard to read and nearly impossible to search. If the data is not broken into events and fields, you may be wondering how to correctly parse it and perform advanced search commands using fields.

This is where field extraction comes in handy.

Field Extraction via the GUI

Field extractions in Splunk are the function and result of extracting fields from your event data for both default and custom fields. Basically, organize your data with field extractions in order to see the results you’re looking for.

Figure 1 – Extracting searchable fields via Splunk Web

 

Pictured above is one of Splunk’s solutions for extracting searchable fields out of your data via Splunk Web. Within the Search and Reporting app, users will see this button available upon search. After clicking it, a sample of the file is presented so you can select events and define fields from the data. The image below shows Splunk’s Field Extractor in the GUI after selecting an event from the sample data.

 

Figure 2 – Sample file in Splunk’s Field Extractor in the GUI

From here, you have two options: use a regular expression to extract patterns in your event data into fields, or separate fields by a delimiter. Delimiters are characters used to separate values, such as commas, pipes, tabs, and colons.

Figure 3 – Regular expressions vs delimiter in Splunk

 

Figure 4 – Delimiter in Splunk’s Field Extractor

If you select a delimiter to separate your fields, Splunk automatically creates a tabular view so you can see what all events look like when properly parsed, compared to the _raw data pictured above.

You can rename every field parsed by the selected delimiter. After saving, you will be able to search on these fields, perform mathematical operations, and use advanced SPL commands.

What’s Next? Rex and Erex Commands 

After extracting fields, you may find that some fields contain specific data you would like to manipulate, use for calculations, or display by itself. You can use the Rex and Erex commands to help you out.

Rex

The Rex command is perfect for these situations. With a working knowledge of regex, you can utilize the Rex command to create a new field out of any existing field which you have previously defined. This new field will appear in the field sidebar on the Search and Reporting app to be utilized like any other extracted field.

Syntax

| rex [field=<field>] (<regex-expression>)

If you would like to use the rex command and want resources for learning regular expressions, sites such as https://regex101.com/ are a great place to practice.

To define what your new field will be called in Splunk, use the following syntax:

| rex [field=<field>] "(?<field_name><regex>)"
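For instance, here is a hedged example that pulls a port number out of _raw into a new field named port_used (the log format is made up):

| rex field=_raw "Port (?<port_used>\d+)"

After this runs, port_used appears in the fields sidebar like any other extracted field.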

Erex

Many Splunk users have found the benefit of implementing Regex for field extraction, masking values, and the ability to narrow results. Rather than learning the “ins and outs” of Regex, Splunk provides the erex command, which allows users to generate regular expressions. Unlike Splunk’s rex and regex commands, erex does not require knowledge of Regex, and instead allows a user to define examples and counterexamples of the data to be matched.

Syntax

 | erex <field_name> examples="<example1>, <example2>" counterexamples="<example1>, <example2>"

Here’s an example of the syntax in action:

 | erex Port_Used examples="Port 8000, Port 3182"

That’s a Wrap

There is a ton of incredible work that can be done with your data in Splunk. When it comes to extracting and manipulating fields in your data, I hope you found this information useful. We have plenty of Splunk tips to share with you. Fill out the form below if you’d like to talk with us about how to make your Splunk environment the best it can be.

Meet Atlas’s Reference Designs for Splunk

There is immense power built within the Splunk data center model. However, with great power comes even greater complexity. The traditional model is difficult to scale for performance, and Splunk teams already working against license costs are constantly faced with performance issues.

We know your teams have more to do with their time, and your organization has better ways to spend their budget.

With Atlas’s Reference Designs, we’ve found a better path forward.

Background

The Atlas Reference Designs provide a blueprint for modern data center architecture. Each provides blazing-fast performance increases and reduces indexer requirements, leading to a lower cost of ownership for your environment. Our Reference Designs are built to remove any friction caused by building and deploying your Splunk environment, serving all Splunk customers. Whether you’re running Splunk on-prem, in a hybrid fashion, or in the cloud, Atlas’s Reference Designs provide you with a simplified and proven solution to scaling your environment. 

There is a common problem that every organization faces with Splunk: ingesting data effectively.

Figure 1 – Atlas Reference Designs

The Atlas Reference Designs present you with a solution that pairs the Splunk Validated Architectures with the needed hardware or environment in order to deliver proven performance results.

Our first set of Reference Designs were built for on-prem Splunk instances in partnership with Pure Storage, a leader in the enterprise platform and primary storage world. Check out Pure Storage’s many nods in the Gartner Magic Quadrant for their innovative solutions.

By pairing the power of Pure Storage with Kinney Group’s Splunk solutions, we’ve been able to decrease the number of indexers needed thanks to Pure’s speed and increased IO on the back end—and we have the tested results to prove it.

Proven Results

Splunk recommends that for every 100 GB you ingest, you create at least one indexer. Let’s look at a real-world example delivered by the Atlas team: a company is ingesting 2 TB/day in Splunk—based on Splunk’s recommendations, this company relied on 20 indexers.

By applying the Atlas Reference Design, we were able to reduce that physical indexer count to just five indexers. This significantly reduces the cost of owning Splunk while increasing performance by ten times.

Figure 2 – Atlas Reference Design results

For those who invest in Splunk infrastructure on-prem, this means huge savings. Think of the costs that 20 indexers entail versus five—the cost savings are huge. Then factor in the impact of reducing the exhausting manual hours spent on Splunk to just a few minutes of human interactions.

Conclusion

To sum it up, we took an incredibly complex design and made it shockingly easy. Through automation and the great minds behind Atlas, your Splunk deployments can now launch with the push of a button, less time and fewer resources, and guaranteed results. The Atlas Reference Designs are built for all customers, with many releases to come; get in touch using the form below to inquire about our other Reference Designs.