Splunk Search Command Series: Halloween Edition

 

 

Halloween is hands down my favorite time of the year. Candy, costumes, scary movies, cold weather, haunted houses (or hayrides): what's not to love? Every time Halloween rolls around, I am always looking for a good fright. While this year has been a disappointment for going out and experiencing the scares in person, Splunk has been there to provide a terrifyingly good time.

Today, let’s look at a couple of search commands that are so good…it’s SCARY.

1. Rex command

2. Fillnull

3. Rename

(t)rex

In the land before time, one creature ruled the earth…  

Nah, just kidding. We're not talking about dinosaurs; we're looking at the rex command.

Field extractions don't always pull out every value we need for our search. That might be due to irregular data patterns, low visibility, or a value simply not being worth setting up as an extracted field. Regardless of the reason, we always come back to the data and extract the values through our search. Rex allows us to use regular expressions in our search to extract values and create a new field.

 

|rex field=<field> "<regular_expression>"

 

Instead of breaking down each section, it might be easier to show an example. Here are a few sample events:

10:41:35 PM – I saw Casper walking down the hallway 

08:31:36 PM – I saw Zuul running after me 

06:33:12 PM – I saw Jason coming out of the lake 

04:05:01 PM – I saw Jigsaw setting something up in the basement 

02:36:52 PM – I saw Hannibal making dinner 

Apparently, we need to get out of the house we're staying at… or call the cops, right? (We all know the phone lines have already been cut.)

Before we do anything, we need to assess all the “things” we saw. In my panic, I forgot to set up proper field extractions and didn’t write a line in props.conf for monsters. Luckily, I can use rex to quickly grab these values.  

 

|rex field=_raw "saw\s+(?<scary_things>\w+)"

From there we will get a list of our monsters:

 

Casper 

Zuul 

Jason 

Jigsaw 

Hannibal 
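
Putting it all together, here's a minimal sketch of the full search (assuming our sighting events live in a hypothetical index named haunted_house):

index=haunted_house |rex field=_raw "saw\s+(?<scary_things>\w+)" |stats count by scary_things

Counting by the new field even tells us which monster has been the busiest tonight.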

Fillnull

You ever look at your results and notice the empty fields? Is that data missing, or was it never really there? (X-Files music plays in the background.) These are null values in your data, usually caused by a field not being present in some events. In a results set, they show up as empty cells, and all those empty cells might drive you to insanity. To help ease your mind, we can use fillnull to complete our tables.

 

|fillnull value=<value> 

 

By entering a value, fillnull will fill the empty cells with your chosen value. This could be a number like 0 or a string like "null" or "empty".
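
For instance, a quick hypothetical sketch: suppose some of our sighting events carry a location field and some don't. We can table the results and backfill the gaps.

|table _time, scary_things, location |fillnull value="unknown" location

Note that fillnull applies to every field in the results by default; naming specific fields (like location above) limits it to just those.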

Rename

Field names don't always play nicely. In terms of compliance or formatting, field names can really jump out and scare you. In order to blend in, we may need to resort to putting a mask over them. The rename search command will let us do just that.

 

|rename <field> as <new_name> 

 

Here are some examples of rename command in action:  

|rename monsters as users 

|rename insane_asylums as dest 
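
One more trick worth knowing: rename also supports wildcards, so you can mask a whole family of fields at once. A hypothetical sketch:

|rename scary_* as incident_*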

That’s it for this scary edition of our Search Command Series. I hope these search commands help eliminate the fear behind slow search performance and the ghouls lurking in our data.

Don’t Be Scared of Splunk

Splunk can be pretty frightening, especially when you're hiding from your searches. That's where our EOD team comes in. Think Ghostbusters… but for Splunk.

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

How Do Users Most Commonly Get Lost in Splunk?

When have you been lost in Splunk? We’ve all been there.

In some cases, you’re trying to clean up your data ingest and track down the status of your forwarders. In others, you’re trying to decipher Splunk’s Search Processing Language (SPL) and can’t figure out how to get to the data you need. Then, there’s the constant maintenance, research, and manual hours needed to keep Splunk running efficiently.

Splunk is a journey and, needless to say, most of us have felt a little lost along the way. That’s why we asked our own Kinney Group Splunk experts this question:

 

 

How do users most commonly get lost in Splunk?

 

 

“Practically all my customers thus far don’t know how to use SPL or get data onboarded. They love it after they get that figured out.”

 

“Slow and inefficient searches are often seen reducing deployment or an instance. Additionally, large numbers of scheduled searches often take a toll on performance. Small tweaks can be made to vastly improve searching, however come with much practice.”

 

“Splunk itself is vast and hard enough to learn, however mastering Splunk requires knowledge of SPL, networking, python scripting, regex, XML, Active Directory, AWS etc.”

 

“Logging containerized services. Using the Splunk Syslog Driver for Docker has increased pain-points when bringing in docker logs. This has been a big issue for one of my customers”

 

“I would say that a question I get often in Splunk is ‘How do I find my data dictionary?’ This is equivalent to how do I find all my fields and tables in a database. Because Splunk is SO versatile, sometimes it is hard to know where to begin your search as a newbie to Splunk.”

 

“I think one of the most difficult situations with Splunk is not understanding which configurations are actively affecting data ingest and parsing, and where those configurations are located. You can troubleshoot this on the CLI, but that’s inconvenient when you only have access to the Splunk UI. This confusion leads to lengthy trial and error configuration changes to resolve data format issues.”

 

“Data onboarding with unstructured logs or sourcetypes can be difficult as an intermediate knowledge of regex is often needed to accurately parse these events.”

 

To sum it up, Splunk is hard.

New features, updated products and interfaces, premium apps — when you're navigating your Splunk journey, it's easy to get lost. In the 17-year history of Splunk, there's never been a single solution that removes the roadblocks, provides a clear path forward, and helps you navigate your journey with Splunk. Until now…

 

We have a big announcement. 

We’d love for you to join us Tuesday, November 10 at 1 PM EST.

 

The “Magic 8” Configurations You Need in Splunk

 

When working in Splunk, you can earn major magician status with all of the magic tricks you can do with your data. Every magician needs to prepare for their tricks… and in the case of Splunk, that preparation comes through data onboarding. That’s where the Magic 8 props.conf configurations come in to help you set up for your big “abracadabra” moment.

The Magic 8 (formerly known as the Magic 6) are props.conf configurations to use when you build out props for data – these are the 6-8 configurations that you absolutely need. Why? Splunk serves us with a lot of automation… but as we know, the auto"magic" parts don't always get it right. Or at least, they can be pretty basic and lean heavily on default settings.

While you’re watching the video, take a look at this resource, The Aplura Cheat Sheet (referenced in the video).

The Magic 8 configurations you’ll need are…

  1. SHOULD_LINEMERGE = false (always false)
  2. LINE_BREAKER = regular expression for event breaks
  3. TIME_PREFIX = regex of the text that leads up to the timestamp
  4. MAX_TIMESTAMP_LOOKAHEAD = how many characters for the timestamp
  5. TIME_FORMAT = strptime format of the timestamp
  6. TRUNCATE = 999999 (always a high number)
  7. EVENT_BREAKER_ENABLE = true*
  8. EVENT_BREAKER = regular expression for event breaks*

You’ll notice the * on #7 and #8. These configs are new to the list! The * indicates these configurations are useful “with forwarders > 6.5.0.” In Part One, we’ll be covering the first two on our list: SHOULD_LINEMERGE and LINE_BREAKER. In Part Two, we’ll review 3-8.
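
As a reference point, here's what the Magic 8 might look like assembled into a single props.conf stanza. This is a sketch only; the sourcetype name, regexes, and time format below are hypothetical placeholders that depend entirely on your data.

[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TRUNCATE = 999999
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)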

You may have read a few of Josh's Splunk Search Command Series blogs; both Josh and our engineers here at Kinney Group produce weekly content around Splunk best practices. The Tech Ops team runs our Expertise on Demand service. Team Tech Ops is responsible for knowing everything and anything around Splunk best practice… that's why you'll get access to a ton of video and written content from these rockstars.

Meet our Expert Team

If you're a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements to scale. EOD is designed to answer your team's daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

Splunk Search Command Series: makemv

 

Have you ever been stuck with a single value field and needed it to bring a little more… value? This week's Splunk search command, makemv, adds that value.

Let’s talk about makemv. Makemv is a command that you can use when you have a field, and that field has multiple values. Here is an example of a field with multiple values.

 

Figure 1 – example of a field with multiple values in Splunk

How to use makemv

Here, field1 has the values 1, 2, 3, 4, and 5. By using the makemv command, we can separate out these values. Let's take a look.

 

Figure 2 – example of separated values using makemv

 

Using the delim argument

As you can see, Splunk has successfully divided out the values associated with this field. To use the makemv command successfully, you have to give the delim argument the delimiter Splunk should look for, surrounded in quotes. After that, all you need to do is provide the field that has multiple values and let Splunk do the rest! Here is an example of Splunk separating out colons.

 

Figure 3 – Splunk separating out colons with makemv

 

Extract field values with regex

The makemv command can also use regex to extract the field values. Let’s take a look at how to construct that. Here is an example.

 

Figure 4 – makemv command using regex

 

Here, all I wanted from the field values was the name portion of the email address. To do this, you need to use the tokenizer argument instead of delim; the regex takes care of separating the values. Now that you have some basic understanding of the makemv command, try it out in your environment! Happy Splunking!
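
For reference, here are minimal sketches of both forms (the email field and its tokenizer regex are hypothetical; the regex captures just the characters before each @):

|makemv delim=":" field1

|makemv tokenizer="(\w+)@" email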

 

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Kinney Group presents at Puppet Camp Federal Government

Puppet Camp is here! We’re happy to be a part of this conversation with the Federal Government community. Tune in on Thursday, October 22 to hear an incredible list of presenters, including our very own Jim Kinney.

 

 

Sponsored by Carahsoft, Kinney Group and Norseman Defense Technologies, this free event focuses on exploring the unique challenges faced by those working in the federal sphere, while offering some practical solutions to make your day jobs easier. From best practices around security and compliance, to automating mundane tasks, you’ll hear the following talks:

 

  • Lessons learned from a decade of federal compliance automation with Puppet – Trevor Vaughn, Onyx Point
  • The Best Way to Secure Windows – Bryan Belanger, Fervid
  • Puppet Foundations and Futures – Jed Gresham and Stephen Potter, Puppet
  • And more!

 

Puppet CTO Abby Kearns will lead the keynote on how Puppet is helping federal customers achieve greater efficiency and improve security and compliance.

Want to learn more about the event? Click here to register!

 

 

The Kinney Way

The Puppet Enterprise and Puppet open source platforms are recognized as the market leaders for data center automation. The Kinney Group automation team has extensive experience with these platforms, and we possess the highest Puppet certifications available. Our services for Puppet include:

  • Puppet platform design, build, and implementation
  • Development of Puppet modules
  • Integration with VMware-based and other third-party orchestration technologies
  • Enablement of DevSecOps and CI/CD development environments
  • Automated configuration management and security control

With experience in deployments of all sizes in both Commercial and Public Sector environments, Kinney Group has extensive field experience delivering results with these platforms. Coupled with the deepest bench of senior Splunk architects and automation professionals in the world, and the most advanced platform certifications available, Kinney Group is uniquely positioned to make your data dreams come true.

Define Your Drilldown in Splunk: $click.value$ vs $click.value2$

In Splunk, there’s a lot to get right when it comes to choosing tokens for your dashboard when using drilldown. Deciding what values to select for your token names and token values in your dashboard drilldown menu can get confusing. You may be asking…

How do I know what to name my tokens?

What are all those prebuilt token values in the dropdown menu, and what do they do?

Passing tokens to your dashboard, linking them to other dashboard panels, or linking them to searches can advance your dashboard visualizations and data analytics. Multiple tokens can be used with dashboard drilldown in Splunk, but today we’ll be discussing two of the most commonly selected token values: $click.value$ and $click.value2$. By the end of this post, you will understand when to select one over the other to fit your dashboarding needs. Let’s get started!

 

The Drilldown

First off, you’ll need to edit your dashboard drilldown menu by going into your dashboard’s edit mode. Then, select “manage tokens on this dashboard”. The screen will look similar to the one below:

a "Drilldown Editor" popup window
Figure 1 – Drilldown Editor menu in Splunk

$click.value$

$click.value$ is where you can set your token names and select token values from the dropdown menu.

I like to give my tokens names that are relevant and easy to remember. I have decided to name my token “cell_clicked” because I will be clicking on a cell within my dashboard and observing the results that populate. In this first example, we will explore what happens when the token “cell_clicked” is paired with the token value of “$click.value$”.

Figure 2 – Results for status value 200 using the token value of $click.value$

After saving the changes made, I decided to click on the status value of 200 and observe the results below in the linked dashboard panel. The results show us the events related to the GET request method… but why? When you use the token value of “$click.value$”, it will always display the leftmost cell value for the row in which you click. It will not display the events for the individual cell clicked.

 

$click.value2$

If you wanted to return the results from the cell value of status 200 that the mouse is hovering over, you would need to use the "$click.value2$" token. As you can see, when I edit my dashboard drilldown menu and select "$click.value2$" instead of "$click.value$", the events from the exact cell clicked on the dashboard will populate the corresponding events below. Based on your needs when selecting values from a chart-style dashboard, you should be able to decide which of these two tokens is more applicable for your use case.

Figure 3 – Results for status value 200 using the token value of $click.value2$

Bonus Tip

Keeping the second panel of the dashboard out of view can provide a cleaner look to the initial dashboard display, as seen in the image below. Follow this quick tip to achieve this view:

Figure 4 – Dashboard with second panel hidden in Splunk

The second panel will remain hidden until the click value is passed to it. To accomplish this, you can edit the source code of the dashboard to add the following line:

<panel depends="$insert_token_name$">
Figure 5 – Edited dashboard source code in Splunk

Save the changes and reload to see your new dashboard! Test to see if the token is being passed correctly by clicking on the cell and seeing the corresponding events populate below.
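
To tie the pieces together, here's a minimal Simple XML sketch of the whole pattern (the token name, index, and fields are hypothetical): the table panel sets the token on click using $click.value2$, and the second panel stays hidden until that token is set.

<row>
  <panel>
    <table>
      <search><query>index=web | stats count by status, method</query></search>
      <drilldown>
        <set token="cell_clicked">$click.value2$</set>
      </drilldown>
    </table>
  </panel>
  <panel depends="$cell_clicked$">
    <event>
      <search><query>index=web status=$cell_clicked$</query></search>
    </event>
  </panel>
</row>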

Ask the Experts

Looking for more tips to advance your Splunk visualizations? Ask the experts at Kinney Group! We supply new Splunk resources every week, like this tutorial on Choropleth maps. If you think your company could benefit from our professional services, our Expertise on Demand team is ready to address your unique Splunk needs. Fill out the form below to learn how we can help.

Splunk Search Command Series: dbinspect

 

 

The power of Splunk comes from the insights we pull from our data. And to emphasize… I mean searchable data. Now, Splunk isn’t perfect and neither is your data. Data can be corrupt, go missing, or frankly, live in the dark. Pull that data back into the light and ensure your data is intact by using dbinspect.

What is dbinspect? The Splunk search command, dbinspect, allows us to look at the information of buckets that make up a specified index.  If you’re using Splunk Enterprise, this search command shows you where your data lives so you can optimize your disk space.

How to Use dbinspect

Let’s break down the command:  

|dbinspect index=<index_name> timeformat=<time_format>

Check out what this looks like in Splunk:  

Figure 1 – dbinspect in Splunk

 

The above screenshot may look small, as it doesn't capture all of the fields, but the fields we DO see provide us with a wealth of information. When you use the command, you'll have access to all of the fields we can't see in the screenshot.

 

Here’s what we can see with dbinspect: 

How many events are in a bucket 

The file path of the bucket 

Which index the bucket belongs to

 

dbinspect also tells us: 

The state of the bucket (hot/warm/cold) 

When the bucket was created 

The size of the bucket in MB

And tsidx states (full, fulling, etc.)
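
As a quick example, you could total bucket sizes per state for an index (using _internal here; swap in your own index name):

|dbinspect index=_internal |stats sum(sizeOnDiskMB) by state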

 

And that's it. Use dbinspect to get insights into your data buckets. We've got plenty of searches to come this month, so stay tuned!

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Splunk 101: Predict Command

 

Hey everyone, I’m Hailie with Kinney Group.

 

Today, I'll walk you through a short tutorial on the predict command. The predict command forecasts values from one or more sets of time series data. The command can also fill in missing data in a time series and provide predictions for the next several time steps.

 

As we're going to see in this example, we're going to have a bar graph displaying the number of purchases made on a specific host. The predict command is going to provide us with confidence intervals for all of its estimates, with an upper and lower 95th percentile range displayed on the graph.

 

In this instance, we're going to use the practice data that came from Splunk. Here, we're in the web index. If we want to look at purchases made on a specific host, we're going to use www1 as the host. That pulls up the events that come from that host, but as I said earlier, predict needs some sort of time series data to work off of.

 

We're going to use a timechart. By counting the number of purchases by our host, this is going to display a bar graph. As you can see, the bar graph shows how many purchases were made on that host each day. In this example, on November 8th, there were 76 purchases made.

 

With the benefit of the predict command, we're going to see predictions extend off the graph for future days, along with the upper and lower 95th percentile confidence ranges.
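
Here's a minimal sketch of the search described above, assuming the Splunk tutorial data where purchases show up as events with action=purchase on host www1:

index=web host=www1 action=purchase |timechart span=1d count as purchases |predict purchases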

 

Use the predict command in your environment when it makes sense to you, maybe for future trend analysis, or for predicting how much disk space you use per day or how much data you ingest. In this instance, we used it to see, with a high confidence level, how many purchases we'd expect on our www1 host.

 

Meet our Expert Team

If you're a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements to scale. EOD is designed to answer your team's daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

Splunk Search Command Series: inputlookup and outputlookup

 

Think back to our article on the Splunk search command, lookup… we talked about lookups and how they can be used to enrich the data currently in Splunk. Let’s revisit some new ways we can use lookups in our Splunk environment.

Using Inputlookup

Where the lookup search command allows you to inject fields from a lookup into the data in an index, inputlookup will allow you to just view the lookup. This can be used at the beginning of a search, halfway through (using append or join), or wherever you see fit to bring in a lookup.

Let’s take a look at the syntax:

Syntax: |inputlookup <lookup_name>

Easy, peasy.
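
For instance, two hypothetical sketches, one standalone and one appended mid-search (the lookup name is a placeholder):

|inputlookup monsters.csv

index=main |append [|inputlookup monsters.csv]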

 

Figure 1 – Using inputlookup in Splunk

Interestingly enough, the lookup captured in the screenshot was built with the second command in this article: outputlookup.

Lookups in Splunk are not just tables that get ingested… they can also be created from data we already have.

Using Outputlookup

Whenever you find yourself with a results table that you'd like to hold onto, use outputlookup. Throw outputlookup at the end of the search, and it will turn the results into a lookup that you can use independently.

Let’s take a look at the syntax:

|outputlookup <lookup_name>

 

Figure 2 - Using outputlookup in Splunk
Figure 2 – Using outputlookup in Splunk

There are a few extra arguments that can be added if need be. Arguments like append=true and overwrite=true will change how the lookup is created or updated.

Outputlookup really shines when it comes to building out a list of suspicious values in Splunk (such as a watchlist, blacklist, or whitelist).  All it takes is to build out a results table in Splunk that contains the information you need.
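
For example, here's a hypothetical watchlist build; the index, threshold, and lookup name are all placeholders:

index=web status=403 |stats count by src_ip |where count > 100 |outputlookup suspicious_ips.csv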

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Michael Simko’s Top Five Recommended Sessions at Splunk’s .conf20


One of my favorite times of the fall is the annual Splunk user conference. The pandemic has thrown lots of conferences into disarray. The Las Vegas .conf may be off, but virtual .conf is on — and is free. And yes, free as in free, not free like someone tried to give you a dog.

The virtual conference is 20-21 October for AMER, and 21-22 for EMEA and APAC. 

Here are the top five sessions at Splunk .conf20 that I recommend my customers, colleagues, and students attend. There are many more interesting sessions across the Splunk product line and beyond (temperature scanning crowds to find the infected?). 

 

1) PLA1454C – Splunk Connect for Syslog: Extending the Platform 

Splunk Connect for Syslog is an outstanding system for onboarding syslog data into Splunk. Traditionally, Splunk uses a third-party syslog server to write data to disk, and then a Universal Forwarder to read that data and send it to Splunk. This has worked well but requires building the syslog server and understanding enough of the syslog rules to configure the data correctly.

Enter Splunk Connect for Syslog, which handles the syslog configuration, sends the data to Splunk, and for many known sourcetypes makes the onboarding process a snap. 

 

What I like best: This came from engineers looking at a problem and making things better.

 

2) PLA1154C – Advanced pipeline configurations with INGEST_EVAL and CLONE_SOURCETYPE

Eval is a powerful way to create, modify, and mask data within Splunk. Traditionally, it is performed at search time. This session shows methods for using INGEST_EVAL to perform eval logic as the data is being onboarded. This helps with event enrichment, removing unwanted fields, event sampling, and many more uses.
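
For a taste of the mechanics, here's a minimal hypothetical sketch (the stanza and sourcetype names are placeholders): INGEST_EVAL lives in transforms.conf and is wired up through props.conf.

# props.conf
[my_sourcetype]
TRANSFORMS-enrich = add_datacenter

# transforms.conf
[add_datacenter]
INGEST_EVAL = data_center="east"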

 

What I like best: INGEST_EVAL opens a world of more control in Core Splunk.

 

3) SEC1392C – Simulated Adversary Techniques Datasets for Splunk

The Splunk Security Research Team has developed test data for simulating attacks and testing defenses in Splunk. In this session, they are going to share this data and explain how to use it to improve attack detection.

 

What I like best: Great test data is hard to come by, much less security test data.

 

4) PLA1129A – What’s new in Splunk Cloud & Enterprise

This session shows off the newest additions to Splunk Cloud and Splunk Enterprise. Each year these sessions show the new features that have arrived either in the last year or in new versions that often coincide with Splunk .conf.

What I like best: New toys to play with.

 

5) SEC1391C – Full Speed Ahead with Risk-Based Alerting (RBA)

I've talked to several customers who wanted to use a risk-based alerting (RBA) system for their primary defenses. Traditional methods require lots of tuning to avoid flooding the security staff with too many alerts. RBA is a method to aggregate elements together and then present the findings in an easier-to-consume format.

 

What I like best: Another option on how to approach security response.

 

Bonus Sessions: You didn’t think I could really stop at five, did you?

TRU1537C – Hardened Splunk: A Crash Course in Making Splunk Environments More Secure

TRU1276C – Splunk Dashboard Journey: Past Present and Future

TRU1761C – Master joining your datasets without using join. How to build amazing reports across multiple datasets without sacrificing performance

TRU1143C – Splunk > Clara-fication: Job Inspector

 

Join us!

Our KGI team will be on board for .conf20, and we're more excited than ever to attend with you. With over 200 virtual sessions at Splunk's .conf20 event, this year is going to be BIG. With exciting updates to Splunk and a grand reveal of new product features… Kinney Group is ready to help Splunkers along the way.

Keep your ears perked for some big Splunk-related announcements coming your way from Team KGI this month…