Meet Atlas: Your Guide for Navigating Splunk

Last week, we shared some big news! Kinney Group launched Atlas — an exciting new platform for navigating and achieving success with Splunk. We’re thrilled to reveal this revolutionary platform to the world and share more about its vision and technical components.

The mission of Atlas is to help customers derive meaningful value from Splunk for their organization and their colleagues.

Background

What’s stopped organizations so far from reaching their goals in Splunk? The answer is all too real for our friends in the industry: IT teams are under siege.

Splunk is and should always be a key enabler for organizations to reach business, security, and operational objectives. Splunk gives IT teams meaningful insights into security posture, IT Ops, and other analytics. Here’s the issue: these teams are buried in work. IT teams are already charged with an ever-growing list of IT demands. And to top it off, they’re now tasked with managing, implementing, and learning Splunk.

Atlas enables these teams to derive value out of Splunk, fast.

A Look Inside Atlas

Atlas is a subscription service that provides a clear path forward on your Splunk journey with revolutionary datacenter architectures, personal guidance and support, and a collection of applications and utilities that provide powerful insights, instantly. Let’s take a closer look into each component of Atlas.

1. Atlas Reference Designs

Atlas Reference Designs provide a blueprint for modern data architecture. They provide rapid performance increases and reduce indexer requirements, leading to a lower total cost of ownership for your environment. Atlas Reference Designs are built on Splunk validated designs, paired with top hardware and cloud environments, and powered by Kinney Group’s unique Splunk tuning and automation solutions.

Atlas Reference Designs are proven to reduce server counts, cut storage costs, and eliminate hidden OpEx expenses, all while enabling 10x improvements in your Splunk performance.

2. Atlas Applications and Utilities

Atlas’s Applications and Utilities include multiple tools built to monitor and easily manage your data in Splunk. With an interface that’s clean and easy to navigate, working in Splunk has never been easier.

“The current components in the Atlas platform were selected because they address horizontal pain points across many different Splunk instances and deployments that we’ve seen with Kinney Group customers,” Georges Brantley, Scrum Master at Kinney Group, emphasizes. “We want to help Splunk admins and users further curate their Splunk environment to produce what they need in order to accomplish their mission.”

3. Expertise on Demand

If those tools aren’t enough, how about we arm you with a team of expert Splunkers to help with everything else?

Expertise on Demand (EOD) is your team of dedicated Splunk professionals to help you with on-demand support and Splunk best practice training. Our EOD team can quickly assist with any Splunk question or fix in increments as small as 15 minutes.

Atlas gives you essential visibility into your environment, and Expertise on Demand makes sure you get the support you need.

Conclusion

Splunk is a journey, not a destination. And every component we’ve built into Atlas is specifically and thoughtfully designed to reduce the complexities of Splunk while removing the roadblocks on your journey to success.

There’s still plenty to come from the Atlas platform, and we can’t wait to share more with you. If you’re interested in learning more about the platform, fill out the form below.

A Beginner’s Guide to Regular Expressions in Splunk

No one likes mismatched data, especially data that’s hard to filter and pair with patterned data. A regular expression (regex) in Splunk is a way to search through text to find pattern matches in your data. Regex is a great filtering tool that allows you to conduct advanced pattern matching. In Splunk, regex also allows you to conduct field extractions on the fly.

Let’s get started on some of the basics of regex!

How to Use Regex

The erex command

When using regular expressions in Splunk, use the erex command to extract data from a field when you do not know the regular expression to use: you supply example values, and Splunk generates a regex that matches them.

Syntax for the command:

| erex <thefieldname> examples="exampletext1,exampletext2"

Let’s take a look at an example.

In this screenshot, we are in my index of CVEs. I want to have Splunk learn a new regex for extracting all of the CVE names that populate in this index, like the example CVE number that I have highlighted here:

Figure 1 – a CVE index with an example CVE number highlighted

Next, by using the erex command, you can see in the job inspector that Splunk has ‘successfully learned regex’ for extracting the CVE numbers. I have sorted them into a table to show the CVE_Number values that were extracted:
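For reference, here is a hedged sketch of what that erex search might look like (the index name and example CVE IDs are illustrative, not the ones from the screenshot). The regex that Splunk learns shows up in the job inspector, as in Figure 2.

index=cve_data
| erex CVE_Number examples="CVE-2019-0708,CVE-2020-0601"
| table CVE_Number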

Figure 2 – the job inspector window shows that Splunk has extracted CVE_Number fields

The rex Command

When using regular expressions in Splunk, use the rex command either to extract fields using regex named capture groups or to replace or substitute characters in a field using those expressions.

Syntax for the command:

| rex field=field_to_rex_from "FrontAnchor(?<new_field_name>{characters}+)BackAnchor"

Let’s take a look at an example.

This SPL allows you to extract from the field of useragent and create a new field called WebVersion:

Figure 3 – this SPL uses rex to extract from “useragent” and create “WebVersion”
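Since the screenshot doesn’t reproduce well here, a rough sketch of that kind of rex follows; the exact pattern is an assumption and depends on how your useragent strings are formatted.

index=web sourcetype=access_combined
| rex field=useragent "Firefox/(?<WebVersion>[\d.]+)"
| table useragent WebVersion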

As you can see, a new field of WebVersion is extracted:

Figure 4 – the new field in WebVersion

 

The Basics of Regex

The Main Rules

^ = match beginning of the line

$ = match end of the line

Regex Flags

/g = global matches (match all), don’t return after first match

/m = multi-line

/gm = global and multi-line are set

/i = case insensitive

Setting Characters

\w = word character

\W = not a word character

\s = white space

\S = not white space

\d = a digit

\D = not a digit

\. = a literal period (the backslash escapes it)

Setting Options

* = zero or more

+ = 1 or more

? = optional, zero or 1

| = acts as an “or” expression

\ = escape special characters

( ) = allows for character groupings, wraps the regex sets

Some Examples

\d{4} = match exactly 4 digits in a row, each a digit [0-9]

\d{4,5} = match 4 or 5 digits in a row, each a digit [0-9]

[a-z] = match between a-z

[A-Z] = match between A-Z

[0-9] = match between 0-9

(t|T) = match a lowercase “t” or uppercase “T”

(t|T)he = look for the word “the” or “The”

Regex Examples

If you’re looking for a phone number, try out this regex setup:

\d{10} = match 10 digits in a row

OR

\d{3}-?\d{3}-?\d{4} = match a phone number that may have been written with dashes, like 123-456-7890

OR

\d{3}[.-]?\d{3}[.-]?\d{4} = match a phone number that may have dashes or periods as separators

OR

(\d{3})[.-]?(\d{3})[.-]?(\d{4}) = using parentheses allows for character grouping. When you group, you can assign names to the groups; for example, you can label the first group “area code”.
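In Splunk, those named groups plug straight into rex. Here’s a minimal sketch on throwaway data (the contact field and the sample number are made up):

| makeresults
| eval contact="call 317-555-0142 after 5pm"
| rex field=contact "(?<area_code>\d{3})[.-]?(?<prefix>\d{3})[.-]?(?<line>\d{4})"
| table area_code prefix line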

 

If you’re looking for an IP address, try out this regex setup:

\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} = matches four groups of 1-3 digits separated by periods. (Note that this is a loose match for IPv4 addresses; it will also match values above 255.)
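To turn that pattern into a field extraction, something along these lines would work (the field name src_ip is just an example):

| rex field=_raw "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| stats count by src_ip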

Use regex101.com to practice your RegEx:

Figure 5 – a practice search entered into regex101.com

We’re Your Regex(pert)

Using regex can be a powerful tool for extracting specific strings. It is a skill set that’s quick to pick up and master, and learning it can take your Splunk skills to the next level. There are plenty of self-tutorials, classes, books, and videos available via open sources to help you learn to use regular expressions.

If you’d like more information about how to leverage regular expressions in your Splunk environment, reach out to our team of experts by filling out the form below. We’re here to help!

Kinney Group, Inc. Launches Atlas, a Groundbreaking Platform That Empowers Rapid Success With Splunk

Atlas is a revolutionary new platform from Kinney Group, Inc. (KGI) that allows customers to bypass the complexities of Splunk through a suite of powerful applications and utilities that simplify day-to-day operations within Splunk. The Atlas Platform includes KGI’s unparalleled Expertise on Demand service, delivered by Kinney Group’s team of award-winning Splunk professionals. Available beginning today, the new platform is guided by the promise, “You’re never lost in Splunk with Atlas.”

“We’ve worked with hundreds of wickedly smart and capable customers over the years who depend on Splunk for business operations and security,” said Jim Kinney, CEO of Kinney Group. “What we’ve found is those tasked with managing Splunk also have a heavy responsibility in their day-to-day jobs. So, for customers, Splunk needs to be usable and add value quickly. The Atlas Platform removes friction and guides the way to success with Splunk.”

Splunk is the #1 big data analytics platform, serving thousands of customers worldwide. For all the incredible results Splunk can produce, however, it’s also incredibly complex. The Atlas platform brings new, innovative solutions to the Splunk community, enabling customers to achieve scalable and consistent success with Splunk.

For many users, the Atlas platform could cut the costs associated with operating Splunk in half.

“Atlas serves everyone who lives in Splunk, from users to administrators to architects,” explains Roger Cheeks, Director of Analytics Technology at Kinney Group. “Anyone working in the platform who needs consistent and high-performing results will benefit from Atlas. You no longer have the questions and burdens behind building, monitoring, and managing your data within Splunk — now you have Atlas.”

Atlas Reference Designs

Atlas Reference Designs provide a clear roadmap for data center architecture, enabling Splunk customers to model and build their on-premise Splunk environments at scale. For customers running Splunk “on-prem,” Atlas Reference Designs significantly reduce compute, storage, and network infrastructure footprints while delivering 10x improvements in performance and reliability when compared to legacy designs.

Atlas Applications and Utilities

The Atlas platform includes powerful applications, utilities, and integrations for Splunk that simplify daily tasks within Splunk. Core capabilities within Atlas provide clear visibility into data sources, a library of powerful searches that eliminates the need to understand Splunk’s Search Processing Language (SPL) for non-admin users, Splunk Forwarder awareness, and a scheduling assistant that allows users to optimize scheduled searches and jobs.

Expertise on Demand

Expertise on Demand (EOD) provides anytime access to a certified team of Splunk professionals on-demand, and in increments as small as 15 minutes. It’s like having Splunk experts on call to support your needs, large and small, with the Splunk platform. EOD combined with Atlas enables customers to quickly realize success in their journey with Splunk.

Also introduced today — Kinney Group Managed Services

Kinney Group also introduced Managed Services for Splunk (MSP) at a company launch event today. With deep technical expertise and proven experience implementing and achieving success with Splunk for hundreds of commercial and public sector organizations worldwide, Kinney Group’s Splunk-certified professionals will manage your Splunk needs 24/7 including monitoring infrastructure (forwarders, indexers, search heads, etc.), system upgrades, monitoring of log collection, custom dashboards and reports, searches, and alerts. This offering allows companies to reduce overhead related to Splunk, without losing the value and powerful insights the platform provides.

The Kinney Group MSP offering disrupts traditional managed services offerings and sets a new standard for outsourced management of the Splunk platform. KGI’s MSP offering is for both on-prem Splunk Enterprise and Splunk Cloud customers and combines world-class Splunk architecture support with KGI’s EOD and the power of the Atlas platform. The end result for Splunk customers is a managed services offering that is purpose-built to enable organizations to maximize their investments in Splunk, while dramatically reducing costs associated with operating the Splunk platform.

About Kinney Group

Kinney Group, Inc. was established in 2006 and has grown into an organization with the singular purpose of delivering best-in-class professional and subscription services. Partnering with some of the largest and most prestigious organizations in both the Public and Commercial sectors, Kinney Group boasts a growing list of over 500 successful Splunk engagements. Kinney Group consultants are not only focused on delivering superior technical solutions, but also driving the human, mission, and financial outcomes that are vital to your success.

6 Reasons INGEST_EVAL Can Help (Or Hurt) Your Splunk Environment

As a Splunker, you’re constantly faced with questions about what can help or hurt you in Splunk. And if you attended some of this year’s .conf20 sessions, you’re probably asking yourself this question:

“Should I use INGEST_EVAL?”

The answer to this is a solid maybe.

At Splunk’s .conf20 this year, Richard Morgan and Vladimir Skoryk presented a fantastic session on different capabilities for INGEST_EVAL. When you get a chance, take a look at their presentation recording!

In this review, we’ll go through Richard and Vladimir’s presentation and discuss inspiration derived from it. These guys know what they’re talking about; now I’m giving my two cents.

This is part one of two: in the second part, we’ll look at code samples to test some of these use cases.

Background

Splunk added the ability to perform index-time eval-style extractions in the 7.2 release. It was in the release notes, but otherwise wasn’t much discussed. It generated more buzz in the 8.1 release, as these index-time eval-style extractions (say that three times quickly) now support the long-awaited index-time lookups.

The purpose of INGEST_EVAL is to allow EVAL logic on indexed fields. Traditionally in Splunk, we’d hold off on transformations until search time; old-timers may remember Splunk branding using the term “Schema on the Fly.” Waiting for search time is in our DNA. Yet perhaps the ingest-time adjustments are worth investing in.

Let’s look through the key takeaways on what ingest-time eval provides. Then you can decide whether it’s worth the hassle of the prep work to take advantage of it.

1. Selective Routing

Before you try to yank my Splunk certs away, yes, we already have a version of this capability. This is slightly different from the typical method of sending data to separate indexers: say, Splunk internal logs going to a management indexer instead of the common-use one, or security logs going to a parent organization’s Splunk instance.

The INGEST_EVAL version allows for selective routing based on whatever you can come up with to use in an eval statement. The example from the presentation uses the match function to send data from select hosts to different indexers. Ideally, this would happen on a heavy forwarder, or any other Splunk Enterprise box, before reaching the indexers. Perhaps those security logs are staying on-prem, and the rest of the logs go to Splunk Cloud.
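Here is a sketch of how that might be wired up. The stanza names, host pattern, and output group names are hypothetical, and this follows the presentation’s approach rather than a config tested here; the group names would need to match tcpout groups defined in outputs.conf on the forwarding tier.

# props.conf (hypothetical sourcetype)
[my_sourcetype]
TRANSFORMS-route_secure = route_secure_hosts

# transforms.conf (hypothetical stanza and host pattern)
[route_secure_hosts]
INGEST_EVAL = _TCP_ROUTING=if(match(host, "^secure-"), "onprem_indexers", "cloud_indexers")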

What else could we come up with for this? If data contains a particular string, we can route it to different indexes or indexers. We already have that with transforms. But transforms are reliant upon regex, whereas this can use eval functions. Move large-profit transactions to a separate set of indexers, or route events to a different indexer when certain codewords appear.

Let your imagination run on this, and you’ll find lots of possibilities.

2. Ingest log files with multiple timestamp formats

In the past, we had to dive into the depths of a custom datetime.xml (the file behind DATETIME_CONFIG) and roll our own solution. INGEST_EVAL, along with if/case statements, can handle multiple timestamp formats in the same log. Brilliant. If you have ever had to deal with logs that have multiple timestamp formats (and the owners of those logs who won’t fix their rotten logs), then you’ll be thrilled to see an easy solution.

INGEST_EVAL can look at the data and search for different formats until it finds a match.
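A sketch of that if/case idea follows; the two formats and the stanza name are invented, and you’d reference the transform from props.conf like any other TRANSFORMS entry.

# transforms.conf (hypothetical stanza; formats are examples only)
[multi_format_timestamps]
INGEST_EVAL = _time=case(match(_raw, "^\d{4}-\d{2}-\d{2}"), strptime(_raw, "%Y-%m-%d %H:%M:%S"), match(_raw, "^\d{2}/\d{2}/\d{4}"), strptime(_raw, "%m/%d/%Y %H:%M:%S"), true(), _time)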

3. Synthesizing dates from raw data mixed with directory names

Sometimes we find data, often IoT or custom syslog data, where the log file only has a timestamp. In these cases, we normally see the syslog server write the file into a directory with a date name. Example: /data/poutinehuntingcyborg/2020-10-31.log 

Using INGEST_EVAL, it’s possible to create an _time that uses part of the source and part of the raw data to create a timestamp that matches what Splunk expects. A lovely solution that wasn’t so easy otherwise.

This simple trick could replace having to use ETL. 
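Using the example path above, here is a sketch of the idea. The regex over source and the assumption that the raw event starts with an HH:MM:SS timestamp are both illustrative.

# transforms.conf (hypothetical stanza; assumes _raw begins with HH:MM:SS)
[time_from_directory]
INGEST_EVAL = _time=strptime(replace(source, ".*/(\d{4}-\d{2}-\d{2})\.log$", "\1")." ".substr(_raw, 1, 8), "%Y-%m-%d %H:%M:%S")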

4. Event Sampling

Using eval’s random function and an if/case statement, it is possible to send along only a percentage of events. Combine it with other eval logic, such as keeping one in ten login errors or one in one thousand successful purchases.

By combining multiple eval statements, you could create a sample data set that includes data from multiple countries, different products, and different results. 
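A sketch of the sampling trick, assuming you’re comfortable sending the discarded events to the nullQueue (this keeps roughly one event in ten):

# transforms.conf (hypothetical stanza)
[sample_one_in_ten]
INGEST_EVAL = queue=if(random() % 10 == 0, "indexQueue", "nullQueue")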

5. Event Sampling combined with Selective Routing

 Whoa.

 Sample the data, and then send the sample to test, dev, or over to your machine learning environment. This is big.

6. Dropping unwanted data from structured data

Using INGEST_EVAL, we can drop fields that we otherwise don’t need. With indexed extractions for CSV and JSON, each column or element becomes a field. Sometimes we don’t want those extra fields.

Let’s look at an example: an Excel spreadsheet exported as CSV, where a user has been adding notes that are unneeded in Splunk.

In standard Splunk ingest, those notes become fields in Splunk and we have to use SPL to remove them from our searches. How often does a CSV dump contain hundreds of fields when we only care about four? (Answer: often.)

Using INGEST_EVAL, we can onboard only the columns or elements that we want and let the rest poof away. Not only does this save disk space, but it makes for cleaner searching.
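One illustrative way to sketch the CSV case, assuming the four columns you care about happen to be the first four and that you’re willing to rewrite _raw at ingest time:

# transforms.conf (hypothetical stanza; keeps only the first four CSV columns)
[keep_first_four_columns]
INGEST_EVAL = _raw=mvjoin(mvindex(split(_raw, ","), 0, 3), ",")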

My Final Thoughts

Back to our question… “Should I use INGEST_EVAL?” Again, it’s a solid maybe.

If you need to keep licensing down by only ingesting what you need, then sure. If you need to modify data beyond what sed or a regex can perform, then give it a try. INGEST_EVAL isn’t for every Splunk admin, but not every admin hunts down blogs like this.

Stay tuned for more takeaways on INGEST_EVAL in part two.

Splunk Search Command Series: mvzip

Need some help zipping up your data in Splunk? This week’s Search Command should do the trick. The Splunk search command mvzip takes two multivalue fields, X and Y, and combines them by stitching them together.

Today, we are going to discuss one of the many functions of the eval command, called mvzip. This function can also be used with the where command and the fieldformat command; however, I will only be showing examples of it with the eval command.

If you have been following our eval series, I am sure by now you know that the eval command is very versatile. Now let’s dive into another tool in the eval command’s tool belt! Let’s also use another command that we just learned called makemv to help facilitate this lesson. First, let’s make some data that has multiple field values.

Figure 1 – Data with multiple fields in Splunk
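If you want to follow along without real data, a quick sketch like this (the names, subjects, and grades are invented) produces similar multivalue fields:

| makeresults
| eval name="Mike,Mike,Mike,Mike", subject="Math,Science,History,English", grade="B,A,D,F"
| makemv delim="," name
| makemv delim="," subject
| makemv delim="," grade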

 

I’ve created three new fields called name, grade, and subject. Within each of these fields, we have multiple values. Let’s say we want to create a new field with these values “zipped” together. For example, I want to know what subjects Mike is taking all in one field. This is where mvzip comes in.

Figure 2 – mvzip example in Splunk
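The eval behind a result like Figure 2 looks something like this (field names follow the sketch above):

| eval zipped=mvzip(name, subject)
| table name subject zipped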

 

Here, I have created a new field called “zipped” with the values from the name and subject fields. Now we can see that Mike is taking Math, Science, History, and English. Next, I want to know what grades Mike has in those subjects (a.k.a. report card time!).

Figure 3 – Using mvzip in Splunk

 

Using mvzip, we can see what grades Mike has in each subject. As you can see from the SPL above, I have mvzipped the third field, “grade,” onto the other two by adding another mvzip function. Splunk only allows you to zip three fields together, so this is our limit here! Also, if you noticed, I added a different delimiter to our final results: I have a pipe separating my values instead of the comma in my first example. You can use whatever delimiter you want with the mvzip function by putting quotes around the delimiter.
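Nesting the calls looks roughly like this, with a pipe as the delimiter (again a sketch based on the fields above):

| eval zipped=mvzip(mvzip(name, subject, "|"), grade, "|")
| table name subject grade zipped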

That is it for now, I hope you enjoyed this lesson and I hope you try this out in your own environment, happy Splunking! P.S. I think Mike could use some tutoring in History and English??

 

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Splunk 101: How to Use Macros

Hey everyone, I’m Hailie with Kinney Group.

Today, we’ll take a look at two examples to see how macros can help you with search optimization and save you time on long or tedious SPL.

In each example, we’re going to be working with Splunk’s practice data. 

Let’s take a look at some of the predefined macros that come with this data. You can see them by going to Settings > Advanced search > Search macros.

Here are the names of the macros they have defined and their associated SPL. As you can see, this SPL is very long, and it would take a long time to hand-jam all of that into your search bar. Instead, a macro allows you to just type the name of the macro surrounded by backticks, and it will execute the SPL that was defined when the macro was created.

 

Example 1

Let’s take a look at the first example of how to use a macro. In Splunk, you always want to follow best practices when running searches, keeping search optimization in mind and trying to limit the amount of data that you’re pulling from disk. The best way to limit this is the time picker: set it to the smallest time range window where you know your data resides.

The next best thing is to define an index. Here you can see we’re using the wildcard, which is definitely not best practice; it’s not going to allow for an efficient search because it’s going to take a lot of time to parse through all those indexes. Instead, for example, if we only wanted to look at the web, security, and sales indexes, we can define a macro that lets us search just those three indexes instead of using the wildcard.

What you could do is just type index=web OR index=security OR index=sales up here as one long search query. But if you’re constantly having to look at those three indexes every day, you’re going to get tired of typing out every single index you want to define. Some of you may have queries where you need to define ten or fifteen indexes at a time to see the data that you want. AND you’re having to do that on a daily basis.

Let’s make a macro that defines the indexes we want to search. When it comes to naming conventions, I try to stick to the simplest name that’s applicable to the use case I’m trying to implement. In this example, I’m going to call this macro “sws,” which stands for sales, web, and security. Here’s where you’ll define your SPL.

Go ahead and save it (click “save”). Let’s make sure it populates. There it is, there’s my SPL. Let’s go ahead and run it just to verify. 

As we can see, the three indexes of sales, security, and web have populated here, and we didn’t need to type out nearly as long an SPL as we would have with the Boolean ORs.
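For reference, the saved macro boils down to something like this; it’s shown here as it would land in macros.conf, though defining it through the Settings UI as in the video produces the same result:

# macros.conf
[sws]
definition = index=web OR index=security OR index=sales

# usage in the search bar
`sws` | stats count by index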

 

Example 2

Let’s take a look at another example to see how a macro can help you save time. Here, we’re looking at some internal data provided by Splunk, and we see that scheduled_time is in its default value of epoch time. Epoch time isn’t really a user-friendly way to see what date this is. Usually, we have to convert it with the following syntax (see video).

It took me a good chunk of time to type all of this out, making sure there were no errors, just to convert epoch time into a friendlier date and time. Let’s make this a macro. Let’s go ahead and copy this (see video), add a new macro, and, again keeping the naming convention simple, give it a name. With this one, I’m just going to call it “convert_time” because that’s what I want the macro to do.

I’ll paste in the SPL associated with it. Click “save,” make sure it populates, and then we’re going to run it.
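The macro body is essentially the conversion typed out in the video. Here is a sketch of what it might contain; the format string and the scheduler sourcetype used to exercise it are assumptions.

# macros.conf (hypothetical definition)
[convert_time]
definition = eval scheduled_time=strftime(scheduled_time, "%Y-%m-%d %H:%M:%S")

# usage
index=_internal sourcetype=scheduler | `convert_time` | table scheduled_time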

There you have it.

That took significantly less time than typing out a strftime conversion with all the percentages for month, day, hours, minutes, and seconds to produce the same output. I hope these two examples have given you a starting point for where you can use macros in your environment: either to increase your search efficiency by defining the multiple indexes you always need to search, or to save the tedious SPL you’re having to run all the time, as we’ve seen here for time conversion.

 

Meet our Expert Team

If you’re a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements to scale. EOD is designed to answer your team’s daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

Splunk Search Command Series: Halloween Edition

Halloween is hands down my favorite time of the year. Candy, costumes, scary movies, cold weather, haunted houses (or hayrides), what’s not to love. Every time Halloween rolls around, I am always looking for a good fright. While this year has been a disappointment for going out and experiencing all the scares, Splunk has been there to provide a terrifyingly good time. 

Today, let’s look at a couple of search commands that are so good…it’s SCARY.

1. Rex command

2. Fillnull

3. Rename

(t)rex

In the land before time, one creature ruled the earth…  

Nah, just kidding, we’re not talking about dinosaurs, we’re looking at the rex command 

Sometimes field extractions don’t pull out all the values that we absolutely need for our search. It might be due to irregular data patterns, low visibility, or a value that just isn’t necessary to have as a permanently extracted field. Regardless of the reason, we always come back to the data and extract the values through our search. Rex allows us to use regular expressions in our search to extract values and create a new field.

 

|rex field=<field> "<regular_expression>"

 

Instead of breaking down each section, it might be easier to show an example. Here are a few sample events:

10:41:35 PM – I saw Casper walking down the hallway 

08:31:36 PM – I saw Zuul running after me 

06:33:12 PM – I saw Jason coming out of the lake 

04:05:01 PM – I saw Jigsaw setting something up in the basement 

02:36:52 PM – I saw Hannibal making dinner 

Apparently, we need to get out of the house we’re staying at…or call the cops, right? (We all know the phone lines have already been cut?).

Before we do anything, we need to assess all the “things” we saw. In my panic, I forgot to set up proper field extractions and didn’t write a line in props.conf for monsters. Luckily, I can use rex to quickly grab these values.  

 

|rex field=_raw "saw\s+(?<scary_things>\w+)"

From there we will get a list of our monsters:

 

Casper 

Zuul 

Jason 

Jigsaw 

Hannibal 

Fillnull

You ever look at your results and notice the empty fields? Is that data missing, or was it never really there? (X-Files music plays in the background.) These are null values in your data, usually caused by a field not being present in some events. In a results set, this looks like empty cells, and all those empty cells might drive you to insanity. To help ease your mind, we can use fillnull to complete our tables.

 

|fillnull value=<value> 

 

By entering a value, fillnull will fill the empty cells with your chosen value. This could be a number like 0 or a string like “null” or “empty”.
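Building on the monster sightings above, a quick sketch of fillnull in action (the fill value is arbitrary):

| rex field=_raw "saw\s+(?<scary_things>\w+)"
| table _time scary_things
| fillnull value="nothing spotted" scary_things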

Rename

Field names don’t always play nicely. In terms of compliance or formatting, field names can really jump out and scare you. In order to blend in, we may need to resort to putting a mask over them. The rename search command will let us do just that.

 

|rename <field> as <new_name> 

 

Here are some examples of rename command in action:  

|rename monsters as users 

|rename insane_asylums as dest 

That’s it for this scary edition of our Search Command Series. I hope these search commands help eliminate the fear behind slow search performance and the ghouls lurking in our data.

Don’t Be Scared of Splunk

Splunk can be pretty frightening, especially when you’re hiding from your searches. That’s where our EOD team comes in. Think the Ghost Busters… but for Splunk.

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

How Do Users Most Commonly Get Lost in Splunk?

When have you been lost in Splunk? We’ve all been there.

In some cases, you’re trying to clean up your data ingest and track down the status of your forwarders. In others, you’re trying to decipher Splunk’s Search Processing Language (SPL) and can’t figure out how to get to the data you need. Then, there’s the constant maintenance, research, and manual hours needed to keep Splunk running efficiently.

Splunk is a journey and, needless to say, most of us have felt a little lost along the way. That’s why we asked our own Kinney Group Splunk experts this question:

How do users most commonly get lost in Splunk?

“Practically all my customers thus far don’t know how to use SPL or get data onboarded. They love it after they get that figured out.”

 

“Slow and inefficient searches are often seen dragging down a deployment or an instance. Additionally, large numbers of scheduled searches often take a toll on performance. Small tweaks can be made to vastly improve searching; however, that comes with much practice.”

 

“Splunk itself is vast and hard enough to learn, however mastering Splunk requires knowledge of SPL, networking, python scripting, regex, XML, Active Directory, AWS etc.”

 

“Logging containerized services. Using the Splunk Syslog Driver for Docker has increased pain-points when bringing in docker logs. This has been a big issue for one of my customers”

 

“I would say that a question I get often in Splunk is ‘How do I find my data dictionary?’ This is equivalent to how do I find all my fields and tables in a database. Because Splunk is SO versatile, sometimes it is hard to know where to begin your search as a newbie to Splunk.”

 

“I think one of the most difficult situations with Splunk is not understanding which configurations are actively affecting data ingest and parsing, and where those configurations are located. You can troubleshoot this on the CLI, but that’s inconvenient when you only have access to the Splunk UI. This confusion leads to lengthy trial and error configuration changes to resolve data format issues.”

 

“Data onboarding with unstructured logs or sourcetypes can be difficult as an intermediate knowledge of regex is often needed to accurately parse these events.”

 

To sum it up, Splunk is hard.

New features, updated products and interfaces, premium apps — when you’re navigating your Splunk journey, it’s easy to get lost. In the 17-year history of Splunk, there’s never been a single solution that removes the roadblocks, provides a clear path forward, and helps you navigate your journey with Splunk. Until now…

 

We have a big announcement. 

We’d love for you to join us Tuesday, November 10 at 1 PM EST.

 

The “Magic 8” Configurations You Need in Splunk

 

When working in Splunk, you can earn major magician status with all of the magic tricks you can do with your data. Every magician needs to prepare for their tricks… and in the case of Splunk, that preparation comes through data onboarding. That’s where the Magic 8 props.conf configurations come in to help you set up for your big “abracadabra” moment.

The Magic 8 (formerly known as the Magic 6) are props.conf configurations to use when you build out props for data: these are the 6-8 configurations that you absolutely need. Why? Splunk serves us with a lot of automation… but as we know, the auto”magic” parts don’t always get it right. Or at least, they can be pretty basic and lean heavily on default settings.

While you’re watching the video, take a look at this resource, The Aplura Cheat Sheet (referenced in the video).

The Magic 8 configurations you’ll need are…

  1. SHOULD_LINEMERGE = false (always false)
  2. LINE_BREAKER = regular expression for event breaks
  3. TIME_PREFIX = regex of the text that leads up to the timestamp
  4. MAX_TIMESTAMP_LOOKAHEAD = how many characters for the timestamp
  5. TIME_FORMAT = strptime format of the timestamp
  6. TRUNCATE = 999999 (always a high number)
  7. EVENT_BREAKER_ENABLE = true*
  8. EVENT_BREAKER = regular expression for event breaks*

You’ll notice the * on #7 and #8. These configs are new to the list! The * indicates these configurations are useful “with forwarders > 6.5.0.” In Part One, we’ll be covering the first two on our list: SHOULD_LINEMERGE and LINE_BREAKER. In Part Two, we’ll review 3-8.
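To see how the whole list hangs together, here’s a sketch of a props.conf stanza that uses all eight settings. The sourcetype name, line breaker, and timestamp format are placeholders for a log with ISO-style timestamps, not a recommendation for your data.

# props.conf (hypothetical sourcetype with ISO timestamps)
[my_custom:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 23
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
TRUNCATE = 999999
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)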

You may have read a few of Josh’s Splunk Search Command Series blogs; both Josh and our engineers here at Kinney Group produce weekly content around Splunk best practices. The Tech Ops team runs our Expertise on Demand service. Team Tech Ops is responsible for knowing everything and anything around Splunk best practice… that’s why you’ll get access to a ton of video and written content from these rockstars.

Meet our Expert Team

If you’re a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements to scale. EOD is designed to answer your team’s daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

Splunk Search Command Series: makemv

 

Have you ever been stuck with a single-value field and needed it to bring a little more… value? This week’s Splunk search command, makemv, adds that value.

Let’s talk about makemv. Makemv is a command you can use when you have a field whose value really contains multiple values, and you want to split them out into a multivalue field. Here is an example of a field with multiple values.

 

Figure 1 – example of a field with multiple values in Splunk

How to use makemv

Here field1 has the values of 1, 2, 3, 4, and 5. By using the makemv command we can separate out these values. Let’s take a look.

 

Figure 2 – example of separated values using makemv

 

Using the delim argument

As you can see, Splunk has successfully divided out the values associated with this field. To use the makemv command successfully, you have to give the delim argument; once you let Splunk know what delimiter it’s looking for (surrounded in quotes), all you need to do is provide the field that has multiple values and let Splunk do the rest! Here is an example of Splunk separating out colons.

 

Figure 3 – Splunk separating out colons with makemv
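As a quick sketch of both delimiters on throwaway data:

| makeresults
| eval field1="1,2,3,4,5", field2="a:b:c:d"
| makemv delim="," field1
| makemv delim=":" field2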

 

Extract field values with regex

The makemv command can also use regex to extract the field values. Let’s take a look at how to construct that. Here is an example.

 

Figure 4 – makemv command using regex

 

Here, all I wanted from the field values was the name portion of each email address. To do this, you use the tokenizer argument instead of delim, and the regex takes care of separating the values. Now that you have a basic understanding of the makemv command, try it out in your environment! Happy Splunking!
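Here is a sketch of what that tokenizer call might look like; the recipients field and the addresses are made up, and the capture group keeps only the part before the @.

| makeresults
| eval recipients="casper@example.com, zuul@example.com, jason@example.com"
| makemv tokenizer="([^@,\s]+)@" recipients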

 

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!