Clearing the Air: Apps vs Add-ons in Splunk

When talking about apps that we need to bring into Splunk, the conversation can get very confusing, very quickly. This is because apps serve different purposes and come from different sources.

Let’s look at AWS data for example. If I do a cursory search on Splunkbase, the center for Splunk’s Apps and Add-ons, for an app to bring in my data, I might find the following results:

  • Splunk App for AWS
  • Splunk Add-on for Amazon Web Services
  • Splunk Add-on for Amazon Kinesis Firehose

 

Figure 1: Search results in Splunkbase

These are just a few of the 38 results that pop up. Of those 38, which do you choose?

There are a number of similarly named apps built around the same data. Without doing extensive research before your search, you probably couldn’t clearly identify when each app needs to be used. Which app is the best fit for the AWS data you’re consuming? How many users have installed this app? What are those users saying about it?

 

The Tricky Part

There is a lot to decipher when choosing which tool to utilize. And it gets even trickier than that: Splunk provides both apps and add-ons built to enhance and extend the value of the Splunk platform. Although the two have very different functions, both apps and add-ons are listed the same way in your Splunkbase results: every result comes up as an “app.”

This can make the process of identifying the correct app or add-on extremely difficult for users within Splunk. That’s why the Tech Ops team has some tips that should make the choice clear.

 

Apps vs Add-ons: The Difference

Let’s see if we can make it easier to decipher this in the future. First, we’ll break down the different types of “apps”:

Add-on (TA)

These are the bread and butter of bringing in data from your machines. Add-ons are built with props, transforms, inputs, and various other configuration files to ensure that the data sources being ingested are parsed, extracted, and indexed correctly.

App

In most cases, an app brings in Knowledge objects for the user to utilize. These could be dashboards, alerts, reports, and macros. It uses the data brought in via the add-on to populate those Knowledge objects.

To take full advantage of the data we’re bringing in, we generally want to use add-ons and apps in tandem. While neither is strictly required to bring in your data, they certainly make it much easier. Start with your add-ons to bring your data in from machines. Then, let your apps do the heavy lifting of visualizing and analyzing that data.

Tips from Team Tech Ops

Here at Kinney Group, the Tech Ops team is dedicated to helping customers fix any issue they face with Splunk (really, we mean anything) through our Expertise on Demand offering on the Atlas Platform. We work with different Apps and Add-ons all day, every day, and are constantly recommending the best of these products to our customers. If you want to see the full picture of Splunk, all while snagging our best practice help and guidance, fill out the form below to talk with one of our Splunkers.

A Lesson on Splunk Field Extractions and Rex and Erex Commands

With Splunk, getting data in is hard enough. After uploading a CSV, monitoring a log file, or forwarding data for indexing, more often than not the data does not look as expected. Large blocks of unseparated data are hard to read and impossible to search effectively. If the data is not separated into events, you may be wondering how to correctly parse it and perform advanced search commands using fields.

This is where field extraction comes in handy.

Field Extraction via the GUI

Field extractions in Splunk are the function and result of extracting fields from your event data for both default and custom fields. Basically, organize your data with field extractions in order to see the results you’re looking for.

Figure 1 – Extracting searchable fields via Splunk Web

 

Pictured above is one of Splunk’s solutions for extracting searchable fields out of your data via Splunk Web. Within the Search and Reporting app, users will see this button available upon search. After clicking it, a sample of the file is presented so you can select an event and define how fields should be extracted from the data. The image below demonstrates this feature of Splunk’s Field Extractor in the GUI, after selecting an event from the sample data.

 

Figure 2 – Sample file in Splunk’s Field Extractor in the GUI

From here, you have two options: use a regular expression to separate patterns in your event data into fields, or separate fields by a delimiter. Delimiters are characters used to separate values, such as commas, pipes, tabs, and colons.

Figure 3 – Regular expressions vs delimiter in Splunk

 

Figure 4 – Delimiter in Splunk’s Field Extractor

If you have selected a delimiter to separate your fields, Splunk will automatically create a tabular view so you can see what all events would look like properly parsed, compared to the _raw data pictured above.

Functionality is provided to rename all fields parsed by the selected delimiter. After saving, you will be able to search on these fields, perform mathematical operations, and run advanced SPL commands.
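For example, once fields have been extracted, a search like the following becomes possible (the field names status and response_time are hypothetical, standing in for whatever you named your delimited fields):

index=web sourcetype=access_csv | stats count avg(response_time) by status

Each extracted field can now be used anywhere SPL accepts a field name, from where clauses to timechart.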

What’s Next? Rex and Erex Commands 

After extracting fields, you may find that some fields contain specific data you would like to manipulate, use for calculations, or display by itself. You can use the Rex and Erex commands to help you out.

Rex

The Rex command is perfect for these situations. With a working knowledge of regex, you can utilize the Rex command to create a new field out of any existing field which you have previously defined. This new field will appear in the field sidebar on the Search and Reporting app to be utilized like any other extracted field.

Syntax

| rex [field=<field>] "<regex-expression>"

If you would like to learn the regular expressions needed for the Rex command, websites such as https://regex101.com/ are a great place to practice and further your development.

In order to define what your new field name will be called in Splunk, use the following syntax:

| rex [field=<field>] "(?<field_name><regex>)"
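For instance, here is a minimal sketch; the field name uri and the pattern are made up for illustration rather than taken from a specific data source. It pulls a category value out of a URI into a new field called category_id:

| rex field=uri "category=(?<category_id>\w+)"

The name inside (?<...>) becomes the new field that appears in the sidebar.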

Erex

Many Splunk users have found the benefit of implementing regex for field extraction, masking values, and narrowing results. For those who would rather not learn the “ins and outs” of regex, Splunk provides the erex command, which generates regular expressions for you. Unlike Splunk’s rex and regex commands, erex does not require knowledge of regex; instead, it allows a user to define examples and counterexamples of the data to be matched.

Syntax

| erex <field_name> examples="<example>, <example>" counterexamples="<example>, <example>"

Here’s an example of the syntax in action:

| erex Port_Used examples="Port 8000, Port 3182"
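If the generated expression matches too much, counterexamples can narrow it down. Here is a sketch with illustrative values (the "Error 8000" counterexample is made up for this example):

| erex Port_Used examples="Port 8000, Port 3182" counterexamples="Error 8000"

The learned regular expression shows up in the job inspector, and you can copy it into a rex command once you are happy with it.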

That’s a Wrap

There is a ton of incredible work that can be done with your data in Splunk. When it comes to extracting and manipulating fields in your data, I hope you found this information useful. We have plenty of Splunk tips to share with you. Fill out the form below if you’d like to talk with us about how to make your Splunk environment the best it can be.

Meet Atlas’s Reference Designs for Splunk

There is immense power built within the Splunk data center model. However, with great power comes even greater complexity. The traditional model is difficult to scale for performance, and Splunk teams already working against license costs are constantly faced with performance issues.

We know your teams have more to do with their time, and your organization has better ways to spend their budget.

With Atlas’s Reference Designs, we’ve found a better path forward.

Background

The Atlas Reference Designs provide a blueprint for modern data center architecture. Each provides blazing-fast performance increases and reduces indexer requirements, leading to a lower cost of ownership for your environment. Our Reference Designs are built to remove any friction caused by building and deploying your Splunk environment, serving all Splunk customers. Whether you’re running Splunk on-prem, in a hybrid fashion, or in the cloud, Atlas’s Reference Designs provide you with a simplified and proven solution to scaling your environment. 

There is a common problem that every organization faces with Splunk: ingesting data effectively.

Figure 1 – Atlas Reference Designs

The Atlas Reference Designs present you with a solution that pairs the Splunk Validated Architectures with the needed hardware or environment in order to deliver proven performance results.

Our first set of Reference Designs were built for on-prem Splunk instances in partnership with Pure Storage, a leader in the enterprise platform and primary storage world. Check out Pure Storage’s many nods in the Gartner Magic Quadrant for their innovative solutions.

By matching the power of Pure Storage with Kinney Group’s Splunk solutions, we’ve been able to decrease the number of indexers needed through Pure’s speed and increased I/O on the back end, and we have the tested results to prove it.

Proven Results

Splunk recommends that for every 100 GB you ingest, you create at least one indexer. Let’s look at a real-world example delivered by the Atlas team: a company is ingesting 2 TB/day in Splunk—based on Splunk’s recommendations, this company relied on 20 indexers.

By applying the Atlas Reference Design, we were able to reduce that physical indexer count to just five indexers. This significantly reduces the cost of owning Splunk while increasing performance by ten times.

Figure 2 – Atlas Reference Design results

For those who invest in Splunk infrastructure on-prem, this means huge savings. Think of the costs that 20 indexers entail versus five—the cost savings are huge. Then factor in the impact of reducing the exhausting manual hours spent on Splunk to just a few minutes of human interactions.

Conclusion

To sum it up, we took an incredibly complex design and made it shockingly easy. Through automation and the great minds behind Atlas, your Splunk deployments can now launch with the push of a button, less time and fewer resources, and guaranteed results. The Atlas Reference Designs are built for all customers, with many releases to come; get in touch using the form below to inquire about our other Reference Designs.

 

Meet Atlas: Your Guide for Navigating Splunk

Last week, we shared some big news! Kinney Group launched Atlas — an exciting new platform for navigating and achieving success with Splunk. We’re thrilled to reveal this revolutionary platform to the world and share more about its vision and technical components.

The mission of Atlas is to help customers derive meaningful value from Splunk for their organization and their colleagues.

Background

What’s stopped organizations so far from reaching their goals in Splunk? The answer is all too real for our friends in the industry: IT teams are under siege.

Splunk is and should always be a key enabler for organizations to reach business, security, and operational objectives. Splunk gives IT teams meaningful insights into security posture, IT Ops, and other analytics. Here’s the issue: these teams are buried in work. IT teams are already charged with an ever-growing list of IT demands. And to top it off, they’re now tasked with managing, implementing, and learning Splunk.

Atlas enables these teams to derive value out of Splunk, fast.

A Look Inside Atlas

Atlas is a subscription service that provides a clear path forward on your Splunk journey with revolutionary datacenter architectures, personal guidance and support, and a collection of applications and utilities that provide powerful insights, instantly. Let’s take a closer look into each component of Atlas.

1. Atlas Reference Designs

Atlas Reference Designs provide a blueprint for modern data architecture. They provide rapid performance increases and reduce indexer requirements, leading to a lower total cost of ownership for your environment. Atlas Reference Designs are built on Splunk validated designs, paired with top hardware and cloud environments, and powered by Kinney Group’s unique Splunk tuning and automation solutions.

Atlas Reference Designs are proven to reduce server counts, cut storage costs, and eliminate hidden OpEx expenses, all while enabling 10x improvements in your Splunk performance.

2. Atlas Applications and Utilities

Atlas’s Applications and Utilities include multiple tools built to monitor and easily manage your data in Splunk. With an interface that’s clean and easy to navigate, working in Splunk has never been easier.

“The current components in the Atlas platform were selected because they address horizontal pain points across many different Splunk instances and deployments that we’ve seen with Kinney Group customers,” Georges Brantley, Scrum Master at Kinney Group, emphasizes. “We want to help Splunk admins and users further curate their Splunk environment to produce what they need in order to accomplish their mission.”

3. Expertise on Demand

If those tools aren’t enough, how about we arm you with a team of expert Splunkers to help with everything else?

Expertise on Demand (EOD) is your team of dedicated Splunk professionals to help you with on-demand support and Splunk best practice training. Our EOD team can quickly assist with any Splunk question or fix in increments as small as 15 minutes.

Atlas gives you essential visibility into your environment, and Expertise on Demand makes sure you get the support you need.

Conclusion

Splunk is a journey, not a destination. And every component we’ve built into Atlas is specifically and thoughtfully designed to reduce the complexities of Splunk while removing the roadblocks on your journey to success.

There’s still plenty to come from the Atlas platform, and we can’t wait to share more with you. If you’re interested in learning more about the platform, fill out the form below.

A Beginner’s Guide to Regular Expressions in Splunk

No one likes mismatched data. Especially data that’s hard to filter and pair up with patterned data. A Regular Expression (regex) in Splunk is a way to search through text to find pattern matches in your data. Regex is a great filtering tool that allows you to conduct advanced pattern matching. In Splunk, regex also allows you to conduct field extractions on the fly.

Let’s get started on some of the basics of regex!

How to Use Regex

The erex command

When using regular expressions in Splunk, use the erex command to extract data from a field when you do not know the regular expression to use.

Syntax for the command:

| erex <thefieldname> examples="exampletext1,exampletext2"

Let’s take a look at an example.

In this screenshot, we are in my index of CVEs. I want to have Splunk learn a new regex for extracting all of the CVE names that populate in this index, like the example CVE number that I have highlighted here:

Figure 1 – a CVE index with an example CVE number highlighted

Next, by using the erex command, you can see in the job inspector that Splunk has ‘successfully learned regex’ for extracting the CVE numbers. I have sorted them into a table, to show that other CVE_Number fields were extracted:

Figure 2 – the job inspector window shows that Splunk has extracted CVE_Number fields

The rex Command

When using regular expressions in Splunk, use the rex command either to extract fields using regular expression named groups, or to replace or substitute characters in a field using those expressions.

Syntax for the command:

| rex field=field_to_rex_from "FrontAnchor(?<new_field_name>{characters}+)BackAnchor"

Let’s take a look at an example.

This SPL allows you to extract from the field of useragent and create a new field called WebVersion:

Figure 3 – this SPL uses rex to extract from “useragent” and create “WebVersion”

As you can see, a new field of WebVersion is extracted:

Figure 4 – the new field in WebVersion

 

The Basics of Regex

The Main Rules

^ = match beginning of the line

$ = match end of the line

Regex Flags

/g = global matches (match all), don’t return after first match

/m = multi-line

/gm = global and multi-line are set

/i = case insensitive

Setting Characters

\w = word character

\W = not a word character

\s = white space

\S = not white space

\d = a digit

\D = not a digit

\. = a literal period

Setting Options

* = zero or more

+ = 1 or more

? = optional, zero or 1

| = acts as an “or” expression

\ = escape special characters

( ) = allows for character groupings, wraps the regex sets

Some Examples

\d{4} = match 4 digits in a row, each a digit [0-9]

\d{4,5} = match 4 or 5 digits in a row, each a digit [0-9]

[a-z] = match between a-z

[A-Z] = match between A-Z

[0-9] = match between 0-9

(t|T) = match a lowercase “t” or uppercase “T”

(t|T)he = look for the word “the” or “The”

Regex Examples

If you’re looking for a phone number, try out this regex setup:

\d{10} = match 10 digits in a row

OR

\d{3}-?\d{3}-?\d{4} = match a number that may have been written with dashes, such as 123-456-7890

OR

\d{3}[.-]?\d{3}[.-]?\d{4} = match a phone number that may have dashes or periods as separators

OR

(\d{3})[.-]?(\d{3})[.-]?(\d{4}) = using parentheses allows for character grouping. When you group, you can assign names to the groups, for example labeling the first group as the area code.

 

If you’re looking for an IP address, try out this regex setup:

\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} = matches four groups of 1 to 3 digits separated by periods.
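To put that pattern to work in a search, you could pair it with the rex command covered earlier. This is only a sketch; the index and field names are placeholders:

index=network | rex field=_raw "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" | stats count by src_ip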

Use regex101.com to practice your RegEx:

Figure 5 – a practice search entered into regex101.com

We’re Your Regex(pert)

Using regex can be a powerful tool for extracting specific strings. It is a skill set that’s quick to pick up and master, and learning it can take your Splunk skills to the next level. There are plenty of self-tutorials, classes, books, and videos available via open sources to help you learn to use regular expressions.

If you’d like more information about how to leverage regular expressions in your Splunk environment, reach out to our team of experts by filling out the form below. We’re here to help!

Kinney Group, Inc. Launches Atlas, a Groundbreaking Platform That Empowers Rapid Success With Splunk

Atlas is a revolutionary new platform from Kinney Group, Inc. (KGI) that allows customers to bypass the complexities of Splunk through a suite of powerful applications and utilities that simplify day-to-day operations within Splunk. The Atlas Platform includes KGI’s unparalleled Expertise on Demand service, delivered by Kinney Group’s team of award-winning Splunk professionals. Available beginning today, the new platform is guided by the promise, “You’re never lost in Splunk with Atlas.”

“We’ve worked with hundreds of wickedly smart and capable customers over the years who depend on Splunk for business operations and security,” said Jim Kinney, CEO of Kinney Group. “What we’ve found is those tasked with managing Splunk also have a heavy responsibility in their day-to-day jobs. So, for customers, Splunk needs to be usable and add value quickly. The Atlas Platform removes friction and guides the way to success with Splunk.”

Splunk is the #1 big data analytics platform, serving thousands of customers worldwide. With the incredible results Splunk can produce, however, it’s also incredibly complex. The Atlas platform brings new, innovative solutions to the Splunk community enabling customers to achieve scalable and consistent success with Splunk.

For many users, the benefits of the Atlas platform could cut costs associated with operating Splunk in half.

“Atlas serves everyone who lives in Splunk, from users to administrators to architects,” explains Roger Cheeks, Director of Analytics Technology at Kinney Group. “Anyone working in the platform who needs consistent and high-performing results will benefit from Atlas. You no longer have the questions and burdens behind building, monitoring, and managing your data within Splunk — now you have Atlas.”

Atlas Reference Designs

Atlas Reference Designs provide a clear roadmap for data center architecture, enabling Splunk customers to model and build their on-premise Splunk environments at scale. For customers running Splunk “on-prem,” Atlas Reference Designs significantly reduce compute, storage, and network infrastructure footprints while delivering 10x improvements in performance and reliability when compared to legacy designs.

Atlas Applications and Utilities

The Atlas platform includes powerful applications, utilities, and integrations for Splunk that simplify daily tasks within Splunk. Core capabilities within Atlas provide clear visibility into data sources, a library of powerful searches that eliminates the need to understand Splunk’s Search Processing Language (SPL) for non-admin users, Splunk Forwarder awareness, and a scheduling assistant that allows users to optimize scheduled searches and jobs.

Expertise on Demand

Expertise on Demand (EOD) provides anytime access to a certified team of Splunk professionals on-demand, and in increments as small as 15 minutes. It’s like having Splunk experts on call to support your needs, large and small, with the Splunk platform. EOD combined with Atlas enables customers to quickly realize success in their journey with Splunk.

Also introduced today — Kinney Group Managed Services

Kinney Group also introduced Managed Services for Splunk (MSP) at a company launch event today. With deep technical expertise and proven experience implementing and achieving success with Splunk for hundreds of commercial and public sector organizations worldwide, Kinney Group’s Splunk-certified professionals will manage your Splunk needs 24/7 including monitoring infrastructure (forwarders, indexers, search heads, etc.), system upgrades, monitoring of log collection, custom dashboards and reports, searches, and alerts. This offering allows companies to reduce overhead related to Splunk, without losing the value and powerful insights the platform provides.

The Kinney Group MSP offering disrupts traditional managed services offerings and sets a new standard for outsourced management of the Splunk platform. KGI’s MSP offering is for both on-prem Splunk Enterprise and Splunk Cloud customers and combines world-class Splunk architecture support with KGI’s EOD and the power of the Atlas platform. The end result for Splunk customers is a managed services offering that is purpose-built to enable organizations to maximize their investments in Splunk, while dramatically reducing costs associated with operating the Splunk platform.

About Kinney Group

Kinney Group, Inc. was established in 2006 and has grown into an organization with the singular purpose of delivering best-in-class professional and subscription services. Partnering with some of the largest and most prestigious organizations in both the Public and Commercial sectors, Kinney Group boasts a growing list of over 500 successful Splunk engagements. Kinney Group consultants are not only focused on delivering superior technical solutions, but also driving the human, mission, and financial outcomes that are vital to your success.

6 Reasons INGEST_EVAL Can Help (Or Hurt) Your Splunk Environment

As a Splunker, you’re constantly faced with questions about what can help or hurt you in Splunk. And if you attended some of this year’s .conf20 sessions, you’re probably asking yourself this question:

“Should I use INGEST_EVAL?”

The answer to this is a solid maybe.

At Splunk’s .conf20 this year, Richard Morgan and Vladimir Skoryk presented a fantastic session on different capabilities for INGEST_EVAL. When you get a chance, take a look at their presentation recording!

In this review, we’ll go through Richard and Vladimir’s presentation and discuss inspiration derived from it. These guys know what they’re talking about; now I’m giving my two cents.

This is part one of two: in the second part, we’ll look at code samples to test some of these use cases.

Background

Splunk added the ability to perform index-time eval-style extractions in the 7.2 release. It was in the release notes, but otherwise wasn’t much discussed. It gained more buzz in the 8.1 release, as these index-time eval-style extractions (say that three times quickly) now support the long-awaited index-time lookups.

The purpose of INGEST_EVAL is to allow EVAL logic on indexed fields. Traditionally in Splunk, we’d hold off on transformations until search time; old-timers may remember Splunk branding using the term “Schema on the Fly.” Waiting for search time is in our DNA. Yet perhaps the ingest-time adjustments are worth investing in.

Let’s look through the key takeaways on what ingest-time eval provides. Then you can review whether it’s worth the hassle to do the prep work to take advantage of this.

1. Selective Routing

Before you try to yank my Splunk certs away, yes, we already have a version of this capability. This is slightly different from the typical method of sending data to separate indexers, say Splunk internal logs going to a management indexer instead of the common-use one, or security logs to a parent organization’s Splunk instance.

The INGEST_EVAL version allows for selective routing based on whatever you can come up with to use in an eval statement. The example from the presentation uses the match function with a regex to send data from select hosts to different indexers. Ideally, this would happen on a heavy forwarder, or any other Splunk Enterprise box, before the data reaches the indexers. Perhaps those security logs stay on-prem, and the rest of the logs go to Splunk Cloud.

What else could we come up with for this? If data contains a particular string, we can route it to different indexes or indexers. We already have that with transforms, but transforms rely on regex, whereas this can use eval functions. Move high-value transactions off to a separate set of indexers? If a list of special codewords appears, send the event to a different indexer?

Let your imagination run on this, and you’ll find lots of possibilities.
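As a rough configuration sketch (not a drop-in example: the stanza names, host pattern, and output group names below are my own assumptions, following the routing approach the presentation describes rather than a config taken from it), the eval logic lives in transforms.conf and is attached to a sourcetype in props.conf on the forwarding tier:

# props.conf
[my_sourcetype]
TRANSFORMS-route_by_host = route_secure_hosts

# transforms.conf
[route_secure_hosts]
INGEST_EVAL = _TCP_ROUTING=if(match(host, "^sec-"), "onprem_indexers", "cloud_indexers")

The onprem_indexers and cloud_indexers groups would need to exist as tcpout groups in outputs.conf.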

2. Ingest log files with multiple timestamp formats

In the past, we had to dive into the depths of a custom datetime.xml and roll our own solution. INGEST_EVAL, along with if/case statements, can handle multiple timestamp formats in the same log. Brilliant. If you have ever had to deal with logs that have multiple timestamp formats (and the owners of those logs who won’t fix their rotten logs), then you’ll be thrilled to see an easy solution.

INGEST_EVAL can look at the data and search for different formats until it finds a match.
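A sketch of what that could look like in transforms.conf; the two formats and the stanza name are assumptions made up for illustration, and the transform would be attached with a TRANSFORMS- line in props.conf as usual:

[flexible_timestamps]
INGEST_EVAL = _time=case(match(_raw, "^\d{4}-\d{2}-\d{2}"), strptime(_raw, "%Y-%m-%d %H:%M:%S"), match(_raw, "^\d{2}/\d{2}/\d{4}"), strptime(_raw, "%m/%d/%Y %H:%M:%S"), true(), _time)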

3. Synthesizing dates from raw data mixed with directory names

Sometimes we find data, often IoT or custom syslog data, where the log file only has a timestamp. In these cases, we normally see the syslog server write the file into a directory with a date name. Example: /data/poutinehuntingcyborg/2020-10-31.log 

Using INGEST_EVAL, it’s possible to create a _time value that combines part of the source path with part of the raw data, producing a timestamp that matches what Splunk expects. A lovely solution to something that wasn’t so easy otherwise.

This simple trick could replace having to use ETL. 
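Here is a hedged sketch of the idea, using the example path above and assuming the raw event begins with an HH:MM:SS time (both the stanza name and that assumption are mine, not from the presentation):

[date_from_source]
INGEST_EVAL = _time=strptime(replace(source, ".*/(\d{4}-\d{2}-\d{2})\.log$", "\1")." ".substr(_raw, 1, 8), "%Y-%m-%d %H:%M:%S")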

4. Event Sampling

Using eval’s random function and an if/case statement, it is possible to send along only a percentage of events. Combine it with other eval logic, such as sending on only one in ten login errors or one in one thousand successful purchases.

By combining multiple eval statements, you could create a sample data set that includes data from multiple countries, different products, and different results. 
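For example, here is a minimal transforms.conf sketch that keeps roughly one event in ten and drops the rest (the stanza name is assumed; attach it via props.conf like any other transform):

[sample_one_in_ten]
INGEST_EVAL = queue=if(random()%10==0, "indexQueue", "nullQueue")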

5. Event Sampling combined with Selective Routing

 Whoa.

 Sample the data, and then send the sample to test, dev, or over to your machine learning environment. This is big.

6. Dropping unwanted data from structured data

Using INGEST_EVAL, we can drop fields that we otherwise don’t need. With indexed extractions for CSV and JSON, each column or element becomes a field. Sometimes we don’t want those extra fields.

Let’s look at an example: an Excel spreadsheet exported as a CSV, where a user has been adding notes that are unneeded in Splunk.

In standard Splunk ingest, those notes become fields in Splunk and we have to use SPL to remove them from our searches. How often does a csv dump contain hundreds of fields, but we only care about four? (Answer: often).

Using INGEST_EVAL, we can onboard only the columns or elements that we want, and the rest poof away. Not only does this save disk space, but it also makes for cleaner searching.

My Final Thoughts

Back to our question… “Should I use INGEST_EVAL?” Again, it’s a solid maybe.

If you need to keep licensing down by only ingesting what you need, then sure. If you need to modify data beyond what sed or a regex can perform, then give it a try. INGEST_EVAL isn’t for every Splunk admin, but not every admin hunts down blogs like this.

Stay tuned for more takeaways on INGEST_EVAL in part two.

Splunk Search Command Series: mvzip

 

 

Need some help zipping up your data in Splunk? This week’s Search Command should do the trick. The Splunk search command mvzip takes two multivalue fields, X and Y, and combines them by stitching their values together.

Today, we are going to discuss one of the many functions of the eval command, called mvzip. This function can also be used with the where command and the fieldformat command; however, I will only be showing some examples of this function using the eval command.

If you have been following our eval series, I am sure by now you know that the eval command is very versatile. Now let’s dive into another tool in the eval command’s tool belt! Let’s also use another command that we just learned called makemv to help facilitate this lesson. First, let’s make some data that has multiple field values.

Figure 1 – Data with multiple fields in Splunk

 

I’ve created three new fields called name, grade, and subject. Within each of these fields, we have multiple values. Let’s say we want to create a new field with these values “zipped” together. For example, I want to know what subjects Mike is taking all in one field. This is where mvzip comes in.

Figure 2 – mvzip example in Splunk

 

Here, I have created a new field called “zipped” with the values from the name and subject fields. Now we can see that Mike is taking Math, Science, History, and English. Next, I want to know what grades Mike has in those subjects (a.k.a. report card time!).

Figure 3 – Using mvzip in Splunk

 

Using mvzip, we can see what grades Mike has in each subject. As you can see from the SPL above, I have mvzipped the third field, “grade,” to the other two by adding another mvzip function. Splunk only allows you to zip three fields together, so this is our limit here! Also, if you noticed, I added a different delimiter to our final results: I have a pipe separating my values instead of a comma, as in my first example. You can use whatever delimiter you want with the mvzip function by putting quotes around the delimiter.
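The full SPL is in the screenshots, but here is a sketch of the whole flow with made-up values, in case you want to paste it into your own environment and experiment:

| makeresults
| eval name="Mike Mike Mike Mike", subject="Math Science History English", grade="A B D F"
| makemv delim=" " name
| makemv delim=" " subject
| makemv delim=" " grade
| eval zipped=mvzip(mvzip(name, subject, "|"), grade, "|")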

That is it for now. I hope you enjoyed this lesson and that you try this out in your own environment. Happy Splunking! P.S. I think Mike could use some tutoring in History and English.

 

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers on Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Splunk 101: How to Use Macros

Hey everyone, I’m Hailie with Kinney Group.

Today, we’ll take a look at two examples to see how macros can help you with search optimization and save you time on tedious or long SPL searches.

In each example, we’re going to be working with Splunk’s practice data. 

Let’s take a look at some of the predefined macros that come with this data. You can see them by going to Settings -> Advanced search -> Search macros.

Here are the names of the macros they have defined and their associated SPL. As you can see here, this SPL is very long, and it would take a long time to hand jam all of that into your search bar. Instead, a macro allows you to just type the name of the macro surrounded by backticks, and it will execute the SPL that was defined when the macro was created.

 

Example 1

Let’s take a look at the first example of how to use a macro. In Splunk, you always want to follow best practices when running searches, keeping search optimization in mind and trying to limit the amount of data that you’re pulling from disk. The best way to limit this is with the time picker value: set it to the smallest time range window where you know your data resides.

The next best thing is to define an index. Here you can see we’re using the wildcard, which is definitely not best practice; it’s really not going to allow for an efficient search, as it’s going to take a lot of time to parse through all those indexes. Instead, if we wanted to look at the web, security, and sales indexes only, we can define a macro that allows us to search on just those three indexes and gives us a better-optimized search instead of using the wildcard.

What you could do is just type up here index=web OR index=security OR index=sales as one long search query. But if you’re constantly having to look at those three indexes every day, you’re going to get tired of typing out every single index you want to define. Some of you may have queries where you need to define ten or fifteen indexes at a time to see the data that you want. AND you’re having to do that on a daily basis.

Let’s make a macro that defines the indexes we want to search. When it comes to naming conventions, I try to stick to the simplest name that’s applicable to the use case I’m trying to implement. In this example, I’m going to call this macro “sws,” which stands for sales, web, and security. Here’s where you’ll define your SPL.

Go ahead and save it (click “save”). Let’s make sure it populates. There it is, there’s my SPL. Let’s go ahead and run it just to verify. 

As we can see, the three indexes of sales, security, and web have populated here, and we didn’t need to type out nearly as long an SPL as we would have with the Boolean ORs.
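In text form, the macro from the video looks roughly like this (the definition goes in Settings -> Advanced search -> Search macros; the stats command after the macro is just an illustration):

Macro name: sws
Definition: index=web OR index=security OR index=sales

Usage in the search bar:
`sws` | stats count by index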

 

Example 2

Let’s take a look at another example to see how a macro can help you save time. Here, we’re looking at some internal data provided by Splunk, and we see that scheduled_time is in its default value of epoch time. Epoch time isn’t really a user-friendly way to see what date this is. Usually, we have to convert it with the following syntax (see video).

That took me a good chunk of time to type out, making sure there are no errors, just to convert my epoch time into a friendlier date and time. Let’s make this a macro. Let’s go ahead and copy this (see video) and add a new macro, again keeping the naming convention simple. With this one, I’m just going to call it “convert_time” because that’s what I want the macro to do.

I’ll paste in the SPL associated with it. Click “save,” make sure it populates, and then we’re going to run it.

There you have it.

That took me significantly less time than using the strftime command with all the percentages for month, day, hours, minutes, and seconds to produce the same output. I hope these two examples have given you a starting point for where you can use macros in your environment: either to increase your search efficiency when there are multiple indexes you always need to search, with a macro that defines them super quick for you, or to save the tedious SPL that you’re having to run all the time, as we’ve seen here for time conversion.
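As a sketch of that second macro (the exact format string in the video may differ, and the sourcetype in the usage line is an assumption based on Splunk’s internal scheduler data):

Macro name: convert_time
Definition: eval scheduled_time=strftime(scheduled_time, "%Y-%m-%d %H:%M:%S")

Usage:
index=_internal sourcetype=scheduler | `convert_time` | table scheduled_time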

 

Meet our Expert Team

If you’re a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements at scale. EOD is designed to answer your team’s daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.