Lower Total Cost of Ownership with Splunk

The distributed data center model provides high availability by replicating data, but the added storage requirements effectively erase the benefits of Splunk data compression. Co-locating storage and compute means that when you need more storage, you have to add compute as well. Further increasing the total cost of ownership (TCO), distributed scale-out Splunk architectures usually rely on more servers with less storage per server to minimize the data and time lost to server maintenance and failures.

 

In short, the old-school, conservative approach of an ever-growing physical data center comes with incredible expense and tremendous financial risk.

 

Reduce Server Counts

Splunk recommendations for an Enterprise Security (ES) deployment with a 2 TB daily ingest call for up to 20 indexers. Based on validated testing with this Reference Design, we were able to achieve similar or better performance with only 5 indexers. This 4x improvement over Splunk recommendations represents incredible cost savings for organizations in year one alone.

Using SmartStore with FlashBlade®, Kinney Group’s Reference Design lowers the storage and compute requirements when compared to Splunk’s classic, “bare metal” storage architecture. With this approach, indexers can be sized based on ingest rates and concurrent search volumes instead of worrying about storage. Additionally, SmartStore only requires the storage of a single copy of warm data, and FlashBlade® further reduces storage requirements for the object tier by 30–40% through data compression.

 

Reduce Storage Costs by 62%

The impact is even greater when you consider the topic of storage efficiency using the Kinney Group PureStorage Reference Design on FlashBlade®. Storage efficiency — fitting more data into less raw flash — is a key contributor to reducing the amount of physical storage that you must purchase, deploy, power, and cool.

In ESG’s Economic Validation report, “Validating the Economics of Improved Storage Efficiency with Pure Storage,” the results show that Pure saved financial services organizations up to 59% in TCO, and healthcare and government organizations up to 62% through storage efficiencies alone.

 

Impact TCO through storage performance, availability, scalability… all while providing unparalleled results and reducing risk

 

Reducing CapEx AND OpEx: Considering Total Financial Impact

While a reduction in the capital costs associated with server and storage acquisition is compelling, those costs typically contribute only 20% (or less) of a 3-year server TCO, with management and other OpEx contributing the remaining 80%.

How does this reference design decrease operating expenses? The short answer is that a smaller footprint means a reduction across the board in the month-to-month and year-over-year expenses hidden in operating a data center — costs like power consumption and other utilities, preventative and predictive maintenance, connectivity, and staffing, to name a few.

With this reference design, you'll impact bottom-line savings through storage performance, availability, and scalability, providing the potential to grow revenue streams and lower costs. You'll significantly reduce overhead by cutting the number of servers required to drive your Splunk ES environment, while simultaneously providing unparalleled results and reducing security risk. And, importantly, you'll substantially reduce the operating expenses associated with a sprawling data center footprint.

Total Cost of Ownership (TCO) is a complex subject, to be sure. The bottom line is that implementing a powerful, scalable compute and storage solution such as FlashBlade® technology in conjunction with SmartStore in a Kinney Group-tuned Splunk environment provides both immediate and long-term financial benefits for your organization.

 

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it's time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design to modernize your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.

Splunk Search Command Series: eval (Part Three)

 

In part one of the eval command series, we covered the basics of using eval as well as a few of its functions. In part two, we covered comparisons with if and case, plus formatting with lower and upper. After so much reading, I'm sure you've done plenty of exploring with the eval command, so I'll try to keep this last entry brief. In part three of the eval command series, we'll explore some miscellaneous functions of eval…

 

lower(X)

lower() takes all the values of a field and converts them to lowercase.

Syntax: |eval field = lower(field)

upper(X)

upper() does the same as lower(), but converts the values to uppercase.

Syntax: |eval field = upper(field)

typeof(X)

typeof() creates a new field that tells you the data type of another field.

Syntax: |eval type = typeof(field)

Example output: Number, String
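If you want to see typeof() in action without touching real data, here is a minimal sandbox sketch using makeresults (the field names num and str are just placeholders):

| makeresults
| eval num = 5, str = "hello"
| eval num_type = typeof(num), str_type = typeof(str)

Running this should give num_type a value of Number and str_type a value of String.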

round(X,Y)

round() takes a numeric value and rounds it to the specified number of decimal places.

Syntax: |eval field = round(field, <decimal places>)

Example: round(4.56282, 2) = 4.56

mvjoin(X,Y)

mvjoin() takes a multivalue field and joins its values into a single value using the delimiter you specify (think of it as the opposite of makemv).

Syntax: |eval field = mvjoin(field, <delimiter>)

Example: the field number has the values 1 2 3 4 5

|eval number = mvjoin(number, ",")

Output: 1,2,3,4,5
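If you'd like to try mvjoin() end to end, here's a minimal sandbox sketch using makeresults, with makemv first splitting the string into a multivalue field (the field name number is just a placeholder):

| makeresults
| eval number = "1 2 3 4 5"
| makemv delim=" " number
| eval number = mvjoin(number, ",")

The final eval turns the multivalue field back into the single value 1,2,3,4,5.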

That is going to wrap up our eval command series. All in all, eval is a very powerful command with endless use cases and functions. If you have the opportunity, I want to implore you to play around with the different functions and see what you can accomplish.  Thanks for tuning in!

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers on Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Case Study: Home Services Leader Reaches Data Onboarding Success with Splunk Platform


As the parent company to over 20 home service brands across the world, this customer has seen immense growth year over year. With a constantly expanding and evolving business, our customer needed the ability to manage and secure their data. When dealing with thousands of franchise locations falling under their brand, this customer needs data that is secure by design and can scale with their growing consumer demand. In late 2019, this customer was seeking a data analytics platform and chose Splunk to lead the way in log management and security across their business.

As new users of the Splunk platform, this customer sought guidance and education for their small team of 3 engineers. With an immediate need to put the new platform to work, they needed a services partner to help them onboard data into Splunk, learn to respond quickly to security incidents, and connect the dots between their log systems.

Challenges

1. With a small team of 3 engineers, our customer needed help with education around best practices related to their new Splunk platform.

2. Our customer had a massive inventory of data. With 50 GB of Splunk Cloud, this home services leader needed to onboard all of their existing data into Splunk, while fully adopting Splunk’s log management and security capabilities. Analyzing and connecting their log systems was critical for their operations.

3. The customer needed a system that would scale with their growing business. With over 20 acquired brands and thousands of franchise locations, they needed a service that could help them grow their Splunk practice alongside their constantly evolving business model.

Solutions

1. One of the most essential resources for this new Splunk team was the Lunch and Learn sessions offered through Expertise on Demand. This new team of Splunk users holds monthly training sessions on Splunk best practices, building their understanding and usage of the Splunk platform.

2. Kinney Group has successfully onboarded data from Cisco devices, IIS, SQL, and applications into their Splunk Enterprise environment. Through the onboarding process, our EOD team has shown the value of the log management and security capabilities of the Splunk platform.

3. Expertise on Demand scales with a business at growth. With multiple tiers of the service offering, our EOD resources scale with our customers. 

Business Impacts

Being new to Splunk can be intimidating. When you're given a platform that can do everything with your data, it can be hard to find your starting point. Our customer made the important decision to pull in an expert from the very start to set them up for success with Splunk. With a sea of existing data and a brand-new platform like Splunk, onboarding the right data in the right way is essential.

We’ve seen the common pitfall of companies acquiring masses of untapped, unrecognized dark data.  In Splunk’s annual Dark Data Report, it’s noted that 60% of respondents say more than half of their organization’s data is not captured, and much of it is not even understood to exist. This customer knew the risk of dark data and chose a dedicated partner to pull their data into the light. Kinney Group’s Expertise on Demand service helps our customers capture all of their data. This leader in the Home Services industry is growing fast. Expertise on Demand ensures that their data can grow with their business.

Simplified Scaling with Splunk

Accommodating scale is an ever-present struggle for IT teams and data center operators — providing sufficient infrastructure to facilitate more demanding requirements such as increasing compute, storage, and network needs. Complexities introduced by Splunk’s specialized data ingest requirements only make the situation more challenging (not to mention costly).

The true benefit of scaling is realized not just when future growth is enabled, but when front-end requirements can be met with less hardware, expense, and footprint. Scaling only matters if you can grow from a reasonable starting point. The Kinney Group PureStorage Reference Design empowers users to achieve better performance at scale from their Splunk environment while requiring 75% less hardware.

Managing growth requires systems and strategies that cost-effectively and efficiently support scale. While traditional data center models rely on prohibitive infrastructure requirements in order to scale (square footage requirements, ballooning engineering and operational costs, and a never-ending list of hardware requirements and purchases), FlashBlade® allows incredible scaling in a smaller form factor. Cloud infrastructure provides great scaling, but growing out an existing Splunk cloud architecture is costly, complex, and operationally challenging. The Kinney Group PureStorage Reference Design is a powerful and elegant solution that enables data centers and Splunk solutions to “grow in place.”

 

The Power of Virtualized Scaling

Splunk excels at extracting hidden value from ever-growing machine data. This workload, however, requires massive storage capacity, so infrastructure needs to be flexible and scalable while also providing a linear performance increase alongside that scaling. Simply put, more data means more storage and more computing power.

While the traditional approach of using physical servers for deployment is certainly an option, utilizing virtual machines on scalable hardware solutions allows you to save time, space, and budget while being able to scale and grow “on the fly” as required.

Typical Splunk deployments utilize a handful of components at their core — Forwarders, Indexers, and Search Heads. Forwarders collect and forward data (lightweight, not very resource intensive). Indexers store and retrieve data from storage, making them CPU and disk I/O dependent. Search Heads then search for information across the various indexers, and are usually CPU and memory intensive.

By properly utilizing virtual machines, the Kinney Group PureStorage Reference Design allows users to scale resources to match the increasing demands of these components.

 

Physical Scaling that Doesn’t Grind Operations to a Halt

Modern data centers are looking less and less like giant warehouses of server racks and becoming more distributed, but the basics of traditional data center growth have experienced little disruption, depending heavily on increasing the number of servers, racks, electrical distribution, and space required to accommodate growth.

Utilizing PureStorage FlashBlade® enables cloud-like simplicity and agility with consistent high performance and control. The primary way FlashBlade® enables grow-in-place scale is by allowing massive physical expansion in a single chassis through the addition of "blades," each of which increases capacity and performance without requiring an ever-growing footprint. Rather than shutting down data center operations to scale out by adding new servers and bringing them online alongside existing infrastructure, the FlashBlade® solution allows users to grow in place. PureStorage FlashBlade® provides up to 792 terabytes of raw storage in a single 4 rack unit (RU) chassis, and the total system can grow to ten chassis. Storage is further optimized by SmartStore, which removes the need for indexer replication (typically a factor of 2 for all data). FlashBlade® also supports in-service hardware and software updates, so scaling up and scaling out won't interrupt operations.

 

Meet Any Compliance Requirement with Unlimited Scaling

Splunk SmartStore makes the daunting task of data retention simple for organizations that have compliance or organizational obligations to retain data. This PureStorage architecture supports up to 10 FlashBlade® chassis, potentially representing years of data even for high-ingest systems.
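For reference, retention for a SmartStore index is governed by settings in indexes.conf. The snippet below is only an illustrative sketch with placeholder names and values, not part of the reference design itself:

[compliance_index]
remotePath = volume:remote_store/$_index_name
# Keep data searchable for roughly seven years before it rolls to frozen
frozenTimePeriodInSecs = 220752000

Because SmartStore keeps the master copy of warm data in the object store, extending retention is largely a matter of object capacity rather than indexer disk.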

 

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it's time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design to modernize your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.

Splunk 101: Scheduling with Cron Expressions

 

Hello, Josh here, to walk you through another quick Splunk tutorial that will save you time… literally. In this video tutorial, I'll discuss the importance of using cron expressions when scheduling in Splunk. Cron may seem tricky to use, but once you get the system nailed down, it will save you a ton of time by automating your report generation. Here are some takeaways from the video for using cron expressions in Splunk…

Key Takeaways from Cron Expressions in Splunk

When scheduling a report, you'll need to establish when it runs, how it runs, how often it runs, and so on. Scheduling your reports deliberately keeps them functional and consistent.

  • Unless a report runs on a simple weekly or monthly schedule, skip the standard report scheduling options in Splunk and use cron expressions instead.
  • Avoid backed-up reporting by staggering the times your reports fire off. Instead of scheduling all of your reports at the top of the hour, break them up to run minute by minute in batches (see the examples after this list). This helps your reports send on time, keeps your system from backing up, and lowers your chances of failed or skipped searches.
  • If you have a recurring report that runs frequently throughout the day, consider pushing its results to a summary index.
  • Prioritize your reports when scheduling and indicate which reports should run first.
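To make the staggering idea concrete, here are a few example cron expressions (standard five fields: minute, hour, day of month, month, day of week); the specific times are placeholders, not recommendations:

*/15 * * * *   (every 15 minutes)
7 * * * *      (hourly, staggered to 7 minutes past the hour)
37 * * * *     (hourly, staggered to 37 minutes past the hour)
0 6 * * 1      (6:00 AM every Monday)

Spreading schedules across different minutes, as in the second and third examples, keeps a large batch of reports from all competing for search slots at the top of the hour.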

You may have read a few of my Splunk Search Command Series blogs; both I and the other engineers here at Kinney Group produce weekly content around Splunk best practices. My team, the Tech Ops team, runs our Expertise on Demand service, which I'll touch on a little more below. Our EOD team is responsible for knowing anything and everything about Splunk best practices… that's why you'll get access to a ton of video and written content from our team.

Meet our Expert Team

If you're a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements at scale. EOD is designed to answer your team's daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

Unmatched Performance in Splunk

The beauty of Kinney Group’s new reference design for Splunk lies in the unmatched performance provided by combining PureStorage FlashBlade®, Splunk SmartStore, and Kinney Group’s advanced Splunk configuration tuning in a virtualized environment.

PureStorage FlashBlade® supports file and object storage, producing a seamless integration with Splunk SmartStore. These technologies provide all-flash performance, even for data that would traditionally have been rolled into cold buckets on slower storage tiers. Kinney Group optimizations enable rapid ingest and quick searches even at high volume, and testing showed the reference design can easily ingest up to 4x the Splunk-recommended limit. That means a Splunk-recommended architecture for 500 GB of daily ingest can handle 2 TB or more. In fact, testing showed that a sustained rate of 8x the recommended limit (4 TB/day) is possible.
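For readers who haven't set up SmartStore before, the integration amounts to pointing indexes at an S3-compatible remote volume in indexes.conf. The snippet below is a minimal, hypothetical sketch; the volume name, bucket, endpoint, and index name are placeholders rather than values from the reference design:

[volume:remote_store]
storageType = remote
path = s3://splunk-smartstore-bucket
remote.s3.endpoint = https://flashblade-data-vip.example.com

[web_logs]
remotePath = volume:remote_store/$_index_name

With the remote volume in place, FlashBlade® serves as the object store for warm buckets while the indexers keep a local flash cache for active data.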

Optimizing Splunk for Lightning Fast Search

Kinney Group's engineering expertise in optimizing Splunk enables users to ingest more data, more quickly. Optimization and fine-tuning of the environment yields astonishing results. On traditional, distributed scale-out architectures, Splunk search performance degrades significantly as data ages: older data is tiered to cheaper, lower-performance storage in cold buckets, which drags down searches. This storage approach is especially impractical when responding to search requests related to regulatory or compliance requirements, cybersecurity, and legal discovery, all of which demand information beyond the most recent data.

Utilizing SmartStore with FlashBlade®, however, provides all-flash performance with high bandwidth and parallelism for data operations and searches outside of the SmartStore cache. It also ensures that you can efficiently complete critical, non-repetitive tasks while supporting the bursting of SmartStore indexers. Splunk best practices call for avoiding high search execution latency, which can cause cascading performance degradation. At the highest levels of data throughput tested in the validation of this design, disk latency never exceeded 2 ms, and Input/Output Operations Per Second (IOPS) remained flat.

Optimizing Splunk for Lightning Fast Security Workloads

Used "off the shelf," Splunk Enterprise Security (ES) carries a number of inefficiencies in its search configuration. Splunk will often skip scheduled searches, postponing or rescheduling them, when latency climbs too high for the scheduler to overcome. In the testing and validation of this Reference Design, Kinney Group was able to tune ES to avoid skipped searches while maintaining the search load in the environment. This was accomplished, in part, by adjusting the timing of searches and increasing search slots in the software. (See the "Enterprise Security Tuning" section of this document for details.)
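For context, the search slots mentioned above are governed by concurrency settings in limits.conf. The values below are simply Splunk's documented defaults shown for illustration; they are not the tuning used in the reference design (see the Enterprise Security Tuning section for that):

[search]
# Baseline number of concurrent searches, plus max_searches_per_cpu more per CPU core
base_max_searches = 6
max_searches_per_cpu = 1

[scheduler]
# Percentage of total search slots that scheduled (rather than ad hoc) searches may use
max_searches_perc = 50

Raising the scheduler's share or the per-CPU multiplier adds search slots, at the cost of more load per indexer and search head.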

The net result is an environment with such precise software tuning and hardware engineering that you’ll imagine the sound of a perfect Formula-1 racing engine every time you walk by your server room.

Enabling Data Security without Hindering Performance

In a traditional Splunk environment, enabling data security introduces various considerations that significantly impact performance. Pure Storage FlashBlade® supports native data encryption while still maintaining incredible single chassis performance of 1.5 million IOPS and 15 gigabytes per second (GB/s) of throughput at consistently low latency.

We hate to say “faster, better, cheaper,” but…

We know how tired the “faster, better, cheaper” trope is, but the reality simply can’t be avoided. This unmatched performance doesn’t come with the soul-crushing price tag you’d expect. Rather, we’ve engineered a solution that allows you to reduce footprint and impact the total cost of ownership (TCO) in a way that demands further inspection — you’ll save on capital expenses, operating expenses, and who knows how much on aspirin.

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it's time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design to modernize your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.

Splunk Search Command Series: eval (Part Two)

 

In our last blog, part one of the eval command series, we covered the basics of using eval as well as a few functions of the command. Let’s keep the ball rolling into part two.

First, let's add one more thing to your list of eval basics. Let's not forget a crucial component of the eval command: at results time, if you've created a new field using eval, a new column is added to the results table. This means that if you have three columns listing Name, Id, and count and you write the following line at the end of your search:

 

|eval percent = (count/number)*100

 

This will add a fourth column, percent, with a value calculated for each row.
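For context, here is one hedged way that both count and number could exist in that table (the field names and the use of eventstats here are illustrative, not the only approach):

| stats count by Name, Id
| eventstats sum(count) as number
| eval percent = (count/number)*100

eventstats computes the overall total and writes it onto every row as number, so the final eval can express each row's count as a percentage of the whole.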

Let’s talk comparisons

Now, let's talk about comparisons. Sometimes we just need to add a bit of flavor to our data, like web status codes. Sure, we all know the common ones like 200 and 404, but what about the rest? Once you go looking at the code definitions, you realize there is a plethora of codes, and "ain't nobody got time for that." What if I told you we could attach a general description to our status codes? That's where if and case step in.

If and Case with eval

IF and CASE are in the same vein of comparison; however, CASE allows for more arguments. Let's take a quick look at these two:

 

|eval test = if(status==200, "Cool Beans", "No Bueno")

 

Using if

Here's the breakdown: when using IF, we need to pass three arguments:

  • The condition – usually "if some field equals some value"
  • The result – if the field does equal the defined value, the new field (test) takes this value
  • The else – if the field does NOT equal the defined value, test takes this value instead

In this case, if status equals 200, then the text would say, “Cool Beans.” If the value of status is anything other than 200, then the text reads, “No Bueno.”

Using case

As stated earlier, CASE will allow us to add more arguments…

 

|eval test = case(match(status,"^2"), "Cool Beans", match(status,"^5"), "Yikes", match(status,"^4"), "might be broken")

 

As you can see, we can apply multiple conditions using case (paired here with match() so each class of status code is tested by its leading digit) and build a more robust list of descriptions. Pretty cool, right? Let's look at some other things eval can do.

 

Lower/upper with eval

Sometimes, the text formatting in our data can be weird. Splunk says that when you search for a value, the search doesn't need to be case-sensitive… but take that with a grain of salt. It doesn't hold when comparing field values from different sources. Check out this scenario…

Event Data – ID: 1234AbCD

Lookup – ID: 1234abcd

If I try to use the lookup command to join these and pull the values into a coherent table of information, that's not going to happen. Why? Because the two values don't match. Sure, the numbers and letters are the same, but the formatting is different, and Splunk treats that as a roadblock. Need a quick fix? Here's one that's super easy and barely an inconvenience.

 

|eval id = lower(id)

 

And that’s it.

The lower and upper functions let you format a field value so that all of its letters are lowercase or uppercase, depending on which one you use. Here, all we have to do is lowercase the letters in our event data, and then the lookup and the indexed data can match up correctly.
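Putting that together, a quick sketch might look like the search below; the index name, field, and lookup are hypothetical, so adjust them to your own data:

index=web_events
| eval id = lower(id)
| lookup user_info id OUTPUT username

Because the lookup's id values are stored in lowercase, lowering the event field first lets the lookup match and return username for each event.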

That’s going to be it for Part 2 of the eval command, but we’re not done quite yet. Be sure to check out Part 3 coming next week!

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers on Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Splunk 101: Basic Reporting and Dashboarding

It’s Mike again, one of Kinney Group’s resident Splunk experts. This week, I’ll review basic reporting and dashboarding functions following best practice methods in this video tutorial.

Basic reporting and dashboarding is one of many Splunk topics covered by our Expertise on Demand service offering. Within this video, I'll break down the basics behind these essential functions of Splunk…

Splunk Help At Your Fingertips

If you're a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements at scale. EOD is designed to answer your team's daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

An Introduction to Modernizing Your Splunk Environment

At Kinney Group, we believe the best way to solve the inefficiencies of traditional Splunk operations is to throw out the script and modernize the approach.

 

The traditional Splunk data center model is complex and difficult to scale for performance, requiring IT professionals to increase server counts and expand data center footprint to gain compute and storage capabilities. This outdated approach means expensive upgrade cycles, disruptive downtime, and increasingly complicated operation, all in an architecture fraught with performance “gotchas.”

 

We’ve found a better path forward.

 

Kinney Group and Pure Storage have teamed together to create a reference design that provides benefits, insights, and a technical overview of a high-performance, scalable, and resilient data center infrastructure for the Splunk Enterprise platform. This revolutionary reference design is comprised of a powerful combination of VMware virtualization, Pure Storage hardware, Splunk SmartStore, and Kinney Group engineering expertise.

 

 

The end result is an elegant approach to hosting Splunk Enterprise that enables dramatic reductions in storage complexity and infrastructure footprint, with transformative performance improvements and a lower total cost of ownership.

Here's a glimpse into the 3 Key Benefits of utilizing the reference design to modernize your Splunk operations:

Key Benefit #1: Unmatched Performance

The beauty of this reference design lies in the unmatched performance provided by combining PureStorage FlashBlade®, Splunk SmartStore®, and Kinney Group’s advanced Splunk configuration tuning in a virtualized environment.

Combining expert engineering, fine-tuned software solutions, and solid-state storage provides a 4x performance improvement over Splunk's own recommendations.

Key Benefit #2: Simplified Scaling

Accommodating scale is an ever-present struggle for IT teams and data center operators — providing sufficient infrastructure to facilitate more demanding requirements such as increasing compute, storage, and network needs.

Complexities introduced by Splunk's specialized data ingest requirements only make the situation more challenging (not to mention costly). The 1-2 punch of VMware virtualization and Pure Storage's FlashBlade® technology allows for "grow in place" scaling that won't require disruptive downtime and makes scaling storage for compliance a breeze.

Key Benefit #3: Lower Cost of Ownership

The distributed data center model provides high availability by replicating data, but the added storage requirements effectively erase the benefits of Splunk data compression. Co-locating storage and compute means that when you need more storage, you have to add compute as well. Further increasing the total cost of ownership (TCO), distributed scale-out Splunk architectures usually rely on more servers with less storage per server to minimize the data and time lost to server maintenance and failures.

Saving on capital expenditures such as servers, storage, and square footage is just the beginning. Reductions in equipment and increases in productivity represent savings on OpEx that make this reference design the gift that keeps on giving.

 

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it's time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design to modernize your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.

Splunk Search Command Series: eval (Part One)

 

Where to begin with the Splunk eval search command… in its simplest form, the eval command calculates an expression and applies the resulting value to a destination field. That can be easier said than done, though, which is why we've broken the eval command down into a three-part series. In part one, we'll cover the basics of eval.

The eval command is incredibly robust and one of the most commonly used commands. However, you probably don't know everything eval is capable of. Before we jump right in, let's take a quick look at the syntax:

|eval <field> = <expression>

Super vague, right? Exactly. Eval has many different functions that can be performed such as:

  • Mathematical
  • Comparison
  • Conversion
  • Multivalue
  • Date and Time
  • Text
  • Informational

The list goes on. Each of the above function types has its own list of functions and arguments; listing and describing how each one works would fill a novel. But today, I want to start with some basic eval commands.

How to Use eval

When we call a field in the eval command, we either create or manipulate that field. For example:

|eval x = 2

If "x" was not already a field in our data, then I have now created a new field and given it the value of 2. If "x" is a field within our data, then I have overwritten all of its values, so x is now 2 everywhere. This is the simplest way to use eval: list a field and give it a value.

eval In Action

eval with mathematical functions

But, we can do so much more than that. Eval is capable of doing mathematical functions:

|stats count
|eval number = 10
|eval percent = (count/number)*100

By using numeric values established by previous lines in our search, we can calculate percentages.

Math is pretty self-explanatory, though, so let's talk about something a bit more relative, like time (pun intended).

Format time values with eval

There are a couple of ways we can work with time using eval.

The first is formatting. Say we are bringing in a time field, but it's written in epoch time. I don't know about you, but I can't read epoch time; I'm not a computer. I can, however, convert it to a readable time format:

|eval time = strftime(<time_field>, "%Y-%m-%d %H:%M:%S")

We can also go the other way, parsing a formatted time string and converting it to epoch:

|eval time = strptime(<time_field>, "%Y-%m-%d %H:%M:%S")

I know what you're thinking: "That's cool, but what if I need to compare my time values against a static time value to, say, I don't know, filter out events?" Great question. Here's what I like to do.

Compare time values with eval

Using relative_time(), I can create a rolling time window:

|eval month = relative_time(now(), "-1mon")

This line returns a value exactly one month before now; the time period can be changed to a day, a week, 27 days, 4 years, whatever your heart desires. From here we can use a where command to filter our results:

|eval time = strptime(<time_field>, "%Y-%m-%d %H:%M:%S")
|eval month = relative_time(now(), "-1mon")
|where time > month

Because both of these time values are in epoch form, we can simply keep the results where time is a higher number than month, or in even simpler terms, anything less than one month old.

That’s going to be about it for this first part on the eval command series. Be on the lookout for more deep dives into the eval command in the coming weeks.

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers on Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!