A Guide to Automating Splunk ITSI Services

In many environments, services and dependency hierarchies may be constantly changing. Maybe you are working with a containerized application and your web, application, and database servers are ephemeral. Maybe you are managing a large Citrix environment where resource pools are constantly being created depending on demand. Whatever your situation, if your service topology is dynamic, manually creating new services in Splunk ITSI can be laborious.

By crafting a search that monitors your data and identifies newly created services, as well as their relationships to other new or existing services, you can automate the process of ITSI service creation. Follow along to automate the creation of ITSI services in a dynamic environment.

Identify Your Entities

As a prerequisite, you will want to ensure that all existing entities have been imported into ITSI. To guarantee that entities are added as they come into existence, follow the Splunk guidance for setting up a recurring import of entities in ITSI.

First, navigate to the ITSI Services View.

Figure 1 – ITSI Services View

Then, select: Create Service > Import from Search.

Figure 2 – Import Services from Search

Now we craft a search that will accomplish four main goals:

    1. Identify newly created services
    2. Map out service dependencies
    3. Associate entities with newly found services
    4. Associate services with any applicable service templates
Figure 3 – Entity/Service import

Let’s walk through this search to explain what’s happening…

index=vmware sourcetype=vmware:perf
| stats values(host) as entities by env
| eval entities=mvjoin(entities,",")
| rename env as service_title
| eval service_template="vcenter_health"

This first part of the search identifies new services, which in this case represent the VMs that exist within a given environment. To identify these services, we must use fields that already exist in the data. We use the information within the events to identify the entities associated with each new service, and we use the eval command's mvjoin function to create a comma-separated list of those entities.

| append [ search index=vmware sourcetype=vmware:perf
    | stats values(env) as service_dependencies by vc
    | eval service_dependencies=mvjoin(service_dependencies,",")
    | rename vc as service_title ]

Find Your High-Level Services

This next portion of the search identifies higher-level services, that is, services that include the services created in the first step as service dependencies. If these higher-level services already exist, this portion of the search serves only to map them to the services identified in the first step. We again use the eval command's mvjoin function to create a comma-separated list of these service dependencies.
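Putting both pieces together, the complete import search looks like the following. The env, vc, and host fields come from this VMware example, so substitute the fields that describe your own topology:

index=vmware sourcetype=vmware:perf
| stats values(host) as entities by env
| eval entities=mvjoin(entities,",")
| rename env as service_title
| eval service_template="vcenter_health"
| append [ search index=vmware sourcetype=vmware:perf
    | stats values(env) as service_dependencies by vc
    | eval service_dependencies=mvjoin(service_dependencies,",")
    | rename vc as service_title ]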

On the next page, we map the fields from the results of our search to the Service Title, Dependent Services, and Service Template Link. You'll notice that there isn't an option here to identify the entities associated with each service; that happens on the next page. At the bottom of the page, you can preview the services that will be imported, as well as any dependency relationships that exist between the services being created or updated. Once everything looks good, click “Next.”

Figure 4 – Field mapping results of services search

Define Entity Rules

Next, we define any entity rules to be associated with a created or updated service. For this step to work correctly, you will need to have associated the created service with a service template that includes the entity rule (‘matches a value to be defined in the service’). In the dropdown on the right, select the field in your search results that contains the comma-separated list of entities. The preview pane will show you the entities that will be associated with each created or updated service. When everything looks good, click “Import.”

Figure 5 – Define entity rule associated with service

When the import has completed, you will see the following page:

Figure 6 – Import complete page

Here are a few more tips to follow once you’ve imported…

To schedule this import to occur on a recurring basis, select ‘Set Up Recurring Import.’ Then, to view your newly created import, navigate to Settings > Data inputs and select “IT Service Intelligence CSV Import” from the list.

Figure 7 – ITSI Service Intelligence Import results

By default, your newly created import uses an update type of UPSERT, which ensures that existing services are updated rather than overwritten. From this page, you can also adjust the interval on which the import search runs: select your created import and check the “More settings” checkbox on the following page. In this example, we’ve created all of the services from a single search. If you would prefer to break this down into several different searches, make sure that the searches that create lower-level services run before the searches that create the higher-level services that depend on them.

Mission Accomplished

Please take these tips and apply them to your ITSI environment. We want to make your day job easier with Splunk, and automating ITSI service creation is just one way to do it. With experience in all things ITSI and Splunk, our team is packed with platform expertise. If you’re interested in speaking with one of our technical experts, let us know below.

Splunk Search Command of the Week: lookup

Lookups are a vital part of Splunk. They can be used to enrich data and provide critical insights into the events you are ingesting. Whether it’s blacklisted IPs, geo-locations, or product information, you can use lookups to find outstanding issues or suspicious events in your environment.

Once your lookups are in Splunk, how do you tie them to your event data? Great question; there are several ways to do this. You might already be familiar with using the join command to create a subsearch, then using inputlookup to bring in the information from the lookup. But what if I told you there was a much easier, much more efficient way to do this?

TA-DA: the lookup command. By using the lookup search command, you no longer have to worry about writing subsearches or using the join command at all. Instead, you can use this one-stop-shop command to easily integrate your lookup information into your data.

How To Use lookup

Let’s look at the syntax…

|lookup <lookup_name> <correlating field> OUTPUT <field> <field> … <field>
  • <lookup_name> – the name of your lookup
  • <correlating field> – a field whose values match between the event data and the lookup
  • OUTPUT – the fields listed after it are the fields brought over from the lookup (if you omit the OUTPUT clause, all fields from the lookup are returned)

Then, check out these fields…

Figure 1 – List of field values

Example: This particular data set is product purchasing information from a web storefront. As we can see, there are a lot of good fields pertaining to the total sales of a product. In this case, we have…

  • productId
  • action
  • status

However, there are a few fields not listed here that would really paint the full picture of sales performance. Think of fields like product_name and price. Fortunately, you can pull that information in from a lookup.

Figure 2 – Adding lookup to fields
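As a quick side note, once the lookup file has been uploaded (it’s named prices.csv in this example), you can preview its contents on its own with the inputlookup command:

| inputlookup prices.csv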

lookup Results 

Once we’ve ingested the lookup into Splunk, we can use the lookup command to bring that data over to our event data. Check out this search to do just that.

<base_search>|lookup prices.csv productId OUTPUT product_name price

Then, run the search and take a look back at your fields. You can see that product_name and price are now fields that we can manipulate and search on.

Figure 3 – Lookup results in searchable fields

Finally, with the lookup search command, you can see your data integrated with your lookup information.
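From there, the lookup fields behave like any other fields. As one hedged sketch of how you might use them (the stats aggregation here is purely illustrative), you could total sales by product:

<base_search> | lookup prices.csv productId OUTPUT product_name price | stats sum(price) as total_sales by product_name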

Ask the Experts

If you’ve read through this article, you may be wondering what other use cases there are for lookup. Take a look at Splunk Search Command of the Week: iplocation to read how to add geolocation information to your data.

Our Splunk Search Command of the Week series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk-certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

The Essentials of Every Splunk Deployment (Part Two)

If you’re reading this, the words “Splunk” and “deployment” may strike some level of interest. And not just the idea of deploying, but deploying well. Taking on a Splunk deployment and integrating the platform with your team, your data, and your company is a large undertaking. As a company that’s driven mission success on Splunk deployments for years, we have a few thoughts on delivering a successful implementation.

As a reminder, the two primary components of this framework are:

Part 1: The Operational Framework of a Splunk Implementation

Part 2: Best Practices for the “Four Functional Areas” of Splunk Implementation

Let’s dive in to Part Two…

The Four Areas of Splunk Implementation

A best practice is a set of commercial or professional procedures that are accepted or prescribed as being correct or most effective. They’re a standard of how we do things well, such as requesting new data or using Splunk Validated Architectures for your deployment.

There are four functional areas in every Splunk implementation. Here are the best practices for each:

Platform

Best practices in this area help with the availability, scalability, and maintainability of the Splunk deployment:

  • Use the best Splunk Validated Architecture for your deployment.
  • Set up clustering and data replication for redundancy and high availability.
  • Establish configuration management policies for knowledge objects.
  • Set up a sandbox environment to test changes and research new features introduced via platform updates.

Program

These include business alignment, operations, collaboration, use cases, and staffing, which enable you to realize maximum value from your Splunk deployment:

  • Establish robust change management policies and procedures, and implement a source control system.
  • Develop standard naming conventions, RBAC policies, and standards for reports and dashboards. Test thoroughly for adherence to these standards.
  • Implement configuration management software, like Puppet, to monitor drift.
  • Share success stories between teams, inspire new ideas, promote knowledge sharing, and participate in Splunk user activities such as user groups, SplunkLive!, conferences, etc.
  • Hold regular stakeholder meetings to provide updates on progress and demonstrate value.

Data

These help generate well-designed and practical use cases that achieve the desired business outcomes from the right set of data:

  • Implement a request process and tools – use ticketing systems for new data request submissions and tracking
  • Develop and maintain a data dictionary to capture meta-information about the data ingested
  • Create processes to document and understand the use cases for every data request and follow typical data management lifecycles to retire data no longer in use
  • Test, test, test. Use the sandbox environment to validate everything before deploying to production.

People

Having the right people trained, empowered, and motivated to do what they do best is the most essential best practice to adopt. Get smart people with the right mission plans, and provide the necessary training and tools for them to be successful.

Your Splunk Experts

Take these tips and put them into action. If you’re looking ahead to a Splunk implementation in your future and need a helping hand, check out our Splunk services to see if we can be of assistance. If you’re looking more for best practice tips on-demand while you work through Splunk, Expertise on Demand may be a great resource for you. Regardless, all the best in your Splunk deployment efforts – we hope this two-part series got you closer to Splunk deployment success.

The Essentials of Every Splunk Deployment (Part One)

In many organizations, Splunk users and admins spend their time in Splunk onboarding new data, writing new searches (trying to find a needle in a haystack), or creating reports and dashboards for others to consume. The more friction that can be removed for users, the more value they recognize from the investment. Let’s walk through some Splunk implementation best practices that can help you in your deployment.

What we see is that as the Splunk implementation grows — in terms of data ingest volume and active users — the overall experience starts to deteriorate for everyone. The issues follow very similar patterns, and you want to mitigate them before it’s too late. (You don’t want your Splunk Admins to want to write letters like this, do you?)

Splunk has published a success framework handbook to help you get started. This handbook provides reference materials, templates, and expert guidance for every aspect of a Splunk implementation, from data onboarding and platform management to suggestions for user education.

The two main components of this framework are:

1.) The Operational Framework of a Splunk Implementation

2.) Best Practices for the Four Functional Areas of Splunk Implementation

We’ll cover the first part here today (stay tuned for the second part, coming soon).

The Operational Framework

Often, the Splunk implementation starts as a small “skunk-works” project within a team. Soon after the project kicks off, however, it grows into a large, poorly managed enterprise platform, not providing the desired value or outcome. Lack of a practical operational framework is one of the most common causes of poor user experience and adoption in a Splunk deployment. Like any enterprise platform, it is imperative to start with strong fundamentals. Establishing the purpose, goals, and ownership of the Splunk platform will increase the likelihood of success and wide-spread adoption.

1. Start with the Why

Ask yourself and your team important questions before diving into implementation. Here are a few to start with:

  • Why is Splunk a critical tool in the success of the organization’s mission?
  • Why does leveraging this platform to analyze data to assist with strategic and tactical decisions help increase the probability of attaining the organization’s financial targets?
  • Why does uniform and wide-spread adoption help colleagues and teammates better achieve work-life balance and maintain a healthy lifestyle?

2. Find a Champion

Find a leader who is most passionate about the “why.” Enable them to provide the resources and to support the success of the implementation. To help the leader, find a champion within the team who will share the “art of the possible” with all the groups.

3. Define Success

What does the implementation look like in an ideal state? Establish success criteria early, define success metrics, and hold everyone accountable for achieving these measurements.

4. Operations Framework

Choose a centralized, federated, or hybrid model for operational success, and empower the teams to deliver using the framework. Make sure you review the advantages and challenges of each model before defining your own.

What’s Next?

Now that you’ve defined your operational framework, let’s take this “base coat” and apply it to your Splunk implementation. Kinney Group has years of experience backing Splunk implementations from start to finish. You’ll read about the best practices we’ve learned about the implementation process in the second part of this series. If this strikes a chord with an upcoming Splunk project your team may be developing, let’s chat. Fill out the form below to talk with one of our expert team members.

Splunk Search Command of the Week: iplocation

Splunk is full of hidden gems. One of those gems is the Splunk search command iplocation. By utilizing particular database files, iplocation can add geolocation information to IP address values in your data. If you are ingesting data that contains an external IP address field, such as web storefront traffic, VPN access, or what have you, you can find the country, city, and region that each IP address belongs to.

Let’s take a look at iplocation.

How To Use iplocation

|iplocation <ip_field>

Pretty simple stuff, so long as your IP field contains external IP addresses. Here is sample data that was ingested containing external IPs under the field name clientip.

Figure 1 – iplocation sample data

Next, add |iplocation clientip to your search.

Figure 2 – Add clientip to your search

If we look at our interesting fields, we’ll see some new additions.

Figure 3 – Review your interesting fields

NOTE: Region is also added, but it was too far down the list to appear in the screenshot.

Now that geolocation fields have been added to your fields list, add them to your search.

Figure 4 – Add geolocation fields to your search
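Put together, a full search along these lines might look like the following sketch. The index and sourcetype are illustrative placeholders, clientip comes from the sample data above, and City, Region, and Country are fields that iplocation adds:

index=web sourcetype=access_combined | iplocation clientip | table clientip, City, Region, Country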

iplocation Results 

Figure 5 - iplocation results
Figure 5 – iplocation results

There you have it. As you can see, we have successfully added geographical information to our IP addresses. Using this Splunk search command, you can take this information and build heatmaps and cluster map dashboards to visualize activity around the globe.

Ask the Experts

Our Splunk Search Command of the Week series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk-certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Scrum 101: Three Myths of Traditional Scrum

Hi, I’m Georges, Scrum Master Myth Buster.

As a resident Scrum Master at Kinney Group, I’m responsible for promoting and championing Agile & Scrum habits on our automation projects. I started off as a developer, which shaped a lot of my goals and expectations in Scrum today. I have a bias toward producing work, lightweight management involvement, and simplifying everything that’s not technical. A Scrum Master’s role is to produce an environment that is encouraging for developers and for growing the Agile mindset. Throughout my journey of training, execution, and experimentation, I have seen common Scrum myths permeate the software development and project management ecosystem. Follow along as I debunk three myths about traditional Scrum best practice. Hold on to your Jira tickets, and let’s go through the big offenders!

Myth #1

Scrum makes my development teams more efficient.

One of the first books I read for training was “Scrum: The Art of Doing Twice the Work in Half the Time.” It’s a great read for learning the Scrum trade, but to an untrained Scrum eye, some directors and managers may take away the wrong impression of how their teams will operate under a new delivery framework.

Teams today are most likely practicing some hybrid of homebrew, Waterfall, and Agile systems. A distinct change in your Agile practice may reinforce good habits that make your team faster; however, it’s important to remember that speed is not the goal of Scrum. The goal of Scrum is to make your teams more effective, not more efficient.

With a bias toward client and user interactions, our engineers execute on project goals while verifying their work repeatably. Many may argue that effectiveness leads to efficiency at a high level, and they would be right, but the message behind Scrum-based development should never promise the moon. Short-term stumbles and smaller gains will happen, and Scrum allows your team to break bad habits along the way. Sometimes a team needs to slow down before it can speed up, resulting in more effective work toward a customer’s goal.

Myth #2

Scrum is a set of meetings and rules for my developers.

I love this myth. It’s at the same time entirely true and completely misleading. By reducing Scrum to its artifacts and ceremonies, we ignore the foundation that Scrum rests upon — the Agile Mindset. When you ignore the Agile Mindset, your Scrum practices will never reach their full potential. This may ultimately get in your way instead of fostering a better development environment.

Equally important, the Agile Mindset not only requires your developers’ participation, but it involves your entire delivery pipeline. Your company’s salespeople need to champion the need for Product Owner involvement. Your business analysts need to write Statements of Work that promote handshake agreements on changing priorities. Your project managers need to keep their development teams as stable as possible. Scrum is not a set of meetings and rules for your engineers to follow. It’s a shift in the entire delivery framework and requires everyone to pitch in.

Myth #3

Scrum is the evolution of Waterfall and supersedes it.

Scrum and Waterfall are two delivery frameworks eternally locked in a misguided war of buzzwords and superstition. The truth is, they both have value for different reasons. Scrum did not kill Waterfall. However, Scrum did create a system that is better suited for rapid development, experimentation, and unknowns. Waterfall still has a place for fixed, ‘simple’ work requirements that, by requirement and by design, won’t deviate from the initial technical and feature scope. With the complex initial asks on a Waterfall project, it’s no wonder that Agile processes have taken over. However, it is important for developers and managers to treat the words “We are doing Scrum” with respect and not as a not-so-clever way of saying, “We aren’t doing Waterfall.”

Myths = Busted

There is a common misunderstanding intertwined within these myths: Scrum is easy. The truth is that Scrum isn’t just a simple evolution of Waterfall’s classic plan-then-execute flow. Scrum requires all hands on deck in an organization to support it. It is not a band-aid, but a methodical shift in how a company operates. If executed and supported sufficiently, adopting a Scrum practice can result in the effective delivery of your product and, most importantly, happy clients.

Splunk Search Command of the Week: TOP

I get it: SPL is a very broad language, with so many commands, arguments, functions, you name it. It’s a lot to learn and definitely a lot to remember. But what if I told you there were a couple of commands that can almost do it all for you?

Let’s take a look at this search…

index=main | stats count as count by user | sort - count | head 10

A relatively easy search, for sure. But what if I could make it easier for you? Allow me to introduce the TOP (and its counterpart, rare) Splunk search commands. TOP allows you to easily find the most common values in a field. It also gives you information about those values, like the count and the percentage of the frequency.

TOP Syntax

Now, we can explore the syntax for TOP Search Command.

|top <options> field <by-clause>

Here are the options:

  • limit = limits the number of results returned
  • showperc = shows the percent field for each value

Field = the field you want to find the top values of

By-clause = a field you want to group the results by (see the example sketch below)
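For instance, combining the options with a by-clause might look like the sketch below (user comes from the search above, and host is just an illustrative split-by field):

index=main | top limit=5 showperc=false user by host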

TOP Results 

Now, let’s show the value of this command. Take the same search referenced above, rewritten with top:

index=main | top limit=10 user

And blam, same results, less… search.

Figure 1 – TOP Search Command Results

Ask the Experts

Our Splunk Search Command of the Week series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk-certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!

Dude, Where’s My Data (Part 3)

In Dude, Where’s My Data? (Part One), you learned how to configure your data source to send your data to Splunk. In Dude, Where’s My Data? (Part Two), you learned how to configure Splunk to receive and parse that data once it gets there. However, you still aren’t seeing your data in Splunk. You are pretty sure you configured everything correctly, but how can you tell?

Check your Splunk configuration for any errors. In these situations, there are three troubleshooting steps that I like to work through to ascertain what the problem could be.

They are as follows:

1. Check for typos

2. Check the permissions

3. Check the logs

Check for typos

Even though the inputs look correct, there may be a typo that you originally missed. There may also be a configuration that is taking precedence over the one you just wrote. The best way to check is to use btool on the Splunk server configured to receive the data. This command-line interface (CLI) command checks the configuration files, merges the settings that apply to the same stanza heading, and returns them in order of precedence.

When looking for settings that relate to the inputs configured for a data source, this simple command can be run:

./splunk btool <conf_file_prefix> list --app=<app> --debug | grep <string>

Here, <string> is a keyword from the input that you are looking for; grepping for it helps you quickly locate the settings that apply to that particular input.
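As a concrete sketch, if you were hunting for a monitor input that reads /var/log/secure, the check might look like the following (the app name Splunk_TA_nix is only an illustrative assumption; use whatever app holds your inputs):

./splunk btool inputs list --app=Splunk_TA_nix --debug | grep secure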

Check the permissions

More often than not, the issue preventing Splunk from reading the log data is that the user running Splunk doesn’t have permission to read the file or folder where the log data is stored. This can be fixed by adding the user running Splunk to the group assigned to the file on the server that is configured to send data to Splunk. You should then make sure that the group has the ability to read the file. On a Linux host, if you wanted Splunk to read, for example, /var/log/secure/readthisfile.log, you would navigate to the /var/log/secure folder from the command line using the following command:

cd /var/log/secure

Once there, you would run this command:

ls -l

This will return results that look similar to the line below:

-rwxr----- creator reader   /var/log/secure/readthisfile.log

Here, creator (the user that owns the file) can read, write, and execute the file; reader (the group that owns the file) can read the file; and all other users cannot read, write, or execute the file.
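If the owning group does not yet have read access on the file, a user with sufficient rights can grant it, for example:

sudo chmod g+r /var/log/secure/readthisfile.log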

Now, in this example, if the user running Splunk is splunk, you can check which groups the splunk user belongs to by running either of the following commands:

id splunk

groups splunk

If the results show that the splunk user is not a member of the reader group, a user with sudo access (or root) can add splunk to the reader group using the following command:

sudo usermod -a -G reader splunk

Check the logs

If the Splunk platform’s internal logs are accessible from the Splunk GUI, an admin user can run the following search to check for errors or warnings:

index=_internal (log_level=error OR log_level=warn*)
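If you suspect a file-monitoring input in particular, you can narrow this down further. As a sketch (component names vary by input type, so adjust to match your situation):

index=_internal sourcetype=splunkd log_level=ERROR component=TailingProcessor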

As a bonus, if your firewall or proxy logs are configured to send data to Splunk, and those logs capture network traffic between the data source and the receiving Splunk server, you can search them for errors by specifying the IP address and/or hostname of the sending or receiving server. This will help you find out whether data is being blocked in transit. On a Linux host, the following commands can also tell you which ports are open:

sudo lsof -i -P -n | grep LISTEN

sudo netstat -tulpn | grep LISTEN

sudo lsof -i:22 ## see a specific port such as 22 ##

sudo nmap -sTU -O localhost  

Data Dreams Do Come True!

One, two, three strikes… and you’re out of problems with Splunk. Ha, if only these three blog posts could fix all of your Splunk issues, but we hope they help. If you’re still having Splunk data configuration issues, or have any other troubleshooting needs, see how our Splunk services can help!

Avoid Alert Fatigue with Event Sequencing in Splunk

Working in the security space in Splunk, we are well aware of the pressure behind security alert management. Often on the front lines of responding to alerts, security analysts experience “alert fatigue” from monitoring an abundance of alerts in their day-to-day roles. As we know, this kind of fatigue can leave teams even more exposed to legitimate threats.

Avoid Alert Fatigue

In Splunk Enterprise Security, you can turn to Event Sequencing to identify the actionable threats amidst the sea of alerts you and your team face, which leads to quicker remediation of security incidents. A feature of Splunk Enterprise Security, the Event Sequencing engine is a series of chained (sequenced) correlation searches. These searches are triggered based on search criteria and other modifiers. Once the conditions of all sequenced correlation searches are met, a sequenced event is generated with all the information analysts need to take action.

The How-To’s of Event Sequencing

Let’s take a look at how Splunk Event Sequencing works. Sequenced events start with creating a Sequence Template, which can use out-of-the-box correlation searches or custom searches.

You can create the sequencing template to detect specific behavior that an analyst can take immediate action upon. You can follow these graphics below for further reference to creating your sequence templates.

Figure 1 – Create a new Sequence Template
Figure 2 – New Sequence Template
Figure 3 – Sequence Template Settings

After the sequence template is created, you will find the triggered events in the Incident Review.

Figure 4 – Triggered Sequenced Template

Then, you’ll want to filter your events. Click to filter on “Sequenced Events” to see only these specific events.

Figure 5 – Filtering to see only triggered sequenced events

Once you run your sequenced events, find them at Security Intelligence > Sequence Analysis. Then, you can review your sequence analysis.

Figure 6 – Sequence Analysis

Threats Minimized, Efficiency Maximized

When you take these best practice tips to Splunk Enterprise Security, your security alerts should be more manageable and consumable. Splunk Event Sequencing is here to help and ensure your Splunk teams are efficient and successful in the security space. With a team of security experts, Kinney Group has years of experience working in Splunk to ensure threats do not slip through the cracks. If you’re interested in our work with Splunk Enterprise Security, let us know below!

Splunk Search Command of the Week: STATS

Here’s the situation: You’re a security analyst who has been tasked with finding different attacks on your servers. You need to find various events relating to possible brute force attempts, suspicious web page visits, or even suspicious downloads.

This probably isn’t much of a hypothetical — it might be a reality for a lot of people. We get it. Security is incredibly important in the era of technology. Fortunately, Splunk makes it easy to find this information by using the STATS search command.

With the Splunk search command STATS, the name says it all: it calculates statistics. Those statistical calculations include count, average, minimum, maximum, standard deviation, etc. By using the STATS search command, you can get a high-level calculation of what’s happening on your machines.

|stats <aggregation> BY <field>

<aggregation> = count, avg(), max(), sum()

STATS Use Cases

Let’s take a look at a couple of use cases:

Use Case #1: You want to look at the number of failed login attempts.

index=_audit action="login attempt" info=failed | stats count by user
Figure 1 – Number of failed logins by user

Use Case #2: You want to identify the average, shortest, and longest runtimes of your saved searches.

index=_internal sourcetype="scheduler" search_type=scheduled | stats avg(run_time) min(run_time) max(run_time)
Figure 2 – Average, shortest, and longest runtime of saved searches

STATS Results

STATS can provide a strong overview of the activity within your environment. While STATS is a fairly simple command, it can deliver huge insights about your data. When paired with other commands like iplocation or lookup, you can enrich your data to find anomalies such as interactions from certain countries or blacklisted IP addresses.
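As a hedged sketch of that kind of pairing (the index, sourcetype, and clientip field are illustrative and borrowed from the iplocation example above):

index=web sourcetype=access_combined | iplocation clientip | stats count by Country | sort - count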

Ask the Experts

Our Splunk Search Command of the Week series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk-certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you’re interested in learning more about our EOD service or chatting with our team of experts, fill out the form below!