Avoid Alert Fatigue with Event Sequencing in Splunk

Working in the security space with Splunk, we are well aware of the pressure behind security alert management. On the front lines of responding to alerts, security analysts often experience “alert fatigue” from monitoring an abundance of alerts in their day-to-day roles. This kind of fatigue can leave teams even more exposed to legitimate threats.

Avoid Alert Fatigue

In Splunk Enterprise Security, you can turn to Event Sequencing to identify the actionable threats amidst the sea of alerts you and your team face, leading to quicker remediation of security incidents. The Event Sequencing engine is a feature of Splunk Enterprise Security that chains (sequences) a series of correlation searches, which are triggered based on search criteria and other modifiers. Once the conditions of all sequenced correlation searches are met, a sequenced event is generated with all the information analysts need to take action.

The How-To’s of Event Sequencing

Let’s take a look at how Splunk Event Sequencing works. Sequenced events start with the creation of a Sequence Template, which can use out-of-the-box correlation searches or custom searches.

You can create the sequence template to detect specific behavior that an analyst can take immediate action on. The graphics below walk through creating your sequence templates.

Figure 1 – Create a new Sequence Template
Figure 2 – New Sequence Template
Figure 3 – Sequence Template Settings

After the sequence template is created, you will find the triggered events in the Incident Review.

Figure 4 – Triggered Sequenced Template

Then, filter your events: click “Sequenced Events” to narrow Incident Review to these specific events.

Figure 5 – Filtering to see only triggered sequenced events

Once your sequenced events have run, find them under Security Intelligence > Sequence Analysis, where you can review your sequence analysis.

Figure 6 – Sequence Analysis

Threats Minimized, Efficiency Maximized

When you apply these best practices in Splunk Enterprise Security, your security alerts should become more manageable and consumable. Splunk Event Sequencing is here to help ensure your Splunk teams are efficient and successful in the security space. With a team of security experts, Kinney Group has years of experience working in Splunk to ensure threats do not slip through the cracks. If you’re interested in our work with Splunk Enterprise Security, let us know below!

Keys to Splunk Customization

Splunk has a ton to offer visually. And while visualizations are Splunk’s sweet spot, let’s make your Splunk instance a little more… personal. Why not add some flair and customize your Splunk environment with your company logos and branding? I’ll walk you through how to customize login pages and PDF exports directly in Splunk. Not only does customizing your Splunk environment help you maintain brand standards, but it also makes your reports look more professional and your deployment feel more mature overall. (You also get some great brownie points with management!)

Customize Your Login Page

Here’s the default login page you’re probably seeing in Splunk. With a few customization tricks, you can turn it into something that fits your company’s brand standards.

Figure 1 – Default Splunk login page

Custom Login Page

First, update your login page background. Navigate to Settings > Server Settings > Login Background and follow these image format guidelines:

  • Use a .jpg, .jpeg, or .png formatted file.
  • A landscape-oriented image is recommended.
  • The maximum file size is 20MB.
  • The suggested minimum image size is 1024×640 pixels.
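If you prefer managing this through configuration instead of the UI, the background can also be set in web.conf. The following is a sketch only: the loginBackgroundImageOption and loginCustomBackgroundImage setting names are taken from web.conf.spec, so verify them against your Splunk version, and the file path is illustrative.

[settings]
loginBackgroundImageOption = custom
loginCustomBackgroundImage = search:logincustombg/background.png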
Figure 2 – Custom background image upload


Figure 3 – Splunk Login with custom background


Add Your Logo

Next, swap the Splunk login logo for your company’s logo. For this step, the Splunk web.conf file needs to be updated to use a custom logo. Follow these image requirements:

  • The maximum image size is 485px wide and 100px high.
  • If the image exceeds these limits, the image is automatically resized.
loginCustomLogo = <fullUrl, pathToMyFile, myApp:pathToMyFile, or blank for default>

Default destination folder: $SPLUNK_HOME/etc/apps/search/appserver/static/logincustomlogo.

Example: If your logo image is located at $SPLUNK_HOME/etc/apps/search/appserver/static/logincustomlogo/logo.png, type loginCustomLogo = logincustomlogo/logo.png.

Manual location: $SPLUNK_HOME/etc/apps/<myApp>/appserver/static/<pathToMyFile>, and type loginCustomLogo = <myApp:pathToMyFile>
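Putting the pieces above together, a complete stanza using the default destination folder might look like the following sketch (placed, for example, in $SPLUNK_HOME/etc/system/local/web.conf; the filename logo.png is illustrative):

[settings]
loginCustomLogo = logincustomlogo/logo.png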
Figure 4 – Example web.conf


Figure 5 – Custom Logo added to Login Page

Add Text to Login

After that, add some login text to your page. As in the previous step, the same web.conf needs to be updated, this time by adding the “login_content” setting:

login_content = <string>
* Lets you add custom content to the login page.
* Supports any text including HTML.
* No default.
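Building on the previous step, the web.conf stanza might now read as follows (the message text is purely illustrative):

[settings]
loginCustomLogo = logincustomlogo/logo.png
login_content = Welcome to the Example Corp Splunk environment. All activity is monitored.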


Figure 6 – web.conf example


Figure 7 – Login page with Custom Text

Add Alert Action

Finally, round out your login page customization by adding an alert action. It is often required to present a notice to users before they log in, and this can be done by adding an “alert” script to the login content. Follow the example below to set up your login page alert.
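Because login_content supports any text including HTML, one way to sketch this is below. The notice wording is illustrative, and whether script tags are honored can vary by Splunk version, so test this on your deployment:

[settings]
login_content = <script>alert("Notice: this system is for authorized use only.");</script>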


Figure 8 – Example web.conf to add script


Figure 9 – Alert example

Customize Your Reports

Once you’re done updating your login page, take a look at customizing your PDFs and reports with your company logo. Not only does this improve the look of your reports, but small touches like this make your work in Splunk stand out.

In order to add a custom logo to reports, the alert_actions.conf file needs to be updated.

pdf.logo_path = <string>
* Define the PDF logo using the syntax <app>:<path-to-image>.
* If set, the PDF is rendered with this logo instead of the Splunk logo.
* If not set, the Splunk logo is used by default.
* The logo is read from the $SPLUNK_HOME/etc/apps/<app>/appserver/static/<path-to-image> path if <app> is provided.
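For example, if your logo sits at $SPLUNK_HOME/etc/apps/search/appserver/static/logos/logo.png, the entry might look like the following sketch (the [email] stanza placement and the logos path are assumptions; check alert_actions.conf.spec for your version):

[email]
pdf.logo_path = search:logos/logo.png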
Figure 10 – Example alert_actions.conf

Check it out — you’ve got your company logo on your PDF! It’s an easy way to make your Splunk reports look more professional and polished.

Figure 11 – Custom logo on PDF export

Customizing your Splunk environment adds a personal and professional touch to the work you do. If one of the biggest benefits of using Splunk is its visualizations, take advantage of that and add your company branding to make those visuals stand out even more. At Kinney Group, we love adding our logo and branding anywhere we can in Splunk. If you need help or more tips and tricks on how to customize your Splunk environment, we’re happy to help.

Step Up Your Search: Exploring the Splunk tstats Command

If you’re running Splunk Enterprise Security, you’re probably already aware of the tstats command but may not know how to use it. Similar to the stats command, tstats performs statistical queries on indexed fields in tsidx files. You gain significant search performance with the tstats command; however, you are limited to the fields in indexed data, tscollect data, or accelerated data models.

The Power of tstats

Let’s take a simple example to illustrate just how efficient the tstats command can be. For this example, the following search will be run to produce the total count of events by sourcetype in the windows index.

index=windows
| stats count by sourcetype
| sort 5 -count
| eval count=tostring('count',"commas")

This search will output the following table.

Figure 1 – Table produced by the first search.

By looking at the job inspector we can determine the search efficiency.

Figure 2 – Job inspector for first search.

This search took almost 14 minutes to run. We can calculate the events per second (EPS) by dividing the number of events scanned by the number of seconds taken to complete, which is helpful when determining search efficiency. The EPS for this search is just above 228 thousand, a respectable number.
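As a quick sanity check of that EPS math, you can reproduce the division with a throwaway search. The event count and runtime below are illustrative values reconstructed from the figures above, not exact numbers from the job inspector:

| makeresults
| eval events_scanned=190000000, runtime_sec=833
| eval eps=round(events_scanned/runtime_sec, 0)

Dividing roughly 190 million events by about 833 seconds yields an EPS of approximately 228,000, in line with the result above.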

By converting the search to use the tstats command there will be an instant, notable difference in search performance.

| tstats count where index=windows by sourcetype
| sort 5 -count
| eval count=tostring('count',"commas")

This search will provide the same output as the first search. However, if we take a look at the job inspector, we will see an incredible difference in search efficiency.

Figure 3 – Search job inspector for tstats command search.

Here we can see that the same number of events were scanned but it only took 1.342 seconds to complete! That’s an EPS of about 142 million.

Implement tstats

The tstats command is most commonly used with Splunk Enterprise Security. Anytime we are creating a new correlation search to trigger a notable event, we want to first consider if we can utilize the tstats command. The basic usage of this command is as follows, but the full documentation of how to use this command can be found under Splunk’s Documentation for tstats.

| tstats <stats-function> from datamodel=<datamodel-name> where <where-conditions> by <field-list> 

| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks where IDS_Attacks.severity=high OR IDS_Attacks.severity=critical by IDS_Attacks.src, IDS_Attacks.dest, IDS_Attacks.signature, IDS_Attacks.severity 

| `drop_dm_object_name(IDS_Attacks)`
Figure 4 – Output of example tstats search for intrusion detection data.

Notice in the example search that the dataset name “IDS_Attacks” is prepended to each field in the query. This is a requirement when searching accelerated data from data models, and only fields contained in the accelerated data models can be used. To find out more about the fields in the ES data models, see the documentation for Splunk’s Common Information Model (CIM).

Getting Unstuck

Understanding and correctly implementing the tstats command can significantly improve the performance of the searches being run. This command should always be considered when creating new correlation searches to improve search efficiency and overall performance of ES.

Kinney Group Splunk consultants are highly experienced and know how to get you “unstuck.” Fill out the form below to learn more about our expert Splunk solutions.

Does Slow Splunk Search Performance Have You Stuck?


Get Unstuck with Jumpstart Services for Splunk by Kinney Group Inc.

Slow Splunk search performance can be attributed to several different factors. Perhaps the hardware Splunk is running on is old or outdated. Maybe you are running an older version of Splunk. But what if you are running a newer version of Splunk on hardware that meets or exceeds the recommended specs? One likely culprit is inefficient searches.

Why stats?

You may have a search or report you dread running because of how long it takes to complete. Splunk’s Search Processing Language (SPL) is very flexible, which means one search can be written many different ways to accomplish the same end goal. With this in mind, we can craft searches that avoid certain functions that negatively affect overall search performance.

Certain search functions can be used in place of others to greatly improve performance. Did you know most transaction and join commands can be replaced with stats? Transactions and joins can be very costly in terms of search time. Below we will walk through a simple example of how to switch a join command to stats while preserving the original goal of the search.

For this example, we will want to produce a count of orders by purchaser country:

Figure 1 – Order count by Country

The following search is our original search to output the desired results:

index=orders source=orders.csv
| table order_id,customer_id,order_date
| join type=left customer_id
     [search index=customers source=customers.csv
     | fields customer_id,customer_name,contact_name,country
     | table customer_id,customer_name,contact_name,country]
| table customer_id,customer_name,contact_name,country
| stats count by country
| sort 5 -count
| rename country as Country, count as "Total Orders"

This search achieves our goal; however, it takes a considerable amount of time to run due to the join command, which uses a subsearch that greatly increases search time. To improve performance while preserving the final result, we will use the stats command.

First, the data from the sub search will need to be brought up to the base search:

(index=orders source=orders.csv) OR (index=customers source=customers.csv)

Now we will create a filter field using the eval command. This will allow us to filter out only the rows that are applicable for the end result:

(index=orders source=orders.csv) OR (index=customers source=customers.csv)
| eval filter=if(index=="orders", 1, 0)

The filter field enables us to perform simple math to determine which rows to keep: every event from index=orders is assigned a 1, and everything else a 0. Recall that the original base search performs a left join on the customer_id field.

To simulate this, we will be running a preliminary stats command to aggregate on the customer_id field:

(index=orders source=orders.csv) OR (index=customers source=customers.csv)
| eval filter=if(index=="orders", 1, 0)
| stats values(country) as country, sum(filter) as filter by customer_id

The result will output the following table:

Figure 2 – Portion of the statistics table produced by the search.

Notice that if a customer_id is not present in index=orders, it has a filter value of 0. Filtering where this value is greater than 0 effectively creates a left join.

Putting it all together:

(index=orders source=orders.csv) OR (index=customers source=customers.csv)
| eval filter=if(index=="orders", 1, 0)
| stats values(country) as country, sum(filter) as filter by customer_id
| where filter>0
| stats count as "Total Orders" by country
| sort 5 -"Total Orders"

This search provides the same results as the original join search with a significant difference in search speed:

Figure 3 – Result from optimized stats search showing same results from original search.


Figure 4 – Original search

Getting Unstuck

This was a simple example of how to improve slow Splunk search performance by switching from join to stats. Kinney Group Splunk consultants are highly experienced and know how to get you unstuck.

The Kinney Group team began working with the Splunk technology all the way back in 2013. Since that time, we have consistently seen organizations run into challenges as they work to fully activate and adopt the platform.

Get Your Team “Unstuck” with Splunk Implementations

We have tested this service concept with “stuck” Splunk implementations and have experienced tremendous success. Organizations will be amazed at how much they can accomplish in just one day of focused immersion with one of our top-notch Splunk consultants.

Jumpstart Service for Splunk is designed to help companies that are:

  • Evaluating new use cases for Splunk
  • Limited on technical resources
  • Struggling to capture ROI with Splunk
  • Seeking documentation and adoption support
  • Short-staffed or onboarding new Splunk resources

If you are “stuck” or “stalled” with your Splunk implementation, get rolling again with our Jumpstart Service for Splunk—it just might be the best thousand bucks your organization has ever spent!
