Define Your Drilldown in Splunk: $click.value$ vs $click.value2$

In Splunk, there's a lot to get right when choosing tokens for a dashboard drilldown. Deciding which token names and token values to use in your drilldown menu can get confusing. You may be asking…

How do I know what to name my tokens?

What are all those prebuilt token values in the dropdown menu, and what do they do?

Passing tokens to your dashboard, linking them to other dashboard panels, or linking them to searches can advance your dashboard visualizations and data analytics. Multiple tokens can be used with dashboard drilldown in Splunk, but today we’ll be discussing two of the most commonly selected token values: $click.value$ and $click.value2$. By the end of this post, you will understand when to select one over the other to fit your dashboarding needs. Let’s get started!

 

The Drilldown

First off, you’ll need to edit your dashboard drilldown menu by going into your dashboard’s edit mode. Then, select “manage tokens on this dashboard”. The screen will look similar to the one below:

a "Drilldown Editor" popup window
Figure 1 – Drilldown Editor menu in Splunk

$click.value$

The Drilldown Editor is where you can set your token names and select token values from the dropdown menu.

I like to give my tokens names that are relevant and easy to remember. I have decided to name my token “cell_clicked” because I will be clicking on a cell within my dashboard and observing the results that populate. In this first example, we will explore what happens when the token “cell_clicked” is paired with the token value of “$click.value$”.

Figure 2 – Results for status value 200 using the token value of $click.value$

After saving the changes, I clicked on the status value of 200 and observed the results in the linked dashboard panel below. The results show us the events related to the GET request method… but why? When you use the token value of "$click.value$", it always passes the leftmost cell value of the row you clicked. It does not pass the value of the individual cell you clicked, which is why the linked panel shows events for GET rather than for status 200.

 

$click.value2$

If you want to return results for the cell value you actually clicked (the status value of 200), you need the "$click.value2$" token. As you can see, when I edit my dashboard drilldown menu and select "$click.value2$" instead of "$click.value$", the events from the exact cell clicked on the dashboard populate the corresponding panel below. Based on what you need to select from a chart-style dashboard, you should now be able to decide which of these two tokens fits your use case.

Figure 3 – Results for status value 200 using the token value of $click.value2$
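For reference, here is roughly what that choice looks like in the panel's Simple XML source. This is a minimal sketch assuming the cell_clicked token from above; the surrounding table and search elements are omitted:

<drilldown>
  <!-- Pass the value of the exact cell that was clicked -->
  <set token="cell_clicked">$click.value2$</set>
</drilldown>

Swapping $click.value2$ for $click.value$ inside that <set> element is the only difference between the two behaviors described above.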

Bonus Tip

Keeping the second panel of the dashboard out of view can provide a cleaner look to the initial dashboard display, as seen in the image below. Follow this quick tip to achieve this view:

Figure 4 – Dashboard with second panel hidden in Splunk

The second panel will remain hidden until the click value is passed to it. To accomplish this, you can edit the source code of the dashboard to add the following line:

<panel depends="$insert_token_name$">
Figure 5 – Edited dashboard source code in Splunk

Save the changes and reload to see your new dashboard! Test to see if the token is being passed correctly by clicking on the cell and seeing the corresponding events populate below.
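Putting it all together, a hidden second panel might look something like the sketch below. The token name, index, and query here are placeholders for illustration, not the exact dashboard shown above:

<panel depends="$cell_clicked$">
  <table>
    <title>Events for $cell_clicked$</title>
    <search>
      <!-- Hypothetical search that filters the linked panel by the clicked value -->
      <query>index=web status=$cell_clicked$</query>
      <earliest>-24h</earliest>
      <latest>now</latest>
    </search>
  </table>
</panel>

Until the drilldown sets $cell_clicked$, the panel stays hidden; once a cell is clicked, it appears with the filtered results.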

Ask the Experts

Looking for more tips to advance your Splunk visualizations? Ask the experts at Kinney Group! We supply new Splunk resources every week, like this tutorial on Choropleth maps. If you think your company could benefit from our professional services, our Expertise on Demand team is ready to address your unique Splunk needs. Fill out the form below to learn how we can help.

Splunk Search Command Series: dbinspect

 

 

The power of Splunk comes from the insights we pull from our data. And to emphasize… I mean searchable data. Now, Splunk isn’t perfect and neither is your data. Data can be corrupt, go missing, or frankly, live in the dark. Pull that data back into the light and ensure your data is intact by using dbinspect.

What is dbinspect? The Splunk search command dbinspect allows us to look at information about the buckets that make up a specified index. If you're using Splunk Enterprise, this search command shows you where your data lives so you can optimize your disk space.

How to Use dbinspect

Let’s break down the command:  

| dbinspect index=<index_name> timeformat=<time_format>

Check out what this looks like in Splunk:  

Figure 1 – dbinspect in Splunk

 

The above screenshot may look small because it doesn't capture all of the fields, but the fields we DO see provide us with a wealth of information. When you run the command yourself, you'll have access to all of the fields we can't see in the screenshot.

 

Here’s what we can see with dbinspect: 

How many events are in a bucket 

The file path of the bucket 

Which index the bucket belongs to 

 

dbinspect also tells us: 

The state of the bucket (hot/warm/cold) 

When the bucket was created 

The size of the bucket in MB 

And tsidx states (full, fulling, etc.) 
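As a quick illustration, here's a minimal sketch that rolls those fields up into a per-state summary. The _internal index is used purely as a convenient test target; sizeOnDiskMB and state are fields dbinspect returns:

| dbinspect index=_internal
| stats count AS buckets sum(sizeOnDiskMB) AS total_size_mb BY state

Run against your own indexes, a summary like this makes it easy to spot where hot, warm, and cold buckets are eating disk space.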

 

And that's it. Use dbinspect to get insights into your data buckets. We've got plenty more searches to come this month, so stay tuned!

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or want to chat with our team of experts, fill out the form below!

Splunk 101: Predict Command

 

Hey everyone, I’m Hailie with Kinney Group.

 

Today, I'll walk you through a short tutorial on the predict command. The predict command forecasts values from one or more sets of time series data. The command can also fill in missing data in a time series and provide predictions for the next several time steps.

 

In this example, we're going to have a bar graph displaying the number of purchases made on a specific host. The predict command is going to provide confidence intervals for all of its estimates, with upper and lower 95% confidence ranges displayed on the graph.

 

In this instance, we're going to use the practice data that came from Splunk. Here, we're in the web index. If we want to look at purchases made on a specific host, we're going to use WW1 as the host. That pulls up the events coming from that host, but as I said earlier, predict needs some sort of time series data to work off of.

 

We're going to use a time chart. Counting the number of purchases by our host displays a bar graph. As you can see, the bar graph shows how many purchases were made on that host each day. In this example, on November 8th, there were 76 purchases made.

 

With the benefit of the predict command, we're going to see predictions extend off the graph for future days, along with the upper and lower 95% confidence ranges.

 

Use the predict command in your environment when it makes sense for you, perhaps for future trend analysis. Maybe you want to predict the amount of disk space you're using each day, or your daily data ingest. In this instance, we used it to see, with high confidence, how many purchases we'd expect on our host WW1.
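If you'd like to recreate something similar, here is a rough sketch of the kind of search described above. It assumes the tutorial data lives in an index named web, the host is www1, and purchases are identified by action=purchase; adjust these to your own data:

index=web host=www1 action=purchase
| timechart span=1d count AS purchases
| predict purchases future_timespan=7

The timechart produces the daily purchase counts, and predict extends them seven days into the future with its default 95% upper and lower confidence bounds.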

 

Meet our Expert Team

If you're a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements to scale. EOD is designed to answer your team's daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

Splunk Search Command Series: inputlookup and outputlookup

 

Think back to our article on the Splunk search command, lookup… we talked about lookups and how they can be used to enrich the data currently in Splunk. Let’s revisit some new ways we can use lookups in our Splunk environment.

Using Inputlookup

Where the lookup search command allows you to inject fields from a lookup into the data in an index, inputlookup allows you to simply view the lookup. It can be used at the beginning of a search, halfway through (using append or join, as sketched below), or wherever you see fit to bring in a lookup.

Let’s take a look at the syntax:

Syntax: |inputlookup <lookup_name>

Easy, peasy.
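And as a sketch of that mid-search usage, you can pull a lookup into an existing result set with append (the index and lookup name here are hypothetical):

index=web
| append [| inputlookup my_watchlist.csv]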

 

Figure 1 – Using inputlookup in Splunk

Interestingly enough, the lookup captured in the screenshot was built with the second command in this article: outputlookup.

Lookups in Splunk are not just tables that get ingested… they can also be created from data we already have.

Using Outputlookup

Whenever you find yourself with a results table that you'd like to hold onto, use outputlookup. Throw outputlookup at the end of the search, and it will turn the results into a lookup that you can use independently.

Let’s take a look at the syntax:

|outputlookup <lookup_name>

 

Figure 2 – Using outputlookup in Splunk

There are a few optional arguments that can be added if need be. For example, append=true controls whether the results are appended to the existing lookup rather than overwriting it.

Outputlookup really shines when it comes to building out a list of suspicious values in Splunk (such as a watchlist, blacklist, or whitelist). All it takes is building a results table in Splunk that contains the information you need.
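Here's a quick sketch of that watchlist idea; the index, field names, threshold, and lookup filename are all placeholders for illustration:

index=web status=403
| stats count BY src_ip
| where count > 50
| outputlookup suspicious_ips.csv

Later, a simple | inputlookup suspicious_ips.csv brings that list right back, either to review it or to use it in another search.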

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or want to chat with our team of experts, fill out the form below!

Michael Simko’s Top Five Recommended Sessions at Splunk’s .conf20


One of my favorite times of the fall is the annual Splunk user conference. The pandemic has thrown lots of conferences into disarray. The Las Vegas .conf may be off, but virtual .conf is on — and is free. And yes, free as in free, not free like someone tried to give you a dog.

The virtual conference is 20-21 October for AMER, and 21-22 for EMEA and APAC. 

Here are the top five sessions at Splunk .conf20 that I recommend my customers, colleagues, and students attend. There are many more interesting sessions across the Splunk product line and beyond (temperature scanning crowds to find the infected?). 

 

1) PLA1454C – Splunk Connect for Syslog: Extending the Platform 

Splunk Connect for Syslog is an outstanding system for onboarding syslog data into Splunk. Traditionally, Splunk uses a third-party syslog server to write data to disk, and then a Universal Forwarder to read that data and send it to Splunk. This has worked well but requires building the syslog server and understanding enough of the syslog rules to configure the data correctly.

Enter Splunk Connect for Syslog, which handles the syslog configuration, sends the data to Splunk, and for many known sourcetypes makes the onboarding process a snap. 

 

What I like best: This came from engineers looking at a problem and making things better.

 

2) PLA1154C – Advanced pipeline configurations with INGEST_EVAL and CLONE_SOURCETYPE

Eval is a powerful way to create, modify, and mask data within Splunk. Traditionally it is performed at search time. This session shows methods for using INGEST_EVAL to perform eval logic as the data is being onboarded. This helps with event enrichment, removing unwanted fields, event sampling, and many more uses.
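To give a flavor of the mechanism (this is a hypothetical sketch, not material from the session), an ingest-time eval is defined in transforms.conf and attached to a sourcetype in props.conf:

# props.conf (hypothetical sourcetype)
[my_sourcetype]
TRANSFORMS-add_len = add_event_length

# transforms.conf
[add_event_length]
INGEST_EVAL = event_length=len(_raw)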

 

What I like best: INGEST_EVAL opens a world of more control in Core Splunk.

 

3) SEC1392C – Simulated Adversary Techniques Datasets for Splunk

The Splunk Security Research Team has developed test data for simulating attacks and testing defenses in Splunk. In this session, they are going to share this data and explain how to use it to improve detecting attacks.

 

What I like best: Great test data is hard to come by, much less security test data.

 

4) PLA1129A – What’s new in Splunk Cloud & Enterprise

This session shows off the newest additions to Splunk Cloud and Splunk Enterprise. Each year these sessions show the new features that have arrived either in the last year or in new versions that often coincide with Splunk .conf.

What I like best: New toys to play with.

 

5) SEC1391C – Full Speed Ahead with Risk-Based Alerting (RBA)

I've talked to several customers who wanted to use a risk-based alerting (RBA) system for their primary defenses. Traditional methods require lots of tuning to avoid flooding the security staff with too many alerts. RBA is a method to aggregate related elements and then present the findings in an easier-to-consume form.

 

What I like best: Another option on how to approach security response.

 

Bonus Sessions: You didn’t think I could really stop at five, did you?

TRU1537C – Hardened Splunk: A Crash Course in Making Splunk Environments More Secure

TRU1276C – Splunk Dashboard Journey: Past Present and Future

TRU1761C – Master joining your datasets without using join. How to build amazing reports across multiple datasets without sacrificing performance

TRU1143C – Splunk > Clara-fication: Job Inspector

 

Join us!

Our KGI team will be on board for .conf20, and we're more excited than ever to attend with you. With over 200 virtual sessions at Splunk's .conf20 event, this year is going to be BIG. With exciting updates to Splunk and grand reveals of new product features… Kinney Group is ready to help Splunkers along the way.

Keep your ears perked for some big, Splunk-related announcements coming your way from Team KGI this month…

Splunk 101: Choropleth Maps

Hey, and welcome to the video! My name is Elliot Riegner and I’m here with the Kinney Group to bring you a tutorial on Choropleth maps.

Splunk provides many visualizations to represent data. Some are suited for location-based data, such as the choropleth map, which we will be exploring today.

To get started, we’ll take a look at the data used throughout today’s demo.
This is a CSV file that I will be uploading to my Splunk instance. The first row in the file contains field names, and the remaining rows contain values.

Ingested into Splunk, this is what the CSV of employee records looks like:

source="employee_data.csv" 
| eval Name=first_name + " " + last_name
| table Name ip_address state

Choropleth maps utilize KML or KMZ (Keyhole Markup Language) files, which use latitude and longitude coordinates to map out regions.

Let’s take a look at the KML file I will be using to create our choropleth map:

| inputlookup geo_us_states

Here we see a correlating field of state, and note the coordinates which define each state’s regions.
Let’s take a closer look at what the choropleth visualization is all about.
Notice that the count for each state is set to 0, causing all states to display the same highlighted color.

Now, let's dive deeper into the employee CSV data to create our query.

source="employee_data.csv" 
| stats count by state

Note that all states now have a count. We will use this data to populate our choropleth map.
In order to do so, we will use the geom command to correlate the KML file's featureId field (which contains the states) with the state field found in the employee CSV data.
As you can see, each state has a count of the number of employees residing within it, as well as the coordinates used to map each state's boundaries.

source="employee_data.csv" 
| stats count by state
| geom geo_us_states featureIdField=state

While Splunk's default formatting can be great for some datasets, let's create custom values to use in our map's key.
Using a case statement, we are able to pass multiple condition and value pairs.

source="employee_data.csv" 
| stats count by state
| eval count = case(count<10, "Less than 10", count>10 AND count<30, "10-30", count>30 AND count<60, "30-60", count>60 AND count<100, "60-100", count>100, "Over 100")
| geom geo_us_states featureIdField=state

Finally, let's take care of that null value and set it to something more user-friendly.

source="employee_data.csv" 
| stats count by state
| eval count = case(count<10, "Less than 10", count>10 AND count<30, "10-30", count>30 AND count<60, "30-60", count>60 AND count<100, "60-100", count>100, "Over 100")
| fillnull value="No Employees"
| geom geo_us_states featureIdField=state

As you can see, we now have a fully populated map visualizing the states in which employees reside. Thank you so much for joining me in yet another Splunk tutorial!

Meet our Expert Team

Be on the lookout for more Splunk tutorials! My team, the Tech Ops team, runs our Expertise on Demand service, which I'll touch on a little more below. Our EOD team is responsible for knowing everything and anything around Splunk best practice… that's why you'll get access to a ton of video and written content from our team. EOD is designed to answer your team's daily questions and break through stubborn roadblocks. Let us know below how we can help.

Splunk Search Command Series: Rare

 

Remember when we talked about the TOP command? Well, it turns out there is a command that works exactly the same way, but gives you results for the fewest occurrences in your data.

It is called RARE. Where TOP provides you with the most common values in your data, rare shows you the values that occur least often.

More About Rare

Finding those least common values is something we can accomplish with the search below:

index=main | stats count AS count BY user | sort count | head 10

Again, an easy search, but we can make it even easier:

index=main | rare limit=10 user

Wango Bango! Same results, less…search.

 

How to Use Rare

Let’s explore the syntax: 

|rare <options> field <by-clause> 

Options –  

  • limit = the maximum number of results to return 
  • showperc = whether to show the percent field for each value 

Field = the field you want to find the least common values of 

By-clause = one or more fields to group the results by 
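For example, here's a quick sketch that combines the options with a by-clause (the index and field names are placeholders):

index=web | rare limit=5 showperc=true status by host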

And there you have it. The rare command makes for an easier search… and it's an important one to have in your toolkit.

Ask the Experts

Our Splunk Search Command Series is created by our Expertise on Demand (EOD) experts. Every day, our team of Splunk certified professionals works with customers through Splunk troubleshooting support, including Splunk search command best practices. If you're interested in learning more about our EOD service or want to chat with our team of experts, fill out the form below!

Splunk 101: Creating Pivots

 

Hello, Josh here, to walk you through another quick Splunk tutorial that will save you time and give your team a tool that everyone can use. In this video tutorial, I’ll discuss the importance of using the Pivot function of Splunk. Who wants to make Splunk easier… for just about any person? Pivots are the perfect way to get a non-Splunker started on pulling visualizations from your data. Here are some takeaways from the video when you’re using pivot in Splunk…

Key Takeaways from Creating Pivots in Splunk

When working with a Pivot in Splunk and ensuring you get the right visualizations… it all starts with your data models…

  • In simple terms, a Pivot is a dashboard panel. Every Pivot relies on your Data Models.
  • Pivots exist to make Splunk easy – any user, whether they have existing search language knowledge or not, can utilize the Pivot function.
  • Be careful not to delete or edit your data models while building your pivots (unless you know what you’re doing?).
  • The Pivot function has a drag and drop UI that makes creating dashboards simple.
  • You can manipulate the visualizations around your data – test out which chart looks best with your data!
  • There are limitations to the click and drag functionality of Splunk Pivot visualizations… all dependent on the limitations of your data set.

You may have read a few of my Splunk Search Command Series blogs; both I and our engineers here at Kinney Group produce weekly content around Splunk best practices. My team, the Tech Ops team, runs our Expertise on Demand service, which I'll touch on a little more below. Our EOD team is responsible for knowing everything and anything around Splunk best practice… that's why you'll get access to a ton of video and written content from our team.

Meet our Expert Team

If you're a Splunker, or work with Splunkers, you probably have a full plate. Finding the value in Splunk comes from the big projects and the small day-to-day optimizations of your environment. Cue Expertise on Demand, a service that can help with those Splunk issues and improvements to scale. EOD is designed to answer your team's daily questions and break through stubborn roadblocks. We have the team here to support you. Let us know below how we can help.

Case Study: Improved Visualization and Dashboarding Success with Splunk for Judicial Entity

With a mission to provide justice through systems and operations, visualizing data across all teams is essential to this judicial court entity. Comprising over 100 locations across the state, this customer needs to process massive quantities of information, fast. The customer's network team set out on a mission to modernize their entire system in order to process information from each location with speed and precision. While this SLED customer works fast, the group has to coordinate with other teams through heatmap and ticketing processes that can slow down their system. With years of data sitting in their systems, maintaining and restoring historical data is a top priority.

In order to advance their data analytics capabilities, this customer purchased and implemented Splunk. After moving off of their legacy, open-source system, this customer knew they would be pressed to prove the value in the new Splunk platform. With little to no official Splunk training, this team needed a guide to show them the power of visualization and dashboarding in Splunk.

Challenges

  1. Following a new Splunk implementation, the customer needed to see results with Splunk fast. After moving over from their legacy system, this customer needed to ensure that they were accurately recording all historical data and moving it forward.
  2. Working with over 100 locations across the state, this customer needed the ability to process their data and display information across all sites with consistent dashboards and visualizations.
  3. With a small team of engineers who also had to work with multiple other teams at their site, the customer had to ramp their engineers up on the Splunk platform.

Solutions

  1. Our Expertise on Demand team quickly jumped in to show the value in the Splunk platform. After multiple resolved issues with ticketing and dashboarding, the judicial entity was able to adopt the Splunk platform across multiple teams.
  2. Through monthly “Lunch and Learns” and best practice knowledge transfer, our EOD team was able to educate this five-person Splunk team to a higher level of success in understanding and utilizing the platform.
  3. The customer was able to gain significant ground on searching, reporting, and dashboarding capabilities.

Business Impacts

Taking on an investment in Splunk can be challenging. With added pressure to see instant results with the platform, our customer was placed under the spotlight to start utilizing the platform and to drive adoption of Splunk across multiple teams. With dashboarding and visualization support from our Expertise on Demand service offering, the customer did just that. Through advanced visualization methods in Splunk, the customer can now actively monitor and report metrics across their 100+ sites on all current and historical data.

4 Reasons an Accountant Should Use Splunk

Splunk for Accountants Use Case

I'll admit, I am not very technical. I mean, I am in the field of Accounting, after all. I can get around Excel spreadsheets pretty well, I can read some code to get a sense of what it's trying to do, and I'm good at breaking things when I think I've figured it out. That's the gist of it for me, though, and I imagine there are many others out there like me in the Accounting world.

As we all know, data mining is time-consuming and can be a little overwhelming. There is SO much data that it's hard to figure out which data to use, and which to lose, in order to help our leadership team(s) make data-driven decisions.

Insert Splunk… it's something I knew nothing about when I first came to Kinney Group. Working with our engineers, I see the vast pool of Splunk use cases for every department in our company. There is so much more we can do to help our leaders make data-driven decisions with accuracy and in real time.

 

You may be asking… “What could a Finance and Accounting team possibly use with Splunk?”

 

Well, in not-so-technical terms, it's simple: if you have data in Excel or a CSV file, it can be Splunk'd. Anything. No joke.

In most accounting systems, some of the financial setup leaves… a lot to be desired. As accountants, we want to push our systems beyond what they can do. You can pull hundreds of reports, dump massive amounts of data into Excel, and then sift through it all, knowing most of that data won't get you the information or graphs you need.

With Splunk, we get visuals of all that data, summarized on colorful dashboards that allow us to see trends and help us make decisions. If you're looking to automate some of your recurring reports, you don't have to create a complicated macro in Excel… Splunk it!

Now you're asking, "What else can I do with Splunk?" Glad you asked. In my day-to-day as an accountant, here's how I utilize Splunk to make my life easier.

 

4 Ways Splunk Makes My Life Easier

 

Utilization Tracking

We're a professional services organization; in our world, tracking billable utilization, project hours, burn rates, and bonus plan targets is essential.

Tracking engineer utilization within Splunk allows us to watch our burn rates on projects… and the potential burnout of our engineers. Although we have a team of go-getter engineers, it's important to keep an eye on the hours (and extra hours) they're working!

 

Time Tracking

As any accountant knows, timesheet tracking is a constant battle. Splunk helps hold managers and colleagues accountable to submitting their time ON time. In fact, we just installed a Splunk alert that notifies colleagues if they have a timesheet missing before the deadline!

We even have Splunk set up to monitor our colleagues' progress and trends in submitting time. It's great information for managers, and also great for sending a little extra motivation to colleagues who notoriously submit their time late.

 

Schedule Tracking

In our organization, it's crucial to see the availability of engineers and when they're available for a project. Balancing existing projects, training, education, internal efforts, and of course personal time off can be a lot. As a resource-driven organization, we set a high standard for planning our colleagues' time and capacity.

And with time segmented correctly, our forecast becomes stronger. We can now forecast accurate revenue numbers and bookings months out.

 

Project Tracking

I'm not a project manager, but working in our systems, with projects so closely tied to finances, project tracking is crucial. Splunk goes hand in hand with our availability matrix, giving us visibility to track whether a project is on schedule, behind, or ahead of schedule.

And then there’s the immediate, real-time visibility into the hours/billings on any given project. Cut the manual spreadsheets and let your data speak to you through Splunk.

 

Splunk Made Easy

If you can dream it (and communicate it), then it can be achieved through Splunk's amazing dashboards. The sky truly is the limit with the right data and engineers in Splunk. If you know your company is using Splunk… but you're unsure how to get it adopted in your department, reach out! Our Expertise on Demand service is designed to do just that: spread adoption of the Splunk platform across your organization.