Indy Splunkers Unite With the Messaging Tool Slack

When you find yourself stuck with a stubborn data issue, just digitally tap a fellow Splunker and share technical tips. This is possible through the Splunk User Group Slack domain.

How to Access the Splunk Slack User Group

To receive access, complete this short Google Form. Be sure to join the #indianapolis channel once you’ve been accepted.

How to Use the CIM to Normalize Splunk Data

In previous blogs “Dude, Where’s My Data” (Part One and Part Two), we focused on the essential steps of onboarding your data into Splunk. But if those guidelines didn’t populate the data into the dashboards properly, you may need to explore Common Information Model (CIM) compliance to normalize that data and make it functional. 

In this post, we’ll walk you through what CIM is and how CIM compliance works so you can get usable data into your dashboards in no time.

How to Use Splunk Field Extractions and Rex and Erex Commands

Getting data into Splunk is hard enough. After uploading a CSV, monitoring a log file, or forwarding data for indexing, more often than not, the data does not look the way you’d expect it to. The large blocks of unseparated data produced on ingestion are hard to read and cannot be searched effectively. If the data is not already separated into events, doing so may seem like an uphill battle.

You may be wondering how to parse and perform advanced search commands using fields. This is where field extraction comes in handy.

What is a field extraction?

A field extraction enables you to extract additional fields out of your data sources. This enables you to gain more insights from your data so you and other stakeholders can use it to make informed decisions about the business.

Field Extraction via the GUI

Field extractions in Splunk are both the process and the result of extracting fields from your event data, for default and custom fields alike. Field extractions allow you to organize your data in a way that lets you see the results you’re looking for.

How to Perform a Field Extraction [Example]

Figure 1 – Extracting searchable fields via Splunk Web

 

Pictured above is one of Splunk’s solutions to extracting searchable fields out of your data via Splunk Web. 

Step 1: Within the Search and Reporting app, you will see this button available upon search. After clicking it, a sample of the file is presented so you can select an event and define fields from the data. The image below demonstrates Splunk’s Field Extractor in the GUI after selecting an event from the sample data.

 

Figure 2 – Sample file in Splunk’s Field Extractor in the GUI

Step 2: From here, you have two options: use a regular expression to separate patterns in your event data into fields, or separate fields by a delimiter. Delimiters are characters used to separate values, such as commas, pipes, tabs, and colons.

Figure 3 – Regular expressions vs delimiter in Splunk

 

Figure 4 – Delimiter in Splunk’s Field Extractor

Step 3: If you have selected a delimiter to separate your fields, Splunk will automatically create a tabular view so you can compare what all properly parsed events would look like against the _raw data pictured above.

Step 4: You can choose to rename all fields parsed by the selected delimiter. After saving, you will be able to search on these fields, perform mathematical operations, and run advanced SPL commands.
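
For instance, assuming hypothetical extracted fields named status and response_time from web logs, a search like this becomes possible:

```spl
sourcetype=web_logs
| stats avg(response_time) AS avg_response by status
| sort - avg_response
```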

What’s Next? Rex and Erex Commands 

What are Rex and Erex Commands?

After extracting fields, you may find that some fields contain specific data you would like to manipulate, use for calculations, or display by themselves. You can use the Rex and Erex commands to do this.

What is the Rex command?

The rex command can be used to create a new field out of any existing field you have previously defined. This new field will appear in the fields sidebar of the Search and Reporting app and can be used like any other extracted field.

Rex Command Syntax

| rex [field=<field>] "<regex-expression>"

In order to define what your new field name will be called in Splunk, use the following syntax:

| rex [field=<field>] "(?<field_name><regex>)"
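
As a sketch, assuming events containing text like user=jsmith (the field name and pattern here are hypothetical), a named capture group extracts the field and makes it immediately usable in the same search:

```spl
| rex field=_raw "user=(?<username>\w+)"
| stats count by username
```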

What is the Erex Command?

The erex command allows users to generate regular expressions. Unlike Splunk’s rex and regex commands, erex does not require knowledge of Regex, and instead allows a user to define examples and counter-examples of the data that needs to be matched.

Erex Command Syntax

| erex <field_name> examples="<example1>,<example2>" counterexamples="<example1>,<example2>"

Erex Command Syntax Example

| erex Port_Used examples="Port 8000, Port 3182"
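
One practical note: after erex runs, Splunk reports the regular expression it generated (visible in the search job’s details), and you can paste that expression into a rex command, which is more efficient than rerunning erex. A hypothetical variation that adds a counterexample to rule out false matches:

```spl
| erex Port_Used examples="Port 8000, Port 3182" counterexamples="Error 8000"
```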

Start Using Field Extractions, Rex, and Erex Commands

A ton of incredible work can be done with your data in Splunk including extracting and manipulating fields in your data. But, you don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate. 

Cue Expertise on Demand, a service that can help with those Splunk issues and improvements at scale. EOD is designed to answer your team’s daily questions and break through stubborn roadblocks. Book a free consultation today; our team of experts is ready to help.

The Comprehensive Intro to Splunk Knowledge Objects

What is a knowledge object in Splunk?

Splunk knowledge objects are user-defined searches, fields, and reports that enrich your data and give it structure. Basically, if you’re using Splunk, you’re using one very large knowledge object. You can share knowledge objects with other Splunk users and use tags, event types, reports, and alerts to organize and maintain your data.

There are several types of knowledge objects. Together these knowledge objects make up apps, and knowledge objects that service apps are called add-ons.

What are all of the Splunk Knowledge Objects?

  • Saved Searches: searches saved with their SPL so they can be rerun or scheduled.
  • Event Types: categories of events that match a saved search string.
  • Tags: labels applied to field/value pairs to group related data.
  • Field Extractions: rules that pull searchable fields out of raw event data.
  • Lookups: mappings that enrich events with fields from external sources such as CSV files.
  • Reports: saved searches whose results are presented as tables or visualizations.
  • Alerts: saved searches that trigger actions when their conditions are met.
  • Data Models: hierarchical structures of datasets that power Pivot and accelerated searches.
  • Workflow Actions: links or actions launched from a field value to interact with external resources or other searches.
  • Fields: searchable name/value pairs in your event data.

The Basics of Splunk Knowledge Objects 

A knowledge object could be a piece of a search or a piece of data being ingested. It could also just be a group of data. When defining your knowledge objects, ask yourself, “What do I want Splunk to show me?”

To better identify your data, use field extractions to pull fields from the data coming in. Let’s use the example of the identifier “transaction ID.” You want to see all information relevant to the transaction ID extracted. To start, you create a field extraction around the knowledge object, transaction ID, and now you can search for that specific set of information.
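
As a sketch, assuming your events contain text like txn_id=48151623 (the pattern and value here are hypothetical), the extraction and search might look like:

```spl
| rex field=_raw "txn_id=(?<transaction_id>\d+)"
| search transaction_id=48151623
```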

Knowledge objects exist throughout your deployment: on indexers, on search heads, in saved searches, and in any other user-defined data within your Splunk instance. You can reference a knowledge object any time you want to narrow your data down to a specific point of interest.

What are Knowledge Object tags?

Tags can help you centralize the naming conventions behind your data and knowledge objects. Below is an example of how this works.

How Splunk Knowledge Objects Work

For this example, we’ll use the transaction ID. In this scenario, you have multiple streams of data coming in.

Step 1: The transaction ID comes in through the firewall, hits the web server, goes into the database, and then transfers back through. 

Step 2: If you have a transaction ID throughout that stream, you can tag each knowledge object at each index point. 

Because your firewall is the one sending data in, you’ll want to tag your transaction ID within that point. 

On your web server, you can tag that knowledge object with the same transaction ID. 

The same goes for your database. 

Step 3: Now, when you search on that transaction ID, Splunk will pull up that transaction ID for all of those data inputs.
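
Assuming the transaction ID fields were tagged with a hypothetical tag named transaction at each index point, the Step 3 search could look like:

```spl
tag=transaction 48151623
| stats count by index, sourcetype
```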

Note: To maintain a common naming convention, tag your data early on.


Figure 1 – Sample tagging in Splunk

Technical Add-ons

Finally, let’s look at technical add-ons and how they apply tagging to knowledge objects. When you have a known data type coming in, you can implement a technical add-on. This add-on will take the data, ingest it, and apply known rules to it.

We can look at firewalls as an example. If you have a known firewall vendor, you can apply the technical add-on for that vendor. By adding a technical add-on, your data becomes CIM compliant. The technical add-ons take the data coming in from the firewall, tag it, and apply field aliases.

The technical add-ons use event data and group the data sets by common terms instead of vendor-specific terms. How is this helpful? Searching by common terms allows for easier communication across teams. This common terminology, defined by the Common Information Model (CIM), helps with communication across vendors and teams.
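
As a minimal sketch of what a technical add-on does under the hood (the sourcetype and field names here are hypothetical), a field alias in props.conf maps a vendor-specific field name onto its CIM-standard equivalent:

```ini
# props.conf (hypothetical firewall add-on)
[acme:firewall]
# Map the vendor's field names to CIM-standard names
FIELDALIAS-cim_src = srcip AS src
FIELDALIAS-cim_dest = dstip AS dest
```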

If you found this helpful…

You don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate.

Cue Atlas Assessment: a customized report to show you where your Splunk environment is excelling and opportunities for improvement. Once you download the app, you’ll get your report in just 30 minutes.

Splunk Event Types and Tags: How to Create & Use Them

 

In this tutorial, I’ll discuss the importance of creating event types and tags in Splunk. Creating event types and tags may seem simple, but taking the steps to categorize your data early on is crucial when building your data model. Here’s a guide for using event types and tags in Splunk.

Tags allow us to search across different data sources for specific types of events.

How to Create and Use Event Types and Tags in Splunk

  • Utilize event types and tags to categorize events within your data, making it easier to search across your data collectively.
  • Match your tag names to your actions. For example, if your field/value pair is action=purchase, your tag name would be purchase. You can create custom values if there is a specific type of information you want to see.
  • Within Enterprise Security, many dashboards and searches run off information that is pushed to the data models. Not only should that information be CIM compliant, but it also needs event types and tags, because the data models look for those specific types of information.
  • Within a data model, there can be many different types of events: logins, logoffs, timeouts, lockouts, etc. Use tags and event types to categorize these events, rather than relying only on default fields such as index and sourcetype, so your data models can quickly identify the data you’re looking for.
  • Tagging and naming with event types is an essential step BEFORE you start building your data models, and it will save you time in the long run.
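
The points above can be sketched in configuration terms. Assuming a hypothetical web sourcetype, an event type and its matching tag are defined in eventtypes.conf and tags.conf:

```ini
# eventtypes.conf (hypothetical web data)
[purchase]
search = sourcetype=access_combined action=purchase

# tags.conf
[eventtype=purchase]
purchase = enabled
```

A search such as tag=purchase will then return matching events from any data source carrying that tag.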


The “Magic 8” Configurations You Need in Splunk

 

When working in Splunk, you can earn major magician status with all of the magic tricks you can do with your data. Every magician needs to prepare for their tricks… and in the case of Splunk, that preparation comes through data onboarding. That’s where the Magic 8 props.conf configurations come in to help you set up for your big “abracadabra” moment.

Splunk Choropleth Maps: A Guided Tutorial [+Video]

Splunk provides many visualizations to represent data. One of the most popular is the choropleth map, which is best suited for location data.

What is a choropleth map?

A choropleth map is a type of map that uses colors, shades, and symbols to display the average values of specific data in a geographic location. Choropleth maps in Splunk utilize KML (Keyhole Markup Language) or compressed KMZ files, which use latitude and longitude coordinates to map out regions. You can create choropleth maps in Splunk using the choropleth visualization.

To get started, press play on the video to follow along with the written instructions.

How to Create a Choropleth Map

1. Choose your data.

I’m using a CSV file that I will be uploading to my Splunk instance. The first row in the file contains field names, and the remaining rows contain values.

This is what the CSV of Employee Records looks like when it’s ingested to Splunk:

source="employee_data.csv" 
| eval Name=first_name + " " + last_name
| table Name ip_address state

2. Select the KML file for the choropleth map.

Let’s take a look at the KML file I will be using to create our choropleth map:

| inputlookup geo_us_states

Here we see a correlating field of state, and note the coordinates which define each state’s regions.

3. Select the choropleth visualization.

Next, let’s choose the choropleth visualization. Notice that the count for each state is set to 0, causing all states to display the same highlighted color. You’ll want it this way for now as we head into the next step.

4. Create the query.

Now, let’s dive deeper into the employee CSV data to create our query:

source="employee_data.csv" 
| stats count by state

Note that all states now have a count. We will use this data to populate our choropleth map.

5. Correlate the KML file’s featureId field.

In order to populate the data into the choropleth map, we will use the geom command to correlate the KML file’s featureId field, which contains the state names, with the state field found within the employee CSV data.

As you can see, each state has a count of the number of employees residing within it, as well as the coordinates used to map each state’s boundaries:

source="employee_data.csv" 
| stats count by state
| geom geo_us_states featureIdField=state

6. Create custom values.

While Splunk’s default formatting can be great for some datasets, let’s create custom values to use in our key and sort by on the map.

Using case statements, we can pass multiple condition and value pairs.

source="employee_data.csv" 
| stats count by state
| eval count = case(count<10, "Less than 10", count>=10 AND count<30, "10-30", count>=30 AND count<60, "30-60", count>=60 AND count<100, "60-100", count>=100, "Over 100")
| geom geo_us_states featureIdField=state

7. Reset the null value.

Finally, let’s take care of that null value and set it to something more user-friendly:

source="employee_data.csv" 
| stats count by state
| eval count = case(count<10, "Less than 10", count>=10 AND count<30, "10-30", count>=30 AND count<60, "30-60", count>=60 AND count<100, "60-100", count>=100, "Over 100")
| fillnull value="No Employees"
| geom geo_us_states featureIdField=state

As you can see, we now have a fully populated map visualizing the states in which employees reside.


How to Manage Splunk Apps & Users

It’s not realistic for you or your engineering team to be the only group responsible for the successful deployment of your Splunk environment. Splunk offers several levels of permissions that grant access to the stakeholders you’ll want to add. This way, you don’t have to worry about the power of Splunk getting into inexperienced hands.

In this article, we’ll show you how to manage your Splunk apps and users as well as guide you on the deployment, configuration, and authentication processes. Let’s get started.

What is a Splunk deployer?

The first step to administering Splunk apps and users is to use the Splunk deployer. According to Splunk, a deployer is a Splunk Enterprise instance that you use to distribute apps and other configuration updates to search head cluster members.

How the Splunk Deployer Works

  1. A Splunk admin executes a command to apply a new or updated configuration bundle, or a search head cluster member joins the cluster.
  2. The search head cluster member checks with the deployer for available updates.

Deploying new or updated apps has its own set of rules and functions a bit differently.

  1. Create an app by going to Apps > Manage Apps > Create App
  2. Copy the app directory to the deployer’s $SPLUNK_HOME/etc/shcluster/apps directory
  3. Deploy the configuration bundle with the splunk apply shcluster-bundle command
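
Step 3 is typically run from the deployer’s command line, targeting any one search head cluster member (the hostname and credentials below are hypothetical):

```
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```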

Configuring and Authenticating Splunk Roles and Users

Giving Splunk access to various users in your organization is relatively straightforward. If you have a smaller team of users, you can use the native authentication controls in Splunk, but for larger teams and companies, you’ll find it helpful to use Security Assertion Markup Language (SAML) or Lightweight Directory Access Protocol (LDAP) authentication. We’ll go over each of these methods in this section.

Types of Splunk authentication

Native Splunk Authentication

To access the authentication settings in Splunk, navigate to Settings > Access controls. From here, you can create a new user and assign their permissions. The most common roles you’ll use are:

  • Admin: All permissions are included by default except can_delete which can be added manually.
  • Power: Ability to schedule searches.
  • User: The basic search permissions.
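
Roles can also be defined directly in configuration. A minimal sketch of a custom role in authorize.conf, assuming hypothetical index names:

```ini
# authorize.conf (hypothetical custom role)
[role_analyst]
# Inherit the basic search permissions
importRoles = user
# Allow scheduled searches, as the power role does
schedule_search = enabled
# Restrict which indexes the role can search
srchIndexesAllowed = web;security
```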

SAML authentication

SAML authentication allows you to use single sign-on (SSO) supported by information from your identity provider (IdP). To configure SAML, navigate to Settings > Access controls > Authentication method and select SAML. From here, you’ll want to work with the person responsible for SAML within your organization to retrieve the correct configuration settings.

LDAP authentication

To authenticate users in the Splunk Cloud Platform, you’ll want to use the LDAP scheme. Before entering the settings for LDAP, Splunk recommends that you complete these three steps first:

  1. Create an LDAP strategy
  2. Map LDAP groups to Splunk roles
  3. Specify the connection order of LDAP servers (if you have multiple servers)

Once you have this completed, navigate to settings > access controls > authentication method and choose LDAP. Just like when setting up SAML, you’ll need to work with your LDAP admin for the correct settings and bind DN password.
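
Under the hood, these settings land in authentication.conf. A minimal sketch with hypothetical hostnames and DNs (your LDAP admin supplies the real values):

```ini
# authentication.conf (hypothetical LDAP strategy)
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 389
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
userBaseDN = ou=people,dc=example,dc=com
groupBaseDN = ou=groups,dc=example,dc=com
userNameAttribute = uid
realNameAttribute = cn
groupMemberAttribute = member

# Map LDAP groups to Splunk roles
[roleMap_corp_ldap]
admin = splunk-admins
user = splunk-users
```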

These are the basics of managing Splunk apps and users. With this knowledge under your belt, you can begin onboarding your team and stakeholders to your Splunk environment.

