The Beginner’s Guide to Splunk Calculated Fields and Aliases

A user-friendly search and analytics experience is critical to improving the usability of your data in Splunk. By creating calculated fields in Splunk, users can query new fields with or without altering the original field. Calculated fields can:

  • Correct an original field name that is truncated, misspelled, or abbreviated
  • Correlate or aggregate a field with a similar field from a different sourcetype
  • Better describe the data in the field
  • Create a field to filter data
  • Conform to the Common Information Model (CIM)

In this post, we’ll break down exactly what a calculated field is, how to create one, and how to create a field alias.

But first, the basics:

What is a Calculated Field in Splunk?

A calculated field is a way to perform repetitive, long, or complex calculations derived from one or more other fields. In short, calculated fields are shortcuts for eval expressions.

What is a Field Alias in Splunk?

A field alias is an alternate name that can be assigned to a field. Multiple field aliases can be created for one field. 

Field Alias vs Calculated Field

Though both are search-time operations that make it easier to interact with your original data, Splunk applies field aliases before calculated fields. Because of this ordering, a field alias cannot be created for a field that was created as a calculated field. Both can override an existing field with the new field. To create either one, you can add it to the props.conf configuration file or add it from the Splunk Web GUI.

How to Create a Field Alias from Splunk Web

To create a field alias from Splunk Web, follow these steps:

  1. Locate a field within your search that you would like to alias.
  2. Select Settings > Fields.
  3. Select Field aliases > + Add New.
  4. Then, select the app that will use the field alias.
  5. Select host, source, or sourcetype to apply to the field alias and specify a name.
    1. Note: Enter a wildcard to apply the field to all hosts, sources, or sourcetypes.
  6. Enter the name for the existing field and the new alias.
    1. Note: The existing field should be on the left side, and the new alias should be on the right side.
    2. Note: Multiple field aliases can be added at one time.
  7. (Optional) Select Overwrite field values if you want the field alias to be removed when the original field does not exist or has no value, and to have its value replaced with the original field's value when the field alias already exists.
  8. Click Save.
Figure 1 – Field Alias from Splunk Web
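
Behind the scenes, Splunk Web writes the alias into props.conf as a FIELDALIAS setting. As an illustrative sketch (the sourcetype and field names here are just examples), aliasing clientip to a CIM-friendly name like src on web access logs would produce an entry like this:

[access_combined]
FIELDALIAS-cim_src = clientip AS src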

How to Create a Calculated Field from Splunk Web

To create a calculated field from Splunk Web, follow these steps:

  1. Select Settings > Fields.
  2. Select Calculated Fields > + Add New.
  3. Then, select the app that will use the calculated field.
  4. Select host, source, or sourcetype to apply to the calculated field and specify a name.
    1. Note: Enter a wildcard to apply the field to all hosts, sources, or sourcetypes.
  5. Enter the name for the resultant calculated field.
  6. Define the eval expression.
Figure 2 – Calculated Field from Splunk Web
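
For example, if your events contained a duration field in milliseconds (a hypothetical field for this sketch), you could name the calculated field duration_s and define the eval expression as:

round(duration_ms / 1000, 2)

Splunk would then store this as EVAL-duration_s = round(duration_ms / 1000, 2) in props.conf.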

However, note that when you create the field alias or calculated field in the Splunk Web GUI, the field is saved in the /etc/system/local/props.conf configuration file. If you want the configuration file to live in the app associated with the data you are defining the field for, you have to save the field in the /etc/apps/<app_name_here>/local/props.conf configuration file.


How to Create a Field Alias or Calculated Field in props.conf

To create a field alias or a calculated field in props.conf:

  1. Navigate to /etc/apps/<app_name_here>/local/props.conf
  2. Open the file using an editor
  3. Locate or create the stanza associated with the host, source, or sourcetype to apply to the field alias or calculated field.
  4. Next, add the following line to a stanza:
[<stanza>]

FIELDALIAS-<class> = <orig_field_name> AS <new_field_name>

EVAL-<field_name> = <eval_statement>
    • <stanza> can be:
      1. host::<host>, where <host> is the host for an event.
      2. source::<source>, where <source> is the source for an event.
      3. <source type>, the source type of an event.
    • Field aliases must be defined with FIELDALIAS.
      1. Note: The term is not case sensitive and the hyphen is mandatory.
      2. <orig_field_name> is the original name of the field. It is case sensitive.
      3. <new_field_name> is the alias to assign to the field. It is case sensitive.
      4. Note: AS must be between the two names and multiple field aliases can be added to the same class.
    • Calculated fields must be defined with EVAL.
      1. Note: The term is not case sensitive and the hyphen is mandatory.
      2. <field_name> is the name of the calculated field. It is case sensitive.
      3. <eval_statement> is the expression that defines the calculated field. Much like the eval search command, it can be evaluated to any value type, including multi-value, boolean, or null.
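
Putting it all together, here is a hedged example stanza (the sourcetype and field names are hypothetical) that defines one field alias and two calculated fields:

[my_web_logs]

FIELDALIAS-cim = clientip AS src

EVAL-bytes_kb = round(bytes / 1024, 2)

EVAL-is_error = if(status >= 500, "true", "false")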

Creating field aliases and calculated fields helps make the data more versatile. By using both the original fields and the new fields, users can create knowledge objects that craft a visual story about what the data represents. A well-crafted data visualization can help users understand trends, patterns, and relationships. Making meaningful correlations will ultimately lead to making better decisions.

Need more Splunk Tips?

As a dedicated Splunk partner with a bench full of experts, we’ve gained valuable insights and understanding of the Splunk platform that can propel your business forward. When it comes to best practice methods, training, and solution delivery, we’ve developed service offerings that can help any organization exceed its Splunk goals. For Splunk tips like this post, check out our Expertise on Demand service offering. If you’re working on projects that require a larger scope and Splunk skills, see what our professional service offerings can deliver for you.


The Beginner’s Guide to the makemv Command in Splunk

Have you ever been stuck with a single field that needed to provide you with a little more… value? The makemv command adds that value. 

Keep reading to learn exactly what the makemv command is, its benefits, and how to use it to bring more value to your searches.


What is the makemv command in Splunk?

Makemv is a Splunk search command that splits a single field into a multivalue field. This command is useful when a single field has multiple pieces of data within it that can be better analyzed separately.

An example of a situation where you’d want to use the makemv command is when analyzing email recipients. “Recipient” is a single field that holds multiple values, but if you want to find a single value, it would require a lot of resources (and time) to search for it. Instead, you can use makemv to display multiple values of a single field as its own field.

Benefits of the makemv Command in Splunk

  • Analyze multiple values within a single field
  • Speed up your searches
  • You don’t have to create a new field to see the multiple values
  • You’re able to search using the search head without exhausting your search cores

How to Use makemv

To demonstrate how to use the makemv command, we’re going to use the following scenario:

Three people on your team received an email. These people are: Elmer, Bugs, and Yosemite. However, the email they received wasn’t from your internal team. Instead, it was a phishing email that has the potential to jeopardize the entire company. You know that when Bugs received the email, he opened it, and you suspect that the bad actors took over his account to send more spam emails to the entire company. It’s critical that you as the Splunk administrator get ahead of this issue immediately. To get started on your investigation, you need to find not only the email that was sent to these three individuals, but you need to find the emails that were sent directly to Bugs.

Let’s use the makemv command to solve this problem in this scenario.

Step 1: Start your search.

Use the search string below to start your initial search. Here, we’re telling Splunk to return all the recipients of the phishing email.

| makeresults | eval recipients="elmer@acme.com,bugs@acme.com,yosemite@acme.com"

How to use the makemv command in Splunk: Step 1

Step 2: Use the makemv command along with the delim argument to separate the values in the recipients field.

How to use the makemv command in Splunk: Step 2

Now that we have all the recipients of the email, we’re ready to look at the individual recipients as part of our investigation. Before we can search for our specific recipient, Bugs, we use the makemv command with the delim argument to separate the email addresses into their own lines within the field. This makes the data easier to work with and puts it in the traditional makemv format in our table. The delimiter here is a comma since our email data is separated by commas.

| makeresults

| eval recipients="elmer@acme.com,bugs@acme.com,yosemite@acme.com"

| makemv delim="," recipients

Step 3: Find the emails where Bugs is a recipient.

The great part about the makemv command is that you can find the emails where Bugs is a recipient rather than finding all the emails sent to the company and sorting for Bugs that way. This step in the makemv process is what makes it so efficient and valuable to have in your Splunk search repertoire.

| makeresults | eval recipients="elmer@acme.com,bugs@acme.com,yosemite@acme.com"

| makemv delim="," recipients

| search recipients="bugs@acme.com"
How to use the makemv command in Splunk: Step 3

Extract field values with regex

The makemv command can also use a regular expression to extract the field values. In this example, we’re using the tokenizer argument with a regular expression that captures the word characters before each @ sign, so each username in the recipients field becomes its own value.

| makeresults | eval recipients="elmer@acme.com,bugs@acme.com,yosemite@acme.com"
| makemv tokenizer="(\w+)@" recipients
| search recipients="bugs"


How to extract field values using the makemv command and a regular expression in Splunk
split Command vs. makemv Command

The split function in languages like Java or Python and the makemv command in Splunk are similar in that they both separate values by a delimiter. The primary difference is that split only separates the data into separate strings; it does not separate it into field values you can search on. When searching for data in Splunk, speed and ease are the name of the game, so makemv is the better choice between the two in this scenario.

split command vs makemv command in Splunk

Now that you have some basic understanding of the makemv command, try it out in your environment. Happy Splunking!

If you found this helpful…

You don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate.

Cue Atlas Assessment 30-day free trial: a customized report to show you where your Splunk environment is excelling and opportunities for improvement. You’ll get your report in just 30 minutes.

Splunk eval Command: What It Is & How To Use It

 

Where to begin with the Splunk eval search command… In its simplest form, the eval command calculates an expression and then applies the resulting value to a destination field. That can be easier said than done, though. That’s why in this post, we’ve broken down the eval command so you can understand exactly what it is, what it does, and why it’s so valuable to Splunkers everywhere.

What is the eval command in Splunk?

The eval command is a commonly used command in Splunk that calculates an expression and applies that value to a brand new destination field.

The eval command is incredibly robust and one of the most commonly used commands. However, you probably don’t know everything eval is capable of. Before we jump right in, let’s take a quick look at the syntax:

Eval Command Syntax

|eval <field> = <expression>

What does the eval command do?

Essentially, you are creating a field in Splunk where one doesn’t already exist. The primary benefit of the eval command is that it allows you to see patterns in your data by putting the data into context. That context is created through various formulas that carry out specific functions such as:

  • Mathematical functions
  • Comparison functions
  • Conversion functions
  • Multivalue functions
  • Date and Time functions
  • Text functions
  • Informational functions

Each of the categories above has its own list of functions and arguments, but in this guide, we’ll start with ways to use some basic eval commands.
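
As a quick taste before the examples below, here’s a small sketch using the conversion function tostring (the bytes field is just an example) to total a numeric field and display it with comma separators:

|stats sum(bytes) as total_bytes

|eval total_readable = tostring(total_bytes, "commas")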

Ways to Use the eval Command in Splunk

1. Use the eval command with mathematical functions

When we call a field into the eval command, we either create or manipulate that field. For example:

|eval x = 2

If “x” was not already a field in our data, then I have now created a new field and given it the value of 2. If “x” is a field within our data, then I have overwritten its values so that x is now always 2. This is the simplest way to use eval: name a field and give it a value.

Although it might seem too simple to list here, using eval to complete mathematical functions can be quite helpful when analyzing a lot of data. You can turn values into percentages and even use the stats command to add additional context to your data.

|stats count

|eval number = 10

|eval percent = (count/number)*100

2. Format time values with the eval command.

There are a couple of ways we can work with time using eval.

The first is formatting. Here’s an example: If we are bringing in a time field but it’s written in epoch time, we can convert it into a readable time format:

|eval time = strftime(<time_field>, "%Y-%m-%d %H:%M:%S")

The second is parsing a formatted time string and converting it to epoch:

|eval time = strptime(<time_field>, "%Y-%m-%d %H:%M:%S")

I know what you’re thinking, “That’s cool, but what if I need to compare my time values with a static time value to say, I don’t know, filter out events”? Great question, here’s how to do that.

3. Compare time values with eval

Using relative_time(), I can create a rolling time window:

|eval month = relative_time(now(), "-1mon")

This line will return a value that is exactly one month before now; the time period can be changed to a day, a week, 27 days, 4 years, whatever your heart desires. From here, we can use a where command to filter our results:

|eval time = strptime(<time_field>, "%Y-%m-%d %H:%M:%S")

|eval month = relative_time(now(), "-1mon")

|where time > month

Because both of these time values are in epoch, we can simply find results where time is a higher number than month, or in even simpler terms, anything more recent than one month ago.

4. Use If and Case with eval

IF and CASE are in the same vein of comparison; however, CASE allows for more arguments. Let’s take a quick look at these two:

|eval test = if(status==200, "Cool Beans", "No Bueno")

Using IF

Here’s the breakdown: when using IF, we need to pass three arguments:

  • The condition – this is usually “if something equals some value”
  • The result – if the field does equal the defined value, then test takes this value
  • The else – if the field does NOT equal the defined value, then test takes this value instead

In this case, if status equals 200, then the text would say, “Cool Beans.” If the value of status is anything other than 200, then the text reads, “No Bueno.”

Using CASE

As stated earlier, CASE will allow us to add more arguments…

 |eval test = case(like(status, "2%"), "Cool Beans", like(status, "5%"), "Yikes", like(status, "4%"), "might be broken")

As you can see, we can apply multiple conditions using case to get a more robust list of descriptions. (Eval comparisons don’t support search-style wildcards, so like() with the % wildcard handles the “starts with 2, 4, or 5” matching here.)

5. Use lower/upper with eval.

Sometimes, the text formatting in our data can be weird. Splunk says that when you search for a value it doesn’t need to be case-sensitive… but take that with a grain of salt. That doesn’t hold when comparing values from different sources. Check out this scenario…

Event Data – ID: 1234AbCD

Lookup – ID: 1234abcd

If I’m trying to use a lookup command to join these values and get them into a coherent table of information, that’s not going to happen. Why? Because the two values don’t match. Sure, the numbers and letters are the same, but the formatting is different. Splunk views that as a roadblock. Need a quick fix? Here’s one that’s super easy and barely an inconvenience.

|eval id = lower(id)

|eval id = upper(id)

Lower and upper will allow you to format a field value to make all of its letters lowercase or uppercase depending on which function you use. In this scenario, making the letters in the event data lowercase means the ID finally matches the lookup, and the lookup and indexed data can communicate correctly.
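
As a quick sketch of the fix (the lookup name and output field are hypothetical), you would normalize the case right before the lookup:

|eval id = lower(id)

|lookup asset_ids id OUTPUT owner

Now the 1234AbCD in the event data becomes 1234abcd and matches the 1234abcd in the lookup.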

Other eval functions

Lower(x)

Lower will take all the values from a field and make them lowercase

Syntax:

|eval field = lower(field)   

Upper(X)

Upper will do the same as lower but all uppercase 

Syntax:

|eval field = upper(field) 

Typeof(x)

Typeof will create a field that will tell you the data type of the field.

Syntax:

|eval type = typeof(field) 

Example: string, number 

Round(X,Y)

Round will take a numeric value and round it to the nearest defined decimal place 

Syntax

| eval field = round(field, decimal place) 

Example – round(4.56282,2) = 4.56 

Mvjoin(x,y)

This will take a multivalue field and join its values with a delimiter, making it a single value (think of it as the opposite of makemv).

Syntax:

|eval field = mvjoin(field, string)

Example: Field – number = 1 2 3 4 5

|eval number = mvjoin(number, ",")

Output = 1,2,3,4,5

Eval Command Basics

Eval is a very powerful command in Splunk that gives you insight into your data that just can’t be seen on the surface of your monitoring console dashboards. Try out the eval command on your next search and explore the possibilities for yourself.

If you found this helpful… 

You don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate.

Cue Atlas Assessment: a customized report to show you where your Splunk environment is excelling and opportunities for improvement. Once you download the app, you’ll get your report in just 30 minutes.


Every Question You Should Ask About Assets in Splunk

In Splunk Enterprise Security, asset and identity data management is essential to fully utilize the platform. An asset is a networked system in a customer organization, and an identity is a set of names that belong to or identify an individual user or user account. Having an accurate, complete list of your organization’s assets and identities is key to any security posture. Without it, you will not be able to answer basic but important questions surrounding normal activity for your organization.

Having this list will allow you to assess the criticality and legitimacy of an entity on your network.

Ask Yourself

Ask yourself each of the following questions to identify every asset needed within your organization.

Does that system belong to the organization?

Who owns that system?

Is the system owner different from the application and/or data owner?

What other systems, applications, and network segments should that system be able to communicate with?

Which applications are running on that system?

What applications are supposed to be running on that system?

Have any new applications been installed on that system recently and if so, who installed them?

Has an application recently begun communicating on a new port?

Who is supposed to have access to that system?

Who is supposed to have access to the applications on that system?

Does that user’s activity correspond to the level of access they have been granted?

Is the frequency by which that user accesses that system consistent with how often that user normally accesses it?

Have a user’s privileges recently been elevated?

If so, who elevated those privileges?

Has a system recently downloaded or uploaded a large amount of data outside of the organization?

Is the amount of traffic generated by that system consistent with the amount of traffic generated by that system on previous days and with other systems running similar or the same application?

View this as a checklist. When considering the assets and identities within your organization, these questions should help you identify the right players. Documenting the answers to these questions is important.


 

Look at Your Logs

Once the critical systems are identified, the answers to these questions will help you to monitor your assets. You can build your reporting to identify data that differs from the normal usage and activity of the systems. When you’re looking to monitor your assets, refer to these logs:

Network traffic logs

Authentication logs

Application logs

Change management logs

Endpoint protection logs

Web logs
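
Once those logs are in Splunk, a hedged starting point for spotting systems that deviate from their normal behavior might look like the search below. The index, sourcetype, and field names are assumptions; swap in the ones your environment actually uses:

index=network sourcetype=firewall_traffic

| stats sum(bytes_out) as daily_bytes by src_ip

| eventstats avg(daily_bytes) as avg_bytes stdev(daily_bytes) as stdev_bytes

| where daily_bytes > avg_bytes + (2 * stdev_bytes)

This flags any system sending noticeably more data outbound than its peers, which ties back to the upload and traffic questions above.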

A Quick Guide to Asset Management in Splunk

That’s your quick and easy guide to asset and identity management within Splunk. Sometimes, you need to ask yourself a full slate of questions to fully understand the system information around your security posture in Splunk. Kinney Group has years of experience working in Splunk and Splunk Enterprise Security. If you’re looking for help identifying or managing your assets and identities, our services can help.


Dude, Where’s My Data? (Part Three)

In Dude, Where’s My Data? (Part One), you learned how to configure your data source to send your data to Splunk. In Dude, Where’s My Data? (Part Two), you learned how to configure Splunk to receive and parse that data once it gets there. However, you still aren’t seeing your data in Splunk. You are pretty sure you configured everything correctly, but how can you tell?

Check your Splunk configuration for any errors. In these instances, there are three troubleshooting steps that I like to look at in order to ascertain what the problem could be.

They are as follows:

1. Check for typos

2. Check the permissions

3. Check the logs

Check for typos

There is always the possibility that even though the inputs look correct, there may be a typo that you originally missed. There may also be a configuration that is taking precedence over the one you just wrote. The best way to check is to use btool on the Splunk server configured to receive the data. This command-line interface (CLI) command checks the configuration files, merges the settings that apply to the same stanza heading, and returns them in order of precedence.

When looking for settings that relate to the inputs configured for a data source, this simple command can be run:

./splunk btool <conf_file_prefix> list --app=<app> --debug | grep <string>

Here, <string> is a keyword from the input you are looking for; grepping for it will help you quickly locate the settings that apply to that particular input.
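
For example, if you configured a monitor input for /var/log/secure in a hypothetical app named TA-linux-logs, you could run:

./splunk btool inputs list --app=TA-linux-logs --debug | grep secure

The --debug flag prints the file that each merged setting came from, which makes it easy to spot a competing configuration.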

Check the permissions

More often than not, the issue preventing Splunk from reading the log data is that the user running Splunk doesn’t have the correct permissions to read the file or folder where the log data is stored. This can be fixed by adding the user running Splunk to the group assigned to the file on the server that is configured to send data to Splunk. You should then make sure that the group has the ability to read the file. On a Linux host, if you wanted Splunk to read, for example, /var/log/secure/readthisfile.log, you would navigate to the /var/log/secure folder from the command line interface using the following command:

cd /var/log/secure

Once there, you would run this command:

ls -l

This will return results that look similar to the line below:

-rwxr----- creator reader   /var/log/secure/readthisfile.log

Here, creator, the user that owns the file, can read, write, and execute it; reader, the group that owns the file, can read it; and all other users cannot read, write, or execute the file.

Now, in this example, if the user running Splunk is splunk, then you can check which groups the splunk user belongs to by running either of the following commands:

id splunk

groups splunk

If the results show that the splunk user is not a member of the reader group, a user with sudo access or root can add splunk to the reader group using the following command:

sudo usermod -a -G reader splunk

Check the logs

If the Splunk platform’s internal logs are accessible from the Splunk GUI, an admin user can run the following search to check for errors or warnings:

index=_internal (log_level=error OR log_level=warn*)

As a bonus, if your firewall or proxy logs are sent to Splunk and capture the network traffic between the data source and the Splunk server configured to receive data from it, searching those logs for the IP address and/or hostname of the sending or receiving server will help you find out whether data is being blocked in transit. On a Linux host, the following commands can also tell you which ports are open:

sudo lsof -i -P -n | grep LISTEN

sudo netstat -tulpn | grep LISTEN

sudo lsof -i:22 ## see a specific port such as 22 ##

sudo nmap -sTU -O localhost  
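
For the firewall or proxy check mentioned above, a hedged example search might look like this, where the index and field names are placeholders for whatever your firewall add-on provides and 10.1.1.50 stands in for your forwarder’s IP:

index=firewall (src_ip=10.1.1.50 OR dest_ip=10.1.1.50) action=blocked

| stats count by src_ip, dest_ip, dest_port, action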

Data Dreams Do Come True!

One, Two, Three strikes… and you’re out of problems with Splunk. Ha, if only these three blog posts could fix all of your Splunk issues, but we hope it helps. If you’re still having Splunk data configuration issues, or have any other troubleshooting needs, see how our Splunk services can help!


Dude, Where’s My Data? (Part Two)

In Dude, Where’s My Data – Part One, we covered how to identify the right data to bring in. Now, let’s look at how you can ensure you’re getting that data into Splunk in the right way.

One of the easiest ways to ensure that your data is coming in correctly is to create a Technical Add-on (TA) for each data source you are sending to Splunk. By putting all the settings in one central location, you, your team, and any support representative can quickly create, read, update or delete configurations that tell Splunk how to process your data. This information can include:

  • Field extractions
  • Lookups
  • Dashboards
  • Eventtypes
  • Tags
  • Inputs (where the data is coming from)
  • And who has access to view this data

Technical Add-ons are the lifeblood of any well-tuned Splunk environment and can mean the difference between spending hours and spending minutes troubleshooting simple problems.
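
As an illustrative sketch (the app and file names are hypothetical), a simple TA for one data source might be laid out like this:

TA-acme-firewall/
    default/
        app.conf
        inputs.conf
        props.conf
        transforms.conf
        eventtypes.conf
        tags.conf
    lookups/
        acme_vendor_codes.csv
    metadata/
        default.meta

Everything Splunk needs to collect, parse, and enrich that data source lives in one folder that you can version, review, and deploy as a unit.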


Getting the Data In

There are several ways to bring data in, including uploading a log file from the Splunk Web GUI, configuring a Universal Forwarder (UF) using the CLI, or modifying the configuration files directly. Customers often don’t realize that using more than one of these methods can cause configurations to be stored in several places. You can find these configurations commonly stored in the following folders:

  • /etc/system/local
  • /etc/apps/<appname>/local
  • /etc/users/<username>/<appname>/local

Having configuration files stored in that many places can make it difficult to determine which configurations take precedence. By storing configuration files related to a single data source in one central location, there is no need to wonder which configuration is the one that is active. It also allows you to quickly expand your architecture by sharing your TA with other Splunk servers in your deployment.
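
If you do end up with settings scattered across those folders, the btool command covered in Part Three can show you which copy wins; for example (the monitor path is a placeholder):

./splunk btool inputs list --debug | grep <your_monitor_path>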

Call the Experts

That closes up our two-part walk-through on getting data into Splunk the right way. Now let’s get these Splunk roadblocks removed. Check out  Kinney Group’s service offerings to find the specialized work we can deliver for you.



Dude, Where’s My Data? (Part One)

Trying to find the proverbial “needle in a haystack” can be overwhelming when it comes to getting data into Splunk. Customers are often in such a hurry to provide value from their new Splunk deployment that they start bringing in all their data at once. This can lead to data uncertainty. How do you find what is truly important when all of it seems important? It’s as if your organization is having an existential crisis. So, what do you do?

1. Identify your use cases

Here are some questions and common use case areas you’ll need to get answered to kick things off…

Save time

  • Where are your employees spending most of their time?
  • What reports do they have to create manually every month?
  • What can be automated using Splunk?

Find the blind spots

  • Where are your organizational blind spots?
  • Do you know which servers are experiencing the most activity?
  • Are the most active servers the ones you thought they would be?

Clarity on systems

  • Are you planning for a major expansion or system adoption?
  • Do you have enough resources to accommodate the number of users?
  • Is access limited to only those users who need it?
  • Do we have an effective means of capacity planning?

Look at the ROI

  • Can we cut costs?
  • Which systems are over- or undersized?
  • Do we need more bandwidth?

These and other questions are a good place to start to help you categorize your data needs quickly. Though you will probably not identify all your use cases at once, you will most likely uncover the most pressing issues on the first pass.

2. Prioritize your use cases

Once you have identified the questions you would like to answer, you must arrange your use cases into categories based on their priority. The easiest grouping is:

  • Needs
  • Wants
  • Luxuries

These categories will help you segment the use cases into tasks that you should focus on immediately. Needs are things that will benefit the largest group of people and/or will potentially save your organization money in the long run. The needs are really what brings value to the way the business is run. Wants are things that will make a subset of users very happy to have, but they could continue to function, albeit not as efficiently, if they had to wait a little while longer. Luxuries are cool to have, but probably satisfy a very specific niche request.

 


 

3. Identify your data sources

Once you have identified and prioritized the questions you would like to answer, you must identify which data will help you answer those questions. Make sure to consider which data sources will help you satisfy several use cases at once. This will help you correctly determine the size of your daily license and make sure you only focus on the data sources you need to address the needs and wants of your organization.

4. Identify your heaviest users

By creating a list of people who need access to each data source, you can correctly determine how large an environment is needed to support all the data sources you plan to bring in. It also helps when determining each user’s level of access. If a data source is widely popular, it may behoove you to create a dashboard and/or report to quickly disseminate important information that the users may need. It will also help size expansion of the environment.

By taking these four steps, users will not only feel like their needs are being heard, they will also feel empowered to identify further use cases for future expansion. It will free up their time to focus on more complicated tasks and can mean the difference between them being proactive as opposed to reactionary. By taking the organization’s greatest needs into account, it can mean the difference between users adopting a Splunk implementation as their own and it being discarded as just another tool.

What’s Next?

Stay tuned for Part Two to learn more about how to get your data into Splunk in the right way. Until then, take a look at Kinney Group’s specialized service offerings. Whether you need help cleaning up your existing data or getting data into your environment correctly, Kinney Group has the expert Splunk consultants to help you with just that.

