Know Your Knowledge Objects in Splunk

Are you having trouble making data from different sources useful? Often, our customers have many data sources coming into Splunk but don't know how to get the full benefit from that data. Why is this such a consistent issue for Splunk customers? In our experience, it mostly comes down to unfamiliarity with knowledge objects in Splunk.

The Sales Scenario

In this scenario, let's look at how you can connect alerting to sales performance. As a sales leader, you want your data to alert on and track the following criteria:

  • Sales performance on products
  • Heavy increase/decrease in sales of a product
  • Track/trend data history including item # references, location, customer name
  • A customer has chosen a competitor’s product

First things first: where can we find this data? In this scenario, start with your billing system; in our experience, most of this information lives there, since developers already rely on it to troubleshoot issues. Now, you could submit a request to IT… but we know that process is lengthy.

It's likely you already have the data, but it's spread across multiple log files, databases, and flat files. Collecting that volume of unstructured data and meshing it together can feel like an impossible task. Thankfully, Splunk knowledge objects can save the day. They complement the processes your data warehouse team has put in place and fill the real-time data gaps that exist in many organizations.

Look Back at Old Data

Let's lay out some steps to operationalize your data. Once you've located your data, here's how to build reports, generate alerts, and set thresholds on the things you need to be watching.

Check out these sources:

Log_file_1: Captures summary order information: Account_Number, Order_Number, and Total_Order_Amount.

Log_file_2: Captures Order Detail Activity: Order_Number, Product_ID, Status (cancel, activate, suspend), Location, Item_Amount.

Table_1: Has customer information: Account_Number, Customer_Name, Customer_Contact, Customer_Phone, Customer_Location, etc.

You'll want to use Splunk DB Connect to pull a list of active customers and get it ready to enrich your logs. You'll run a quick DB Connect SQL job, on a schedule, to pull back the fields you want. The best part about Splunk knowledge objects is that they can be scheduled to run as needed. If I need to pull the list once a day, I can; if I need to pull it every 15 minutes… done.
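If it helps to see what that scheduled pull can look like, here's a minimal sketch using DB Connect's dbxquery command. The connection name, SQL, and lookup file name are hypothetical placeholders for your own billing system:

| dbxquery connection="billing_db" query="SELECT Account_Number, Customer_Name, Customer_Contact, Customer_Phone, Customer_Location FROM customers WHERE status = 'active'"
| outputlookup active_customers.csv

Save that as a report scheduled once a day (or every 15 minutes) and the active_customers.csv lookup stays current without any manual effort.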

To make the data useful, check out these steps…

  1. Sync up the Account_Number in your DB Connect pull with the Account_Number in Log_file_1. In other words, create a lookup from the DB Connect pull and match on Account_Number in Log_file_1.
  2. Next, create new fields in your Log_file_1_index called Customer_Name, Customer_Contact, and Customer_Phone.
  3. Finally, sync all your account information with the data in Log_file_1_index so that it's available for searching. (A configuration sketch follows this list.)
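Here's a minimal configuration sketch for wiring that lookup up automatically. The lookup name, CSV file, and sourcetype are hypothetical and would match whatever your DB Connect job and Log_file_1 input actually use:

# transforms.conf -- define the lookup built from the DB Connect pull
[customer_accounts]
filename = active_customers.csv

# props.conf -- apply it automatically to the Log_file_1 sourcetype
[log_file_1]
LOOKUP-customer_enrichment = customer_accounts Account_Number OUTPUT Customer_Name Customer_Contact Customer_Phone

With the automatic lookup in place, Customer_Name, Customer_Contact, and Customer_Phone show up at search time on every Log_file_1 event that has a matching Account_Number.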

Great! Now you can create a search to pull back the time periods you need to report on. It might look something like this:

index=Log_file_1_index | fields Customer_Name, Account_Number, Total_Order_Amount, Customer_Location | table Customer_Name, Account_Number, Customer_Location, Total_Order_Amount

Look Ahead at New Data

Now you're tracking historical data, but you still need to track current products and statuses. Let's make it easy on the users. We have all the extra fields we need in Log_file_1_index now that we've enriched it, so why not use the same approach again? Essentially, you're creating a stacked approach to data enrichment: build a lookup that extracts the fields we need from Log_file_1_index and use it to enrich Log_file_2_index. What does this look like?

  1. Pull your fields. Create a search that pulls the fields needed from Log_file_1_index: Customer_Name, Account_Number, Customer_Location, Order_Number, and Total_Order_Amount. Then, use that lookup to propagate the data into Log_file_2_index. Do we have any matching fields? Whew, Order_Number looks like our ticket! We match on Order_Number and output Customer_Name, Account_Number, Customer_Location, and Total_Order_Amount. Once that's done, let Splunk do its magic: those fields are now available in Log_file_2_index.
  2. Create your search. It might look something like this: index=Log_file_2_index | stats sum(Item_Amount) as Amount by Account_Number, Customer_Name, Customer_Location, Order_Number, Total_Order_Amount, Product_ID, Status
  3. Visualize your data. Finally, add this to a dashboard with various views so your end users can see and dissect the information further. You'll also want to set alert thresholds, informed by end-user feedback on various products (a sample alert search follows this list). Now you're set to get alerts on product performance and see, in real time, what is causing the changes in sales.
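To make that alerting step concrete, here's a sketch of a scheduled alert that flags a heavy increase or decrease in product sales. The 25% threshold and the 24-hour comparison windows are hypothetical values you'd tune based on end-user feedback:

index=Log_file_2_index earliest=-48h@h latest=@h
| eval period=if(_time >= relative_time(now(), "-24h@h"), "current", "previous")
| stats sum(Item_Amount) as Amount by Product_ID, period
| xyseries Product_ID period Amount
| eval pct_change=round((current-previous)/previous*100, 1)
| where pct_change < -25 OR pct_change > 25

Saved as an alert that runs hourly and triggers when the result count is greater than zero, this covers the "heavy increase/decrease in sales of a product" requirement from the list above.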

Kinney Can Help

Kinney Group has years of experience developing unique solutions and creating knowledge objects to support every type of Splunk environment. Need some help creating knowledge objects or other custom features in Splunk? Contact us and fill out the form below.

The Cure for the Common Empty Dashboard

In previous blogs, "Dude, Where's My Data?" (Part One and Part Two), we focused on the essential steps of onboarding your data into Splunk. Now let's make that data functional in your dashboards. But what if, even after following those guidelines, the dashboards in the apps you installed don't show your data properly, or the data doesn't show up at all?

It is not you. It’s not the app. It’s not the data… but the cause may be related to Common Information Model (CIM) compliance.

A Rose by any Other Name Still Smells as Sweet

Shakespeare wrote that line in "Romeo and Juliet." It's still true today, but in a world of expanding data, it's not quite so simple. The Common Information Model (CIM) is how Splunk recognizes all the roses in your data, even when they go by different names. Different roses have different scientific names but share a genus and species; in the same way, Splunk uses the CIM to identify, find, and correlate different names for the same data.

CIM-plify Your Searches

The CIM is how Splunk normalizes your data, mapping common data types into a set of simplified data models. For example, imagine you are standing in the check-out line at the grocery store. You hear terms like "Climbing Pinky," "Knock-Out," and "English Tea." What are these people talking about? The answer is roses.

The same principle applies in the CIM. It allows Splunk end users and apps to search common fields across many sourcetypes. Instead of different names for roses, different sourcetypes may use "user_ID," "username," or "Login_ID" to identify the entity using a particular system: the "user." The CIM maps these common names for the same data into its models, normalizing different names for the same function or entity across all your Splunk data for maximum efficiency at search time.

CIM Data Models

Much of the work behind normalizing data is already done for you. The CIM includes a library of models with common data types already normalized. Most apps come CIM-ready and take advantage of these models in their dashboards and searches. Here is a list of the data models included with the CIM:

Figure 1 – Data Models in Splunk

The CIM is not restricted to just what is in the listed models. You can add new fields to the model as needed. For example, a new user field might be “system_user” which could be added under “user” in the model. The process is as follows:

  • Extract data fields – find the field of data you want to add to the data model
  • Normalize – add the data to the CIM in the appropriate model.
  • Tag – add a tag to that field and data so that it can be found across all searches

Making Data CIM Compliant

Making data CIM compliant is easier than you might think. First, ensure your data has a proper sourcetype. Next, extract the fields from your data. Once you have extracted and identified your fields, it's time to create an alias for the CIM.

Here is an example of creating an alias:

username AS user

With this alias in place, events containing "username" will be returned by Splunk searches for "user." Once all the variations are aliased into the CIM, a single search on "user" returns every variant, such as "userID," "system_user," and "username." Without it, a search might have to look something like this:

index=network "user" OR "userID" OR "system_user" OR "username"

The CIM normalizes these terms so that every event in the network index that has an identifier for the entity using the system can be returned with just the term "user."

Figure 2 – Fields of Authentication event datasets

The CIM can also capture calculated fields. In this example, an action has several possible outcomes. By adding a calculated field entry to the CIM, a search on "action" can return all of those outcomes. Take a look at this example:

action=if(action="OK","success","failure")

In this way, the CIM can capture calculated results within a field with just the term “action” without specifying “OK,” “success,” and “failure.”

Figure 3 – Tags used with Authentication event datasets

Lastly, tag your data fields to match them up with a data set within the CIM.
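Pulling those steps together, here's a minimal sketch of what the alias, calculated field, and tag can look like in configuration files. The sourcetype and eventtype names are hypothetical stand-ins for your own data:

# props.conf -- alias and calculated field for a hypothetical sourcetype
[my_auth_sourcetype]
FIELDALIAS-cim_user = username AS user
EVAL-action = if(action=="OK","success","failure")

# eventtypes.conf -- group the events so they can be tagged
[my_auth_events]
search = sourcetype=my_auth_sourcetype

# tags.conf -- tag the eventtype so the Authentication data model picks it up
[eventtype=my_auth_events]
authentication = enabled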

Need Help Simplifying Your Data?

Kinney Group can help jump-start your dashboards by helping you make your data CIM compliant. Our team has the real-world experience to match your data types, extract fields, and put them into the CIM so that your data works for you, instead of you working the data to get the critical results you need.

Step Up Your Search: Exploring the Splunk tstats Command

If you're running Splunk Enterprise Security, you're probably already aware of the tstats command but may not know how to use it. Similar to the stats command, tstats performs statistical queries, but against indexed fields in tsidx files. The tstats command delivers significant search performance gains; however, you are limited to the fields in indexed data, tscollect data, or accelerated data models.

The Power of tstats

Let's take a simple example to illustrate just how efficient the tstats command can be. For this example, the following search produces the total count of events by sourcetype in the windows index.

index=windows  
| stats count by sourcetype
| sort 5 -count
| eval count=tostring('count',"commas")

This search will output the following table.

Figure 1 – Table produced by the first search.

By looking at the job inspector we can determine the search efficiency.

Figure 2 – Job inspector for the first search.

This search took almost 14 minutes to run. We can calculate events per second (EPS) by dividing the number of events scanned by the number of seconds taken to complete, a helpful metric when judging search efficiency. The EPS for this search works out to just above 228,000, a respectable number.
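As a quick sanity check on the arithmetic: 14 minutes is roughly 840 seconds, so 228,000 EPS implies on the order of 191 million events scanned (228,000 × 840 ≈ 191 million), which lines up with the tstats numbers quoted below (191 million events in 1.342 seconds is about 142 million EPS).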

By converting the search to use the tstats command, we see an instant, notable difference in search performance.

| tstats count where index=windows by sourcetype
| sort 5 -count
| eval count=tostring('count',"commas")

This search will provide the same output as the first search. However, if we take a look at the job inspector, we will see an incredible difference in search efficiency.

Figure 3 – Search job inspector for the tstats command search.

Here we can see that the same number of events were scanned but it only took 1.342 seconds to complete! That’s an EPS of about 142 million.

Implement tstats

The tstats command is most commonly used with Splunk Enterprise Security. Any time we are creating a new correlation search to trigger a notable event, we want to first consider whether we can utilize the tstats command. The basic usage is as follows; full documentation can be found in Splunk's documentation for tstats.

| tstats <stats-function> from datamodel=<datamodel-name> where <where-conditions> by <field-list> 
For example:

| tstats `summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks where IDS_Attacks.severity=high OR IDS_Attacks.severity=critical by IDS_Attacks.src, IDS_Attacks.dest, IDS_Attacks.signature, IDS_Attacks.severity 

| `drop_dm_object_name(IDS_Attacks)`
Figure 4 – Example tstats search for intrusion detection data.

Notice in the example search that the dataset name "IDS_Attacks" is prepended to each field in the query. This is a requirement when searching accelerated data from the data models. Only the fields that are in the accelerated data models can be used. To find out more about the fields contained in the data models for ES, see the documentation for Splunk's Common Information Model (CIM).

Getting Unstuck

Understanding and correctly implementing the tstats command can significantly improve the performance of the searches being run. This command should always be considered when creating new correlation searches to improve search efficiency and overall performance of ES.

Kinney Group Splunk consultants are highly experienced and know how to get you “unstuck.” Fill out the form below to learn more about our expert Splunk solutions.

Keep Things Flowing in Your Splunk Data Pipeline

In a given Splunk environment, there can be hundreds (or orders of magnitude more) of machines generating metrics and feeding syslog data into Splunk. In a small environment, a single syslog server receives data from hundreds or thousands of machines, like your work laptop, for instance. On that syslog server, a Splunk Universal Forwarder collects all of it and sends it to the indexers, where it is stored and made searchable.

Most of the time, the data is coming from sources that may be distinctly separate companies or sections of a business, like HR and SecOps/ITops. We do not want HR seeing SecOps/ITops data and vice versa. Additionally, this data is likely to be of different formats, and we have to tell Splunk what to expect in order to efficiently and accurately index the data.

We can use Splunk to analyze the data before it gets to the indexes so that when it does get to the indexes, we send HR data to one index and SecOps/ITops to a different index (keeping data in different indexes is the only way to control access to and retention of the data). Additionally, if the data is of different formats, we can teach Splunk to recognize the difference and to extract fields and timestamps accordingly.

The Problem

You have syslog data coming into Splunk from a combination of technologies: Cisco and Juniper devices, Bluecoat data, and so on. Instead of all of this data going into a single index as a single sourcetype, you need Splunk to split the data in a way that makes sense (and abides by security and compliance policies). In this example, you are aggregating data from multiple syslog servers onto a single syslog "aggregator."

The syslog aggregator writes all of the syslog data into common files rather than separating it. When you configure the UF to monitor the data, you have to set a sourcetype and an index on the input. You can only have one setting for each attribute, so you set "sourcetype=syslog" and "index=common", but that is not acceptable for the reasons described above.
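For reference, a minimal sketch of the UF input just described might look like this; the monitored path is a hypothetical placeholder for wherever your aggregator writes its files:

# inputs.conf on the syslog aggregator's Universal Forwarder
[monitor:///var/log/syslog-aggregator/*.log]
sourcetype = syslog
index = common
disabled = false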

The Fix

We need Splunk to perform two operations on these events:

  1. Change the sourcetype as appropriate for the data
  2. Send data whose sourcetypes were renamed to a specific index

We will have two types of events: DHCP and DNS.

Here are the sample events…

Feb  5 19:51:45 10.11.32.139 dhcpd[25291]: DHCPACK on 10.5.128.190 to 00:e0:c5:2f:55:d1 (blah-T-blah) via eth1 relay 10.5.128.131 lease-duration 691200 (RENEW)

Feb  5 19:51:45 10.11.32.139 dhcpd[25291]: DHCPREQUEST for 10.5.128.190 from 00:e0:c5:2f:55:d1 (blah-T-blah) via 10.5.128.131 TransID 400968e0 (RENEW)

Feb  5 19:51:45 10.11.32.139 dhcpd[25291]: DHCPACK on 10.5.128.190 to 00:e0:c5:2f:55:d1 (blah-T-blah) via eth1 relay 10.5.128.130 lease-duration 691200 (RENEW)

Feb  5 19:51:23 10.121.8.77 named[13705]: client 10.111.9.62#9832: received notify for zone '10.in-addr.arpa'

Feb  5 19:51:23 10.121.9.62 named[9607]: client 10.111.9.62#9832: received notify for zone '10.in-addr.arpa'

Feb  5 19:51:23 10.111.8.77 named[16928]: client 10.111.9.62#9832: received notify for zone '10.in-addr.arpa'

Feb  5 19:51:23 10.101.9.62 named[12039]: client 10.40.96.40#56180: updating zone blah.fdbl- blah.com/IN': adding an RR at '_ldap._tcp. blah.fdbl-int.com' SRV 0 100 389 usaokay1.fdbl-int.com.

Feb  5 19:51:35 10.41.1.139 dhcpd[7958]: DHCPOFFER on 10.235.140.193 to b0:90:7e:69:32:01 (blah-uc-blah) via eth1 relay 10.235.140.3 lease-duration 119 offered-duration 43200 uid 00:63:69:73:63:6f:2d:62:30:39:30:2e:37:65:36:39:2e:33:32:30

Feb  5 19:51:35 10.41.1.139 dhcpd[7958]: DHCPOFFER on 10.235.140.146 to b0:90:7e:69:32:01 (blah-uc-blah) via eth1 relay 10.235.140.2 lease-duration 119 offered-duration 43200 uid 00:63:69:73:63:6f:2d:62:30:39:30:2e:37:65:36:39:2e:33:32:30

Feb  5 19:51:35 10.41.1.139 dhcpd[7958]: DHCPOFFER on 10.235.141.19 to b0:90:7e:69:32:01 (blah-uc- blah) via eth1 relay 10.235.140.3 lease-duration 119 offered-duration 43200 uid 00:63:69:73:63:6f:2d:62:30:39:30:2e:37:65:36:39:2e:33:32:30

For the DNS events, we want to apply sourcetype=infoblox:dns; for the DHCP events, we want to apply sourcetype=infoblox:dhcp; and we want Splunk to send all of the re-sourcetyped Infoblox data to index=infoblox.

Here are the props and transforms to make this happen…

Props.conf

[syslog]
TRANSFORMS-sourcetype_rename = sourcetype_rename1
TRANSFORMS-sourcetype_rename1 = sourcetype_rename2
TRANSFORMS-index_reroute = index_reroute1

Transforms.conf

[sourcetype_rename1]
REGEX = dhcpd
FORMAT = sourcetype::infoblox:dhcp
DEST_KEY = MetaData:Sourcetype

[sourcetype_rename2]
REGEX = named
FORMAT = sourcetype::infoblox:dns
DEST_KEY = MetaData:Sourcetype

[index_reroute1]
REGEX = dhcpd|named
FORMAT = infoblox
DEST_KEY = _MetaData:Index

For testing, set your sourcetype to syslog, configure the data to be sent to an empty index (so you can confirm the success of the test), and create an index called infoblox to receive the re-routed events. When done correctly, all ten of the sample events will go to index=infoblox: six events will be sourcetype=infoblox:dhcp and four will be sourcetype=infoblox:dns.

In props.conf, if you do not call the transforms in the correct order, you will not be able to perform all of the desired operations. After data gets sourcetyped, it moves on to the index queue, and once there, the sourcetype cannot be changed. There you have it: a lesson on the Splunk data pipeline.

Figure 1 – Splunk Data Pipeline

Ask the Experts

Looking for Splunk help? Our Expertise on Demand subscription services will help you along your Splunk Data Pipeline journey…and just about any other issues you need resolved in Splunk. Interested in learning more about Expertise on Demand or our Kinney Group professional services? Check out the form below.

Lean on Splunk for your Remote Work Insights

In Security Tips for Work From Home (WFH) Life, we explored guidelines on how to set up your work-from-home environment efficiently and safely. Each individual colleague is responsible for maintaining a secure remote-work environment, but looking past the individual worker, companies are now tasked with keeping their colleagues productive and secure. How can organizations get these critical insights? Let's jump into Splunk and see how your company can monitor the safety and performance of your remote workforce.

Splunk Remote Work Insights (RWI)

In light of COVID-19, Splunk has released the Remote Work Insights (RWI) Application. This free-to-download application contains reports and dashboards that provide insight into the critical applications your organization is using to keep the business running. Along with application management, the RWI solution gives immediate insight into business performance and network security. As we get through this pandemic and beyond, the Splunk Remote Work Insights solution will help your business monitor the success and safety of its remote workforce.

The application can be added to your Splunk environment to strengthen your security posture and provide critical insight into how your applications are being used, who is using them, and from which locations.

Figure 1 – RWI Executive Dashboard

When you open the RWI application, you'll be dropped into the Executive dashboard view. This dashboard is an aggregate summary of all the dashboards within the application. Its main purpose is to give the CTO/CIO, or a data center team, critical insight into remote business operations. RWI gives visibility into your company's critical applications and how they are performing and being used.

Be the VPN Champion

The VPN Login Activities dashboard shows where your colleagues are logging in from, the success/failure rate for those logins, and the top login failure reasons. This dashboard is a one-stop shop for auditing your VPN activity. The data shown here is from GlobalProtect, but any VPN logs can be integrated into these dashboards.

The GlobalProtect VPN Login Activities dashboard is key for insight into the VPN activity of your remote colleagues. In this example, you have a workforce that's fully based in the U.S. Now, check out that top panel… some workers are accessing the VPN client from China. If this is unexpected, you may have a breach on your hands!

Figure 2 – GlobalProtect VPN Login Activities

Zip-Up Zoom Operations

The Zoom Ops dashboards show an aggregate view of your organization’s Zoom metrics. Looking at this dashboard, you’ll gain visibility into historical metrics and real-time information on active Zoom meetings. You can even see what devices the meetings are being accessed from, the types of meetings being conducted, and metrics surrounding the length of the meetings.

Figure 3 – Zoom Ops Dashboard

The following data sources were used to populate these dashboards:

  • GlobalProtect VPN
  • Office 365
  • Zoom Video
  • Okta Authentication
  • Google Drive
  • Webex
  • Slack

The external threats facing organizations are greater than ever. With the shift to a remote workforce, it is crucial for businesses to have these insights into their day-to-day operations to protect the safety of their organization and its colleagues. Paired with the applications your organization uses today, the Splunk Remote Work Insights application can dramatically increase your visibility into application performance. Interested in learning more about the Splunk Remote Work Insights solution or looking to implement the application? Contact our Kinney Group team of experts below.

Dude, Where’s My Data? (Part Two)

In Dude, Where’s My Data – Part One, we covered how to identify the right data to bring in. Now, let’s look at how you can ensure you’re getting that data into Splunk in the right way.

One of the easiest ways to ensure that your data is coming in correctly is to create a Technical Add-on (TA) for each data source you are sending to Splunk. By putting all the settings in one central location, you, your team, and any support representative can quickly create, read, update or delete configurations that tell Splunk how to process your data. This information can include:

  • Field extractions
  • Lookups
  • Dashboards
  • Eventtypes
  • Tags
  • Inputs (where the data is coming from)
  • And who has access to view this data

Technical Add-ons are the lifeblood of any well-tuned Splunk environment and can mean the difference between spending hours and spending minutes troubleshooting simple problems.
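If you haven't built one before, a TA is just an app directory with your configurations in predictable places. A minimal sketch (the add-on name is hypothetical) looks something like this:

TA-acme_billing/
    default/
        app.conf          # app name, version, visibility
        inputs.conf       # where the data comes from
        props.conf        # sourcetype settings, field extractions, lookup wiring
        transforms.conf   # lookup and extraction definitions
        eventtypes.conf   # eventtypes
        tags.conf         # tags
    lookups/              # CSV lookup files
    metadata/
        default.meta      # who has access to view these objects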

Getting the Data In

There are several ways to bring data in, including uploading a log file through the Splunk Web GUI, configuring an input on a Universal Forwarder (UF) via the CLI, or modifying the configuration files directly. Customers often don't realize that using more than one of these methods can cause configurations to be stored in several places. These configurations commonly end up in the following folders:

  • $SPLUNK_HOME/etc/system/local
  • $SPLUNK_HOME/etc/apps/<appname>/local
  • $SPLUNK_HOME/etc/users/<username>/<appname>/local

Having configuration files stored in that many places can make it difficult to determine which settings take precedence. By storing the configuration files related to a single data source in one central location, there is no need to wonder which configuration is actually active. It also lets you quickly expand your architecture by sharing your TA with other Splunk servers in your deployment.
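When you do suspect a conflict, Splunk's btool utility will show you exactly which file each active setting comes from. A quick sketch (the stanza name is a hypothetical example):

# List every inputs.conf setting Splunk will use, annotated with its source file
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# Narrow the output to a single stanza
$SPLUNK_HOME/bin/splunk btool inputs list monitor:///var/log/myapp --debug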

Call the Experts

That closes up our two-part walk-through on getting data into Splunk the right way. Now let’s get these Splunk roadblocks removed. Check out  Kinney Group’s service offerings to find the specialized work we can deliver for you.

Want to learn more? Fill out your contact information below!

Security Tips for Work From Home (WFH) Life

It's been a few weeks since a large portion of America's workforce shifted to work-from-home life with the mission of fighting off the coronavirus. Whether you're a newbie or tenured in the remote-work department, we're seeing threats like never before targeting our day-to-day operations (and no, I'm not talking about the threat of running out of hand sanitizer…). Phishing schemes and cyber-attacks are at an all-time high.

We know that you’re already juggling some new challenges working from home, so we’ve compiled some tips to make your day-to-day a little more secure…

Start with Cybersecurity Basics

Let’s start here – make sure the programs you use are up to date, including any security software you utilize. This is a great time to update your device and account passwords, making them strong and unique. (Pro tip: consider passwords that are at least 12 characters long, use a mix of numbers, symbols, uppercase, and lowercase characters – the more unique, the more secure!).

Lock Down Your Home WiFi

Many times, home networks are left on default settings by the company that does the installation, leaving your network open to attack. Check your router's settings and change the default login and password to something unique. Then, make sure you're using the best encryption available on your device; WPA2 and WPA3 are the current standards to look for.

While you’re taking the time to examine your network and router settings, take a look at the devices and users that are connected. You don’t want any unknown devices using your network.

Utilize a VPN if You’re on an Unsecured Network

If for any reason, you need to use an unsecured network while working remotely, consider utilizing a Virtual Private Network (VPN). A VPN allows you to work on a private network while protecting your data and browsing activity. While we may not recommend specific third-party VPN providers, we do recommend that you utilize your company’s private VPN if and when possible.

Maintain Workplace Lock-Up Habits

Now, we're not suggesting that your eight-year-old will be hacking into your computer in between their e-learning courses… but it's good to maintain the habit of locking your device as you typically would in the workplace. Like we said earlier, consider making your at-home work setup mimic your office setup. By locking your laptop, you are maintaining a good security practice and ensuring that its contents go untouched when you step away.

Trust, but Verify — Watch Out For Phishing

It seems like some folks are picking up phishing as a new hobby in their quaran-time. We're talking about phishing: the attempt to steal personal or company information while posing as a trusted sender. We've seen an increased number of phishing attempts sent to work email addresses over the last few weeks. Be cautious with questionable emails, and make sure you trust the source before…

  • Opening any attachments
  • Clicking on a link
  • Replying with confidential company or personal information

Keep Work Data on Work Computers

With more screen time outside the workplace, it's easier to drop our guardrails on what should and shouldn't be done on work laptops. Any activity you would not typically do in the office shouldn't happen on your work computer. Remember all of those security threats I mentioned above? Your IT team is already fighting enough of them; no need to add your personal browsing to the list.

And if that's not enough, opening your work laptop only for business-related work will help you keep a better work-life balance while working from home. Yes, you can still keep work at… work!

Dude, Where’s My Data? (Part One)

Trying to find the proverbial “needle in a haystack” can be overwhelming when it comes to getting data into Splunk. Customers are often in such a hurry to provide value from their new Splunk deployment that they start bringing in all their data at once. This can lead to data uncertainty. How do you find what is truly important when all of it seems important? It’s as if your organization is having an existential crisis. So, what do you do?

1. Identify your use cases

Here are some questions and common use case areas you’ll need to get answered to kick things off…

Save time

  • Where are your employees spending most of their time?
  • What reports do they have to create manually every month?
  • What can be automated using Splunk?

Find the blind spots

  • Where are your organizational blind spots?
  • Do you know which servers are experiencing the most activity?
  • Are the most active servers the ones you thought they would be?

Clarity on systems

  • Are you planning for a major expansion or system adoption?
  • Do you have enough resources to accommodate the number of users?
  • Is access limited to only those users who need it?
  • Do we have an effective means of capacity planning?

Look at the ROI

  • Can we cut costs?
  • Which systems are over- or undersized?
  • Do we need more bandwidth?

These and other questions are a good place to start to help you categorize your data needs quickly. Though you will probably not identify all your use cases at once, you will most likely uncover the most pressing issues on the first pass.

2. Prioritize your use cases

Once you have identified the questions you would like to answer, you must arrange your use cases into categories based on priority. The easiest grouping is:

  • Needs
  • Wants
  • Luxuries

These categories will help you segment the use cases into tasks you should focus on immediately. Needs are things that will benefit the largest group of people and/or will potentially save your organization money in the long run; the needs are what really bring value to the way the business is run. Wants are things a subset of users would be very happy to have, but that they could live without, albeit less efficiently, for a little while longer. Luxuries are cool to have, but probably satisfy a very specific niche request.

3. Identify your data sources

Once you have identified and prioritized the questions you would like to answer, you must identify which data will help you answer those questions. Make sure to consider which data sources will help you satisfy several use cases at once. This will help you correctly determine the size of your daily license and make sure you only focus on the data sources you need to address the needs and wants of your organization.

4. Identify your heaviest users

By creating a list of people who need access to each data source, you can correctly determine how large an environment is needed to support all the data sources you plan to bring in. It also helps when determining each user’s level of access. If a data source is widely popular, it may behoove you to create a dashboard and/or report to quickly disseminate important information that the users may need. It will also help size expansion of the environment.

By taking these four steps, you not only make users feel their needs are being heard, you also empower them to identify further use cases for future expansion. It frees up their time for more complicated tasks and can mean the difference between being proactive and being reactionary. Taking the organization's greatest needs into account can mean the difference between users adopting a Splunk implementation as their own and discarding it as just another tool.

What’s Next?

Stay tuned for Part Two to learn how to get your data into Splunk the right way. Until then, take a look at Kinney Group's specialized service offerings. Whether you need help cleaning up your existing data or getting new data into your environment correctly, Kinney Group has the expert Splunk consultants to do just that.

Want to learn more? Fill out your contact information below!

Time for a Tune-Up: Splunk Enterprise Security Implementation Tips

Having issues with Splunk Enterprise Security implementation? There’s a lot of value in Splunk Enterprise Security (ES), but we’ve come across a few common mistakes with ES implementation. While adding the SIEM solution to a Splunk environment, here are some questions that often come up:

  • “Where do I install Splunk ES?”
  • “Do I need all of these add-ons?”
  • “Can I run ES in my search head cluster?”
  • “Wait, why do I have so many skipped searches now?”

Let’s walk you through some of the solutions we’ve seen in our experience.

“Where do I install Splunk Enterprise Security?”

While Splunk Enterprise Security can be installed on any Splunk Search Head, it is best practice for the application to have its own dedicated server.  This will ensure that ES can perform at its best without impacting any of your other visualization apps or ad-hoc searches.

“Do I need all these add-ons?”

Splunk Enterprise Security comes pre-packaged with a number of add-ons. The add-ons that are required will install automatically with ES. Some add-ons aren't required and come straight from Splunkbase, the center for Splunk apps and add-ons, like the Splunk Add-on for Windows. We usually recommend skipping the installation of the Splunkbase add-ons and letting ES install only the supporting add-ons it requires to function.

Pro tip: Many times, you will already have a number of Splunkbase add-ons from onboarding your data, and you won’t need to reinstall these add-ons.

“Can I run ES in my search head cluster?”

Yes! But we recommend only running version 5.3 or later due to improvements in managing ES in a Search Head Cluster. Earlier versions can be sufficient, but will require a staging server for making configuration changes to ES.

“Wait, why do I have so many skipped searches now?”

That’s good news! This means that ES is starting to work.  ES comes pre-packaged with a number of correlation searches you can enable.  Did you know that there are also dozens of supporting scheduled searches?  These could include lookup generating searches, datamodel acceleration, summary generating searches, and many others.

Out of the box, ES adds a significant search load to any Splunk server. Oftentimes, the number of concurrent scheduled searches trying to run is simply more than the server can accommodate. Don't go buying more hardware just yet! There are a number of tuning steps we can take early on to ensure ES makes the best use of the resources already allocated to it.

Check out these areas when tuning:

  • Tuning artificial search limits for scheduled searches
  • Tuning datamodels
  • Adjusting scheduled searches to make the most use of every minute in an hour (a sketch follows this list)
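On that last point, spreading cron offsets is often the quickest win. A minimal savedsearches.conf sketch (the search names and offsets are hypothetical):

# savedsearches.conf -- stagger start times so searches don't pile up at the top of the hour
[Threat - Example Correlation Search - Rule]
cron_schedule = 7 * * * *

[Example Lookup Gen - Tracker]
cron_schedule = 23 * * * *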

That’s a lot!

Yes, it is! But we can help! With over 500 Splunk engagements globally, Kinney Group's consultants have the knowledge and experience to make sure your start with Splunk Enterprise Security is solid, smooth, and productive. With our Jumpstart Service for Splunk, you can bypass the road bumps and start identifying threats. If you're looking to learn more, fill out the form below!

Is this thing on? A quick and easy Splunk dashboard status tip

Many clients request some sort of “up or down” status indicator for their customized dashboards. There are many potential uses for such a solution (a simplified result for checking server status, for example; or changing a complex numerical result into an easy-to-read text visualization), and since this is a common question in the Splunk user community, I wanted to share my go-to approach.

Exploiting the Rangemap Command

“Up or Down” functionality isn’t native to Splunk, so for this example we’re going to “exploit” the rangemap command, used extensively in ITSI, and modify the dashboard XML to get the desired result.

Let’s consider the following search:

index=_internal sourcetype=splunkd earliest=-30m latest=now
| eval CountStatus="No Activity"
| stats count
| eval CountStatus=if(count==0,"Down","Up")
| eval alert_level = case(CountStatus=="Up",1,CountStatus=="Down",2)
| rangemap field=alert_level low=1-1 severe=2-2

This will yield results along the following lines:

Figure 1 – The Rangemap feature results

The Single Value visualization will display the count:

Figure 2 – The Single Value visualization display

What we really want to show, however, is the CountStatus value of "Up" or "Down."

To do this, we need to get into the XML, so save the search as a dashboard panel using the Single Value visualization.

Figure 3 – Save the search as a dashboard and single value in Visualizations

Then, edit the XML and add the following two lines:

<option name="classField">range</option>
<option name="field">CountStatus</option>
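For context, here is a minimal Simple XML sketch showing where those two options live inside the Single Value panel; the dashboard label and layout are just placeholders:

<dashboard>
  <label>Server Status</label>
  <row>
    <panel>
      <single>
        <search>
          <query>index=_internal sourcetype=splunkd earliest=-30m latest=now
| eval CountStatus="No Activity"
| stats count
| eval CountStatus=if(count==0,"Down","Up")
| eval alert_level=case(CountStatus=="Up",1,CountStatus=="Down",2)
| rangemap field=alert_level low=1-1 severe=2-2</query>
        </search>
        <option name="classField">range</option>
        <option name="field">CountStatus</option>
      </single>
    </panel>
  </row>
</dashboard>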

And there you have it!

Figure 5 – Up Status

What do you need to get done with Splunk? We’d love to help!

Kinney Group’s Expertise on Demand (EOD) for Splunk service provides immediate access to our team of Splunk-certified professionals with experience delivering 500+ Splunk engagements worldwide. Contact us below to get started or for more information.