The Ultimate Guide to Splunk Universal Forwarders

For many getting started with Splunk, the question of “How do I get my data into Splunk?” comes up quite regularly. The answer to that question is most often: “use the Universal Forwarder.”

What is a Splunk Universal Forwarder?

The Universal Forwarder is a Splunk instance that can be installed on just about any operating system (OS). Once installed, the Universal Forwarder can be configured to collect systems data and forward it to Splunk Indexers. The Universal Forwarder can also be configured to send data to other forwarders or third-party systems as well if you so desire.


Universal Forwarders use significantly fewer resources than other Splunk products. You can deploy thousands of them without a significant impact on network performance or cost. The Universal Forwarder does not have a graphical user interface, but you can interact with it through the command line or its REST endpoints. The Universal Forwarder also ships with its own license, so there is no need to purchase one for it.
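For example, once the forwarder from the walkthrough below is installed, you can check its configuration from the command line or over its management port (the path, port, and credentials here assume the default Linux install used later in this article):

# List the indexers this forwarder is configured to send data to
/opt/splunkforwarder/bin/splunk list forward-server -auth admin:changeme

# Query the same instance over the REST API on the default management port (8089)
curl -k -u admin:changeme https://localhost:8089/services/server/info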

There are many benefits to using a Universal Forwarder to forward your logs as opposed to other solutions.

  1. Forwarding data from a Universal Forwarder is reliable right out of the box, but it can be configured to further protect in-flight data using Indexer Acknowledgement.
  2. The Universal Forwarder uses an internal index called the fishbucket which is used to track previously read files and directories so that Splunk does not send the same data twice.
  3. Universal Forwarders can be configured for load balancing which enables scaling and improved performance.
  4. Using a Universal Forwarder gives you the ability to remotely manage its configurations using apps and add-ons that are deployed by a Deployment Server.

Splunk Pro Tip: There’s a super simple way to get visibility into the health of your universal forwarders using Forwarder Awareness in the Atlas app on Splunkbase. Here’s a snapshot of the data you’ll see with the click of a button. With this information, you’ll never have to wonder if all of your data is being ingested or whether it’s vulnerable. You can give it a go and decide for yourself right now, completely free.

Atlas Forwarder Awareness - A Splunkbase app for forwarder visibility

Try the Atlas Forwarder Awareness Tool Free in Splunkbase

Types of Splunk Forwarders

There are two types of forwarders: the Universal Forwarder, and the Heavy Forwarder.

1. Splunk Universal Forwarder

Universal Forwarders are more commonly utilized in most environments; Heavy Forwarders are reserved for specific use cases. In most situations, users simply want to collect data from a file or directory on a host and forward it to Splunk as-is, which is exactly what the Universal Forwarder does.

2. Splunk Heavy Forwarder

There are some instances where the format of the data might not be very pretty or even readable, or the data contains Personally Identifiable Information (PII), credit card information, etc. which needs to be masked or omitted—this is where the Heavy Forwarder comes in. A Heavy Forwarder can be configured to parse and perform transformative changes on the data BEFORE it is forwarded to Splunk indexers or another destination. Parsing data is done using props.conf and transforms.conf files.
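As a rough sketch of what that looks like (the sourcetype name and the card-number pattern here are illustrative, not taken from a real deployment), a Heavy Forwarder could mask all but the last four digits of a card number with a props.conf and transforms.conf pair like this:

props.conf:

# Tie a masking transform to an (illustrative) sourcetype
[my_payment_sourcetype]
TRANSFORMS-mask_card = mask_card_number

transforms.conf:

# Rewrite _raw, keeping everything except the last four card digits
[mask_card_number]
REGEX = ^(.*\d{4}-\d{4}-\d{4}-)\d{4}(.*)$
FORMAT = $1XXXX$2
DEST_KEY = _raw

Because this happens in the parsing pipeline, only a Heavy Forwarder (or the indexers themselves) can apply it; a Universal Forwarder would pass the events through untouched.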

How to Download and Install the Universal Forwarder

Step 1: Login to Splunk.com

How to Download and Install the Universal Forwarder: Step 1 - Login to Splunk.com

The Universal Forwarder can be downloaded two ways, and both involve logging into Splunk.com. Don’t panic: creating a Splunk account is quick, easy, and, most importantly, free.

Step 2: Find the Universal Forwarder Install Package

Once logged into Splunk.com, hover over the Products tab at the top of the page and click on “Free Trials & Downloads”.

How to Download and Install the Universal Forwarder: Step 2 - Choose the Download type of the Universal Forwarder from Splunk.com

From the downloads page, scroll down toward the bottom until you see the “Download Now” link for the Splunk Universal Forwarder and click it.

How to Download and Install the Universal Forwarder: Step 2 - Download the Universal Forwarder from Splunk.com

On this page you will be presented with install packages for a variety of operating systems; the Universal Forwarder can be installed on Windows, Linux, macOS, FreeBSD, Solaris, and AIX. Choose the package that matches your platform and how you want to download it. Once you click the download button, the package should automatically download to your system.

How to Download and Install the Universal Forwarder: Step 2 Find the Universal Forwarder Install Package

 

Step 3: Download the Universal Forwarder

Clicking the download button also loads a new page, where you will have the option to copy a wget command (my preferred method) and download the install package directly to your system, or to any other system that has wget installed. I will demonstrate this process below.

How to Download and Install the Universal Forwarder: Step 3 - Download the Universal Forwarder onto your machine

Copy and paste the wget command into your terminal to download the Universal Forwarder install package (Yes, I am using root for the sake of ease during this tutorial)

How to Download and Install the Universal Forwarder: Step 3 - copy and paste the wget command

wget -O splunkforwarder-9.0.1-82c987350fde-Linux-x86_64.tgz "https://download.splunk.com/products/universalforwarder/releases/9.0.1/linux/splunkforwarder-9.0.1-82c987350fde-Linux-x86_64.tgz"

Since I chose the tarball download, there is an additional step before installing: creating the splunk user. This can be done with the following command:

sudo adduser splunk

How to Download and Install the Universal Forwarder: Step 3 - create the splunk user with the sudo adduser splunk command

Step 4: Install the Universal Forwarder

Now that we have downloaded the Universal Forwarder, we need to extract the archive file to the “/opt” directory. To perform this action, I will use the following command:

tar -zxf splunkforwarder-9.0.1-82c987350fde-Linux-x86_64.tgz -C /opt/

How to Download and Install the Universal Forwarder: Step 4 - install the universal forwarder

Next, we will start Splunk for the very first time and accept the license agreement so that we don’t get bombarded with a wall of text. I will also use the default admin username and password of admin/changeme.

/opt/splunkforwarder/bin/splunk start --accept-license

How to Download and Install the Universal Forwarder: Step 4 - install the universal forwarder and start splunk for the first time

*NOTE* You will see a message about an invalid stanza; this is a known issue with the 9.0.1 Universal Forwarder and won’t cause you any problems.

We also get some warnings about the permissions on $SPLUNK_HOME. This is to be expected since we are starting the service as root; starting in 9.0, Splunk does not like to be issued commands by root and will complain every time you do so.

How to Configure the Splunk Universal Forwarder

Now that we have the Universal Forwarder installed, it’s time to configure it. The Universal Forwarder has two main files that need to be configured for it to collect and forward data: inputs.conf and outputs.conf. If you will be utilizing a Deployment Server to manage your Universal Forwarders, you will also need a deploymentclient.conf file that tells the Universal Forwarder where to ‘phone home’ to retrieve the appropriate Splunk apps and any other configuration. For this article, we will assume a simple installation with no Deployment Server.
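For reference, if you were using a Deployment Server, a minimal deploymentclient.conf would look something like this (the hostname is a placeholder; 8089 is the default management port):

deploymentclient.conf:

[deployment-client]

# deploy.example.com is a placeholder for your Deployment Server hostname
[target-broker:deploymentServer]
targetUri = deploy.example.com:8089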

Inputs.conf is where you configure the Universal Forwarder to collect data. For this tutorial, we will add a monitor stanza for /var/log/messages. But first, we need to create an inputs.conf within /opt/splunkforwarder/etc/system/local. You may be wondering why we create the file in this location when one already exists in /opt/splunkforwarder/etc/system/default. As a rule of thumb and best practice, you should never modify files within the default directory. Those files exist to provide default settings and can help you identify settings you may need to adjust, but the default directory is overwritten during upgrades, so any changes you make to its .conf files would be lost. Take only what you need from default and place it into .conf files of the same name in the local folder.

Step 1: Create an inputs.conf

Now we will create an inputs.conf in the local directory and add the monitor stanza. You can use your text editor of choice for this task. Underneath the stanza we will apply two additional settings: one for the index our events will be sent to and one for enabling the input. Many other settings can also be applied here – typically, you will also specify a sourcetype – but we will stick to what is needed for data collection to function in this tutorial.
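A minimal version of that inputs.conf might look like this (it assumes an index named os already exists on your indexers, which is where we will look for the data at the end of this guide):

# Monitor the local syslog file; "os" is assumed to exist on the indexers
[monitor:///var/log/messages]
index = os
disabled = false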

How to configure the splunk universal forwarder: step 1 - create an inputs.conf

*NOTE* The three slashes come from combining the monitor:// prefix with the absolute path /var/log/messages; the file being monitored lives on the local host.

Step 2: Create an outputs.conf

Next up, we will create an outputs.conf in the same directory and configure our forwarder to send data to two indexers. Here you specify the IP addresses (or hostnames) of the indexers you want to forward data to and the port that data should be forwarded over. Even though Splunk does not list it as a default port, 9997 is typically used as the standard port for receiving forwarded data.
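Here is a sketch of that outputs.conf (the IP addresses are placeholders for your own indexers; with two servers listed, the forwarder automatically load-balances between them, and uncommenting useACK enables the indexer acknowledgement mentioned earlier):

[tcpout]
defaultGroup = primary_indexers

# Replace these example IPs with your own indexers
[tcpout:primary_indexers]
server = 10.0.0.11:9997, 10.0.0.12:9997
# useACK = true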

How to configure the splunk universal forwarder: step 2 - create an outputs.conf

Step 3: Restart Splunk

With our settings applied, we now must restart the forwarder for our changes to take effect. This is a key aspect of Splunk to remember: changes you make to .conf files generally require a restart before they are picked up.

/opt/splunkforwarder/bin/splunk restart

How to configure the splunk universal forwarder: step 3 - restart splunk

Congratulations! You now have a working Splunk Universal Forwarder and should see data in your “os” index.

Your Crash Course to Splunk Universal Forwarders

As you can see, installing the Universal Forwarder is straightforward and takes minimal configuration to get up and running. Whether you are an aspiring Splunk admin or someone who has used Splunk for a while but has never gone through the process of installing a forwarder, this is a good process to get familiar with, since installing other Splunk Enterprise components is not much different.

If you found this helpful…

You don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate.

Cue Atlas Assessment: a customized report to show you where your Splunk environment is excelling and opportunities for improvement. Once you download the app below, you’ll get your report in just 30 minutes.


Meet Atlas App Awareness

Over time the installation, upkeep, and management of applications in your Splunk environment can become increasingly difficult. The more apps you install and use, the more you have to keep track of. What apps are installed? Are the versions of my apps the same throughout my environment? Is anyone even using these apps? While there’s not an easy way to answer these questions from within Splunk natively, App Awareness in the Atlas Platform allows you to get these answers and more, all from one easy-to-use interface.

Better Management of Splunk Apps

Atlas App Awareness allows admins to see which Splunk apps are deployed in your Splunk environment, how they are being used, and who is using them. App Awareness also identifies any inconsistencies in your Splunk apps by displaying differences in app versions or default/local Knowledge Objects distributed across your Splunk deployment.

App Awareness is useful for helping to identify changes made to Splunk Apps over time, and understanding if any customizations or other preparations are needed before migrating to other servers, data centers, or Splunk Cloud (App Awareness is a companion application to the Atlas Splunk Migration Helper application).

Better Management of Splunk Knowledge Objects

App Awareness also provides expanded information for each installed application which includes a list of all known knowledge objects for that app, their type, owner, who can see them, where they are stored, and the last time that KO was updated.

Tracking local knowledge objects is notoriously difficult without a tool like App Awareness. This information is crucial for troubleshooting problematic apps, planning a migration to other Splunk servers, or considering a move from Splunk On-premise to Splunk Cloud.

Now that you know how the Atlas App Awareness application can take the guesswork out of your Splunk migration and application management, why not check out our Atlas documentation for a closer look, or schedule a 1:1 discovery session to answer any questions you may have?

How To Use the Splunk dedup Command (+ Examples)


What is the Splunk dedup Command? 

The Splunk dedup command, short for “deduplication”, is an SPL command that removes events containing duplicate combinations of values for the fields you specify, thereby reducing the number of events returned from a search. Typical uses of dedup produce a single event for each host or a pair of events for each sourcetype.


How the dedup Command Works 

Dedup has two modes. We’ll focus on the standard mode, which is a streaming search command (it operates on each event as the search returns it).

The first thing to note is that dedup returns events, in contrast to the stats family of commands, which return aggregated counts about the data. Outputting events is useful when you want to see several fields or the raw data, but only a limited number of events for each specified field.

When run as a historic search (i.e., against past data), the most recent events are searched first, so those are the ones kept. If dedup runs in a real-time search, the first events received are the ones kept, which does not guarantee that they are the most recent (data doesn’t always arrive in a tidy order).
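A quick way to see the difference between dedup and stats, using the _internal index that every Splunk instance populates: the first search below returns one full event per host, while the second returns one summary row per host.

index=_internal | dedup host

index=_internal | stats count by host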

Splunk dedup Command Example

Let’s run through an example scenario and explore options and alternatives. I will use the windbag command for these examples since it creates a usable dataset (windbag exists to test UTF-8 in Splunk, but I’ve also found it helpful in debugging data). 

Step 1: The Initial Data Cube 

| windbag 

Result: 100 events. Twenty-five unique values for the field lang, with the highest value having eight events.  

Step 2: Using Dedup to reduce events returned 

Now, let’s limit that to 1 event for each of those values in lang.  

| windbag | dedup lang 

Result: 25 events. Lang still has 25 unique values, but there is only one event for each language specified this time. 

We can also reduce by a combination of fields and even create fields before using dedup. 

Step 3: Cast time into a bin, then reduce fields with lang and time bin 

The windbag data is spread out over the 24 hours prior to when you run it. Taking advantage of this, we can create another usable field by using bin to put _time into 12-hour buckets. Using bin like this is one way to split the data. Since I ran this at 21:45, I wound up with four buckets (who said this was perfect?), with the middle two buckets having forty-two events each.

| windbag | bin span=12h _time | dedup lang, _time 

Result: 64 events. Twenty-five different lang fields, with the highest event count at 3. 

Step 4: Add a random 1 or 2 to the mix, and dedup off of those three fields. 

The above exercise was one way to divide the data up. This time, we’re going to randomly assign (using random and modulo arithmetic) each event a 1 or 2 for the group, and then use that in a dedup along with the span of 12 hours. 

| windbag | eval group = (random() % 2) + 1 | bin span=12h _time | dedup lang, _time, group 

Result: varies with each run. It ranged from seventy-five to eighty-six events over the ten runs I tried.

Step 5: What if we want more than one event per field? 

This time we’ll add an integer behind dedup to give us more results per search. 

| windbag | dedup 2 lang 

Result: Each of the twenty-five lang entries returned two events.  

Step 6: How to Use the Data

Great, so we can reduce our count of events. What can we do with this? Anything you can picture in SPL. We may want a table of different fields. Stats counts based upon fields in the data? Why not? 

index=_internal | dedup 100 host | stats count by component | sort - count 

Result: 500 events were returned, which stats then counted. In case anyone is wondering, roughly 80% of that data is the Metrics component (apparently, we need to use this cloud stack more).

Other dedup Command Options and Considerations

There are several options available for dedup that affect how it operates.  

Note: It may be better to use other SPL commands to meet these requirements, and often dedup works with additional SPL commands to create combinations. 

  • consecutive: This argument only removes events with duplicate combinations of values that are consecutive. By default, it’s false, but you can probably see how it’s helpful to trim repeating values.
  • keepempty: Allows keeping events where one or more fields have a null value. The problem this solves may be easier to rectify using fillnull, filldown, or autoregress.
  • keepevents: Keep all events, but remove the selected fields from events after the first event containing that particular combination.

This option is weird enough to try:  

| windbag | eval group = (random() % 2) + 1 | dedup keepevents=true lang, group 

Then add lang and group to the selected fields. Note how each event has lang and group fields under the events. Now, flip to the last pages. The fields for lang and group are not present for those events. Bonus points if you can tell me why this exists. 

  • sortby: A series of sort options exist, which are excellent if your dedup takes place at the end of the search. All options support + or - (ascending or descending). The possible forms are a plain field name, auto (let dedup determine the type), ip (interpret values as IP addresses), num (numeric order), and str (lexicographic order).
| windbag | bin span=12h _time | dedup lang, _time sortby -lang

This command sorts descending by language. What is nice is that we don’t have to pipe the results to the sort command, which would create an additional intermediate search table.

  • Multivalue Fields: Dedup works against multivalue fields. All values of the field must match for an event to be deduplicated.

  • Alternative Commands: The uniq command works on small datasets to remove any search result that is an exact duplicate of the previous event. The docs for dedup also suggest not running it on _raw, as deduplicating that field requires many calculations to determine whether an event is a duplicate.
  • MLTK Sample Command: The sample command that ships with the Machine Learning Toolkit does a great job of dividing data into samples. If my goal is to separate data, and MLTK exists on the box, then the sample command is preferred.
  • Stats Commands: The stats command, and its many derivatives, are faster if your goal is to return uniqueness for a few fields. For example, | windbag | bin span=12h _time | stats max(_time) as timebucket by lang returns the max value of _time, similar to dedup after a sort.

If you found this helpful…

You don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate.

Cue Atlas Assessment: a customized report to show you where your Splunk environment is excelling and opportunities for improvement. Once you download the app below, you’ll get your report in just 30 minutes.


Splunk spath Command: How to Extract Structured XML and JSON from Event Data


Your dilemma: You have XML or JSON data indexed in Splunk as standard event-type data.

Sure, you’d prefer to have brought it in as an indexed extraction, but other people onboarded the data before you got to it, and you need to make your dashboards work.

How do you handle this data and make it searchable? We could write regular expressions and hope the shape of the data stays static—or we can use the easy button: the spath command.
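As a preview (the index, sourcetype, and JSON path below are purely hypothetical), spath pulls structured values out of _raw at search time:

index=web sourcetype=app_json | spath path=user.name output=username | stats count by username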



Splunk Security Starts with Data Collection

With cyberthreats of every kind emerging daily, you may be asking how you can defend your organization’s data and proactively identify and mitigate new challenges. Splunk has put forward a six-stage security journey to provide guidance, and that journey begins with data collection.

Every beautiful dashboard and impressive visual Splunk is capable of producing is, ultimately, driven by two things: data and search. And while search is the primary driver behind the analytics and visualizations in Splunk, all the perfectly written and executed searches in the world can’t help you if you’re missing the most important resource of all — quality data.

The same is true of your security use case.

If you’ve ever put together a Lego® set, you know you’ve got to have all the pieces if you’re going to be able to build the Lego Death Star. Even one missing piece could leave you frustrated and incapable of building what you set out to create. So whether you’re using Splunk for incident investigation and forensics, security monitoring, advanced threat detection, or SOC automation, your security journey begins with this critical step:

You have to collect basic security logs and mission-critical machine data from throughout your organization’s environment.

While these practices are going to look different from organization to organization, there are some fundamental components that will build a solid foundation for your security infrastructure:

Network Data Sources

The ability to monitor and analyze network traffic is critical for any team, regardless of your security use case. In the data collection phase, you should be most concerned about your ability to have “eyes on” traffic that is entering and exiting your network. Of course, you also want to be able to identify traffic that was blocked and denied entry.

As an example, network data sources might include firewall traffic logs from companies such as Cisco, Fortinet, and Palo Alto Networks (among others).

Internet Activity Data Sources

Many cyberattacks start with an inside user visiting an infected website either directly (they clicked a link online during a search, for example) or via email. These malicious sites may provide bad actors with access to valuable user or company information, or open the organization to malware or ransomware attack. Having access to a log of visited websites is critical during investigation of a security-related incident.

Important sources to ingest might include next-generation firewall (NGFW) traffic filters or proxy logs from sources such as: Bluecoat, Cisco, Palo Alto Networks, and Websense (again, among others).

Endpoint Data Sources

Logs from endpoint data sources work hand-in-hand with your network data sources to provide valuable insights into attack activities such as malware execution, unauthorized access attempts from an insider, or attackers “lying in wait” on your network. You’ll want to capture data from your individual workstations, organization servers, and operating systems.

Examples of this type of data source would include Windows event logs, MacOS system logs, and Linux system and auditd logs.
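To tie this back to the Universal Forwarder guide above, collecting a couple of these endpoint sources is mostly a matter of inputs.conf stanzas on the forwarder. A rough sketch (the index names are placeholders for whatever indexes your team uses):

inputs.conf on a Windows forwarder:

# Collect the Windows Security event log; index name is a placeholder
[WinEventLog://Security]
index = wineventlog
disabled = 0

inputs.conf on a Linux forwarder:

# Collect Linux auditd activity; index name is a placeholder
[monitor:///var/log/audit/audit.log]
index = os
disabled = 0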

Authentication Data Sources

Utilizing authentication logs will allow you to see not only that unauthorized access has occurred in your systems or applications, but also where the access was attempted from and when. Having access to this data allows you to tell whether a login attempt is valid or potentially a bad actor.

Windows Active Directory, MacOS system logs, Linux auditd, and local authentication are examples of this type of data source.

Then What?

As with most things, this step of the journey is the most important, but also the most time-consuming, tedious, and — frankly — painful. Because of this, many organizations miss critical pieces or gather insufficient information. At Kinney Group we operate by a guiding principle:

Bad data is as bad (or worse) than no data.

If you’re at this stage of the journey (or revisiting this stage), take your time, slow down, and be thorough. Beyond that, make sure your critical logs live in a separate system that can’t be easily accessed by an attacker.

Need help to be sure you’re collecting the right data sources and setting your organization up for security success with Splunk? We’d love to chat.


One more thing…

Once you have your data sources identified and “reporting in” to Splunk, you need to make sure they continue to report and that you’re alerted if a source drops offline. While it’s possible to create these alerts and reports manually, we’ve developed applications that make defining your data sources, setting up alerts, and monitoring data sources and forwarders push-button simple as part of the Atlas Platform for Splunk.

We’d love to show you the solution and answer any questions you may have, or you can get started with a free 30-day trial by clicking the link below.


See More: Data Awareness in Splunk

Every beautiful dashboard and impressive visual Splunk is capable of producing is, ultimately, driven by two things: data and search.

And while search is the primary driver behind the analytics and visualizations in Splunk, all the perfectly written and executed searches in the world can’t help you if you’re missing the most important resource of all — quality data.

If you’ve ever put together a Lego® set, you know you’ve got to have all the pieces if you’re going to be able to build the Lego Death Star. Even one missing piece could leave you frustrated and incapable of building what you set out to create.

In short: you have to know what data you’re working with, and that you have all of it.

Data Awareness Defined

“Data awareness” refers to your organization’s ability to look at the infrastructure bringing data into your Splunk environment, the visibility you have (or don’t have) when there’s a failure in your data pipeline, and the health of your forwarder infrastructure.

To get the most from Splunk, and to empower you to do more with the platform, there are two critical questions you must address:

Question 1: Do you have an alerting system in place when critical data streams fail in your Splunk environment?

Setting alerts for critical data streams is important for ensuring your dashboards and processes are up to best practices. You want to be the first person to know an issue has occurred so it can be solved before it becomes a larger problem.

Some may read that and think, “We check our data streams monthly or weekly, so we have a pretty good idea of how healthy our data pipeline is.” But what about those moments when a data stream or forwarder goes offline in between those manual checks?

Maybe it’s not important data… but maybe it is.

If you’re using Splunk for compliance, even a few moments of downtime can cause huge problems down the road. If you’re using Splunk for security, you know all too well how much meaningful (and dangerous) activity can occur within even a few minutes.

Alerts are best practice for a reason. Your team and those throughout the organization who rely on Splunk dashboards and visualizations for day-to-day operations, security, insights, and decision-making have to be able to trust the data. If your alerting isn’t strong, that means you could be missing data. And bad data is worse than no data at all.

Question 2: If you’re using Splunk Enterprise Security (ES), are you confident you’re ingesting all the appropriate data to get the most from your investment?

Splunk ES is an incredible tool, but it depends on being fed the proper data to really shine. Without the ability to ensure you have full coverage for your priority data and clear eyes on that data’s acceleration, you could be leaving yourself vulnerable. Continuous security monitoring, advanced threat detection, and your ability to rapidly investigate and respond to threats are all contingent on priority data being fed into the system.

At a minimum, without that information, you’re certainly not maximizing your use of ES (or the dollars you’ve invested in the platform).

Solving Alerts

It’s possible to manually create alerts for any number of situations and needs within Splunk. Once again, that’s the power of having such a versatile platform at your fingertips. The downside to that approach, however, is that it is time-consuming and often requires a degree of technical proficiency with Splunk that many internal teams lack.
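For example, a manual alert of that kind is often a scheduled search along these lines (a rough sketch; the 60-minute threshold and the index scope are arbitrary), saved with an alert action that fires whenever it returns results:

| tstats latest(_time) as last_seen where index=* by index, sourcetype
| eval minutes_since = round((now() - last_seen) / 60)
| where minutes_since > 60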

What would be ideal is a single pane of glass that shows a complete inventory of every sourcetype in your Splunk deployment. Even better would be if that inventory could also show how much data is being received by that sourcetype, its status, a use case or description, admin notes, who owns it… you get the idea. And the cherry on top of this magical solution would be a push-button simple way to create an alert for that sourcetype.

This is exactly what the Data Management application within the Atlas Platform provides.

The Data Inventory component of Data Management allows you to easily see every sourcetype, the last time it was ingested, how much of your license that data is utilizing, and a host of other important information.

By clicking the alert icon inline with this information, the Data Watch feature lets you put Splunk on watch for that sourcetype and alert you when there’s a certain percentage drop in hosts or events.

Of course, watching sourcetypes is only a piece of the puzzle. You also need a way to provide that same level of protection and visibility to your forwarders. This is where the Atlas Forwarder Awareness application swoops in to save the day with a system-wide overview of forwarder status that you can group however you wish, with the ability to dive into each group for details.

Within each group, you’ll have visibility into missing forwarders, the SSL status of each forwarder, the version of Splunk each forwarder is running, and a variety of other information that allows you peace of mind that data is reliable and being brought in to your environment properly.

Solving Enterprise Security

As stated, Enterprise Security depends on the right data — and especially priority data with clear eyes on acceleration of that data.

That’s why we’ve developed the Atlas ES Helper application to guide the process and ensure you have the coverage you need and you’re utilizing the platform effectively.

In addition to a comprehensive inventory of ES-related data models, the power of ES Helper is its ability to give you an understanding of your environment’s overall utilization, data coverage, and acceleration at a glance.

The proprietary ES Utilization score is based on scoring your system’s Priority Data Coverage, Priority Data Acceleration, Lower Data Coverage, and Lower Data Acceleration. The takeaway is an easy to understand and actionable report that tells you, with certainty, if you’re getting the most and doing the most with your investment in Splunk Enterprise Security.

Wrapping Up

Whether it’s a comprehensive understanding of your sourcetypes, data models, and forwarders or getting more from Splunk ES, the value of Data Awareness can’t be overstated. Downloading the free Atlas Assessment application from Splunkbase is the perfect way to see if Atlas is the right fit to solve these challenges in your environment. Still not convinced? A free 30-day trial of Atlas will provide you with the opportunity to see for yourself. If you’d like to read more, grab our free “Do More with Splunk” ebook (just tap the button below — no email required). You’ll learn what a Splunk “Creator” is (and does), and get actionable next steps for accelerating your Splunk journey.


What is a Splunk Creator?

Before we dive into our three primary topics, we want to set the stage by explaining a term you’ll see used throughout this piece:

“Creator.”

Typically when we think of “creators,” we think of, well, creative occupations. Painters, designers, architects, and the like. Or maybe you think in more modern terms, such as YouTubers and TikTok content creators. But most individuals working with Splunk wouldn’t consider the work they do with and in the platform to be creative.

Except… it really is.

Think about it for a moment. An architect or product designer gathers a list of requirements and then creates the best possible solution with existing materials, working within the budget and time constraints the client has laid out for them.

Isn’t that exactly what you do each and every day in Splunk?

Someone needs an outcome:

“We want a dashboard to view security incidents.”

“I need a visualization for revenue vs targets.”

“We need a way to collect Remote Work Insights.”

And what do you do? What every creator does. You look at the available materials — the data you’re pulling in to your environment — and then you begin to use the platform to create a solution. An outcome.

A masterpiece.

A dashboard in Splunk is more than graphs and charts and tables. It’s the output of one of the most complex functions asked of any technical professional — telling a story with data.

“Is our organization safe from threats today?”

“Are we delivering on our promise to our customers and stakeholders?”

You tell those stories — and many others like them — every day with the solutions and outcomes you create in Splunk.

Look at you go.

Want to learn how?

If you want to create incredible outcomes in Splunk that empower you to see more, create more, and save more with the platform, grab our free “Do More with Splunk” ebook (just tap the button below — no email necessary). You’ll learn what a Splunk “Creator” is (and does), and get actionable next steps for accelerating your Splunk journey.


 

Do More With Splunk

In every recession, organizations find themselves in uncharted waters — after all, no two recessions or downturns are the same. What worked for Company A during the early 2000s recession may or may not work for Company B facing the economic downturn ahead of us today.

What organizations can do, however, is identify the patterns and behaviors of the companies that managed to thrive in challenging economic times. Harvard Business Review conducted a year-long study of nearly 5,000 companies and their behaviors in the periods immediately preceding, during, and after an economic downturn.

While 17% of the companies they studied didn’t survive (for a wide variety of reasons), and the overwhelming majority were unable to regain their pre-recession rate of growth, 9% were able to gain ground and outperform the competition.

What Do the Winners Do Differently?

Harvard Business Review’s study effectively found that the key to coming out ahead, during and after a recession, is an adept combination of “defensive” and “offensive” moves. Defensive moves are those that are, perhaps, the most common response to a downturn — spend less and cut costs.

But “offensive” moves?

The companies that fared best in a downturn were those that focused on maximizing what they already had. This approach to improving operational efficiency had the same net effect as reducing headcount or slashing expenses without actually doing those things at the same levels as their competition.

Doing More

Regardless of economic conditions around us, the reality is we could all use some help in getting more from what we’ve got. The average utilization of a software platform across all industries looks something like this:

Take a moment to let that sink in. What is your company’s annual investment in the Splunk platform? What could it mean to your organization if you could unlock that 50-80% of the platform that you may not be utilizing to the fullest?

The key to your future success will be found in unlocking the underutilized potential of the most amazing security and observability platform available… and creating incredible outcomes in the process.

Want to learn how?

If you want to create incredible outcomes in Splunk that empower you to see more, create more, and save more with the platform, grab our free “Do More with Splunk” ebook (just tap the button below — no email necessary). You’ll learn what a Splunk “Creator” is (and does), and get actionable next steps for accelerating your Splunk journey.


 

Splunk Data Stream Processing: What It Is & How To Use It (+Examples)

Splunk Data Stream Processor is finally here! The long-awaited Splunk Data Stream Processor is no longer in beta and is now released for public consumption. We’ve been anticipating the DSP service for quite some time. Who hasn’t been craving the real-time data processing and insights that DSP provides?

What is the Splunk Data Stream Processor?

The Splunk Data Stream Processor (DSP) is a data stream processing service that manipulates data in real time and shoots that data over to your preferred platform. DSP provides the ability to continuously collect high-velocity, high-volume data from diverse data sources, and distribute it to multiple destinations in milliseconds.

Stream processing is the processing of data in motion; it is designed to analyze and compute data the instant it is received. Most data sources are born as continuous streams, so processing them as such gives your analysts near real-time insight into events.

Batch Processing vs. Data Stream Processing

This is different from “standard” batch processing, which collects data in batches and then processes it. The benefit of stream processing is that you have immediate insight into your critical events and can act on notable events more quickly.

How to Use Splunk Data Stream Processing (+Examples)

Use Case #1: Data Filtering/Noise Removal

With DSP, you can filter noisy, low-value logs and route them to a destination of your choice, such as a separate syslog or storage solution for aggregation. Because that destination sits outside of Splunk, the filtered data does not count against your Splunk license and does not fill your indexes with unwanted data.

Use Case #2: Data Routing

With DSP, you can route high-velocity, high-volume data to multiple destinations. This use case allows you to send your data to Splunk, containers, S3, a syslog aggregator, and more at a rapid pace. You can split the data at the source and send it to multiple destinations without first indexing it into Splunk and then sending it off, which makes for a more efficient data flow.

Use Case #3: Data Formatting

With DSP, you can format your data using provided functions based on your configured conditions. This is a fairly straightforward use case allowing you to format your events to make your raw logs human-readable and informative without having to first index the data into Splunk. This can be combined with any of the use cases in this list to achieve maximum value with DSP.

Use Case #4: Data Aggregation

With DSP, you can aggregate data based on configured conditions and identify abnormal patterns in your data. You can pre-configure rules or conditions that send data to different aggregation points based on the patterns in the data that match those rules. If you have a data source with a mixture of different kinds of logs, you can now pick up all the logs and forward them to different destinations with ease.


Data Sources for Data Stream Processing

First, look into which data sources are supported by Splunk DSP. Here are the data sources supported by the current version; be on the lookout for more to be added in future releases.

Figure 1 – Splunk DSP supported data sources

Here are the system requirements that come with Splunk DSP.

Figure 2 – Splunk DSP system requirements

We’ve been more than excited about the release of this data stream processing service and we hope you are too. If you’re interested in learning more about Splunk Data Stream Processing, we’re here to help. You don’t have to master Splunk by yourself in order to get the most value out of it. Small, day-to-day optimizations of your environment can make all the difference in how you understand and use the data in your Splunk environment to manage all the work on your plate.

Cue Atlas Assessment: a customized report to show you where your Splunk environment is excelling and opportunities for improvement. Once you download the app, you’ll get your report in just 30 minutes.


How to Use Splunk Remote Work Insights (RWI) to Secure Your Organization in 2023

In Security Tips for Work From Home (WFH) Life, we explored guidelines on how to efficiently and safely set up your work-from-home environments. Looking past the individual worker, companies are now tasked with providing a productive and secure remote work environment for their colleagues. 

How can organizations achieve this if they’re not already there yet? Here, we’ll show you how to use Splunk to monitor the safety and performance of your remote workforce.

What is Splunk Remote Work Insights?

In light of COVID-19, Splunk has released the Remote Work Insights (RWI) Application. This free-to-download application contains reports and dashboards that provide insight into the critical applications your organization is using to keep the business running. Along with application management, the RWI solution gives immediate insight into business performance and network security. As we get through this pandemic and beyond, the Splunk Remote Work Insights solution will help your business monitor the success and safety of its remote workforce.

This Splunk application can be added to Splunk to increase your security posture and provide critical insight into how your applications are being used, who is using them, and from what locations. It’s also mobile-friendly so you never miss an alert.

How Splunk Remote Work Insights Works

When you open the RWI application, you’ll see the Executive dashboard view. This dashboard is an aggregate summary of all dashboards within the application. Its major purpose is to provide the CTO/CIO or data center operators with critical insight into remote business operations. RWI gives visibility into your company’s critical applications and how they are performing and being used.


Remote Work Insights Dashboards

Executive Dashboard

Use this dashboard to get a bird’s eye-view of how applications are performing across your organization. The metrics displayed on this dashboard include:

  • VPN Sessions
  • Zoom Meetings In Progress
  • Most Popular Applications on the Network
  • Geographic Locations of Logins

Figure 1 – Splunk Remote Work Insights Executive Dashboard

VPN Ops

Use this dashboard to monitor how securely your team is able to connect to different applications accessible through your VPN.

The VPN Login Activities dashboard shows where your colleagues are logging in from, the success/failure rate for these logins, and the top login failure reasons. This dashboard is a one-stop shop for auditing your VPN activity. The data shown here is from GlobalProtect, but any VPN logs can be integrated into these dashboards.

The GlobalProtect VPN Login Activities dashboard is key for insight into the VPN activities of your remote colleagues. In this example, you have a workforce that’s fully based in the U.S. Now, check out that top panel… some workers are accessing the VPN client from China. If this is unexpected, you may have a breach on your hands.
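Under the hood, panels like these are typically built on searches similar to the sketches below (the index, sourcetype, and field names are placeholders for whatever your VPN add-on provides):

index=vpn sourcetype="pan:globalprotect" | iplocation src_ip | geostats count

index=vpn sourcetype="pan:globalprotect" action=failure | stats count by reason | sort - count

The first search plots login counts on a map; the second ranks the top failure reasons.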

Figure 2 – GlobalProtect VPN Login Activities

The metrics displayed on this dashboard include:

  • Geographic Locations of Logins
  • Failure Rate of Connections (over time)

Authentication Ops

The metrics displayed on this dashboard include:

  • Authentication of Login Attempt Activity
  • Failures of Legitimate Login Attempt Activity
  • Infrastructure Stress and Failure

Zoom Ops

The Zoom Ops dashboards show an aggregate view of your organization’s Zoom metrics. Looking at this dashboard, you’ll gain visibility into historical metrics and real-time information on active Zoom meetings. You can even see what devices the meetings are being accessed from, the types of meetings being conducted, and metrics surrounding the length of the meetings.

Figure 3 – Zoom Ops Dashboard

The metrics displayed on this dashboard include:

  • Zoom Adoption and Utilization
  • Number of Active Meetings
  • Number of Participants in Meetings
  • Length of Meetings

Protect Your Team From External Threats

The external threats facing organizations are greater than ever. With the shift to a remote workforce, it is crucial for businesses to have these insights into their day-to-day operations to protect the safety of their organization and colleagues. 

Interested in learning more about the Splunk Remote Work Insights solution or looking to implement the application? You don’t have to master it by yourself in order to get the most value out of it. Small, day-to-day optimizations of your Splunk environment can make all the difference in how you understand and use the data to manage all the work on your plate.

Cue Atlas Assessment: a customized report to show you where your Splunk environment is excelling and opportunities for improvement. Once you download the app, you’ll get your report in just 30 minutes.

Paired with the applications your organization uses today, the Splunk Remote Work Insights application can dramatically increase your organization’s visibility into application performance. Interested in learning more about the solution or looking to implement it? Contact our Kinney Group team of experts below.
