Data Model Mapping in Splunk for CIM Compliance

Join Kinney Group Splunk Enterprise Security Certified Admin Hailie Shaw as she walks through the process of data model mapping in Splunk for CIM compliance. Catch the video tutorial on our YouTube channel here.

Note: the data visible in this video and blog post are part of a test environment and represent no real insights. 

Starting Out with Data Model Mapping

As Splunk engineers, we constantly deal with the question: How do I make my data, events, and sourcetypes Common Information Model (CIM) compliant? This is especially crucial for Enterprise Security use cases, where data needs to map to a CIM-compliant data model. When we search the environment pre-CIM compliance, our query returns no results. This is what we aim to change with data model mapping.

Figure 1 – Pre-data mapping, the search returns no results
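For illustration, a check like the sketch below shows whether any of your events are reaching a CIM data model yet; the Authentication data model here is just a hypothetical example, so substitute whichever model fits your data:

| tstats count from datamodel=Authentication by sourcetype

Before the mapping work, this kind of search comes back empty for your sourcetype; after the steps below, it shouldn't.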

Begin by downloading the Splunk Add-on Builder app, which you’ll use later. Then head to Splunk docs: the left panel of the CIM documentation lists all data models, and for each one Splunk docs lists every field to which users can map. Reviewing these fields should be your first step, so you can choose the data model that matches your environment’s existing fields as closely as possible.

Creating the Add-on

Turn to Splunk’s Search & Reporting app and navigate to the data that you’re looking to map; the fields will populate on the left side of the screen. Pay special attention to the sourcetype that you wish to match; you’ll need to supply it to the Splunk Add-on Builder app, which will map the fields for CIM compliance. Within Splunk Add-on Builder, select “New Add-on.” The default settings will be sufficient, with the exception of the required Add-on name. Click “Create.”

Figure 2 – The form in this window creates the Add-on

Click Manage Source Types on the top banner menu. Click “Add,” and then “Import from Splunk” from the dropdown menu. Select your sourcetype, which should populate within the menu after you import data from Splunk. Click Save, and the events will be uploaded. Next, click Map to Data Models on the top banner menu.  

Figure 3 – Import data by selecting the sourcetype

Mapping the Data Model

From here, select “New Data Model Mapping.” You’ll be prompted to enter a name for the event type and select the sourcetype you’re using. The search will populate below, automatically formatted with the correct name. Click “Save.” The resultant data model will include, on the left-hand side of the screen, a list of the event type fields. Hovering over each entry within this list will reveal a preview of the data that exists within the field. 

Essentially, the app takes the fields from your initial data and transfers them onto the data model. On the right of the screen is where you’ll select a data model for the fields to map to. Each data model is filtered through Splunk’s CIM, and you can select which is most appropriate based on the Splunk documentation with which we began.  

Figure 4 – The empty Add-on Builder field before data mapping

When you select a data model, the Add-on Builder will provide supported fields, which you can cross-reference with Splunk Docs; the app is a field-for-field match. This step will give you a list of CIM-approved fields on the right to complement the original fields on the left. To map them together, click “New Knowledge Object” and select “FIELDALIAS.” This is where you’ll need to select which data model field most closely matches the initial event type or expression.

Once you’ve made a match, select OK, and the app will provide a field alias. Repeat this process for each field you wish to include. Once you’re satisfied, click “Done.” As you can see, the data has now populated with the sourcetype listed. 

Figure 5 – Match the original entry field to its CIM compliant counterpart using “FIELDALIAS”
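Under the hood, these mappings typically end up in the add-on’s props.conf as field alias entries. Here’s a minimal sketch of what that can look like; the sourcetype, original field names, and alias class are hypothetical and will depend on your data and the model you chose:

[my_custom:sourcetype]
FIELDALIAS-cim_mapping = src_address AS src dest_address AS dest

Each alias pairs one of your original fields (on the left of AS) with its CIM counterpart (on the right), which mirrors the match you make for each “New Knowledge Object” in the Add-on Builder.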

Validate and Package 

It’s important to validate your package to ensure that it follows best practices. To do so, click “Validate & Package” from the top banner menu, then click the green “Validate” button. When the validation reaches 100%, you can download the package. This page will also generate an Overall Health Report detailing which elements, if any, should be addressed. Once downloaded, rename the file with a .zip extension; double-clicking it will extract the archive, and you can open it to view details of the sourcetype and events within.

Figure 6 – The overall health report indicates that the validated data package is ready to be downloaded

Return to the Splunk instance through Search and Reporting and run the data model command again as a search. Now, your events will populate in the data model! You can also view the CIM compliant aliases in Settings > All Configurations.  
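As a final check (again a sketch; substitute the data model and dataset you actually mapped to), you can search the data model directly and confirm that your events now carry the CIM field names:

| datamodel Authentication Authentication search
| table sourcetype Authentication.src Authentication.dest Authentication.user

If the field aliases took effect, the CIM columns populate instead of coming back blank.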

 

Compliance, Achieved

The resultant data model is good to go—CIM compliant and ready to be exported. Check out more Splunk tips on our blog and our YouTube channel. To get in touch with engineers like Hailie, fill out the form below: 

 

Kinney Group Core Values: What Is a Chopper?

As part of the team who defined Kinney Group’s five core values back in 2016, I can still remember much of the conversation. We had borrowed a conference room in the basement of IUPUI and locked ourselves away with a bunch of data, word clouds, post-it pads, and a mission to finalize our core values. We knew how critical it was to get it right—if not, we could easily steer our colleagues wrong for years to come. After hours of deliberation, surveys, healthy debate, and gridlock over wordsmithing, I looked at my teammates in a moment of desperation and asked, “WWJKD?” What Would JK Do?

A Founding Vision

As a founder-led organization, much of our culture over time has been grounded in our founder, Jim Kinney (also known as JK). JK is a visionary and leads our team with confidence and passion for our mission. We took each word that we were unsure of and asked again, WWJKD? The five core themes were easy—all of our surveys identified resonant concepts. Finding the right words to convey the spirit of the company was the tricky part.

We came to the fifth value with our core four now in place: Customer-centric, Innovative, Competitive, and Bias to Action. We had a handful of phrases and ideas for the fifth: perseverance, removing barriers, never give up, determination. None of it was right. The concept was there, but it simply didn’t feel like Kinney Group.

Just then, my coworker stared into the corner and muttered, “What if we just call it Chopper?” After a couple of chuckles, someone else muttered, “No one will know what we’re talking about.” All other ideas left my brain.

This was it. That was the point. No one outside would know what it meant, but inside, we would know exactly what it meant to live out the legend of Chopper Man. In interviews, we’d get to answer the question, “What is a Chopper?” Someday, I’d write this blog post to answer our most important cultural question: what is a Chopper?

The Legend of Chopper Man

The history of Chopper at Kinney Group is folklore; we still tell the story at every Quarterly Business Review. In 2011, when the company comprised fewer than ten people, our founder had to make the tough choice that every founder has faced: push through or shut it down. No one knows what JK was searching for when he happened upon the Google image of a man chopping a pile of wood. Now, that image is an emblem of his choice to push through. He opted to remove the obstacles that plagued our organization and pursue his vision and our mission: harness the power of IT to improve lives.

The image of Chopper Man that inspired JK in 2011

When things look bleak and another roadblock presents itself, we choose to Chop through. Can’t see the end in sight and lost in the middle of the forest? Start chopping.

When we answer the question for potential new colleagues, there is a gleam in the eye of future Choppers. They get it right away and then want to hear more. The question changes from “what is a Chopper?” to “how do I become a Chopper?”

What Makes a Chopper?

At Kinney Group, we move at Silicon Valley speed. Just as problems drive innovation, ideas then become reality. The ability to persevere and find an innovative way through a process, project, or opportunity is imperative to our success as an organization.

Rather than float a deadline, Choppers will finalize a minimum viable product with the client. This is done to move the ball forward and accomplish the task as initially defined.

Through open book management, we can empower the full organization to chop through a financial target. This transparency ensures we are all moving in the same direction.

If the story of Chopper Man—or these examples of the Chopper mentality at work—inspire you, then consider joining our team! A position at Kinney Group is an investment in your career. See a list of our current openings and apply online here.

Splunk 101: Data Parsing

 

When you import a data file into Splunk, you’re often faced with a dense, confusing block of characters in the data preview. What you really need is to make that data more understandable and more accessible. That’s where data parsing and event breaking come in.

In this brief video tutorial, TechOps Analyst Hailie Shaw walks you through an easy way to optimize and configure event breaking. Spoiler alert: it boils down to two settings in props.conf.

This function within Splunk can greatly simplify the way you interact with your data. After learning it, you’ll get the insights you need, faster.

Event Breaking

When importing data into your Splunk instance, you’ll want to be able to separate it into events, which makes the data legible and easy to interpret. Because the imported data file isn’t pre-separated, event breaking is an essential skill. The most important piece in separating data is the line breaker within the event boundary settings, which is how Splunk decides how to group and separate events.

LINE_BREAKER is the regular expression Splunk uses to break the incoming data into separate lines. The default is a capture group matching any sequence of newlines and carriage returns: ([\r\n]+).

SHOULD_LINEMERGE is how Splunk merges separate lines into individual events. The default value is true, but as a best practice it should always be set to false, so that your new regex alone provides the specific guide for breaking up events and prevents lines from being merged unexpectedly.

When you import the data file, the term “<event>” appears intermittently throughout the block of data, which is listed as one large event. By pasting the original block of data into regex101.com, you can identify the <event>s that should ideally begin new events. Then enter the resulting regex into props.conf in Splunk, which will break the data at the desired points.

By selecting regex as the event-breaking policy and entering the pattern from regex101.com, the data preview will display your data separated into events (each beginning with “<event>”). Behind the scenes, that regex drives the two props.conf settings described above. This data preview can now be saved as a new sourcetype.
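Here’s a minimal props.conf sketch of the end result; the sourcetype name and the <event> pattern are stand-ins for whatever your own data and regex101.com work produce:

[my_custom_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<event>

With SHOULD_LINEMERGE set to false, Splunk starts a new event immediately after each match of the capture group, so every event in the preview begins with the next <event> tag.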

Learn More!

Learning event breaking can help make your data more organized and legible for members of your team. Kinney Group’s Expertise on Demand service offers a wealth of Splunk tips and knowledge, tailored by analysts like Hailie to your team’s specific needs. Check out the blog for additional tutorials from our engineers. To learn more about Expertise on Demand, fill out the form below!

Contact Us!

Meet Atlas’s Forwarder Awareness

If there was a secret sauce for Splunk, the key ingredient would be the platform’s universal forwarders. Providing users with the ability to automatically send data for indexing, Splunk forwarders are essential to data delivery in Splunk Enterprise and Splunk Cloud environments.

In most Splunk instances, you have multiple forwarders. These forwarders send data to your indexers, where it is stored and made searchable. However, there has historically been an issue with forwarders: they go missing and they fail.

If you’re looking at your data pipeline in Splunk, your forwarders are on the front line. Forwarders play a pivotal role in ingesting your data; however, they can disappear or unexpectedly fail (without you knowing). A missing forwarder may result in an issue as small as temporarily not ingesting data. However, a missing forwarder could also be an indication of a much larger issue, like an entire server going down.

 

To solve this age-old problem in Splunk, we’ve built an application within Atlas, our new platform for Splunk, that gives you eyes on all of your forwarders in one place.

Atlas’s Forwarder Awareness Application

Atlas Forwarder Awareness is an application that provides visibility into all of your forwarders, their statuses, and any misconfigurations or failures within your environment. Built within the Atlas Application Suite, the Forwarder Awareness tool enables teams to have constant visibility into their forwarders’ health and statuses.

Now teams can quickly determine if a forwarder is missing and take action—immediately.

In Splunk’s own Forwarder Management interface, users are alerted when forwarders go missing, but with limited information or guidance on the issue. When this happens, Splunk teams have to dig through alerts in their Splunk monitoring console to try and identify an issue with their forwarders.

Figure 1 – Forwarder Awareness Interface

The Atlas Forwarder Awareness tool sends you a list of the forwarders that are missing and which data sources are impacted. Instead of requiring users to log into their Splunk monitoring console, users can now access this critical information on their forwarders directly through their Search Head Cluster. This application offers real-time visibility and awareness into your forwarders’ health and status.

The Value in Visibility

Without requiring admin access, you (and your team) have full visibility into the status and health of your forwarders in one view. From this view, users can view visual graphs representing forwarder statuses by operating system, forwarder types in use, and forwarders’ SSL status. On the application, users also have visibility into top-performing forwarders (by total WB) and missing forwarders.

Figure 2 – Forwarder Awareness Dashboards

The example below shows a particularly powerful element of Forwarder Awareness. On the screen, you can see that a forwarder is offline (no contact in 15 minutes or longer), the last time Splunk saw that forwarder, and which sourcetypes may be affected. That view does not require admin access.

Figure 3 – Example of “Missing Forwarder” feature

These insights are critical for you and your team, and they are available immediately through Atlas’s Forwarder Awareness application. In any other situation, teams would spend hours of time and resources to piece together the same information.

To put it simply, a missing forwarder means missing data, failed compliance standards, inactive SSL certificates, and many more detrimental losses for Splunk teams. Ultimately, a missing forwarder can be extremely costly to an organization in both data loss and spent resources.

Conclusion

Every Splunk instance is at risk of a failed or missing forwarder. With your forwarders being at the front line of your data pipeline, it’s essential to have eyes on them at all times. With Atlas’s Forwarder Awareness Application, you have the visibility you need.

This is just a glimpse into the power of the Atlas platform. Paired with more applications, reference designs, and support services, Atlas enables all Splunk teams to be successful. If you’d like to learn more about the Atlas Platform, let us know in the form below.

Schedule a Meeting

Tips for Tech Recruiters: Learning the Lingo

Throughout my recruiting career, I’ve primarily been sourcing, screening, and networking with candidates in the technical field. I’ve worked with .NET developers, web developers, systems administrators, desktop and helpdesk support, network administrators, and more. All of these individuals have a unique language, learned over years in their respective careers. Through different conversations, interviews, and networking events, you start to pick up on the lingo and what different technical terms mean, which helps me determine which candidates are knowledgeable and which are just giving surface-level answers.

Given all that has gone on in the world currently, I’ve had to learn a number of different “languages” in different recruiting positions within the last 12-18 months.  I’ve learned and used many tools that helped progress my expertise in the recruiting space. I’d love to share these tips with you.

Teaching an Old Dog New Tricks

The best way to learn different technical languages, specific positions and different industry lingo is to talk to as many people as possible.  I have to be frank here for a second – all of the new technical languages were a little overwhelming when I started here at KGI. Everyone has to ramp into these new environments, so here are a couple of practices that can help:

Interviews are Learning Opportunities

Completing a lot of interviews with candidates who may or may not be qualified is good. But you should also use those conversations to learn more about their specific experience, details of job duties, and language that may differ from other environments.

Learn From Your Technical Talent

A great resource is utilizing our own colleagues and technical consultants as learning tools.  These individuals make themselves readily available to talk to and give more detail about their past experiences, current skill set, and job details they execute for Kinney Group.  This has been a HUGE help for me in terms of learning about Splunk, data analytics, and the different certifications and security clearances that are required.

 

The Secret to Security Clearances

Speaking of security clearances… this has been another great learning curve as I’ve moved into a governmental talent acquisition role. From the outside looking in, you never think about how high security clearance levels go.

Lean on Your Experts

A great asset at Kinney Group is our very own Facility Security Officer, Casie Nolan. Having a person whose specific job is to know each security clearance, what it takes to obtain it, and how long the process will take is incredibly helpful. She talks to candidates and knows what qualifications a potential candidate should have, and when a candidate gives her their current security clearance, she’s able to verify its accuracy quickly.

This is a tricky area for most recruiters, as recruiting cleared resources can be complicated. If you don’t have a direct work resource to lean on, reach out to your network for guidance.

 

To Sum It Up: Be a Sponge

When working with different companies and industries, be a sponge.  Soak up every bit of information and talk to as many expert level people in that field as possible.  Lastly, the best way to learn is to get uncomfortable and just jump in the pool with both feet, survive initially, and then excel!

Being in the recruiting world within tech comes with incredible opportunities for growth and learning. I hope you can take some practical tips back to your own organization.

A Lesson on Splunk Field Extractions and Rex and Erex Commands

With Splunk, getting data in is hard enough. After uploading a CSV, monitoring a log file, or forwarding data for indexing, more often than not the data does not look as expected. These large blocks of unseparated data are hard to read and nearly impossible to search effectively. If the data is not separated into events, you may be wondering how to correctly parse it and how to perform advanced search commands using fields.

This is where field extraction comes in handy.

Field Extraction via the GUI

Field extractions in Splunk are the function and result of extracting fields from your event data for both default and custom fields. Basically, organize your data with field extractions in order to see the results you’re looking for.

Figure 1 – Extracting searchable fields via Splunk Web

 

Pictured above is one of Splunk’s solutions for extracting searchable fields from your data via Splunk Web. Within the Search and Reporting app, users will see this button available upon search. After clicking, a sample of the file is presented so you can select an event and define fields from its data. The image below demonstrates this feature of Splunk’s Field Extractor in the GUI after selecting an event from the sample data.

 

Figure 2 – Sample file in Splunk’s Field Extractor in the GUI

From here, you have two options: use a regular expression to separate patterns in your event data into fields, or separate fields by a delimiter. Delimiters are characters used to separate values, such as commas, pipes, tabs, and colons.

Figure 3 – Regular expressions vs delimiter in Splunk

 

Figure 4 – Delimiter in Splunk’s Field Extractor

If you select a delimiter to separate your fields, Splunk will automatically create a tabular view so you can see what all events look like when properly parsed, compared to the _raw data pictured above.

Functionality is provided to rename all fields parsed by the selected delimiter. After saving, you will be able to search on these fields, perform mathematical operations, and use advanced SPL commands.
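For reference, the same kind of delimiter-based extraction can also be expressed directly in configuration using transforms.conf’s DELIMS and FIELDS settings together with a props.conf REPORT entry. The sketch below is hypothetical; the stanza, sourcetype, and field names are placeholders for your own:

# transforms.conf
[my_delimited_fields]
DELIMS = ","
FIELDS = timestamp, user, action, status

# props.conf
[my_custom_sourcetype]
REPORT-my_delimited_fields = my_delimited_fields

Saving the extraction in the GUI accomplishes the same thing without hand-editing these files.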

What’s Next? Rex and Erex Commands 

After extracting fields, you may find that some fields contain specific data you would like to manipulate, use for calculations, or display by itself. You can use the Rex and Erex commands to help you out.

Rex

The Rex command is perfect for these situations. With a working knowledge of regex, you can utilize the Rex command to create a new field out of any existing field which you have previously defined. This new field will appear in the field sidebar on the Search and Reporting app to be utilized like any other extracted field.

Syntax

| rex [field=<field>] (<regex-expression>)

For those who would like to use the Rex command, and would like resources to learn, please utilize websites such as https://regex101.com/ to further your development.

In order to define what your new field name will be called in Splunk, use the following syntax:

| rex [field=<field>] "(?<field_name><regex>)"
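For instance, a sketch like this (the field values and pattern are hypothetical) pulls a username out of raw events that contain text like user=jsmith and exposes it as a new field called username:

| rex field=_raw "user=(?<username>\w+)"
| table _time, username

The new username field then shows up in the fields sidebar like any other extracted field.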

Erex

Many Splunk users have found the benefit of implementing Regex for field extraction, masking values, and the ability to narrow results. Rather than learning the “ins and outs” of Regex, Splunk provides the erex command, which allows users to generate regular expressions. Unlike Splunk’s rex and regex commands, erex does not require knowledge of Regex, and instead allows a user to define examples and counterexamples of the data to be matched.

Syntax

| erex <field_name> examples="<example1>, <example2>" counterexamples="<example1>, <example2>"

Here’s an example of the syntax in action:

| erex Port_Used examples="Port 8000, Port 3182"

That’s a Wrap

There is a ton of incredible work that can be done with your data in Splunk. When it comes to extracting and manipulating fields in your data, I hope you found this information useful. We have plenty of Splunk tips to share with you. Fill out the form below if you’d like to talk with us about how to make your Splunk environment the best it can be.

Meet Atlas’s Reference Designs for Splunk

There is immense power built within the Splunk data center model. However, with great power comes even greater complexity. The traditional model is difficult to scale for performance, and Splunk teams already working against license costs are constantly faced with performance issues.

We know your teams have more to do with their time, and your organization has better ways to spend their budget.

With Atlas’s Reference Designs, we’ve found a better path forward.

Background

The Atlas Reference Designs provide a blueprint for modern data center architecture. Each provides blazing-fast performance increases and reduces indexer requirements, leading to a lower cost of ownership for your environment. Our Reference Designs are built to remove any friction caused by building and deploying your Splunk environment, serving all Splunk customers. Whether you’re running Splunk on-prem, in a hybrid fashion, or in the cloud, Atlas’s Reference Designs provide you with a simplified and proven solution to scaling your environment. 

There is a common problem that every organization faces with Splunk: ingesting data effectively.

Figure 1 – Atlas Reference Designs

The Atlas Reference Designs present you with a solution that pairs the Splunk Validated Architectures with the needed hardware or environment in order to deliver proven performance results.

Our first set of Reference Designs were built for on-prem Splunk instances in partnership with Pure Storage, a leader in the enterprise platform and primary storage world. Check out Pure Storage’s many nods in the Gartner Magic Quadrant for their innovative solutions.

With the matched power of Pure Storage with Kinney Group’s Splunk solutions, we’ve been able to decrease the indexers needed through Pure’s speed and increased IO on the back end—and we have the tested results to prove it.

Proven Results

Splunk recommends that for every 100 GB you ingest per day, you create at least one indexer. Let’s look at a real-world example delivered by the Atlas team: a company ingesting 2 TB (2,000 GB) per day in Splunk. Based on Splunk’s recommendation, this company relied on 20 indexers.

By applying the Atlas Reference Design, we were able to reduce that physical indexer count to just five indexers. This significantly reduces the cost of owning Splunk while increasing performance by ten times.

Figure 2 – Atlas Reference Design results

For those who invest in Splunk infrastructure on-prem, this means major savings: think of the cost that 20 indexers entail versus five. Then factor in the impact of reducing the exhausting manual hours spent on Splunk to just a few minutes of human interaction.

Conclusion

To sum it up, we took an incredibly complex design and made it shockingly easy. Through automation and the great minds behind Atlas, your Splunk deployments can now launch with the push of a button, less time and fewer resources, and guaranteed results. The Atlas Reference Designs are built for all customers, with many releases to come; get in touch using the form below to inquire about our other Reference Designs.

 

Meet Atlas: Your Guide for Navigating Splunk

Last week, we shared some big news! Kinney Group launched Atlas — an exciting new platform for navigating and achieving success with Splunk. We’re thrilled to reveal this revolutionary platform to the world and share more about its vision and technical components.

The mission of Atlas is to help customers derive meaningful value from Splunk for their organization and their colleagues.

Background

What’s stopped organizations so far from reaching their goals in Splunk? The answer is all too real for our friends in the industry: IT teams are under siege.

Splunk is and should always be a key enabler for organizations to reach business, security, and operational objectives. Splunk gives IT teams meaningful insights into security posture, IT Ops, and other analytics. Here’s the issue: these teams are buried in work. IT teams are already charged with an ever-growing list of IT demands. And to top it off, they’re now tasked with managing, implementing, and learning Splunk.

Atlas enables these teams to derive value out of Splunk, fast.

A Look Inside Atlas

Atlas is a subscription service that provides a clear path forward on your Splunk journey with revolutionary datacenter architectures, personal guidance and support, and a collection of applications and utilities that provide powerful insights, instantly. Let’s take a closer look into each component of Atlas.

1. Atlas Reference Designs

Atlas Reference Designs provide a blueprint for modern data architecture. They provide rapid performance increases and reduce indexer requirements, leading to a lower total cost of ownership for your environment. Atlas Reference Designs are built on Splunk validated designs, paired with top hardware and cloud environments, and powered by Kinney Group’s unique Splunk tuning and automation solutions.

Atlas Reference Designs are proven to reduce server counts, cut storage costs, and eliminate hidden OpEx expenses, all while enabling 10x improvements in your Splunk performance.

2. Atlas Applications and Utilities

Atlas’s Applications and Utilities include multiple tools built to monitor and easily manage your data in Splunk. With an interface that’s clean and easy to navigate, working in Splunk has never been easier.

“The current components in the Atlas platform were selected because they address horizontal pain points across many different Splunk instances and deployments that we’ve seen with Kinney Group customers,” emphasizes Georges Brantley, Scrum Master at Kinney Group. “We want to help Splunk admins and users further curate their Splunk environment to produce what they need in order to accomplish their mission.”

3. Expertise on Demand

If those tools aren’t enough, how about we arm you with a team of expert Splunkers to help with everything else?

Expertise on Demand (EOD) is your team of dedicated Splunk professionals to help you with on-demand support and Splunk best practice training. Our EOD team can quickly assist with any Splunk question or fix in increments as small as 15 minutes.

Atlas gives you essential visibility into your environment, and Expertise on Demand makes sure you get the support you need.

Conclusion

Splunk is a journey, not a destination. And every component we’ve built into Atlas is specifically and thoughtfully designed to reduce the complexities of Splunk while removing the roadblocks on your journey to success.

There’s still plenty to come from the Atlas platform, and we can’t wait to share more with you. If you’re interested in learning more about the platform, fill out the form below.

A Beginner’s Guide to Regular Expressions in Splunk

No one likes mismatched data. Especially data that’s hard to filter and pair up with patterned data. A Regular Expression (regex) in Splunk is a way to search through text to find pattern matches in your data. Regex is a great filtering tool that allows you to conduct advanced pattern matching. In Splunk, regex also allows you to conduct field extractions on the fly.

Let’s get started on some of the basics of regex!

How to Use Regex

The erex command

When using regular expressions in Splunk, the erex command lets you extract data from a field when you do not know the regular expression to use.

Syntax for the command:

| erex <thefieldname> examples="exampletext1,exampletext2"

Let’s take a look at an example.

In this screenshot, we are in my index of CVEs. I want to have Splunk learn a new regex for extracting all of the CVE names that populate in this index, like the example CVE number that I have highlighted here:

Figure 1 – a CVE index with an example CVE number highlighted

Next, by using the erex command, you can see in the job inspector that Splunk has ‘successfully learned regex’ for extracting the CVE numbers. I have sorted them into a table, to show that other CVE_Number fields were extracted:

Figure 2 – the job inspector window shows that Splunk has extracted CVE_Number fields
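Assembled as SPL, that search might look like the sketch below; the index name and example CVE values are hypothetical stand-ins for the data described above:

index=cve_data
| erex CVE_Number examples="CVE-2020-1234, CVE-2019-5678"
| table _time, CVE_Number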

The rex command

When using regular expressions in Splunk, the rex command lets you either extract fields using named capture groups or replace and substitute characters in a field.

Syntax for the command:

| rex field=field_to_rex_from "FrontAnchor(?<new_field_name>{characters}+)BackAnchor"

Let’s take a look at an example.

This SPL allows you to extract from the field of useragent and create a new field called WebVersion:

Figure 3 – this SPL uses rex to extract from “useragent” and create “WebVersion”

As you can see, a new field of WebVersion is extracted:

Figure 4 – the new field in WebVersion
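As a sketch of what that SPL can look like (the exact pattern depends on your useragent strings, so treat this regex as a hypothetical example):

| rex field=useragent "Mozilla/(?<WebVersion>\d+\.\d+)"
| table useragent, WebVersion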

 

The Basics of Regex

The Main Rules

^ = match beginning of the line

$ = match end of the line

Regex Flags

/g = global matches (match all), don’t return after first match

/m = multi-line

/gm = global and multi-line are set

/i = case insensitive

Setting Characters

\w = word character

\W = not a word character

\s = white space

\S = not white space

\d = a digit

\D = not a digit

\. = a literal period (escaped)

Setting Options

* = zero or more

+ = 1 or more

? = optional, zero or 1

| = acts as an “or” expression

\ = escape special characters

( ) = allows for character groupings, wraps the regex sets

Some Examples

\d{4} = match 4 digits in a row, each a digit in [0-9]

\d{4,5} = match either 4 or 5 digits in a row, each in [0-9]

[a-z] = match between a-z

[A-Z] = match between A-Z

[0-9] = match between 0-9

(t|T) = match a lowercase “t” or uppercase “T”

(t|T)he = look for the word “the” or “The”

Regex Examples

If you’re looking for a phone number, try out this regex setup:

\d{10} = match 10 digits in a row

OR

\d{3}-?\d{3}-?\d{4} = match a number that may have been written with dashes, like 123-456-7890

OR

\d{3}[.-]?\d{3}[.-]?\d{4} = match a phone number that may have dashes or periods as separators

OR

(\d{3})[.-]?(\d{3})[.-]?(\d{4}) = using parentheses allows for character grouping. When you group, you can assign names to the groups; for example, you can label the first group as “area code”.
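In Splunk, those named groups become fields automatically. Here is a hypothetical rex sketch using the grouped phone pattern (the group names are examples):

| rex field=_raw "(?<area_code>\d{3})[.-]?(?<prefix>\d{3})[.-]?(?<line>\d{4})"
| table area_code, prefix, line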

 

If you’re looking for an IP address, try out this regex setup:

\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} = searches for digits that are 1-3 in length, separated by periods.
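Applied in Splunk, a sketch might look like this (the source field and new field name are hypothetical):

| rex field=_raw "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| table _time, src_ip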

Use regex101.com to practice your RegEx:

Figure 5 – a practice search entered into regex101.com

We’re Your Regex(pert)

Using regex can be a powerful tool for extracting specific strings. It is a skill set that’s quick to pick up and master, and learning it can take your Splunk skills to the next level. There are plenty of self-tutorials, classes, books, and videos available via open sources to help you learn to use regular expressions.

If you’d like more information about how to leverage regular expressions in your Splunk environment, reach out to our team of experts by filling out the form below. We’re here to help!

Kinney Group, Inc. Launches Atlas, a Groundbreaking Platform That Empowers Rapid Success With Splunk

Atlas is a revolutionary new platform from Kinney Group, Inc. (KGI) that allows customers to bypass the complexities of Splunk through a suite of powerful applications and utilities that simplify day-to-day operations within Splunk. The Atlas Platform includes KGI’s unparalleled Expertise on Demand service, delivered by Kinney Group’s team of award-winning Splunk professionals. Available beginning today, the new platform is guided by the promise, “You’re never lost in Splunk with Atlas.”

“We’ve worked with hundreds of wickedly smart and capable customers over the years who depend on Splunk for business operations and security,” said Jim Kinney, CEO of Kinney Group. “What we’ve found is those tasked with managing Splunk also have a heavy responsibility in their day-to-day jobs. So, for customers, Splunk needs to be usable and add value quickly. The Atlas Platform removes friction and guides the way to success with Splunk.”

Splunk is the #1 big data analytics platform, serving thousands of customers worldwide. With the incredible results Splunk can produce, however, it’s also incredibly complex. The Atlas platform brings new, innovative solutions to the Splunk community enabling customers to achieve scalable and consistent success with Splunk.

For many users, the benefits of the Atlas platform could cut costs associated with operating Splunk in half.

“Atlas serves everyone who lives in Splunk, from users to administrators to architects,” explains Roger Cheeks, Director of Analytics Technology at Kinney Group. “Anyone working in the platform who needs consistent and high-performing results will benefit from Atlas. You no longer have the questions and burdens behind building, monitoring, and managing your data within Splunk — now you have Atlas.”

Atlas Reference Designs

Atlas Reference Designs provide a clear roadmap for data center architecture, enabling Splunk customers to model and build their on-premise Splunk environments at scale. For customers running Splunk “on-prem,” Atlas Reference Designs significantly reduce compute, storage, and network infrastructure footprints while delivering 10x improvements in performance and reliability when compared to legacy designs.

Atlas Applications and Utilities

The Atlas platform includes powerful applications, utilities, and integrations for Splunk that simplify daily tasks within Splunk. Core capabilities within Atlas provide clear visibility into data sources, a library of powerful searches that eliminates the need to understand Splunk’s Search Processing Language (SPL) for non-admin users, Splunk Forwarder awareness, and a scheduling assistant that allows users to optimize scheduled searches and jobs.

Expertise on Demand

Expertise on Demand (EOD) provides anytime access to a certified team of Splunk professionals on-demand, and in increments as small as 15 minutes. It’s like having Splunk experts on call to support your needs, large and small, with the Splunk platform. EOD combined with Atlas enables customers to quickly realize success in their journey with Splunk.

Also introduced today — Kinney Group Managed Services

Kinney Group also introduced Managed Services for Splunk (MSP) at a company launch event today. With deep technical expertise and proven experience implementing and achieving success with Splunk for hundreds of commercial and public sector organizations worldwide, Kinney Group’s Splunk-certified professionals will manage your Splunk needs 24/7 including monitoring infrastructure (forwarders, indexers, search heads, etc.), system upgrades, monitoring of log collection, custom dashboards and reports, searches, and alerts. This offering allows companies to reduce overhead related to Splunk, without losing the value and powerful insights the platform provides.

The Kinney Group MSP offering disrupts traditional managed services offerings and sets a new standard for outsourced management of the Splunk platform. KGI’s MSP offering is for both on-prem Splunk Enterprise and Splunk Cloud customers and combines world-class Splunk architecture support with KGI’s EOD and the power of the Atlas platform. The end result for Splunk customers is a managed services offering that is purpose-built to enable organizations to maximize their investments in Splunk, while dramatically reducing costs associated with operating the Splunk platform.

About Kinney Group

Kinney Group, Inc. was established in 2006 and has grown into an organization with the singular purpose of delivering best-in-class professional and subscription services. Partnering with some of the largest and most prestigious organizations in both the Public and Commercial sectors, Kinney Group boasts a growing list of over 500 successful Splunk engagements. Kinney Group consultants are not only focused on delivering superior technical solutions, but also driving the human, mission, and financial outcomes that are vital to your success.