VMware Orchestrator Tips: Stopping a Second Execution of a Workflow

Kinney Group’s own Troy Wiegand hosts this blog series, which will outline some simple tips for success with VMware vRealize Orchestrator. Join Troy as he applies his automation engineering expertise to shed some light on VMware! 

When you have a really important workflow to run in VMware vRealize Orchestrator, you don’t want a second execution of it running at the same time. Currently, vRO has no built-in option to prevent that second, simultaneous execution. Never fear, though—Kinney Group has you covered.

The One and Only Workflow

Some workflows are just more impactful than others. They have the power to do serious damage to your environment—they could tear down your infrastructure, or they could build it up. Depending on what you’re orchestrating, running a second execution of such a crucial workflow at the same time could be problematic. Luckily, a simple workaround prevents any potential damage. When implemented, the workaround looks like this:

Figure 1 – The workaround functions to stop a second execution while the first one is running

Configuring the Field

The ideal result of this workaround is that when you try to run a workflow a second time while it’s already running, vRO will block the submission and display an error message. Let’s investigate how that happens by going to the Input Form. The text field that contains the error message holds only placeholder text identifying the error, and placeholder text doesn’t count as a value. Under “Constraints,” note that “This field cannot be empty!” is set to “Yes” in the dropdown menu. This ensures that a user cannot submit the form unless the field has something in it. To keep users from simply typing something into the field to satisfy that constraint, set the “Read-only” menu to “Yes” under the Appearance tab. The last piece is visibility: the constraint only blocks submission while the field itself is shown on the form.

Figure 2 – The configuration of the workaround, as seen through “Input Form”

That visibility is driven by an external source: the External source action is set to “/isWorkflowRunning,” and it’s fed from the variable “this_workflow.” You can add this as a new variable and assign it the same value as the name of the workflow you wish to run without concurrent executions.

Figure 3 – The form to edit a new variable

By examining the script, we can see that we’ve entered our workflow ID and asked the server for that workflow’s executions. (Alternatively, you could skip straight to tokens = executions for a simpler function.) Below that, the flag is initialized to “false.” We then check the state of every token, and if it’s “running,” “waiting,” or “waiting-signal,” we set the flag to “true.” Those are the three states that indicate a workflow is in motion or in progress, so it’s essential to identify them as the triggers for our error message.

Figure 4 – The workaround script
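
For reference, here is a minimal sketch of what the script in Figure 4 might look like. It’s an illustrative reconstruction based on the description above, not the exact script: the input name (this_workflow, holding the workflow ID) and the call to Server.getWorkflowWithId() are assumptions.

    // Illustrative sketch of the "isWorkflowRunning" action.
    // Input:  this_workflow (string) - the ID of the workflow to guard
    // Return: boolean - true if an execution is already in flight
    var wf = Server.getWorkflowWithId(this_workflow); // look up the workflow by ID
    var tokens = wf.executions;                       // every execution (token) of it
    var flag = false;
    for (var i = 0; i < tokens.length; i++) {
        var state = tokens[i].state;
        // "running", "waiting", and "waiting-signal" are the three
        // in-progress states that should raise the error field
        if (state === "running" || state === "waiting" || state === "waiting-signal") {
            flag = true;
        }
    }
    return flag;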

The Result: Your Workflow Runs Alone

This combination of settings makes the required, read-only error field appear only when another execution of the workflow is already running. As a result, any interrupting second execution is stopped before it can start.

We hope this tutorial has been helpful in streamlining your workflows and making the most of VMware Orchestrator! We’ve got more tips for vRO here. Fill out the form below to get in touch with automation experts like Troy:

2020: The Year in Kinney Group

Although 2020 was fraught with hardship, uncertainty, and unrest, we can all take pride in moments of success despite adversity. Like so many other businesses, Kinney Group transitioned to a remote workforce in mid-March. This change brought some challenges for sure, but it also revealed the resilience and persistence of our Choppers. Now that last year has come to a close, Kinney Group is busy with our 2021 Kickoff, starting the year off right with setting goals and celebrating our successes. We’re taking a moment to look back at some of the 2020 wins here at Kinney Group:

Company

In 2020, our content team published 90 blog posts.

Engineers undertook 120 new engagements/projects in 2020.

Over the course of the year, we held 15 webinars with 370 attendees.

Our audience engagement on LinkedIn grew by 563% (follow us!).

Our incredible team of engineers spent 51,728.75 hours on engagements, delivering exceptional solutions, services, and results for our customers.

We launched 1 incredible new platform for Splunk: Atlas.

Colleagues

28% of our colleagues joined after March 12, 2020, meaning that our Work From Anywhere (WFA) policy is the norm for more than a quarter of the company.

Our internal IT department resolved 449 tickets for colleagues over the course of the year.

We welcomed 33 new colleagues and offered 32 promotions.

The average tenure for KGI colleagues is 2.75 years, which exceeds the average tenure of tech companies like Apple and Google by nearly a year.

Altogether, colleagues completed 385 assignments on Lessonly.

Over 200 devices were “delivered” to colleagues working from anywhere.

Culture

Colleagues recognized each other’s work with 377 culture coin nominations.

100+ songs were featured over the course of the year on our Kinney Tunes colleague playlist.

We ordered 150 #hoodies for our Atlas launch in November.

Our 2021 Kickoff boasted 10 sessions hosted by colleagues, for colleagues, including bread-baking, “For Bees’ Sake,” and a virtual fitness challenge.

But Most Importantly…

We are One Team, and we can’t wait to see what 2021 has in store!

As the year progresses, make sure to follow us on Facebook, LinkedIn, and Twitter to stay tuned. We’ll be updating the blog regularly with Splunk tips and tutorials, Atlas announcements, insights into Kinney Group culture, and more! Special thanks to Joi Redmon, John Stansell, Christina Watts, Cory Brown, Alex Pallotta, Wes Comer, Brock Trusty, and Zach Vasseur for their help in gathering data for this report.

Meet Atlas’s Data Management

Splunk is the “Data-to-Everything” platform, capturing massive volumes of data every day. Users will know, though, that without visibility, it can be difficult to extract the maximum value from Splunk. Too often, insufficient monitoring leads to serious issues in a Splunk instance: platform underutilization, license overage, and even missing data. Each of these problems translates into a serious financial cost, not to mention the hours of human intervention spent troubleshooting a Splunk environment.

Atlas makes data management easy.

Figure 1 – the Data Management icon on the Atlas Core homepage

Atlas, Kinney Group’s revolutionary new platform for Splunk, includes the Data Management application, a tool that displays all data requests and definitions for accessible monitoring and management. Gone are the days of mysterious license overages and redundant data requests—Atlas ensures unparalleled visibility to guarantee efficient use of data resources.

Data Management

The Data Management tab, built for Splunk admins, is a centralized hub for all data requests and definitions. The data requests section shows current and past requests, including status and sourcetype, so users can easily view important information in one place.

Figure 2 – the Data Management tab dashboard

Expandable metadata reveals the “why” of each entry in the form of details and customizable notes. From this section, Atlas users can easily edit the request and create a definition from it. New requests can also be created directly from this section, and any request can be edited or deleted at any point. This feature empowers any Splunk admin to get the information they need from their data, fast.

Figure 3 – the New Data Request pop-up window
Figure 4 – the Data Definitions section within the Data Management tab

The data definitions section displays active and inactive definitions, providing a comprehensive view of details for all entries. This section also includes expandable metadata so users can see descriptions and notes. This degree of transparency is key to taking full advantage of your Splunk license by ensuring that your instance is ingesting the right data. Like data requests, definitions can be created directly within the section and edited or deleted at any time.

Figure 5 – the New Data Definition pop-up window

Data Inventory

The Data Inventory tab, built for admins, is a dashboard of existing data organized by sourcetype and index. With access to sourcetype details, including the capability to edit definitions, you’ll never lose sight of what your instance is ingesting and why. To help users monitor the volume of their data usage, this dashboard is purpose-built to include a measure of license usage for each entry.

This is a fantastic place to start adding definitions to your high-volume data sets, which will then appear on your Data Management dashboard.

Figure 6 – the Data Inventory tab dashboard

Request Data

Users can request new data and monitor current requests from the Request Data tab, and administrators can address those requests directly in Atlas. The dashboard offers the same detailed visibility as Data Inventory and Data Management, along with the ability to add, edit, and delete requests directly within the tab. Any Splunk user can visit this dashboard to create and track their own requests, while admins can view all requests, approve or reject them, and turn them into definitions.

Figure 7 – the Request Data tab dashboard

Conclusion

The “Data-to-Everything” platform promises incredible results—but you need a high degree of visibility within a Splunk environment to make that happen. Atlas’s Data Management application provides the transparency you need to ensure your data requests are being collected and addressed efficiently, eliminating costly data sprawl. Teams can now collaborate seamlessly with the knowledge that their data requests won’t be hidden or lost, bringing your organization one step closer to getting every insight you can out of your data.

There’s more to come from Atlas! Fill out the form below to stay in touch with Kinney Group.

Contact Us!

Meet Atlas’s Scheduling Assistant

Searches are at the heart of Splunk. They power the insights that turn data into business value—and Atlas has plenty of them collected in the Search Library. Simple dashboards and ad-hoc searches, though, are only the first step: the real magic happens with the Splunk scheduler. However, as Splunkers will know, it’s all too easy to bog down an environment with poorly planned search schedules, redundancies, and heavy jobs. The result is skipped jobs, inaccurate results, and a slow, frustrating user experience.

Atlas has a first-of-its-kind solution.

The Scheduling Assistant application provides a real-time health check on the use of Splunk’s scheduler and scheduled searches. In addition, it includes a built-in mechanism to fix any issues it finds. Atlas’s powerful Scheduling Assistant ensures that your scheduled searches in Splunk are running efficiently by providing the visibility you need to make the most of your data resources.

Scheduler Activity

The Scheduler Activity tab in Atlas’s Scheduling Assistant is your starting point for assessing how efficiently your environment is executing scheduled Splunk searches. Its Scheduler Health Snapshot section offers a health score based largely on historical findings like skipped ratio and search latency, along with a glimpse forward at future schedule concurrency.

Figure 1 – Scheduler Activity tab in Splunk

Below the Health Snapshot, the Concurrency Investigation section lets users view and sort their scheduled searches with a helpful translation of the scheduled run times. These dashboards display Atlas’s computed concurrency limits for a Splunk environment, which dictate the maximum number of searches that can be run at any given time.

These real-time insights inform how users can schedule searches for the fastest, most efficient results.
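
For context, Splunk itself derives its search concurrency ceiling from a handful of limits.conf settings. Here’s a worked example using stock Splunk defaults (Atlas’s exact computation may differ):

    # total search concurrency = max_searches_per_cpu * number_of_cpus + base_max_searches
    # scheduler's share        = max_searches_perc percent of that total (50% by default)
    #
    # Example: a 16-core search head with stock defaults (1 per CPU, base of 6)
    # allows 1 * 16 + 6 = 22 concurrent searches, of which the scheduler may
    # run up to 22 * 0.50 = 11 at any given time.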

Figure 2 – Concurrency Investigation tab in Scheduling Assistant
Figure 3 – Scheduling Assistant preview for Splunk

Next up is Historical Performance, which interprets how scheduled searches have been running. This dashboard and graph display average CPU and physical memory usage, along with search metrics such as run time and latency.

Figure 4 – Historical performance of scheduled searches in Splunk

After Historical Performance, the Scheduled Search Inventory section provides details on all manually scheduled searches. It also allows users to quickly drill down to the Scheduling Assistant tool for any given search.

Figure 5 – Search Inventory of all searches in Splunk

Scheduling Assistant

The Scheduling Assistant dashboard allows users to select a single scheduled search to investigate and modify.

Figure 6 – Snapshot of Scheduling Assistant dashboard
Figure 7 – Key metrics on search activity in Splunk

This section provides key metrics on the search’s activity to highlight any issues. Atlas users can then experiment with the selected search’s schedule: by editing the Cron setting and submitting a preview, users can compare the Concurrent Scheduling and Limit Breach Ratio metrics to see whether the tested Cron setting improves overall outcomes.

If the modified schedule is satisfactory, the user can then save changes and update the saved search—all within the Atlas platform.

Cron Helper

Splunk uses Cron expressions to define schedules, and Atlas’s Cron Helper tab provides a quick and easy way to test them. Not only does this tool enable fast, direct translations, it also acts as a learning tool for those new to Cron.

The syntax key below the Cron bar displays the definitions of each character, allowing users to try their hand at creating and interpreting their own Cron expressions.
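
Splunk’s cron schedules follow the standard five-field syntax (minute, hour, day of month, month, day of week). A few illustrative expressions of our own, not taken from the Cron Helper:

    */15 * * * *    # every 15 minutes
    0 6 * * 1       # at 6:00 AM every Monday
    0 2 1 * *       # at 2:00 AM on the first day of each month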

Figure 8 – Preview of Atlas Cron Helper

Scheduler Information

The Scheduler Information dashboard is a knowledge base for the complex definitions and functions that power Splunk’s scheduled searches. The environment’s limits.conf is presented for review, and current statistics on concurrency limits are provided for clarity.

These relatively static values are vital to understanding the scheduler and taking full advantage of its potential.

Figure 9 – Preview of Scheduler Information dashboard

In Conclusion

Powered by these four revolutionary features, Atlas’s Scheduling Assistant provides unprecedented insight into Splunk searches. The power to survey, schedule, and change searches is in the user’s hands, saving your team time and resources.

There’s more to come from Atlas! Stay informed by filling out the form below for more information from KGI.

Contact Us!

Meet Atlas’s Search Library

One key pain point for Splunk admins and users is the inability to track, store, and view searches in one place. On top of keeping tabs on a dizzying number of searches, users must write queries in Splunk Processing Language (SPL), which is complex and difficult to learn. Writing efficient searches in SPL takes time and resources that many teams can’t afford to spare. Coordinating searches between users and admins eats up further time and can produce confusion for any team—and that’s not to mention the major obstacles that slow or failed searches can introduce.

Optimizing and keeping track of searches is just one of the issues facing IT teams today—thankfully, we’ve got a solution. Atlas, a platform developed by Kinney Group to help users navigate Splunk, includes a comprehensive and customizable Search Library to aid users in creating and using searches.  

Figure 1 – The Search Library icon from the Atlas Core homepage

The Atlas Search Library

Collected Searches

The Search Library contains a collection of helpful, accessible searches pre-built by KGI engineers. Users can also save their own custom searches, which can be edited or deleted at any time. Searches are listed by name and use case, making it easy to identify the purpose of each one. All searches in the library include expandable metadata so that users can see additional information, including the SPL query, within the table. This insight into the SPL enables faster, easier learning for those looking to write their own queries. Users can also filter searches to quickly and easily find all applicable listings, giving users and admins an unprecedented degree of visibility.

Figure 2 – Atlas’s Search Library tab 

Using the Searches

Performing one of these searches couldn’t be easier. Clicking “Launch Search” opens a separate tab where you can view details of the search’s results and tweak the SPL query—all without changing the originally saved search. This capability lets those without knowledge of SPL learn from and use powerful, intricate searches.

Figure 3 – The launched search, open in a separate tab

Search Activity

The Search Library component also includes a Search Activity tab, which can be used to monitor which searches are run when, how frequently, and by whom. Having this visibility on one page allows users to see redundancies and overall usage of a search. The Search Activity tab includes the same level of detail as the Search Library, meaning users can dive into the specifics of each search. The tab is also filterable so users can identify exactly which searches they’re shown. You can also add any search in the Search Activity tab to the Search Library, making it easier than ever to keep track of what you need in Splunk.  

Figure 4 – The Search Activity tab of the Search Library

Conclusion

Any user is liable to hit a few roadblocks on their Splunk journey. With Atlas’s Search Library application, your team can be sure that searches won’t be one of them.  

The Search Library is only one of Atlas’s innovative features, and we’re looking forward to sharing so much more from the platform with you. If you’re eager to learn more about Atlas in the meantime, fill out the form below.

Schedule a Meeting

Lower Total Cost of Ownership with Splunk

The distributed data center model provides high availability by replicating data, but the added storage requirements effectively eliminate any benefits gained from Splunk data compression. Co-locating storage and compute means that when you need more storage, you have to add compute along with it. Further increasing the total cost of ownership (TCO), Splunk indexers in a distributed scale-out architecture are usually spread across more servers with less storage each, to minimize the data and time at risk from server maintenance and failures.

In short, the old-school, conservative approach of an ever-growing physical data center comes with incredible expense and tremendous financial risk.

Reduce Server Counts

Splunk recommendations for an Enterprise Security (ES) deployment with a 2 TB daily ingest call for up to 20 indexers. Based on validated testing with this Reference Design, we were able to achieve similar or better performance with only 5 indexers. This 4x improvement over Splunk recommendations represents incredible cost savings for organizations in year one alone.

Using SmartStore with FlashBlade®, Kinney Group’s Reference Design lowers the storage and compute requirements compared to Splunk’s classic, “bare metal” storage architecture. With this approach, indexers can be sized based on ingest rates and concurrent search volumes rather than storage capacity. Additionally, SmartStore requires storing only a single copy of warm data, and FlashBlade® further reduces storage requirements for the object tier by 30–40% through data compression.

Reduce Storage Costs by 62%

The impact is even greater when you consider the topic of storage efficiency using the Kinney Group PureStorage Reference Design on FlashBlade®. Storage efficiency — fitting more data into less raw flash — is a key contributor to reducing the amount of physical storage that you must purchase, deploy, power, and cool.

In ESG’s Economic Validation report, “Validating the Economics of Improved Storage Efficiency with Pure Storage,” the results show that Pure saved financial services organizations up to 59% in TCO, and healthcare and government organizations up to 62% through storage efficiencies alone.

Impact TCO through storage performance, availability, scalability… all while providing unparalleled results and reducing risk

Reducing CapEx AND OpEx: Considering Total Financial Impact

While a reduction in the capital costs associated with server and storage acquisition is compelling, those costs typically contribute only 20% (or less) to a 3-year server TCO, with management and other OpEx contributing the remaining 80%.

How does this reference design decrease operating expenses? The short answer is that a smaller footprint means a reduction across the board in the month-to-month and year-over-year expenses hidden in operating a data center — costs like power consumption and other utilities, preventative and predictive maintenance, connectivity, and staffing, to name a few.

With this reference design, you’ll impact bottom-line savings through storage performance, availability, and scalability — providing the potential to grow revenue streams and lower costs. You’ll significantly reduce overhead by cutting the number of servers required to drive your Splunk ES environment, while simultaneously providing unparalleled results and reducing security risk. And, most importantly, you’ll substantially reduce the operating expenses associated with a sprawling data center footprint.

Total Cost of Ownership (TCO) is a complex subject, to be sure. The bottom line is that implementing a powerful, scalable compute and storage solution such as FlashBlade® technology in conjunction with SmartStore in a Kinney Group-tuned Splunk environment provides both immediate and long-term financial benefits for your organization.

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it’s time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design in modernizing your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.

Simplified Scaling with Splunk

Accommodating scale is an ever-present struggle for IT teams and data center operators — providing sufficient infrastructure to keep pace with growing compute, storage, and network needs. Complexities introduced by Splunk’s specialized data ingest requirements only make the situation more challenging (not to mention costly).

The true benefit of scaling is realized not just when future growth is enabled, but when front-end requirements can be met with less hardware, expense, and footprint. Scaling only matters if you can grow from a reasonable starting point. The Kinney Group PureStorage Reference Design empowers users to achieve better performance at scale from their Splunk environment while requiring 75% less hardware.

Managing growth requires systems and strategies that cost-effectively and efficiently support scale. While traditional data center models rely on prohibitive infrastructure investments in order to scale (square footage, ballooning engineering and operational costs, and a never-ending list of hardware purchases), FlashBlade® allows incredible scaling in a smaller form factor. Cloud infrastructure scales well, but growing out an existing Splunk cloud architecture is costly, complex, and operationally challenging. The Kinney Group PureStorage Reference Design is a powerful and elegant solution that enables data centers and Splunk solutions to “grow in place.”

The Power of Virtualized Scaling

Splunk excels at extracting hidden value from ever-growing machine data. This workload, however, requires massive storage capacity, so infrastructure needs to be flexible and scalable while also providing a linear performance increase alongside that scaling. Simply put, more data means more storage and more computing power.

While the traditional approach of using physical servers for deployment is certainly an option, utilizing virtual machines on scalable hardware solutions allows you to save time, space, and budget while being able to scale and grow “on the fly” as required.

Typical Splunk deployments utilize a handful of components at their core — Forwarders, Indexers, and Search Heads. Forwarders collect and forward data (lightweight, not very resource intensive). Indexers store and retrieve data from storage, making them CPU and disk I/O dependent. Search Heads then search for information across the various indexers, and are usually CPU and memory intensive.

By properly utilizing virtual machines, the Kinney Group PureStorage Reference Design allows users to scale resources to match the increasing demands of these components.

Physical Scaling that Doesn’t Grind Operations to a Halt

Modern data centers are looking less and less like giant warehouses of server racks and becoming more distributed, but the basics of traditional data center growth have seen little disruption: growth still depends heavily on adding servers, racks, electrical distribution, and floor space.

Utilizing PureStorage FlashBlade® enables cloud-like simplicity and agility with consistently high performance and control. The primary way FlashBlade® enables grow-in-place scaling is by allowing massive physical expansion in a single chassis through the addition of “blades,” each of which increases capacity and performance without requiring an ever-growing footprint. Rather than shutting down data center operations to scale out by adding new servers and bringing them online alongside existing infrastructure, the FlashBlade® solution allows users to grow in place. PureStorage FlashBlade® provides up to 792 terabytes (TB) of raw storage in a single 4-rack-unit (RU) chassis, and the total system can grow to ten chassis. Storage is further optimized by using SmartStore, which removes the need for indexer replication (typically a factor of 2 for all data). FlashBlade® also supports in-service hardware and software updates, so scaling up and scaling out won’t interrupt operations.
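
Putting those figures together (our own arithmetic from the numbers above):

    792 TB per chassis * 10 chassis = 7,920 TB, roughly 7.9 PB of raw storage
    # With SmartStore keeping a single copy of warm data instead of the usual
    # replication factor of 2, each raw terabyte also stretches about twice
    # as far as it would in a classic indexer cluster.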

Meet Any Compliance Requirement with Unlimited Scaling

Splunk SmartStore makes the daunting task of data retention simple for organizations that have compliance or organizational obligations to retain data. This PureStorage architecture supports up to 10 FlashBlade® chassis, potentially representing years of data even for high-ingest systems.

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it’s time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design in modernizing your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.

Unmatched Performance in Splunk

The beauty of Kinney Group’s new reference design for Splunk lies in the unmatched performance provided by combining PureStorage FlashBlade®, Splunk SmartStore, and Kinney Group’s advanced Splunk configuration tuning in a virtualized environment.

PureStorage FlashBlade® supports file and object storage, producing a seamless integration with Splunk SmartStore. These technologies provide all-flash performance, even for data that would traditionally have been rolled into cold buckets on slower storage tiers. Kinney Group optimizations enable rapid ingest and quick searches even at high volume, and testing showed the reference design can easily ingest up to 4x the Splunk-recommended limit. That means an architecture Splunk would recommend for 500 GB of daily ingest can handle 2 TB or more. In fact, testing showed that a sustainable 8x the recommended limit (4 TB/day) is possible.

Optimizing Splunk for Lightning Fast Search

Kinney Group’s engineering expertise in optimizing Splunk enables users to ingest more data, more quickly. Optimization and fine-tuning of the environment yield astonishing results. On traditional, distributed scale-out architectures, Splunk search performance degrades significantly as data ages: older data is tiered to cheaper, lower-performance storage in cold buckets, which drags down search performance. This storage approach is especially impractical when responding to search requests related to regulatory or compliance requirements, cybersecurity, and legal discovery—all of which demand information beyond the most immediate data.

Utilizing SmartStore with FlashBlade®, however, provides all-flash performance with high bandwidth and parallelism for data operations and searches outside of the SmartStore cache. It also ensures that you can efficiently complete critical, non-repetitive tasks while supporting the bursting of SmartStore indexers. Splunk best practices call for avoiding high search execution latency, which can cause a cascading degradation in performance. At the highest levels of data throughput tested in the validation of this design, disk latency never exceeded 2 ms, and Input/Output Operations Per Second (IOPS) remained flat.

Optimizing Splunk for Lightning Fast Security Workloads

Splunk Enterprise Security (ES) “off the shelf” carries a number of inefficiencies in its search configuration: Splunk will often skip scheduled searches — postponing or rescheduling them — as a result of high latency it cannot overcome. In the testing and validation of this Reference Design, Kinney Group was able to tune ES to avoid skipped searches while maintaining the same level of searches in the environment. This was accomplished, in part, by updating the timing of searches and increasing search slots in the software. (See the “Enterprise Security Tuning” section of this document for details.)

The net result is an environment with such precise software tuning and hardware engineering that you’ll imagine the sound of a perfect Formula-1 racing engine every time you walk by your server room.

Enabling Data Security without Hindering Performance

In a traditional Splunk environment, enabling data security introduces various considerations that significantly impact performance. Pure Storage FlashBlade® supports native data encryption while still maintaining incredible single chassis performance of 1.5 million IOPS and 15 gigabytes per second (GB/s) of throughput at consistently low latency.

We hate to say “faster, better, cheaper,” but…

We know how tired the “faster, better, cheaper” trope is, but the reality simply can’t be avoided. This unmatched performance doesn’t come with the soul-crushing price tag you’d expect. Rather, we’ve engineered a solution that allows you to reduce footprint and impact the total cost of ownership (TCO) in a way that demands further inspection — you’ll save on capital expenses, operating expenses, and who knows how much on aspirin.

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it’s time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design in modernizing your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.

An Introduction to Modernizing Your Splunk Environment

At Kinney Group, we believe the best way to solve the inefficiencies of traditional Splunk operations is to throw out the script and modernize the approach.

The traditional Splunk data center model is complex and difficult to scale for performance, requiring IT professionals to increase server counts and expand data center footprint to gain compute and storage capabilities. This outdated approach means expensive upgrade cycles, disruptive downtime, and increasingly complicated operation, all in an architecture fraught with performance “gotchas.”

We’ve found a better path forward.

Kinney Group and Pure Storage have teamed together to create a reference design that provides benefits, insights, and a technical overview of a high-performance, scalable, and resilient data center infrastructure for the Splunk Enterprise platform. This revolutionary reference design combines VMware virtualization, Pure Storage hardware, Splunk SmartStore, and Kinney Group engineering expertise.

The end result is an elegant approach to hosting Splunk Enterprise that enables dramatic reductions in storage complexity and infrastructure footprint, with transformative performance improvements and a lower total cost of ownership.

Here’s a glimpse into the 3 Key Benefits of utilizing the reference design in modernizing your Splunk operations:

Key Benefit #1: Unmatched Performance

The beauty of this reference design lies in the unmatched performance provided by combining PureStorage FlashBlade®, Splunk SmartStore, and Kinney Group’s advanced Splunk configuration tuning in a virtualized environment.

Combining expert engineering, fine-tuned software solutions, and solid-state storage provides a 4x performance improvement over Splunk’s own recommendations.

Key Benefit #2: Simplified Scaling

Accommodating scale is an ever-present struggle for IT teams and data center operators — providing sufficient infrastructure to keep pace with growing compute, storage, and network needs.

Complexities introduced by Splunk’s specialized data ingest requirements only make the situation more challenging (not to mention costly). The 1-2 punch of VMware virtualization and Pure Storage’s FlashBlade® technology allows for “grow in place” scaling that won’t require disruptive downtime, and makes scaling storage for compliance a breeze.

Key Benefit #3: Lower Cost of Ownership

The distributed data center model provides high availability by replicating data, but the added storage requirements effectively eliminate any benefits gained from Splunk data compression. Co-locating storage and compute means that when you need more storage, you have to add compute along with it. Further increasing the total cost of ownership (TCO), Splunk indexers in a distributed scale-out architecture are usually spread across more servers with less storage each, to minimize the data and time at risk from server maintenance and failures.

Saving on capital expenditures such as servers, storage, and square footage is just the beginning. Reductions in equipment and increases in productivity represent savings on OpEx that make this reference design the gift that keeps on giving.

Modernize Your Splunk Environment

We gave you a taste of the power backing the reference design model and how it can modernize your Splunk environment. Now it’s time to download your copy and get instant access to the full document. Within the reference design, we dig deeper into the 3 Key Benefits of utilizing the reference design in modernizing your Splunk operations and dive into the technology supporting the findings. Download your copy of the white paper here.