Splunk Assist in Splunk 9

The wait is over and Splunk 9 is officially here! This release introduces a number of features and improvements aimed at making life easier for Splunk admins and users alike. Wondering which announcements and improvements really matter for you? Join us as we investigate and explore some of our favorite discoveries!

Splunk 9 brings with it Splunk Assist, an exciting addition to the Monitoring Console for on-premises deployments, helping Splunk Admins configure and secure their Splunk environment faster than before!

Assist is a brand-new tab and suite of functionality for the Monitoring Console, easily reached from the console’s navigation in Splunk 9.

So what does Splunk Assist bring to the table?

First, the primary Splunk Assist pane provides insights, grouped into three distinct areas:

  • Indicator Tabs: These are categories of indicators for which you can see additional information. Clicking a tab loads information about that indicator category, including a graph showing the number of instances in your deployment and their compliance status (conforms, warning, or critical).
  • Overview Pane: Provides detailed information about the nodes in your Splunk environment, grouped into search, indexing, and collection tiers. Status icons here reflect each node’s compliance status.
  • Indicator Summary: This pane lists all the indicators, along with a summary of what information each one collects and why. You’ll see each indicator’s category, scope (where the indicator applies), and results (compliance state).

Splunk Assist also tracks and visualizes SSL certificate best practices. This ensures your Splunk ecosystem is meeting security standards by clamping down on possible attack vectors, while also ensuring that expired SSL certs don’t creep up on a busy IT team.
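
If you ever want to spot-check a certificate yourself, here’s a minimal sketch using openssl, assuming the default server certificate path (your deployment may use custom certs in a different location):

    # Print the expiration date of the Splunk server certificate
    # ($SPLUNK_HOME/etc/auth/server.pem is the default; adjust to your environment)
    openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem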

What’s the catch?

This is incredible functionality, but it does come with a catch: Splunk Assist requires Splunk admins to leave support usage data sharing turned on in their environment. This enables Splunk to investigate and track your Splunk environment data to find potential issues. While this won’t send your private event data upstream to Splunk, it does require outbound connections that many on-premises installations may lack.

Note: As with Splunk 8, data collection is switched ON by default when you install. You’ll need to manually opt out if you don’t want to share your usage data with Splunk. If you opt out, the Splunk Assist service won’t work.

Get Started with Monitoring Console

First, you’ll need to prepare your environment. As mentioned above, you’ll need to enable support usage data to get the insights that Splunk Assist provides (see http://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/Shareperformancedata). You’ll also need to make sure you’ve configured the Monitoring Console, if you haven’t already. If you’re using a firewall, make sure you allow outbound traffic to *.scs.splunk.com on port 443.
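
If you’re not sure whether that outbound path is open, a quick reachability check from each instance can save you a headache later. This is a minimal sketch; api.scs.splunk.com is just an illustrative host under that domain:

    # Confirm outbound HTTPS (port 443) reachability to *.scs.splunk.com
    # A connection error here means your firewall is blocking the path
    curl -sS -o /dev/null -w "HTTP %{http_code}\n" https://api.scs.splunk.com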

Next, you’ll activate Splunk Assist. From the system toolbar, choose Settings > Monitoring Console, and then choose Assist. You’ll need an activation code, which is tied to your Splunk license. If you’re not the license owner, you’ll need to reach out to get this code. If you’re the license owner and you’re thinking, “Uh… I didn’t get a code,” no worries — follow those same instructions, but choose the “Get an activation code” link.

The setup is very straightforward. Once you’ve activated Splunk Assist, it’s time to start using it! We’ll take a deeper dive into how to get the most value from Splunk Assist soon.

What’s next?

This is just one of the many incredible new features available in Splunk 9! Need to get up and running with Splunk 9 quickly? Our expertise (nearly 700 Splunk engagements over the years), coupled with Atlas, the Creator Empowerment Platform for Splunk, means we can make your transition to Splunk 9 quick and effective!

We’d love to hear from you! Schedule a quick call to discuss your needs, or check out the Atlas overview video to learn more about empowering your team of Splunk Creators.

Solving Splunk Bundle Size Issues

Cluster bundles are packages of knowledge objects that must be shared between search heads and indexers in clustered environments. Unfortunately, these bundles can get too big and cause performance issues for your Splunk environment. We’ve discovered a trick that can drastically reduce bundle size while maintaining operations and improving performance! Strangely enough, this method is barely mentioned in Splunk Docs. It’s a hidden feature we need to share!

Bundles of Awesome!

The modern Splunk deployment has clustered Indexers and Search Heads that share the load of reading, searching, and computing data for users and alerts every second of the day. These separate instances communicate with each other to properly execute tasks and keep things running as smoothly as possible. But what happens when a user makes a change on one instance? That change needs to waterfall down to the many other pieces of the Splunk architecture, and it does so using Splunk bundle replication.

This usually works great! Users edit items, the Search Heads and Indexers share information, and everything stays relatively up to date and actionable. However, when the system is pushed to its limits, Splunk Admins and Users alike will experience a headache like none other.

A Bundle of Pain!

In mature Splunk ecosystems, this bundle system can start tripping over itself and quickly cause issues downstream. Knowledge objects that are too big (or too numerous) can cause replication errors, leading to search slowdowns for users, Search Heads spending precious CPU managing large files instead of executing searches, and updates failing to propagate between Splunk instances. All these errors are the fast lane to Splunk instability (and a royal pain).
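
Not sure how big your bundles actually are? Here’s a quick on-disk check, a sketch assuming the default $SPLUNK_HOME and standard bundle locations:

    # On the search head: list recent knowledge bundles and their sizes
    ls -lh $SPLUNK_HOME/var/run/*.bundle

    # On each indexer: bundles received from search heads land here
    ls -lh $SPLUNK_HOME/var/run/searchpeers/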

If only there were “one weird trick” to alleviate your bundle-size pain and prevent these issues!

One Weird Trick!

Surprise, surprise: there is! One common cause of large bundles is the big lookup files your Splunk system creates and relies on for quick reference. Unlike dashboards and most other knowledge objects, lookups can grow large and unwieldy, dragging your bundle size up with them.

Fortunately, this “hidden trick” we’re talking about can reduce the size of your lookups, and greatly reduce your bundle size. This trick? Compression!

Compress your Problems!

Splunk supports lookup compression, enabling Admins to shrink their lookups to a much more reasonable size. Done right, there is no usability difference! Follow the steps below to compress your largest lookups and fix your bundle size:

  1. Identify a large lookup file you would like to compress to reduce your bundle size
  2. Navigate to that file on the command line of the system
  3. Gzip the lookup file (gzip largelookupfile.csv)
  4. Searches of the compressed lookup will now need to include the .gz extension, unless you create a lookup definition that maps the original lookup name to the new .gz filename (see the sketch below)!
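
Here’s what that looks like end to end. This is a sketch with hypothetical file, app, and threshold values; substitute your own:

    # 1. Find unusually large lookup files (the 50 MB threshold is illustrative)
    find $SPLUNK_HOME/etc/apps -path '*/lookups/*.csv' -size +50M -exec ls -lh {} \;

    # 2. Compress the offender in place; this produces largelookupfile.csv.gz
    cd $SPLUNK_HOME/etc/apps/search/lookups
    gzip largelookupfile.csv

    # 3. Map the original lookup name to the compressed file so searches don't
    #    change, via local/transforms.conf in the same app:
    #
    #    [largelookupfile]
    #    filename = largelookupfile.csv.gz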

That’s it! With this workflow, you can reduce the size of your lookups by around 50% and potentially cut your bundle size by around 30% or more! All the while, your users’ searches and dashboards will operate exactly the same, except for being error-free. Compressed lookups can still be edited using outputlookup, and can of course be referenced using the lookup command.
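
For example, assuming the hypothetical lookup definition from the sketch above, you can verify that searches still resolve the lookup by its original name straight from the CLI:

    # Confirm the compressed lookup still loads under its original name
    $SPLUNK_HOME/bin/splunk search '| inputlookup largelookupfile | head 5'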

Scrum 101: The 80/20 Rule

The principle of the 80/20 rule is that 80% of the whole, or the value, comes from 20% of the execution. Also known as the “Pareto Principle,” this rule is intentionally vague because, as Kevin Kruse discusses in Forbes, it’s found everywhere. From an execution and product standpoint, Scrum applications of the 80/20 rule vary. The rule could mean a minority of the development team will interact with the majority of your tasks. It could also imply that a small subset of your bugs will interfere with users the most. Like laws of nature, these patterns occur naturally, but the true value of the 80/20 rule comes when you actively exploit it to improve your overall Scrum development and delivery process.

The 20

Scrum is aligned to exploit the Pareto Principle. The time constraints of a sprint, paired with the pressure to produce a high-functioning product, can be daunting. Sometimes, those constraints lead engineers to build to the requirements instead of over-developing a solution. Then, post-review, the developers can make their case for further enhancements in the next sprint. This candid conversation fosters an environment of simplicity and team efficiency.

Now the developers and the project can stay on track to deliver not just a product, but real value to the client. The Scrum process empowers the team to keep building the 20% that matters and to cast a critical eye toward scope creep. This attitude is not easy. As we know, many developers feel like a task isn’t finished until it can cook itself dinner (wouldn’t that be great?). By setting good task definitions, value-based solution development can be achieved!

The 80

Kinney Group strives to connect our work directly with our stakeholders’ needs and objectives. We developed “The 3 ROIs Framework” to make that happen. Through the framework, our Kinney team outlines the financial, human, and business objectives of our customers on every project we deliver. The 3 ROIs Framework is further enhanced by Scrum teams that fully exploit the 80% of value to deliver an unexpected experience to our clients.

By nurturing an environment of candid feedback between developers, product owners, and users, the team can identify key features that are punching above their weight. These are your “special treatment” features. They may need additional interface work or enhancements, and although they may seem small, they can improve first impressions of the overall product. This precision can turn your application from a jack of all trades into an instrumental piece of software, strengthening your client’s mission, financial, and human goals.

Again, this laser focus is at odds with developers’ instinct to build the complete picture, and it requires user feedback. It is important for product owners to take on the role of championing the users’ needs, and equally important for the development team to listen.

The Takeaways

Of course, the 20% of this post that provides 80% of the value is this wrap-up summary. Take these tips and use them to shape your Scrum and project delivery processes:

  • Reinforce iterative development. This ensures each sprint pushes developers to take the shortest, simplest routes. Then they can reassess in the next sprint whether additional work is actually needed.
  • Have candid conversations with your Product Owner and users. These folks will lead you to the 20% of your product that provides 80% of the value.
  • Prioritize work that aligns with the 20% features. You want your solution to shine for users. Always have a bias toward improving these features over adding extra bells and whistles.
  • Gather your ROIs from your client and start by discussing the 80/20 features.

At Kinney Group, we design, build, and integrate IT infrastructure solutions for some of the most demanding government agencies and commercial organizations in the country. As an organization, we develop our solutions using Scrum and agile best practices. By leveraging next-generation technologies, proven engineering practices, and agile development principles, we create custom solutions and world-class environments for data.

Want more information on how we can put that power to work for you?

Scrum 101: Three Myths of Traditional Scrum

Hi, I’m Georges, Scrum Master Myth Buster.

As a resident Scrum Master at Kinney Group, I’m responsible for promoting and championing Agile & Scrum habits on our automation projects. I started off as a developer, which shaped a lot of my goals and expectations in Scrum today: I have a bias toward producing work, lightweight management involvement, and simplifying everything that’s not technical. A Scrum Master’s role is to create an environment that encourages developers and grows the Agile mindset. Throughout my journey of training, execution, and experimentation, I have seen common Scrum myths permeate the software development and project management ecosystem. Follow along as I debunk three myths about traditional Scrum best practices. Hold on to your Jira tickets, and let’s go through the big offenders!

Myth #1

Scrum makes my development teams more efficient.

One of the first books I read for training was “Scrum: The Art of Doing Twice the Work in Half the Time.” It’s a great read for learning the Scrum trade, but with an untrained Scrum eye, some directors and managers may take away the wrong impression of how their teams will operate under a new delivery framework.

Teams today are most likely running a hybrid of homebrew, Waterfall, and Agile systems. A distinct change in your Agile practice may reinforce good habits that make your team faster; however, it’s important to remember that speed is not the goal of Scrum. The goal of Scrum is to make our teams more effective, not more efficient.

With a bias toward client and user interactions, our engineers execute on project goals while verifying their work repeatedly. Many may argue that, from a high level, effectiveness leads to efficiency, and they would be right, but the message behind Scrum-based development should never promise the moon. Short-term stumbles and smaller gains will happen, and Scrum allows your team to break bad habits along the way. Sometimes a team needs to slow down before it can speed up, resulting in more effective work toward a customer’s goal.

Myth #2

Scrum is a set of meetings and rules for my developers.

I love this myth. It’s at the same time entirely true and completely misleading. By reducing Scrum to its artifacts and ceremonies, we ignore the foundation that Scrum rests upon: the Agile Mindset. When you ignore the Agile Mindset, your Scrum practices will never reach their full potential, and they may ultimately get in your way instead of fostering a better development environment.

Equally important, the Agile Mindset not only requires your developers’ participation, but it involves your entire delivery pipeline. Your company’s salespeople need to champion the need for Product Owner involvement. Your business analysts need to write Statements of Work that promote handshake agreements on changing priorities. Your project managers need to keep their development teams as stable as possible. Scrum is not a set of meetings and rules for your engineers to follow. It’s a shift in the entire delivery framework and requires everyone to pitch in.

Myth #3

Scrum is the evolution of Waterfall and supersedes it.

Scrum and Waterfall are two delivery frameworks eternally locked in a misguided war of buzzwords and superstition. The truth is, they both have value for different reasons. Scrum did not kill Waterfall. However, Scrum did create a system better suited for rapid development, experimentation, and unknowns. Waterfall still has a place for fixed, “simple” work requirements that, by requirement and by design, won’t deviate from the initial technical and feature scope. With the complex initial asks on most projects today, it’s no wonder that Agile processes have taken over. However, it is important for developers and managers to treat the words “We are doing Scrum” with respect, and not as a not-so-clever way of saying, “We aren’t doing Waterfall.”

Myths = Busted

There is a common misunderstanding intertwined within these myths: that Scrum is easy. The truth is that Scrum isn’t just a simple evolution of Waterfall’s classic plan-to-execution flow. Scrum requires all hands on deck across an organization to support it. It is not a band-aid, but a methodical shift in how a company operates. If executed and supported sufficiently, adopting a Scrum practice can result in the effective delivery of your product and, most importantly, happy clients.