Indy Splunkers Unite With the Messaging Tool Slack

When you find yourself stuck on a stubborn data issue, just digitally tap a fellow Splunker on the shoulder and share technical tips. This is possible through the Splunk User Group Slack workspace.


How to Access the Splunk Slack User Group

To receive access, complete this short Google Form. Be sure to join the #indianapolis channel once you’ve been accepted.


Splunk MLTK: What It Is And How It Works

What if there was a tool you could use to automate the time-consuming and nearly impossible parts of your job as a Splunk administrator? There is, and it’s called the Splunk Machine Learning Toolkit (MLTK). It can run predictive analytics, identify patterns in your data, and even detect anomalies in that data.

In this post, we cover how MLTK works and how you can use the power of artificial intelligence to work more efficiently.

What is the Splunk Machine Learning Toolkit?

The Splunk Machine Learning Toolkit (MLTK) is an app that lets Splunk users run SPL commands and build custom visualizations that explore and analyze data using machine learning technology.

MLTK is available for both Splunk Enterprise and Splunk Cloud Platform on Splunkbase.

There are three main features of the Splunk MLTK app:

  • Anomaly Detection: By analyzing your past data, Splunk’s machine learning toolkit can automatically detect anomalies within your current and future data.
  • Predictive Analytics: Predicting events and transactions is made simple with MLTK so you can make informed decisions in real time.
  • Data Clustering: Clustering data into groups allows MLTK to identify patterns in your data that humans might miss.

How Does the Splunk Machine Learning Toolkit Work?

In order to work efficiently, the Splunk MLTK app must learn from your data and then surface that knowledge to the end user. Although the process for how MLTK works is not cut and dried, it can be generally outlined like this:

Step 1: The MLTK collects data

Step 2: The MLTK transforms the data into actionable intelligence

Step 3: The MLTK explores and visualizes that data in the proper context

Step 4: The MLTK models the data

Step 5: The MLTK evaluates the model

Step 6: The MLTK deploys the model
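
In practice, steps 4 through 6 surface through the MLTK’s fit and apply SPL commands. Here is a minimal sketch, assuming the server_power.csv sample dataset that ships with the toolkit:

```
| inputlookup server_power.csv
| fit LinearRegression "ac_power" from "total-cpu-utilization" into power_model
```

The fit command trains and saves the model; a scheduled search ending in | apply power_model then scores new data, which is what deployment looks like in Splunk terms.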

The great thing about the MLTK is that you’re not alone when using it. The Assistants are guided workflows within the MLTK that walk you through the tools and features you’ll need when preparing data and building, validating, and deploying models.

Machine Learning & Data Science

The gist of machine learning is to provide systems with the ability to learn. That is, we give the systems algorithms to start with, and they can adapt based upon data, make classifications, and make decisions with little to no human intervention.

The Splunk Machine Learning Toolkit

The MLTK is a Splunk app (free, by the way) that helps you create, validate, manage, and, most importantly, operationalize machine learning models. The MLTK includes a variety of algorithms, including several hundred from the Python for Scientific Computing library, giving you the power to try different algorithms to find the right insights for your data.

Two Example Scenarios
  • Resource Management: Predict when you’ll need more capacity (a sample forecasting search follows this list)
  • System Failures: Identify the indicators of forthcoming system failures
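
As a sketch of the resource management scenario: even core SPL’s predict command can forecast capacity from data Splunk already collects. This example assumes the default _internal license usage log and its b (bytes) field; adjust to your environment:

```
index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) AS daily_bytes
| predict daily_bytes future_timespan=30
```

Pointing the same search shape at disk or memory metrics gives an early warning of when you’ll need more capacity; MLTK forecasting algorithms can then refine the estimate.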

Looking Forward with Splunk MLTK

We are in a new day and age of IT operations, where many manual processes can start to be automated with the help of these tools. Putting the power of Splunk’s MLTK into the hands of your IT operations personnel can empower them to begin a transition to a more automated approach to their everyday work, such as investigating and troubleshooting a problem before you even see its effects. This approach is not yet mainstream, and may be daunting to some, but now is the time to get a grasp on the next generation of IT operations.

Want to know what Splunk MLTK can do for you and your organization? You can get access to Kinney Group’s deep bench of Splunk experts, on demand. Check out our Expertise on Demand for Splunk service offering for more information on our various packages, and let us know how we can help unleash the power of Splunk.

Visit www.kinneygroup.com/contact-us or call us at (317) 721-0500.


Webinar Replay: Expertise on Demand for Splunk Services


Get The Most Out of Your Splunk Investment with Expertise on Demand for Splunk Services

 

Listen in as Splunk experts from Kinney Group explain how you can get dedicated Splunk expertise at your fingertips now. Co-hosts Carl Bahor and Jake Miller do a deep dive on our Expertise on Demand (EoD) for Splunk services and answer questions regarding the changing landscape of Splunk.

 

Click here to unleash Splunk’s potential.

This pre-recorded webinar was held on Wednesday, January 30, 2019.

 


 

EoD for Splunk was created to keep Splunk doing its best work for your teams and organization. Built for companies executing important, complex, sensitive work, Splunk is the No. 1 big data platform in the market today. When used to its potential, it can deliver powerful insights in the areas of security and IT operations. As a strategic platform, Splunk can power organizations’ use cases for Internet of Things (IoT), compliance, and business intelligence applications.

But Splunk can’t function without people. When organizations don’t have enough Splunk expertise on staff, or the time for their teams to develop that expertise, their investment of time, money, and energy in Splunk can fall flat. This is precisely why Kinney Group developed EoD for Splunk.

 

Kinney Group’s Splunk Experts to the Rescue

Whether you’re short-staffed, on a tight budget, or banking on the internet to get over all your Splunk hurdles, Kinney Group’s Expertise on Demand (EoD) for Splunk service can help. Expertise on Demand for Splunk bridges the gap between SOW-driven professional services and break/fix customer support.

Don’t let your investment in Splunk fall flat because you don’t have enough expertise on staff. Let Kinney Group help you maximize your time, investment, and energy in Splunk. Find out how to get anytime, immediate access to our deep bench of Splunk experts with a subscription to Expertise on Demand for Splunk services.

Visit Splunk.KinneyGroup.com today.


4 Splunk Stream Tricks of the Trade

Splunk Stream

Splunk Stream is a purpose-built wire data collection and analytics solution from Splunk. Splunk Stream can be one of the most robust products Splunk offers as a free addition to your Splunk Enterprise environment.

However, some of us know that Splunk Stream can be daunting to set up and utilize to its full potential. With that in mind, let’s jump into some tips and tricks of the trade for working with Splunk Stream.

1. One Simple REST Call

  • The Stream REST API is a powerful function, and one simple REST command can help you power through configuring Stream Forwarders. One of the most common errors seen when deploying a Stream Forwarder is “Unable to ping server.” At times it can be difficult to determine whether this issue lies within your Stream configuration or a network configuration.
  • Utilizing the following curl command helps determine whether you have the correct app location: curl http://<stream_app_server>:8000/en-US/custom/splunk_app_stream/ping
  • Using this command before deploying the Stream Add-on, or Independent Stream Forwarder, can help determine whether the Stream Forwarder can access the Stream App within your deployment.

2. Independent Stream Forwarder or Stream Add-on?

  • Planning a new deployment, or the addition of a forwarder, can spring the above question: should I install an ISF or the Stream TA on a Universal Forwarder? The answer can vary by environment and collection method. But as with any Splunker, I love my data!
  • Comparing the performance of the two, you can start to see the benefits of the ISF. Although your environment may never reach the ingestion rate at which you start to see dropped events from the Universal Forwarder, it is peace of mind knowing that your forwarder can handle considerable amounts of data.

 


 

3. Hunting Down Suspicious Subdomains using URL Toolbox

  • You can perform some simple Stream hunting utilizing just DNS data. With DNS data from Stream, you can start to investigate suspicious DNS queries and subdomains from within your environment. You can empower your investigations by utilizing the URL Toolbox app (a sample search follows this list).
  • For example, if you perform a Splunk search for your stream:dns data, then after populating the query value you can pass the queries to the URL Toolbox. This allows you to filter out URLs that you know are not suspicious and ones that don’t have a top-level domain. You can take this a step further by utilizing the URL Toolbox to calculate entropy values of the subdomains and sort to see the highest scores (the higher the score, the more randomized the URL is). Taking these scores into account, you can start digging into specific IPs.
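
Putting that together, a hunting search along these lines is one way to start. This is a sketch that assumes the URL Toolbox app’s ut_parse_extended and ut_shannon macros and Stream’s multivalue query{} field, so verify the names in your environment:

```
sourcetype=stream:dns record_type=A
| eval list="mozilla"
| `ut_parse_extended(query{}, list)`
| where isnotnull(ut_tld)
| `ut_shannon(ut_subdomain)`
| stats count by ut_domain, ut_subdomain, ut_shannon
| sort - ut_shannon
```

The where clause drops queries that lack a top-level domain, and sorting by the entropy score floats the most randomized-looking subdomains to the top for IP-level follow-up.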

4. Splunk Stream on a Raspberry Pi

  • Of course it can work! One Splunk engineer put the Independent Stream Forwarder to the test to see how lightweight it really is. The Raspberry Pi is a cheap and easy way to play around with the possibilities of Splunk Stream. You could even implement this at home to add even more capabilities to your own lab environment. In fact, here is a link to the Splunk forwarder for Linux ARM download, which is installed on the Raspberry Pi for Splunk forwarder capabilities.
  • This is a great example of the power of an Independent Stream Forwarder. The Raspberry Pi in my home environment is currently running as a Pi-hole, but I am going to implement the streamfwd to run some searches and create dashboards of the queries and how the Pi-hole handles them (a starter search is sketched below).
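
As a starting point for those dashboards, once streamfwd is shipping DNS data, a simple hypothetical search like this one charts what the Pi-hole is being asked to resolve (field names follow the stream:dns sourcetype):

```
sourcetype=stream:dns
| rename query{} AS domain
| stats count by domain
| sort - count
| head 20
```

From there, splitting by Stream’s response fields can show which lookups the Pi-hole answered versus sent to its blocklist.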


 

Need help with Splunk Stream? You can actually get access to Kinney Group’s deep bench of Splunk experts, on demand. Check out our Expertise on Demand for Splunk service offering for more information on our various packages and let us know how we can help unleash the power of Splunk.

About Kinney Group’s Splunk Practice:

The Kinney Group team has the deepest bench of Splunk expertise in North America. Our team provides a comprehensive Splunk customer experience across multiple disciplines including Splunk Enterprise, Splunk Enterprise Security (ES), IT Services Intelligence (ITSI), and custom use cases in the areas of compliance, IoT, and machine learning. Kinney Group highlights include:

  • A Top Global Splunk Professional Services Practice
  • Splunk Elite Partner
  • Splunk Public Sector Services Partner of the Year
  • Experience with 300+ projects delivered nationwide and overseas
  • Application development expertise for the Splunk platform

Visit www.kinneygroup.com/contact-us or call us at (317) 721-0500.


Leverage Splunk at Scale, Fast with Pure Storage

We live in the age of the operational intelligence platform. This technology is undeniable for organizations because it makes machine data accessible, usable, and valuable to everyone. By harnessing platforms like Splunk, organizations can tear down departmental silos enterprise-wide and creatively ask questions of all their data to maximize opportunities. To make this a reality, hardware and infrastructure are a critical consideration.

Traditional storage approaches for Splunk at scale are under siege. One of the greatest risks to enterprise-wide adoption of Splunk is inadequate, under-sized, or non-performant hardware. Organizations frequently want to repurpose existing and aging hardware, leaving Splunk customers dissatisfied with the implementation, and possibly the entire platform.

 


 

Good news. A superior hardware approach is here: running Splunk on Pure Storage FlashStack.

Get the Data Sheet: Click Below

Imagine an all-flash reference architecture that enables true harnessing of a Splunk deployment (technically, and for the bottom line):

  • Smarter Computing. 5x greater efficiency at the compute layer
  • Operationally Efficient. 10x greater efficiency in rackspace, power, heating, and cooling compared to an equivalent disk-based solution
  • Uniquely Virtualized. On a 100% virtualized environment with native Pure + VMware High Availability features
  • Smarter Spending. Higher ROI on hardware, so the same budget can be reallocated to further harnessing the power of Splunk at scale

The result is true competitive advantage as an organization achieves improved simplicity and performance while lowering the Total Cost of Ownership (TCO) of enterprise Splunk deployments. Splunk on Pure Storage FlashStack empowers organizations to manage large Splunk instances as they journey toward the analytics-driven, software-defined enterprise.


Two Analytics Platforms Synergize for Holistic Application Monitoring

Pairing Splunk and AppDynamics

We now live in the era of the “software-defined enterprise.” Software applications represent the key enablers for commercial businesses and public sector organizations. Applications are no longer just enablers for back-office processes; today, they are the “face of the organization” to customers, partners, and internal co-workers.

The era when customers would tolerate application failures being fixed in hours, days, or weeks is long gone. Today’s constituencies expect applications to be “always on,” with problems identified and resolved in minutes (if not quicker).

The ability to leverage analytics to support critical applications within the software-defined enterprise will define the winners and losers in the market. The power of IT operations analytics holds promise as the enabler for dramatically reducing Mean Time to Repair metrics for critical applications, regardless of where a problem exists.

The paragraphs that follow provide insights into a proven approach for leveraging the power of analytics to identify and solve application problems quickly and to win in the market as a software-defined enterprise.

A One-Two Approach: Winning Against Problematic Application Stacks

Pinpointing problems with large, distributed, and often legacy application stacks is difficult. Troubleshooting and identifying the underlying cause of internal and external customer-facing problems can often take weeks or months. The result for organizations unable to solve application problems is negative: end-user satisfaction goes down and precious customers can be lost forever. Organizations feel the pressure of hectic customer support war rooms, missed goals, and upset leadership and investors. Time is money; inefficiency and downtime for mission-critical systems mean lost revenue and angry customers.

But, there is hope. It’s a new day in analytics, and several solutions have entered the market recently that attempt to reduce Mean Time to Identify (MTTI) and Mean Time to Repair (MTTR) metrics for application troubleshooting with varying levels of success.

The bottom line: in order for organizations to get the full picture and achieve holistic application stack monitoring, they need to use Splunk and AppDynamics for a cohesive view of their entire application stack. Splunk can natively see across the application stack to point to an issue. Then, AppDynamics can drill down and see into the proverbial “black box” (as illustrated in Figure 1), which is typically the database layer, the application layer, and the UI/Web layer.


Figure 1: Splunk can see around the “black box”, and AppDynamics can see into it.

Where Splunk Ends, AppDynamics Begins, and Vice Versa

Splunk and AppDynamics can artistically be woven together to build a cohesive analytics solution for end-to-end application visibility. Here’s how.

Splunk Pros and Cons

Arguably, the most flexible tool to address application stack monitoring is a platform called Splunk. Entering the market in 2005 initially as a type of “Google” for monitoring, Splunk software quickly evolved into a flexible and scalable platform for solving application problems. It also emerged as a platform with a robust and configurable user interface, touting sleek data visualization capabilities. Those qualities have allowed it to become a standardized platform in application stack monitoring teams. How is Splunk better than the rest? There are two main reasons.

First, Splunk’s ability to correlate disparate log sources allows it to identify and find issues in tiered applications. Applications are commonly written in very different languages; thus, they have few logging similarities in structure, content, or methodologies. For most traditional monitoring tools, configuring data source setups is labor intensive and must be aggressively maintained if the application or its environment changes. Splunk, on the other hand, is elite in dealing with these differences “on the fly,” as it is able to monitor these disparate log sources in real time as the data is consumed. Splunk’s advantage is that it can provide very flexible, reporting-driven schemas as the data is searched. This is important with legacy applications due to limited standardization, especially in the application layer, where most of the business logic and “glue” code resides for an application to work.

 


 

Second, Splunk is easy to use for monitoring around an application, particularly in the networking, infrastructure, and operating system (OS) layers. It has standard configurations which are fast to implement and from which one can start deriving technical and business insights quickly. The areas where Splunk is the straightforward solution in IT Operations Analytics include networking, operating system, storage, hypervisor, compute, load balancers, and firewalls.

Where does Splunk need assistance? With deep application performance monitoring in complex, highly distributed environments. This is because many mission-critical applications cannot be easily updated, and it is often too labor intensive (or impossible) to use the application logs to derive insights into problems. While legacy approaches to solving these monitoring problems are under siege, their existence is a reality as organizations transform. Splunk’s answer to this issue is in Splunkbase, the community for certified apps and add-ons. There is the Splunk App for Stream to monitor ingress and egress communication points between the layers in the application stack: database to application, and application to UI/Web. Still, the Splunk App for Stream is deficient when compared to AppDynamics because monitoring “around” a problem only describes the downstream impacts; it cannot pinpoint the actual problem quickly.


Figure 2: Pairing Splunk and AppDynamics achieves unparalleled visibility into the entire infrastructure (Splunk) while providing unified monitoring of business transactions to pinpoint issues (AppDynamics).

AppDynamics Pros and Cons

AppDynamics entered the marketplace in 2009 with a simple purpose: be the best at addressing deficiencies in application stack monitoring options, particularly for large, distributed, and often legacy application stacks. AppDynamics monitors business transactions, which are the backbone of any application. In doing so, it found a common auditing language that transcends the database, application, and UI/Web layers, including full support for legacy applications, provided the application language is one that AppDynamics supports. You can access a list of languages and system requirements here.

A primary AppDynamics differentiator is its innate ability to understand what “normal” looks like in an environment. The platform automatically accounts for all of the discrete calls that run through an application. Then, it can find bottlenecks by identifying application segments, machines, application calls, and even lines of code that are problematic. Unlike other application performance monitoring (APM) tools, AppDynamics can monitor the application from the end user’s point of view.

Regarding business value, what does AppDynamics bring that Splunk cannot? As the application is updated as part of a normal software development cadence, AppDynamics agents will autodiscover it again, saving time on professional services and money on re-customizing monitoring. Conversely, the Splunk App for Stream can require re-customization as application code and topology are updated.

AppDynamics does need some augmentation from its counterpart, Splunk, in looking outside of an application at the full stack. If the underlying problem is not with the code but with the functionality of the environment, such as storage, networking, compute, or the operating system, AppDynamics cannot do in-depth problem diagnosis on broader infrastructure components. Instead, the traditional approach is that APM teams use several narrowly focused “point tools” to monitor each layer, which creates silos within teams. To skip the silos, cue Splunk. Its sweet spot is as a “single pane of glass” where it can tie together its own visibility and the visibility provided by AppDynamics to identify where in the massive environment the problem lies.

So, where Splunk ends, AppDynamics begins, and vice versa.

Skip the Silos: Splunk and AppDynamics Synergize for a Holistic Approach

Splunk and AppDynamics both interact with the application infrastructure in a way that is straightforward to set up, easy to maintain, and can deliver fast time-to-value. By visualizing the output of these two platforms in Splunk, teams achieve a “single pane of glass” monitoring approach that gives the business a real-time, holistic view into distributed, complex application stacks.

Figure 3: By visualizing the output of these two platforms together in Splunk, teams achieve a “single pane of glass” for applications and the infrastructure.

Pairing the analytics platform synergies of Splunk and AppDynamics to achieve holistic application stack monitoring for the mission will reduce MTTI and MTTR. The organization will observe reliable, sustainable ROI as applications and the environment evolve with inevitable business transformation. Leveraging machine data in real time is the cutting edge in analytics and empowers organizations to creatively scrutinize all their data in an automated, continuous, and contextual way to maximize insights and opportunities.

About Kinney Group

Kinney Group is a cloud solutions integrator harnessing the power of IT in the cloud to improve lives. Automation is in Kinney Group’s DNA, enabling the company to integrate the most advanced security, analytics, and infrastructure technologies. We deliver an optimized solution powering IT-driven business processes in the cloud for federal agencies and Fortune 1000 companies.


Kinney Group Named Splunk 2016 Public Sector Services Partner of the Year

Cloud Solutions Integrator Named Public Sector Professional Services Partner of the Year for Outstanding Performance

Indianapolis, IN – March 15, 2016 – Cloud solutions integrator Kinney Group today announced it has received the Public Sector Professional Services Partner of the Year award for exceptional performance and commitment to Splunk’s Partner+ Program. The Public Sector Professional Services Partner of the Year award recognizes a Splunk® partner that demonstrates excellence in professional services implementations. Technical excellence, certifications, and customer satisfaction are also elements of the award.

“We are honored to be named Splunk Public Sector Professional Services Partner of the Year,” said Kinney Group President and CEO Jim Kinney. “Kinney Group is committed to helping public sector agencies transition to the cloud in a transformational way that best serves their organizational needs and improves the lives of their stakeholders and clients. We believe the Splunk platform is a key enabler for success in cloud computing and look forward to continuing our partnership with Splunk.”

Kinney Group ensures end-to-end success for Splunk professional services operations in Civilian, Intelligence, and Department of Defense agencies, as well as other federal and SLED entities. A key differentiator: Kinney Group empowers customers to fully harness the power of their Splunk investment by taking a holistic, consultative approach to engagements.

In 2013, Kinney Group started working with Splunk. Early on, the company was inspired by the machine data platform and created a practice area dedicated to the art of IT operations analytics (ITOA). Fast-forward to 2016, and this move has proven visionary: Splunk has evolved into a leader in big data analytics across several use cases, including security, systems operations, and business analytics. As organizations continue their march to the cloud, Kinney Group sees its Splunk practice as a key resource for leaders to ensure a successful transition by making their data accessible, usable, and valuable to everyone.

 


 

“Congratulations to Kinney Group for being named the Public Sector Professional Services Partner of the Year,” said Dave Schwartz, area vice president, global strategic alliances, Splunk. “The Partner+ Program promotes growth and allows our joint customers to achieve productivity, profitability, security and a competitive edge. The Global Partner Awards highlight outstanding partners like Kinney Group for their commitment to customer success and their close collaboration with Splunk.”

Splunk global partner awards reflect the top-performing partners globally within specific technology markets. All award recipients were selected by a group of Splunk executives and the global partner organization. See the full list of Splunk 2016 Partner+ Award Winners here.

About Kinney Group
Kinney Group is a cloud solutions integrator harnessing the power of IT in the cloud to improve lives. A trusted partner of federal agencies and Fortune 1000 companies, Kinney Group incorporates security, analytics, automation, and orchestration as part of a cloud transition strategy to help forward leaning organizations leverage the latest in cloud solutions, optimize internal resources, and achieve success more efficiently. The company is a HUBZone certified business based in Indianapolis, IN and can be found online at KinneyGroup.com.

About Splunk Inc.
Splunk Inc. (NASDAQ: SPLK) is the market-leading platform that powers Operational Intelligence. We pioneer innovative, disruptive solutions that make machine data accessible, usable and valuable to everyone. More than 11,000 customers in over 110 countries use Splunk software and cloud services to make business, government and education more efficient, secure and profitable. Join hundreds of thousands of passionate users by trying Splunk solutions for free.

Splunk>, Listen to Your Data, The Engine for Machine Data, Hunk, Splunk Cloud, Splunk Light, SPL and Splunk MINT are trademarks and registered trademarks of Splunk Inc. in the United States and other countries.

Learn more about Splunk Services from Kinney Group.


IOPS and Splunk Enterprise Success

What is more important for a successful Splunk Enterprise implementation: IOPS, number of CPU cores, or the amount of memory (RAM)? The answer is IOPS. While CPU and memory are both important factors, they are building blocks on top of the core requirement of Splunk Enterprise, which is 800 (or preferably more) IOPS.

Taking a step back, though: what are IOPS? IOPS stands for input/output operations per second. To simplify further, IOPS measure the number of times a disk or storage device can read or write per second. IOPS are an industry standard for benchmarking storage devices and disks.

Pro tip: moving from spinning disk storage to all-flash storage arrays can increase IOPS in your data center.

How does this storage hardware jargon pertain to Splunk Enterprise? Since Splunk Enterprise collects, indexes, and stores any type of machine data, IOPS lie at the core of how Splunk Enterprise works because servers are fundamentally machines with physical components. As data is forwarded from a universal forwarder to an indexer, the indexer must parse and write that data to disk. Then, when the search head looks for the data, the indexer must recall that data from the disk and pass it back to the search head, where it will be formatted and presented for analysis, dashboarding, and reporting. The goal: the storage needs faster reads and writes. The faster the disk can read and write the data, the less the CPU has to wait for the data to arrive and be processed, whether that be during the search or while the data is in the parsing and indexing phase.

Here is an example to demonstrate how an IOPS bottleneck can negatively impact a Splunk Enterprise implementation. Let us pretend we are passing 100 GB per day to an indexer with 12 cores, but the storage is only providing 200 IOPS. This means the CPUs will be stuck holding the data while they wait for the disks to be ready, causing a cascade of negative consequences. How? This scenario adversely impacts the environment twofold. First, it increases your CPU utilization, which creates data center inefficiency; your power consumption increases, causing more heat, which then causes your data center cooling costs to rise. Second, since your CPUs are fully allocated, you need to incorporate an additional three indexers to handle the amount of data you are trying to ingest with Splunk Enterprise. The lack of IOPS causes processing trouble not only at index time but also at search time. This is because indexing is always prioritized over searching, so any requested search gets queued.

 


 

Think about how you get queued in line at the amusement park at the hot new roller coaster. This analogy describes index-time versus search-time processing. You (“searching”) have been patiently waiting in a treacherous line for two hours. You finally make it to the front. You jump with joy, naturally, and mentally prepare to get strapped in for the ride of your life. Then suddenly, a big group of people (“indexing”) with VIP passes walks up and hops right on in front of you. Your dreams are shattered as you learn you have to wait another half hour. This example shows how a lack of IOPS puts search functionality at a lower priority.

Spinning disk is infamous for being the bottleneck for IOPS. All-flash storage can be the fast track to improving IOPS for your Splunk Enterprise implementation.

In a perfect world, for a successful Splunk Enterprise implementation, it is important you meet the minimum requirement of 800 IOPS. Remember: at 800 IOPS, the CPUs are no longer bottlenecked. The process does not have to wait for the disks to be available for reads or writes, and it can pass the data straight along to its storage destination. At 800 IOPS, you no longer need the extra servers to support your license. For your installation, you could reduce to a single indexer, although I still recommend having a second one to help your search performance in the long run. Ultimately, you will bring down your hardware, cooling, and energy costs.

Spinning disk has physical limitations and lower IOPS performance than all-flash storage arrays, which are now much more affordable for the data center than they were five years ago. When you dedicate an all-flash storage array to your Splunk deployment, and thus present thousands of IOPS to your indexer, the single indexer that was handling 100 GB a day can now handle 200 GB or more. Now the IOPS game changes: your storage is waiting on the CPUs to pass the data, as it can write data faster than the CPUs can process it.

To measure your existing IOPS, which will help forecast what you might need, you have free, effective options. We recommend tools such as Bonnie++ on Linux and IOmeter on Windows. Both of these tools can match the full, random read and write profile of Splunk Enterprise. Make your Splunk Enterprise deployment a success and ensure that your architecture has the correct number of IOPS, because it is the most important requirement of a successful deployment.
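
If Splunk Enterprise is already running, you can also gauge disk activity from Splunk’s own introspection data. A minimal sketch, assuming the default _introspection index and the IOStats component’s reads_ps/writes_ps fields (names can vary by version):

```
index=_introspection sourcetype=splunk_resource_usage component=IOStats
| eval iops = 'data.reads_ps' + 'data.writes_ps'
| timechart span=5m avg(iops) AS avg_iops max(iops) AS peak_iops
```

Comparing peak_iops against the 800 IOPS target is a quick first check before running a dedicated benchmark like Bonnie++.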


Splunk ONTAP – Not Just a Tongue Twister

The Splunk App for NetApp Data ONTAP isn’t just a great way to integrate NetApp systems into your enterprise monitoring and logging solutions; it’s also a great tongue twister. Go ahead, try it: “Splunk App for NetApp Data ONTAP.” The app brings in data from all your NetApp FAS systems to give real-time insight into what is happening in the enterprise. It’s a great tool to watch your NetApp systems and proactively identify problems.

Doesn’t NetApp already have a tool like this? Of course! NetApp makes great software. Yet the big advantage of using the Splunk App is the ability to reach into other tiers of the infrastructure. The NetApp administrator isn’t about to let the JBoss developer reach into his or her storage systems. The network admin sure isn’t letting anyone else log on to their servers.


This app combines NetApp ONTAP (7-mode or cluster-mode) system information, performance, and configuration with the rest of the systems in an enterprise to give a complete view of your infrastructure.

 


 

Now, what does that mean in non-sales talk? Simple: it lets every admin see where problems are hiding. The VMware admin can see if “that storage stuff” is where their problems exist. It lets the network team see that the JBoss server is blowing up without spending too much time chasing down issues. The app lets the storage admin find which of their 24 FAS systems is running slow. When Splunk brings in data from the hypervisors, servers, storage, networking systems, and environmental systems (hey, why not get told when you are on battery power or the air conditioning stops working?), it gives a real view of the entire stack.

Key capabilities of the Splunk App for NetApp Data ONTAP:

  1. Overview of all 7-mode filers
  2. Overview of all cluster-mode filers
  3. Details on NetApp FAS entities: aggregates, disks, volumes, qtrees, and LUNs
  4. 36 built-in reports covering everything from failed disks to aggregates running too close to capacity

 

Inside the Splunk App for NetApp Data ONTAP (Version 2).

The app opens with an overview of your 7-mode filers and details for your cluster-mode filers. The overview page also has overview portions for each NetApp entity, and drill-ins are available for each entity, such as volumes and aggregates.

[Screenshots: 7-mode filer overview, cluster-mode filer overview, aggregates overview, volume detail, aggregate detail]


What is Agile?

Agile is a lot of things. In a nutshell, it is an alternative paradigm for software development. It supports and encourages the philosophy of valuing individuals and interactions, working software, customer collaboration, and response to change. It de-emphasizes strict processes and tools, comprehensive documentation, contract negotiation, and detailed planning.

Being Agile is about changing our thought process. We must rethink our priorities, how we handle change, stakeholder and developer interactions, teamwork, pride in our work, trust, how we measure success, and refinement. It’s also about placing our main focus on the product itself and stakeholder happiness, and staying out of the ruts through continuous improvement and innovation. Finally, it is about executing iterative development cycles to produce incremental pieces of functionality until the stakeholder’s vision has been brought to life.

 


 

Agile is a product development way of thinking that encompasses ‘The Twelve Principles of Agile Software’ from the Agile Manifesto:

We follow these principles:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  7. Working software is the primary measure of progress.
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity – the art of maximizing the amount of work not done – is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

– The Agile Manifesto
