In Dude, Where’s My Data? (Part One), you learned how to configure your data source to send your data to Splunk. In Dude, Where’s My Data? (Part Two), you learned how to configure Splunk to receive and parse that data once it gets there. However, you still aren’t seeing your data in Splunk. You are pretty sure you configured everything correctly, but how can you tell?
The next step is to check your Splunk configuration for errors. When data goes missing like this, there are three troubleshooting steps that I like to work through in order to pinpoint the problem.
They are as follows:
1. Check for typos
2. Check the permissions
3. Check the logs
Check for typos
There is always the possibility that even though the inputs look correct, there may be a typo that you originally missed. There may also be a configuration that is taking precedence over the one you just wrote. The best way to check is to use btool on the Splunk server configured to receive the data. This command-line interface (CLI) command checks the configuration files, merges the settings that apply to the same stanza heading, and returns them in order of precedence.
When looking for settings that relate to the inputs configured for a data source, this simple command can be run:
./splunk btool <conf_file_prefix> list --app=<app> --debug | grep <string>
Here, <string> is a keyword from the input you are looking for (such as part of the monitored file path); grepping for it quickly locates the settings that apply to that particular input.
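For example, to see every merged inputs.conf setting that mentions a monitored file, you might run something like the following from $SPLUNK_HOME/bin on the receiving server (the keyword "readthisfile" is just an illustrative placeholder for your own input):

```
# List every merged inputs.conf setting, across all apps, in order of
# precedence, then filter to the input we care about.
./splunk btool inputs list --debug | grep readthisfile
```

With --debug, btool also prints the file each setting came from, which makes it easy to spot a higher-precedence configuration overriding the one you just wrote.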
Check the permissions
More often than not, the issue preventing Splunk from reading the log data is that the user running Splunk doesn’t have permission to read the file or folder where the log data is stored. This can be fixed by adding the user running Splunk to the group assigned to the file on the server that is configured to send data to Splunk, and then making sure that the group has the ability to read the file. On a Linux host, if you wanted Splunk to read, for example, /var/log/secure/readthisfile.log, you would navigate to the /var/log/secure folder from the command-line interface using the following command:
cd /var/log/secure
Once there, you would run this command:
ls -l readthisfile.log
This returns results similar to the line below:
-rwxr----- creator reader /var/log/secure/readthisfile.log
Here, creator (the user that owns the file) can read, write, and execute the file; reader (the group that owns the file) can only read it; and all other users cannot read, write, or execute it.
Now, in this example, if the user running Splunk is splunk, you can check which groups splunk belongs to by running either of the following commands:
id splunk
groups splunk
If the results show that the splunk user is not a member of the reader group, a user with sudo access (or root) can add splunk to the reader group using the following command:
sudo usermod -a -G reader splunk
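The whole check can be sketched as a short script. This is an illustrative sketch only: it uses a temporary file in place of /var/log/secure/readthisfile.log, since the path, the splunk user, and the reader group above are all example names.

```shell
#!/bin/sh
# Illustrative sketch: a temp file stands in for the real log
# (/var/log/secure/readthisfile.log in the example above).
LOGFILE=$(mktemp)
chmod 640 "$LOGFILE"    # owner: read/write, group: read, other: none

# The permission string, as ls -l would show it (first 10 characters;
# some systems append a '.' or '+' for SELinux/ACL markers)
perms=$(ls -l "$LOGFILE" | awk '{print substr($1, 1, 10)}')
echo "permissions: $perms"

# Can the current user read the file? Splunk performs the same check
# when it tries to monitor the file as the user it runs under.
if [ -r "$LOGFILE" ]; then readable=yes; else readable=no; fi
echo "readable: $readable"

rm -f "$LOGFILE"
```

Running the same `[ -r … ]` test as the user that runs Splunk (e.g. `sudo -u splunk test -r /path/to/file && echo yes`) is a fast way to confirm the fix took effect. Note that a user must log out and back in (or the Splunk process must be restarted) before new group membership applies.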
Check the logs
If the Splunk platform’s internal logs are accessible from the Splunk GUI, an admin user can run the following search to check for errors or warnings:
index=_internal (log_level=error OR log_level=warn*)
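If that search returns too much noise, it can be narrowed to the components involved in receiving data. For example (sourcetype=splunkd is the standard internal sourcetype; TcpInputProc is the splunkd component that handles incoming forwarder traffic — adjust to the components relevant to your input):

```
index=_internal sourcetype=splunkd log_level=ERROR component=TcpInputProc
```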
As a bonus, if your firewall or proxy logs are configured to send data to Splunk, and those logs capture the network traffic between the data source and the receiving Splunk server, searching them for the IP address and/or hostname of the sending or receiving server will help you find out whether data is being blocked in transit. On a Linux host, the following commands can also tell you which ports are open:
sudo lsof -i -P -n | grep LISTEN
sudo netstat -tulpn | grep LISTEN
sudo lsof -i:22    ## see a specific port, such as 22 ##
sudo nmap -sTU -O localhost
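You can also probe the receiving port directly from the sending host. The sketch below uses bash’s /dev/tcp feature; 127.0.0.1 stands in for your receiving Splunk server, and 9997 is the conventional Splunk receiving port, though yours may differ.

```shell
#!/bin/bash
# Quick TCP reachability probe (bash's /dev/tcp feature).
# Replace HOST/PORT with your receiving indexer's address and port.
HOST=127.0.0.1
PORT=9997
if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
  status="open"
else
  status="closed or filtered"
fi
echo "$HOST:$PORT is $status"
```

If the port reports closed or filtered from the sending host but shows as LISTEN on the receiver, the block is somewhere in between — a firewall or proxy is the usual suspect.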
One, two, three strikes… and you’re out of problems with Splunk. Ha, if only these three blog posts could fix all of your Splunk issues, but we hope they help. If you’re still having Splunk data configuration issues, or have any other troubleshooting needs, see how our Splunk services can help!