Roll Your Own File System Monitor With Tripwire And Splunk

Don’t Tell Tripwire Though, They Want You To Pay For Their App

Vince Sesto
Splunk User Developer Administrator
4 min read · Apr 6, 2017


Tripwire can be a great tool for monitoring and detecting changes in your operating system that may have been caused by intruders or malicious software. Although the paid version can bring benefits to any environment, it may be overkill, or a costly expense better spent in other areas of your network.

There is an open source version that, with a small amount of time, can be paired with Splunk to give you a handy little monitoring app for your filesystem.

Install Tripwire And Get It Running On Your File System

Installing Tripwire and getting it running in your environment is pretty straightforward, and the documentation from DigitalOcean will give you a good introduction.
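As a rough guide, on a Debian or Ubuntu based host the install itself is only a couple of commands (package names and steps will differ on other distributions, so treat this as a sketch rather than a replacement for the guide above):

sudo apt-get update
sudo apt-get install tripwire
# Build the baseline database once the site and local passphrases are set up
sudo tripwire --init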

Once Tripwire is installed and configured, I am sure you'll see that it is a helpful tool, but running the daily report and reading through it manually is not the most effective way to monitor your systems. This is why teaming it up with Splunk is a great way to improve the process.

Create a cronjob to run tripwire daily

This is pretty simple, all you need to do is run a tripwire --check command and direct the output to a file for later use. I have simply added the following:
15 9 * * * /usr/sbin/tripwire --check > /var/log/daily_report.log

Create a script to extract the data you want

To start, I like extracting all of the Rule summary data from the Tripwire report and placing it into a nicely formatted log file. Something similar to the following:
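The exact layout is up to you. As a purely hypothetical illustration (the TOTAL and RULE labels here are my own, not taken from the script below), a few lines from the formatted log might look like:

2017-04-06 RULE * Tripwire Data Files
2017-04-06 RULE * Root file-system executables
2017-04-06 TOTAL Total violations found:  2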

I have set up a simple BASH script to do the work for me; it runs through daily_report.log, searches for the Rule summary, and extracts the useful information while hopefully leaving out any of the formatting. My script looks something like the image below:

I know, it's a bad image, but the script basically reads through each line of the report, runs each line through a case statement to decide whether the data is useful, and then produces output depending on the type of data it is. The code can be downloaded from GitHub at the following location:
https://github.com/vincesesto/tripwire_data/blob/master/triplog.sh
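If you would rather not squint at the image, here is a minimal sketch of the same idea: read the report line by line and push each line through a case statement. This is an approximation of the approach rather than the script in the repository, and the report path and match patterns are assumptions:

#!/bin/bash
# Sketch only: pull the useful Rule Summary lines out of the daily Tripwire report.
REPORT=/var/log/daily_report.log
TODAY=$(date +%Y-%m-%d)

while IFS= read -r line; do
  case "$line" in
    "Total violations found:"*)
      # e.g. "Total violations found:  2"
      echo "$TODAY TOTAL $line" ;;
    \**)
      # rules with violations are prefixed with '*' in the Rule Summary
      echo "$TODAY RULE $line" ;;
    *)
      # everything else is report formatting we can drop
      ;;
  esac
done < "$REPORT"

The script writes to stdout, which is why the cron job in the next step redirects its output into the log file.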

Get your extract script running on cron

Another simple step, where we take the script we created in the previous step and send the output to our new log file. All we need to do is make sure our daily report is complete before we run the data extraction:
15 10 * * * /root/triplog.sh >> /var/log/tripwire/tripwire_splunk.log

Index It Baby!!

If you know Splunk, this is a pretty simple step. I like to create my own application-specific sourcetype, "TripWireData". The log files created should make it easy to pick out the date, and when indexed the fields should be extracted as well.
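If you are picking the file up with a Universal Forwarder or monitoring it on a standalone instance, a minimal inputs.conf stanza along these lines does the job (the index name is an assumption, so point it at whichever index you use for this kind of data):

[monitor:///var/log/tripwire/tripwire_splunk.log]
sourcetype = TripWireData
index = main
disabled = false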

Create Your Splunk Dashboard

The screenshot provided at the start of this article is just a basic example, and the Simple XML behind it can be found at the following location:
https://github.com/vincesesto/tripwire_data/blob/master/tripwiredata_overview.xml

If you are interested in learning more about Splunk Dashboards and Developing with the Splunk Web Framework, there are many resources available.

It is a pretty basic Simple XML script showing an overview of violations, files removed and files modified. The data in the daily report could provide a lot more detailed information, but I think it is a good example to get started.
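To give you an idea of the structure, a stripped-down single panel might look like the sketch below. The search string is an assumption built on the sourcetype and hypothetical log format described above, not the exact query from the repository dashboard:

<dashboard>
  <label>TripWire Data Overview</label>
  <row>
    <panel>
      <title>Rules With Violations Over Time</title>
      <chart>
        <search>
          <query>sourcetype=TripWireData RULE | timechart count</query>
          <earliest>-7d@d</earliest>
          <latest>now</latest>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>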

Start Alerting

For my example, I am happy with checking the results of my dashboard daily, but if you're covering a large number of hosts, you may want to set up alerting around your dashboards to allow some extra visibility when things could potentially go wrong.
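If you do go down that path, a scheduled saved search is the usual starting point. A rough savedsearches.conf sketch might look like the following, where the stanza name, schedule, search string, and email address are all assumptions for illustration (the search leans on the RULE keyword from the log format sketched earlier):

[TripWire - Violations Detected]
search = sourcetype=TripWireData RULE
cron_schedule = 30 10 * * *
enableSched = 1
dispatch.earliest_time = -24h
dispatch.latest_time = now
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
action.email = 1
action.email.to = ops@example.com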

Conclusion

As you can see, with a small amount of time, we have been able to set up a nice tool and make use of a neat open source application. If we wanted to, this could be deployed via Puppet or Ansible across our servers, and we could set up an app to further segregate the data and ease deployment to our Splunk search heads.

Found this post useful? Kindly tap the ❤ button below! :)

About The Author

Vince has worked with Splunk for over 5 years, developing apps and reporting applications around Splunk, and now works hard to advocate its success. He has worked as a system engineer in big data companies and development departments, where he has regularly supported, built, and developed with Splunk. He has now published his first book via Packt Publishing — Learning Splunk Web Framework.
