SiteScope alerts to Elasticsearch

HP SiteScope is agentless monitoring software focused on monitoring the availability and performance of distributed IT infrastructures, including servers, network devices and services, applications and application components, operating systems, and various other IT enterprise components. [Wikipedia]

As it happens, SiteScope runs on Java.

“It does not have a proper dashboard”... “we have to log in and generate reports, which is sort of cumbersome”... “wish we had an easier way of gathering the reports”

These are some of the lines that caught my attention and got me thinking. What if we could get the logs from SiteScope into ELK and build dashboards, so we could query them across time and generate reports easily?

Note that I’m talking about the case where SiteScope is installed on Windows; I’m not sure how it stores its data when installed on Linux. SiteScope stores all its data in log files, which are rotated periodically.

So to kick things off I wanted to tackle the alert log. For every alert, whether it’s an SMS or an email, SiteScope stores the details in a file called alert.log. SiteScope logs are in a multi-line format:

10:47:30 02/03/2022     alert
alert-type: Mailto
alert-name: Oracle Perfomance
alert-failed: true
alert-message: Service not available
alert-monitor: Dynamic Disk Space Monitor on server1
alert-group: servergroup1 : oracle : server1
alert-id: 1224528389
alert-monitor-id: SiteScope/201721545/201721547/1224491210/1224491202/202886322/4:37688
action-name: Perfomance EMail Alert
alert-to: foo@bar.com
alert-subject: Critical : Monitor Dynamic Disk Space Monitor on server1 is in critical status
alert-body:

Monitor: server1:Dynamic Disk Space Monito

Group: server1

CountersInError: Disk/File System/[/dev/oravol (/ora)]/percent full: 85

Time: 10:47 03/02/22

So the plan was to install Filebeat to monitor the alert log and ship it to Logstash, where we’d apply a filter and then send the parsed alerts on to Elasticsearch. Here’s the setup in the abstract:
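alert.log → Filebeat (multiline) → Logstash (grok) → Elasticsearch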

Filebeat

Setting up Filebeat is a breeze: download the package, run the PowerShell script to set it up as a service (set the execution policy to Bypass if required), tweak filebeat.yml, and fire it up :-)
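For reference, the service setup boils down to something like this, run from the extracted Filebeat directory (install-service-filebeat.ps1 ships with the Windows package):

# Register Filebeat as a Windows service, bypassing the execution policy for this one script
PowerShell.exe -ExecutionPolicy Bypass -File .\install-service-filebeat.ps1

# Start the service once filebeat.yml is in place
Start-Service filebeat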

Here’s our filebeat.yml:

filebeat.prospectors:
- input_type: log
  paths:
    - D:\Apps\SiteScope\logs\alert.log
  fields: {logtype: sitescope_alert}
  multiline.pattern: '^[0-9]{2}:[0-9]{2}:[0-9]{2}'
  multiline.negate: true
  multiline.match: after
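That covers only the input side; Filebeat also needs to be pointed at Logstash with an output section along these lines (the host and port are placeholders for your own Logstash instance):

output.logstash:
  # placeholder; replace with your Logstash host and port
  hosts: ["logstash-host:5044"]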

The multiline.pattern tells Filebeat which lines mark the start of an event, so the lines in between can be combined into one. But how did I pick this pattern? If you look closely, every alert starts with a timestamp line like 10:47:30 02/03/2022, which ^[0-9]{2}:[0-9]{2}:[0-9]{2} matches, while none of the detail lines that follow do. So the timestamp line is sort of like the delimiter.

multiline.negate and multiline.match

It took me some time to get my head around the explanation provided here. With negate set to true and match set to after, every line that does not match the pattern gets appended to the preceding line that does, and that’s exactly what we wanted: each alert collapsed into a single event. Once we have it as one event, we send it off to Logstash to grok it :-)
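On the Logstash side, the skeleton of the pipeline is something like this (a minimal sketch; the port, hosts, and index name are illustrative, and the filter section is what the rest of this post is about):

input {
  # receive events from Filebeat; 5044 is the conventional Beats port
  beats {
    port => 5044
  }
}

output {
  # write the parsed alerts into a daily index
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "sitescope-alerts-%{+YYYY.MM.dd}"
  }
}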

The grok filter I used is here.

First, I did some basic gsub to remove the newline characters from the combined event.
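A sketch of that step, assuming the multiline event lands in the default message field:

mutate {
  # replace embedded newlines with spaces so the grok patterns
  # can treat the whole alert as a single line
  gsub => ["message", "\n", " "]
}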

Next, I wanted to extract the server name from the alert group, so I took the alert group and extracted the last WORD from it:

alert-group: servergroup1 : oracle : server1
grok { match => { "al_group" => ".*: %{WORD:al_host}" } }

Other than that, I’ve added a custom field, monitor, to put the monitors up as a ‘tag cloud’. I was wondering if there’s an easier way than writing so many if-else conditionals, maybe something like defining an external file with a mapping. Leave a comment if you know.
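One candidate I haven’t tried here is the translate filter plugin, which does lookups against an external dictionary file; a rough sketch, with the field and file names being illustrative:

translate {
  # field to look up; assumes the grok stage extracted al_monitor
  field => "al_monitor"
  # external YAML file mapping monitor names to the labels we want
  dictionary_path => "D:/Apps/logstash/monitor_map.yml"
  # where to store the mapped value
  destination => "monitor"
  # value to use when the lookup misses
  fallback => "other"
}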

This is pretty much how we started out. It was fun working through these multiline events. As always, feel free to share your feedback in the comments below.