Machine Learning for your Infrastructure: Anomaly Detection with Elastic + X-Pack

Velotio Technologies
Velotio Perspectives
8 min read · Mar 11, 2019

Introduction

The world continues to go through digital transformation at an accelerating pace. Modern applications and infrastructure keep expanding, and operational complexity keeps growing. According to a recent ManageEngine Application Performance Monitoring survey:

  • 28 percent use ad-hoc scripts to detect issues in over 50 percent of their applications.
  • 32 percent learn about application performance issues from end users.
  • 59 percent trust monitoring tools to identify most performance deviations.

Most enterprises and web-scale companies already have instrumentation and monitoring capabilities built around an Elasticsearch cluster. They collect large amounts of data but struggle to use it effectively. This data can be used to improve performance, availability, and uptime, and to support root cause analysis and incident prediction.

IT operations & Machine learning

Here is the main question: how do we make sense of the huge piles of collected data? The first step is to understand the correlations within the time series data. But understanding correlations alone is not enough, since correlation does not imply causation. We need a practical and scalable approach to understanding the cause-effect relationships between data sources and events across a complex infrastructure of VMs, containers, networks, microservices, regions, etc.

It is quite common for a problem in one component to cause something to go wrong in another. In such cases, historical operational data can be used to identify the root cause by working through a series of intermediate causes and effects. Machine learning is particularly useful for such problems, where we need to identify “what changed”, since machine learning algorithms can analyze the existing data to learn its patterns, making it easier to recognize the cause. This is unsupervised learning: the algorithm learns from experience and identifies similar patterns when they come along again.

Let’s see how you can set up Elastic + X-Pack to enable anomaly detection for your infrastructure and applications.

Anomaly Detection using Elastic’s machine learning with X-Pack

Step I: Setup

1. Setup Elasticsearch:

According to the Elastic documentation, the recommended version is Oracle JDK 1.8.0_131. Check whether the required Java version is installed on your system; it should be at least Java 8. Install or upgrade if necessary.
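You can verify the installed version from a terminal:

    java -version

If the reported version is older than 1.8, install or upgrade the JDK before proceeding.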

  • Download the Elasticsearch tarball and untar it.
  • This creates a folder named elasticsearch-5.5.1; go into the folder.
  • Install X-Pack into Elasticsearch.
  • Start Elasticsearch (a rough command sequence is sketched below).
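On a Linux machine these steps look roughly like the following. This is only a sketch, assuming the 5.5.1 tarball from artifacts.elastic.co; adjust the version and URL for your environment:

    # download and unpack Elasticsearch 5.5.1
    curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.1.tar.gz
    tar -xzf elasticsearch-5.5.1.tar.gz
    cd elasticsearch-5.5.1

    # install X-Pack and start Elasticsearch in the foreground
    bin/elasticsearch-plugin install x-pack
    bin/elasticsearch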

2. Setup Kibana

Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch.

  • Download the Kibana tarball and untar it.
  • This creates a folder named kibana-5.5.1; go into the directory.
  • Install X-Pack into Kibana.
  • Start Kibana (a rough command sequence is sketched after the screenshot).
  • Navigate to Kibana at http://localhost:5601/
  • Log in as the built-in user elastic with the password changeme.
  • You will see the screen below:
Kibana: X-Pack Welcome Page
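The Kibana installation follows the same pattern. A rough sketch, assuming the 5.5.1 Linux 64-bit tarball (the exact archive name depends on your platform):

    # download and unpack Kibana 5.5.1
    curl -L -O https://artifacts.elastic.co/downloads/kibana/kibana-5.5.1-linux-x86_64.tar.gz
    tar -xzf kibana-5.5.1-linux-x86_64.tar.gz
    cd kibana-5.5.1-linux-x86_64

    # install X-Pack and start Kibana
    bin/kibana-plugin install x-pack
    bin/kibana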

3. Metricbeat:

Metricbeat helps monitor servers and the services they host by collecting metrics from the operating system and from running services. In this blog, we will use it to collect CPU utilization metrics from our local system.

  • Download the Metricbeat tarball and untar it.
  • This creates a folder; go into it.
  • By default, Metricbeat is configured to send collected data to an Elasticsearch instance running on localhost. If your Elasticsearch is hosted on another server, change the host and authentication credentials in the metricbeat.yml file (a minimal sketch of the relevant section follows the screenshot).
Metricbeat Config
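A minimal sketch of the relevant metricbeat.yml section, assuming a local Elasticsearch secured with the default X-Pack credentials; change the host, username, and password to match your setup:

    output.elasticsearch:
      hosts: ["localhost:9200"]
      username: "elastic"
      password: "changeme"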
  • Metricbeat provides the following stats:

  - System load
  - CPU stats
  - IO stats
  - Per-filesystem stats
  - Per-CPU-core stats
  - Filesystem summary stats
  - Memory stats
  - Network stats
  - Per-process stats

  • Start Metricbeat as a daemon process (a rough command is sketched below).
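From inside the Metricbeat folder, one way to run it in the background looks roughly like this (-e logs to stderr and -c points at the config file):

    # start Metricbeat in the background with the local configuration
    nohup ./metricbeat -e -c metricbeat.yml &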

Now all the setup is done. Let’s move on to Step II and prepare the time series data we will feed to the machine learning jobs.

Step II: Time Series data

  • Real-time data: Metricbeat is providing us real-time series data, which will be used for unsupervised learning. Follow the steps below to define the index pattern metricbeat-* in Kibana so that you can search against this pattern in Elasticsearch:
    - Go to Management -> Index Patterns
    - Provide Index name or pattern as metricbeat-*
    - Select Time filter field name as @timestamp
    - Click Create

You will not be able to create the index pattern if Elasticsearch does not yet contain any Metricbeat data. Make sure Metricbeat is running and its output is configured to point at your Elasticsearch; a quick way to check is sketched below.
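One quick way to confirm that Metricbeat data is reaching Elasticsearch is to list the matching indices, assuming the default X-Pack credentials:

    curl -u elastic:changeme 'http://localhost:9200/_cat/indices/metricbeat-*?v'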

  • Saved historic data: To quickly see how machine learning detects anomalies, you can also use the sample data provided by Elastic. Download the sample data by clicking here.

  - Unzip the files into a folder: tar -zxvf server_metrics.tar.gz
  - Download this script. It will be used to upload the sample data to Elasticsearch.
  - Give the file execute permissions: chmod +x upload_server-metrics.sh
  - Run the script.
  - Just as we created an index pattern for the Metricbeat data, create the index pattern server-metrics* in the same way.
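After running the script, you can confirm the sample data was indexed before creating the index pattern, again assuming the default X-Pack credentials:

    curl -u elastic:changeme 'http://localhost:9200/_cat/indices/server-metrics*?v'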

Step III: Creating Machine Learning jobs

There are two scenarios in which data is considered anomalous. First, when the behavior of a key indicator changes over time relative to its previous behavior. Second, when the behavior of one entity within a population deviates from the other entities in that population on a single key indicator.

To detect these anomalies, there are three types of jobs we can create:

  1. Single metric job: used to detect Scenario 1 anomalies over a single key performance indicator.
  2. Multi-metric job: also detects Scenario 1 anomalies, but can track more than one performance indicator, such as CPU utilization along with memory utilization.
  3. Advanced job: created to detect Scenario 2 anomalies.

For simplicity, we are creating the following single metric jobs:

  1. Tracking CPU utilization: using Metricbeat data
  2. Tracking total requests made on the server: using the sample server data

Follow the below steps to create single metric jobs:


Job1: Tracking CPU Utilization

Job2: Tracking total requests made on the server

  • Go to http://localhost:5601/
  • Go to Machine learning tab on the left panel of Kibana.
  • Click on Create new job
  • Click Create single metric job
  • Select the index pattern we created in Step II, i.e. metricbeat-* and server-metrics* respectively.
  • Configure the job by providing the following values (these map onto the job configuration sketched after this list):
  1. Aggregation: the aggregation function that will be applied to the field of the data we are analyzing.
  2. Field: a drop-down showing all the fields available for the selected index pattern.
  3. Bucket span: the interval for analysis. The aggregation function is applied to the selected field over each interval of this length.
  • If your data contains many empty buckets, i.e. the data is sparse, and you do not want that to be treated as anomalous, tick the sparse data checkbox (if it appears).
  • Click on Use full <index pattern> data to use all available data for analysis.
Metricbeats Description
Server Description
  • Click on the play symbol
  • Provide a job name and description
  • Click on Create Job
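For reference, the single metric wizard builds a job roughly equivalent to the following X-Pack machine learning API call. This is only a sketch: the job name, the Metricbeat field system.cpu.user.pct, and the 5-minute bucket span are illustrative choices, and the exact request body may vary slightly between 5.x releases:

    # hypothetical example: a single metric job tracking mean CPU usage
    curl -u elastic:changeme -H 'Content-Type: application/json' -X PUT \
      'http://localhost:9200/_xpack/ml/anomaly_detectors/cpu-utilization-job' -d '
    {
      "analysis_config": {
        "bucket_span": "5m",
        "detectors": [ { "function": "mean", "field_name": "system.cpu.user.pct" } ]
      },
      "data_description": { "time_field": "@timestamp" }
    }'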

After the job is created, the available data is analyzed. Click on View Results and you will see a chart showing the actual values together with the upper and lower bounds of the predicted value. If an actual value lies outside this range, it is considered anomalous. The color of the circles represents the severity level.

Here the predicted range is wide because the model has only just started learning; as more data arrives, the predictions will improve.
Here the predictions are already quite good because there is plenty of data from which to learn the pattern.
  • Click on the Machine Learning tab in the left panel. The jobs we created will be listed here.
  • You will see a list of actions for every job you have created.
  • Since Metricbeat is storing data every minute for Job 1, we can feed the data to the job in real time. Click on the play button to start the datafeed (the equivalent API call is sketched after the screenshots). As more and more data arrives, the predictions will improve.
  • You can see details of the anomalies by clicking Anomaly Viewer.
An anomaly in the metricbeats data
Server metrics anomalies
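If you prefer the API over the UI, starting the datafeed for a wizard-created job looks roughly like this. A sketch only: the wizard typically names the datafeed datafeed-<job_id>, and the job ID used here is the hypothetical one from the earlier example:

    # hypothetical example: start streaming new data into the job in real time
    curl -u elastic:changeme -X POST \
      'http://localhost:9200/_xpack/ml/datafeeds/datafeed-cpu-utilization-job/_start'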

We have seen how machine learning can be used to find patterns across different statistics and to detect anomalies. After identifying anomalies, we still need to establish the context of those events, for example, which other factors are contributing to the problem. In such cases, we can troubleshoot by creating multi-metric jobs.

*******************************************************************

This post was originally published on Velotio Blog.

Velotio Technologies is an outsourced software product development partner for technology startups and enterprises. We specialize in enterprise B2B and SaaS product development with a focus on artificial intelligence and machine learning, DevOps, and test engineering.

Interested in learning more about us? We would love to connect with you on our Website, LinkedIn or Twitter.

*******************************************************************
