ELK stack: Installation and shipping data

Ibrahim Ayadhi
8 min read · Apr 11, 2020


Welcome to the second article of this series. I encourage you to check out the previous article for a better understanding of what we will discuss here.

1- Installing ELK STACK and configuration

1.1- Introduction to ELK

A- What’s ELK?

https://www.elastic.co/fr/what-is/elk-stack

B- What is the difference between ELK Basic and ELK OSS?

https://www.elastic.co/fr/what-is/open-x-pack

1.2- ELK Installation

In our project we set up ELK Stack Basic (7.6.1), following the official guide provided by elastic.co:

https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html

1.3- ELK Configuration

In this section we cover the configuration changes we made to the ELK Stack.

A- Elasticsearch configuration

All the configuration changes are made in the elasticsearch.yml file, located at /etc/elasticsearch/elasticsearch.yml

To open it use the following command : sudo nano /etc/elasticsearch/elasticsearch.yml

The file lists the default paths for Elasticsearch data and logs.

Then navigate to the Network section of the file. This section is very simple: if you are using the default port, you don't even have to mention it. If you want to use another port, uncomment the http.port line and change its value.

Setting network.bind_host: 0.0.0.0 enables remote access to the Elasticsearch server, which will let us connect the Beats to the ELK stack later on.
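For reference, the relevant part of elasticsearch.yml might look like this (a sketch; 9200 is the default port, and the commented line only matters if you change it):

```yaml
# /etc/elasticsearch/elasticsearch.yml -- Network section
# Bind to all interfaces so remote Beats can reach Elasticsearch
network.bind_host: 0.0.0.0
# Only uncomment and change this if you want a non-default port:
#http.port: 9200
```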

Once this is done, we need to restart the Elasticsearch service with this command:

sudo systemctl restart elasticsearch

Disclaimer: setting network.bind_host to 0.0.0.0 is not recommended due to security issues and should not be used in production. We are only at the prototyping phase.

B-Kibana configuration:

All the configuration changes are made in the kibana.yml file, located at /etc/kibana/kibana.yml. To open it, use the following command:

sudo nano /etc/kibana/kibana.yml

To make Kibana remotely accessible, set server.host: "0.0.0.0". There is no restriction on which port it should use, so we will leave the default, 5601. Now restart Kibana: sudo systemctl restart kibana
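As a sketch, the relevant lines of kibana.yml would look like this (5601 is the default port):

```yaml
# /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
```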

Now you should be able to access Kibana from the browser at http://Your_Server_IP:5601

While you are at it, enjoy the UI and try some of the sample dashboards and data it provides.

Disclaimer: setting server.host to 0.0.0.0 is not recommended due to severe security issues and should not be used in production. We are only at the prototyping phase.

C-Logstash configuration :

Now we will tackle the configuration of Logstash :

sudo cat /etc/logstash/logstash-sample.conf

This file contains a working sample configuration for Logstash. We need to copy it to the directory /etc/logstash/conf.d/ and rename it logstash.conf
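Assuming the default package layout, the copy can be done in one command:

```shell
# Copy the sample pipeline into the directory Logstash actually loads from
sudo cp /etc/logstash/logstash-sample.conf /etc/logstash/conf.d/logstash.conf
```

The sample pipeline defines a beats input listening on port 5044 and an Elasticsearch output pointing at localhost:9200, which is exactly what we need here.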

Don’t forget to restart the service: sudo systemctl restart logstash

D-Checking services :

After setting up the configuration files of Logstash, Kibana, and Elasticsearch properly, you can start the services and check them :

You can check whether each service is listening on its port. It doesn't matter if you see tcp6 instead of tcp.

Kibana : 5601

Elasticsearch : 9200

Logstash : 5044
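One quick way to verify this (assuming the ss utility is available; netstat -tlnp works similarly):

```shell
# Show listening sockets for Kibana (5601), Elasticsearch (9200) and Logstash (5044)
sudo ss -tlnp | grep -E ':(5601|9200|5044)'
```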

2-Beats configuration and data shipping :

A- Download and Installation of Winlogbeat:

Download URL :

https://www.elastic.co/fr/downloads/beats/winlogbeat

Installation :

https://www.elastic.co/guide/en/beats/winlogbeat/current/winlogbeat-installation.html

B- Winlogbeat configuration:

In our project we adapted the default winlogbeat.yml configuration as described below.

Understanding winlogbeat.event_logs :

The winlogbeat section in winlogbeat.yml specifies all options that are specific to Winlogbeat. Most importantly, it contains the list of event logs to monitor. We can see that the Sysmon channel is enabled by default.
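As an illustration, a minimal winlogbeat.event_logs section covering the standard channels plus Sysmon might look like this (the channel list is an assumption based on the default 7.x config):

```yaml
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: System
  - name: Security
  - name: Microsoft-Windows-Sysmon/Operational
```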

To add more modules, you can refer to this URL :

https://www.elastic.co/guide/en/beats/winlogbeat/current/configuration-winlogbeat-options.html

Understanding the number of shards and number of replicas :

- index.number_of_shards :

An index can potentially store a large amount of data, exceeding the hardware limits of a single node. To solve this problem, Elasticsearch can subdivide an index into multiple pieces called shards, each of which can be stored on a different node.

- index.number_of_replicas :

The number of copies of each shard that Elasticsearch stores. This is useful when you run Elasticsearch on a cluster of machines: if one node goes down, the data is not lost.
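For a single-node prototype like ours, a minimal sketch of these settings in winlogbeat.yml would be (values are illustrative; one shard and no replicas are reasonable when everything runs on one machine):

```yaml
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 0
```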

Outputs :

Only one of the Elasticsearch and Logstash outputs may be enabled when starting the service or checking the configuration.
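For example, with the Elasticsearch output active, the outputs section would look like this (Your_Server_IP is a placeholder):

```yaml
output.elasticsearch:
  hosts: ["Your_Server_IP:9200"]

# Keep the other output commented out:
#output.logstash:
#  hosts: ["Your_Server_IP:5044"]
```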

Processors and logging settings :

This section contains the default processors used by winlogbeat and an example of the logging settings :

Index Lifecycle Management (ILM) :

Finally, we had to disable the index lifecycle manager. ILM, or Index Lifecycle Management, is a free X-Pack feature integrated with ELK Stack Basic but not with the ELK OSS version. You can use ILM to automatically manage indices according to your performance, resiliency, and retention requirements. For example: create a new index each day, week, or month and archive previous ones; spin up a new index when an index reaches a certain size; or delete stale indices to enforce data retention standards.

The ILM feature is enabled by default in ELK Stack Basic, but it requires further configuration, especially when your beats are not connected directly to Elasticsearch. ILM is beyond the scope of this article, so we will disable it.
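Disabling it is a single line in winlogbeat.yml:

```yaml
setup.ilm.enabled: false
```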

Sysmon configuration & integration with MITRE ATT&CK:

We will set up the new Sysmon configuration before loading the index templates, to make sure the new Sysmon fields and configuration are loaded properly into the ELK stack.

System Monitor (Sysmon) is a Windows system service and device driver that, once installed on a system, remains resident across system reboots to monitor and log system activity to the Windows event log. It provides detailed information about process creations, network connections, and changes to file creation time. By collecting the events it generates using Windows Event Collection or SIEM agents, and subsequently analyzing them, you can identify malicious or anomalous activity and understand how intruders and malware operate on your network.

MITRE ATT&CK is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.

I. Download Sysmon :

https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon

II. Download the xml configuration for sysmon containing MITRE ATT&CK references: https://raw.githubusercontent.com/ion-storm/sysmon-config/master/sysmonconfig-export.xml

III. Install Sysmon with the appropriate configuration file: sysmon64 -accepteula -i sysmonconfig-export.xml

IV. Check the current configuration: sysmon64 -c

Setup index template, dashboards and index pattern:

I. Load Index template:

Index templates allow you to define templates that will automatically be applied when new indices are created. A template includes settings, mappings, and a simple pattern that controls whether the template should be applied to a new index.

A connection to Elasticsearch is required to load the index template. If the output is not Elasticsearch, you must load the template manually. In our case, Winlogbeat will not be connected directly to Elasticsearch, so we will have to set up the index template manually before starting the service.

This requires temporarily disabling the Logstash output and enabling the Elasticsearch output.
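A sketch of the manual load, run from the Winlogbeat install directory in an elevated PowerShell (Your_Server_IP is a placeholder; the -E flags override the config file for this one command only, so you don't have to edit winlogbeat.yml twice):

```shell
.\winlogbeat.exe setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["Your_Server_IP:9200"]'
```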

https://www.elastic.co/guide/en/beats/winlogbeat/current/winlogbeat-template.html#load-template-manually

II. Load Dashboards and Index pattern:

https://www.elastic.co/guide/en/beats/winlogbeat/current/load-kibana-dashboards.html
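Loading the dashboards and index pattern follows the same pattern; a sketch, assuming Kibana listens on its default port:

```shell
.\winlogbeat.exe setup --dashboards -E 'setup.kibana.host="Your_Server_IP:5601"'
```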

Changing output:

After properly loading the index template, index pattern, and dashboards, you can check them in the Kibana interface.

Index Template, Index Pattern and Dashboards are loaded Successfully:

Now, after opening the configuration file, we will disable the Elasticsearch output by commenting it out, then enable the Logstash output by uncommenting it.
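After the switch, the outputs section of winlogbeat.yml would look roughly like this (the mirror image of the setup phase; Your_Server_IP is a placeholder):

```yaml
#output.elasticsearch:
#  hosts: ["Your_Server_IP:9200"]

output.logstash:
  hosts: ["Your_Server_IP:5044"]
```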

Shipping data to ELK stack :

Now we can start winlogbeat and sysmon services from PowerShell or using the services.msc interface, and check data in Kibana interface.

When Winlogbeat starts, the ELK stack will use the Logstash configuration to create the index that stores the data.

This is the default dashboard for winlogbeat :

Under Discover , we can check sysmon logs with the new configuration ( MITRE references ) :

The rest of the beats do not differ much from Winlogbeat, whether in configuration or installation.

The beats that we have used are :

  • Winlogbeat
  • Filebeat
  • Packetbeat
  • Metricbeat

We want to mention that certain beats, like Metricbeat and Filebeat, ship with several modules you can enable.

For example, we used the system module in Filebeat to monitor SSH authentication and sudo commands on an Ubuntu machine, and the suricata module to collect logs from the Suricata IDS.
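Enabling the system module works the same way as any other Filebeat module (a sketch; the module ships with the default Filebeat package, and a restart picks up the change):

```shell
sudo filebeat modules enable system
sudo systemctl restart filebeat
```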

Enabling Suricata module :

We have used this command to enable the Suricata module in filebeat :

sudo filebeat modules enable suricata

To see the modules available in Filebeat, you can check the directory /etc/filebeat/modules.d/

To list enabled and disabled modules, use this command :

filebeat modules list

This is the link we have used to install Suricata on our device:

https://www.alibabacloud.com/blog/594941

You should get a dashboard relatively similar to this. No worries if you don't get exactly the same result; we will handle dashboards in the upcoming articles.

It is also possible to integrate a Suricata web interface with the ELK stack; for that, you can check the link below:

https://www.howtoforge.com/tutorial/suricata-with-elk-and-web-front-ends-on-ubuntu-bionic-beaver-1804-lts/


Ibrahim Ayadhi

Penetration Tester | Red Team | OSEP | OSCP | CRTO | CEH Master | LPIC-1