Connecting OpenStack to the ELK Stack: Analyzing the OpenStack Swift Log

Using Elasticsearch, Kibana, Logstash and Filebeat for monitoring and log analysis of OpenStack Swift

Pedro Emerick
8 min read · Jun 27, 2019

Read this post in Portuguese-BR

While studying a bit of OpenStack in the course IMD0290 — Special Topics on the Internet of Things “A”, taught by Carlos Eduardo da Silva in the Bachelor of Information Technology program at UFRN, and also studying the ELK stack (Elasticsearch, Kibana, Logstash), I wanted to combine the two, using ELK to monitor the logs of OpenStack services. Along the way I ran into some questions and worked out some solutions, which led me to write this post to share with you.

Here we will run the ELK stack in containers to visualize some data from the OpenStack Swift log, and use Filebeat to ship the Swift logs to it. Let's start from the beginning, from installing Docker, which will manage the containers, all the way to visualizing the data in Kibana with charts and a dashboard. We assume, however, that you already have OpenStack Swift up and running.

One very important thing: all the commands I put here are for the CentOS operating system, buuut, even if you do not use CentOS, read on; many parts are about configuring ELK components, which are independent of the operating system.

Updating the machine and installing Docker

To get started, update your machine:

# yum update

We will run Elasticsearch, Kibana and Logstash in containers to make the process easier, but it is important to note that if you are going to use the ELK stack in production, you should do a full installation without containers. To manage the containers we will use Docker; here is a step-by-step guide to installing it.

First install the required packages:

# yum install -y yum-utils device-mapper-persistent-data lvm2

Add the stable repository:

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Install the latest version of Docker CE and containerd:

# yum install docker-ce docker-ce-cli containerd.io

If prompted to accept the GPG key, verify that it corresponds to 060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35 and, if applicable, accept it.

Start Docker:

# systemctl start docker
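
If you want to confirm that the installation worked, Docker's own test image prints a greeting and exits:

# docker run hello-world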

Configuring and creating the Elasticsearch, Kibana and Logstash containers

Now, with Docker installed, we will download the images that will be used in the containers (Elastic provides several images; you can see a list of them at this link):

# docker pull docker.elastic.co/elasticsearch/elasticsearch:7.1.0
# docker pull docker.elastic.co/logstash/logstash:7.1.0
# docker pull docker.elastic.co/kibana/kibana:7.1.0

After downloading the images, we are still not going to start the containers, because we first need the configuration files of each ELK component, which we will mount as volumes in the containers. Using volumes for the configuration files is very important: if the container goes down or we lose it for whatever reason, we still have our files on the host machine.

So let's get to what matters: the configuration files. To make things easier, I put the files we will use in a GitHub repository, here. Now let's understand how the files are organized. The repository has 3 folders, each one for an ELK component we are going to use. The folder name refers to the component, so the 'elasticsearch' folder holds the configuration file for Elasticsearch, and so on for the others. But what are the files inside the folders? I'll briefly explain what each one is and what you should edit:

  • elasticsearch.yml: Elasticsearch's default configuration file; there is no need to change anything.
  • kibana.yml: Kibana's default configuration file; in the 'elasticsearch.hosts' field (line 3), change the IP 10.7.52.36 to the IP of the machine where your Elasticsearch will be running.
  • logstash.yml: Logstash's default configuration file; in the 'xpack.monitoring.elasticsearch.hosts' field (line 2), you also need to change the IP 10.7.52.36 to the IP of the machine where your Elasticsearch will be running.
  • my_patterns: the patterns that Logstash's grok filter will use to select only the information we want from the received logs.
  • simple.conf: where we define how Logstash receives, filters and ships the logs. It has three sections: the input, where we define how the log arrives (in our case, from Filebeat); the filter, where we filter the log to keep only the information we want; and the output, where we define where the 'filtered' log goes, in our case both to the Logstash log itself (yes, the Logstash log) and to Elasticsearch. A minimal sketch of this file follows the list.
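
To give you an idea of the structure before you open the repository, here is a minimal sketch of such a pipeline file. The grok pattern below is illustrative only; the real pattern for the Swift proxy log lives in simple.conf and my_patterns in the repository:

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    # my_patterns is mounted at /usr/share/logstash/patterns in the container
    patterns_dir => ["/usr/share/logstash/patterns"]
    # illustrative pattern: capture the syslog timestamp, keep the rest whole
    match => { "message" => "%{SYSLOGTIMESTAMP:log_timestamp} %{GREEDYDATA:log_message}" }
  }
}

output {
  # write each event to the Logstash log as well
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["10.7.52.36:9200"]
  }
}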

Finally, with all this done, we will start the Elasticsearch, Logstash and Kibana containers. It is important to start the containers in the same order I give here, because they depend on each other.

Attention: if your machine has a firewall enabled, it may be blocking the ports that the ELK components will use. So, before starting the containers, open the necessary ports. If your firewall is the CentOS default, just follow the steps below.

Open the ports:

# firewall-cmd --permanent --add-port=9200/tcp 
# firewall-cmd --permanent --add-port=9300/tcp
# firewall-cmd --permanent --add-port=5044/tcp
# firewall-cmd --permanent --add-port=9600/tcp
# firewall-cmd --permanent --add-port=5601/tcp

Reload your firewall rules:

# firewall-cmd --reload
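
You can confirm that the ports were opened with:

# firewall-cmd --list-ports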

Now, let's start the containers.

Note: we pass the configuration files mentioned earlier as volumes to the containers, so replace ${PATH} with the path where each file is located (here ${PATH} is just a placeholder to substitute, not the shell's PATH variable).

# docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -v ${PATH}/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml --name "elasticsearch" docker.elastic.co/elasticsearch/elasticsearch:7.1.0

# docker run -d -it --name "logstash" -p 5044:5044 -p 9600:9600 -v ${PATH}/logstash.yml:/usr/share/logstash/config/logstash.yml -v ${PATH}/simple.conf:/usr/share/logstash/pipeline/logstash.conf -v ${PATH}/my_patterns:/usr/share/logstash/patterns/my_patterns docker.elastic.co/logstash/logstash:7.1.0

# docker run -d -v ${PATH}/kibana.yml:/usr/share/kibana/config/kibana.yml --name "kibana" -p 5601:5601 docker.elastic.co/kibana/kibana:7.1.0
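
Before moving on, it is worth checking that everything came up. Listing the running containers and making a simple HTTP request to Elasticsearch (adjust the IP to your machine's) should return a JSON document with the cluster name and version:

# docker ps
$ curl http://10.7.52.36:9200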

Configuring Openstack Swift and its log

With Elasticsearch, Logstash, and Kibana running, let's set up a few things on the OpenStack machine, specifically the machine where Swift is installed. First we will edit the Swift proxy server configuration file, which by default is at '/etc/swift/proxy-server.conf'. Open it with an editor of your choice and modify or add the following variables:

log_name = swift-proxy-server 
log_facility = LOG_LOCAL5
log_level = INFO
log_headers = true
log_address = /dev/log

In this configuration we define the name for this log, the facility, the log level, that we want the request headers to be included in the log, and the log address. As it stands, though, the log is being sent to a syslog facility, so let's create a rule in the system's rsyslog so that it writes LOG_LOCAL5 to a specific file. To do this, open the file '/etc/rsyslog.conf' with an editor of your choice and add the following rule:

#### RULES #### 
...
#Swift
local5.* /var/log/swift/proxy-server.log

Done. With this, the logs generated by the proxy server will be written to the file '/var/log/swift/proxy-server.log', which makes the log much easier to work with.
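
For these changes to take effect, you will probably need to restart rsyslog and the Swift proxy service. The proxy service name below is the one used by CentOS/RDO packages; yours may differ depending on how Swift was installed:

# systemctl restart rsyslog
# systemctl restart openstack-swift-proxy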

Installing and configuring Filebeat

With the log set up and the ELK components running, let's start shipping logs to them. For this we will use Filebeat: it monitors the log files or locations you specify, collects log events and forwards them to Elasticsearch or Logstash for indexing; in our case, to Logstash.

First, let’s install Filebeat. To do this, just follow these steps:

Download the installation package:

$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.0-x86_64.rpm

Install the package:

# rpm -vi filebeat-7.1.0-x86_64.rpm

Open the configuration file '/etc/filebeat/filebeat.yml' with an editor of your choice and modify or add the following parameters:

In filebeat.inputs (here is where we define the logs that will be sent to Logstash):

- type: log
  enabled: true
  paths:
    - /var/log/swift/proxy-server.log
  tags: ["swift"]

In setup.kibana (modify the IP 10.7.52.36 by the IP of the machine where Kibana is running):

host: "10.7.52.36:5601"

In output.logstash (again modify IP 10.7.52.36 by the IP of the machine where Logstash is running):

hosts: ["10.7.52.36:5044"]

Now, with Filebeat installed and configured, start it:

$ service filebeat start

If you want (or need to) view the Filebeat log, you can use the following command:

$ journalctl -f -u filebeat
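
Filebeat also ships a built-in connectivity check; if you want to confirm it can reach Logstash before any log line arrives, run (as root):

# filebeat test output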

Visualization in Kibana

With Filebeat sending the logs to Logstash, Logstash filtering and forwarding them to Elasticsearch, and Kibana reading from Elasticsearch, we can finally look at the data.

Before we can actually visualize the data, we need to 'register' an index pattern in Kibana. To do this, open Kibana, go to Management (1), Index Patterns (2) and click Create index pattern (3); when prompted for the index name, enter 'logstash-*' (4) and continue the process.

Creation of Indices in Kibana
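
If the index pattern does not show up, you can check directly in Elasticsearch whether Logstash has already created any 'logstash-*' indices (adjust the IP):

$ curl 'http://10.7.52.36:9200/_cat/indices/logstash-*?v'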

Now, let's get to the charts. I created a few visualizations whose names already explain what they show.

To make things easier, the 'kibana' folder in the GitHub repository contains two '.json' files: one with the visualizations and the other with a dashboard gathering all of them. So, to get the charts, just import these files into Kibana.

To import, open Kibana, go to Management (1), then Saved Objects (2) and click Import (3); then just select the files 'Graphics.json' and 'Dashboard.json' and confirm the import.

Importing Graphics and Dashboards into Kibana

Okay, now you have the charts and the dashboard in your Kibana; all that is left is to explore the data. Here is the dashboard that will be created.

Dashboard Available

So, folks, I hope you enjoyed the tutorial and that it helped you with something. You now have the ELK components working together to monitor the logs generated by the OpenStack Swift proxy server.

For any questions or suggestions, get in touch.

A hug to everyone!
