Installing Elasticsearch, Kibana and Filebeat using Docker on Google Cloud Platform

Koray Çağlar
5 min read · Oct 4, 2022


Note: This tutorial is part of a larger series. See the main article here:

In this tutorial, we will build a system that uses Filebeat to detect new data appended to a log file, saves that data to our Elasticsearch database, and lets us inspect it through the Kibana user interface.

Installing Elasticsearch and Kibana:

We will install both with Docker Compose, which lets us define and run multi-container setups from a single file. First, install Docker Compose:

$ sudo apt install docker-compose

Now we need a Docker Compose yml file that tells Docker how to pull and run these containers. You can write your own yml file or copy mine using git:

$ git clone https://github.com/koraycaglar/datapipeline.git

After cloning the repository, you can find the yml file at this location: datapipeline/ELK/docker-compose.yml

Go to the directory containing the .yml file. If you want to take a look at it, run:

$ sudo nano docker-compose.yml

Notice that I am installing ES and Kibana version 7.9.2. You can install more recent versions by changing the version numbers in the yml file.
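If you prefer to write your own, a minimal compose file for this kind of setup looks roughly like the following. This is a sketch of the common single-node defaults, not a copy of my file; the service names and settings are assumptions you can adjust:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    environment:
      - discovery.type=single-node  # single-node dev mode, no cluster bootstrap checks
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.2
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200  # reach ES by its service name
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch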

Before running the Elasticsearch container, you need to increase the vm.max_map_count kernel setting, as officially explained here:

Virtual memory | Elasticsearch Guide [8.4] | Elastic

Run the mentioned command as root:

$ sudo sysctl -w vm.max_map_count=262144
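This setting does not survive a reboot. To make it permanent, you can also append it to /etc/sysctl.conf:

$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf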

To download and run the ES and Kibana containers using Docker Compose, run the following command from the directory containing the yml file:

$ docker-compose up -d

It will pull the images and run the containers as instructed in the .yml file. This may take some time (a few minutes).

After the wait, enter this command to see the running containers:

$ docker ps

You should see that ES and Kibana are running. Note that port 9200 is assigned to ES and 5601 to Kibana.

Enter this command to see if ES is running:

$ curl localhost:9200
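If ES is up, it answers with a small JSON document, roughly like this (exact values vary by install):

{
  "name" : "...",
  "cluster_name" : "docker-cluster",
  "version" : {
    "number" : "7.9.2",
    ...
  },
  "tagline" : "You Know, for Search"
}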

Let’s check whether Kibana is also running, and do it in a cooler way: through Kibana’s web user interface. On Google Cloud Platform we can view a program’s web interface using the web preview feature.

Go back to the Google Cloud page and click the Google Cloud Shell icon at the top right of the page.

Google Cloud Shell icon at the top right

In the terminal that opens, enter the following command with the instance name and zone changed to match yours. In my case:

$ gcloud compute ssh instance-1 --zone=us-west4-b -- -L 5601:localhost:5601

You can find your instance name and zone on the same page where you SSH into the machine. Everything after the standalone -- is passed to ssh: the -L 5601:localhost:5601 option forwards port 5601 of the Cloud Shell to port 5601 on the VM, where Kibana listens.

After entering the command, click the “web preview” button on the terminal window:

Web preview button shown with red arrow

Click “Change port”, enter 5601 (Kibana’s port) and click “Change and Preview”. Kibana’s web interface opens in a new window, which means it is working.

Installing and configuring Filebeat:

Our Filebeat instance will continuously watch a log file. Let’s create that log file first.

Go back to your home directory (/home/<username>) and create a .log file. I am naming it base64.log because I will put base64 image codes inside it.

$ sudo nano base64.log

In the text editor, press Enter once to “write” an empty line so that the file can be saved. Press Ctrl+X, then Y, then Enter to save.

Now that we have our log file ready, we should install and configure the Filebeat instance.

If you cloned my GitHub repository earlier, you can find my yml file for Filebeat at this location:

datapipeline/filebeat/filebeat.docker.yml

Replace the <username> part in the yml file with your username:

$ sudo nano filebeat.docker.yml

Run the following command once to configure the Filebeat installation. Change the version number 7.9.2 to something more recent if you want.
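This is Elastic’s documented one-off setup command, which loads Filebeat’s index template and Kibana assets. The sketch below assumes Filebeat can reach ES and Kibana through the host’s ports from earlier (hence --network=host); adjust the hosts if your setup differs:

$ docker run --rm --network=host \
    docker.elastic.co/beats/filebeat:7.9.2 \
    setup -E setup.kibana.host=localhost:5601 \
    -E 'output.elasticsearch.hosts=["localhost:9200"]'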

While in the same directory as the yml file, change the <username> part in the following command and run it to create a Filebeat container. Change the version number again if you did so in the previous step.
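A sketch of that command, modeled on Elastic’s documented Docker invocation. It assumes filebeat.docker.yml points its log input at /home/<username>/base64.log, so the log file is mounted into the container at the same path:

$ docker run -d --name=filebeat --user=root --network=host \
    --volume="$(pwd)/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro" \
    --volume="/home/<username>/base64.log:/home/<username>/base64.log:ro" \
    docker.elastic.co/beats/filebeat:7.9.2 filebeat -e --strict.perms=false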

You can see the container if you run the “docker ps” command.

We have now built the whole system, and it should work as intended. Our Filebeat container watches for new lines appended to the base64.log file and ships them to our ES database.

Testing

We will run a Python script that encodes every image file inside our imageserver directory to base64, writes the code to our base64.log file, then moves the processed images into a “done” folder inside the imageserver folder.

Get inside the imageserver folder and create a “done” folder. It will hold images once we are done encoding them, so we can see which images have been processed.

$ mkdir done

Go back to the home directory and create a Python script:

$ sudo nano encode.py

Copy the following code into the script:
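A minimal sketch that does the steps described above; the hard-coded paths are assumptions matching this tutorial’s layout:

# encode.py: encode every image in imageserver, log the result, move the image to done/
import base64
import os
import shutil

IMAGE_DIR = "/home/<username>/imageserver"      # where camera.py drops images
DONE_DIR = "/home/<username>/imageserver/done"  # processed images end up here
LOG_FILE = "/home/<username>/base64.log"        # the file Filebeat watches

for name in sorted(os.listdir(IMAGE_DIR)):
    path = os.path.join(IMAGE_DIR, name)
    if not os.path.isfile(path):  # skip the done/ directory itself
        continue
    with open(path, "rb") as image:
        encoded = base64.b64encode(image.read()).decode("ascii")
    with open(LOG_FILE, "a") as log:
        log.write(encoded + "\n")  # one line per image; Filebeat ships each new line
    shutil.move(path, os.path.join(DONE_DIR, name))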

Replace the <username> parts with your username (there are 3 of them). Save the file.

For the script to be able to open and append to the log file, loosen the permissions of base64.log (777 is a blunt choice, but fine for a demo):

$ sudo chmod 777 base64.log

Run our camera.py script to move images into the imageserver folder.

$ python3 camera.py

Wait a bit for the images to transfer. Remember that one image is transferred every 3 seconds. Press Ctrl+C to stop the script once you have waited long enough.

Run the encode.py script:

$ python3 encode.py

The encode script has just written our base64 codes into the base64.log file. Filebeat detected the new lines and saved the codes into the ES database. You can check inside base64.log to see the codes.
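For a quick peek at the latest entry:

$ tail -n 1 base64.log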

Let's use the Kibana interface to see our codes inside the database (see above for how to open it).

Click the three-line menu button at the top left, then click Discover under the Kibana heading.

Voilà! Our codes are here.

If you can’t see the codes in the Kibana interface, try changing the time interval at the top right; by default it shows codes recorded in the last hour. If Discover asks you to create an index pattern first, use filebeat-* (the Filebeat setup step normally creates it for you).

That’s the end of our tutorial. It was quite long, but installing the ELK stack and Filebeat can be confusing. I hope it was useful to you.
