Monitoring and Logging Stack with Docker Compose

7 min read · Oct 26, 2023

In this blog we are going to learn how to use Docker Compose when we have more than one container running different monitoring and logging tools.

The monitoring and logging tools we are using are Elasticsearch, Logstash, Kibana, Prometheus, and Grafana, and we will create a Docker container for each of them.

Architecture of the project

A summary of the tools we are using:

1. ELASTICSEARCH

Elasticsearch is a distributed search and analytics engine built on Apache Lucene. It provides solutions for search, observability, and security use cases, offering a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. The "E" in the ELK stack stands for Elasticsearch. In our stack, it stores the data.

Note: For more details, see https://www.elastic.co/

2. LOGSTASH

Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash." The "L" in the ELK stack stands for Logstash. In our stack, it processes the data.

Note: For more details, see https://www.elastic.co/logstash

3. KIBANA

Kibana is an open-source, browser-based visualization tool mainly used to analyze large volumes of logs in the form of line graphs, bar graphs, pie charts, heat maps, region maps, coordinate maps, gauges, goals, Timelion visualizations, etc. The "K" in the ELK stack stands for Kibana. In our stack, it visualizes the data on dashboards.

Note: For more details, see https://www.elastic.co/kibana

4. PROMETHEUS

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Prometheus collects and stores its metrics as time series data, i.e. metrics information is stored with the timestamp at which it was recorded, alongside optional key-value pairs called labels.

Note: For more details, see https://prometheus.io/docs/introduction/overview/

5. GRAFANA

Grafana open source software enables you to query, visualize, alert on, and explore your metrics, logs, and traces wherever they are stored.

Note: For more details, see https://grafana.com/docs/grafana/latest/introduction/

So the ELK stack is primarily used for monitoring and observing logs, while Prometheus and Grafana are used for monitoring and observing metrics. The roles can overlap, but this division of labor is the common best practice.

STEPS

Here we will set up the stack of monitoring and logging tools with the help of Docker Compose by following the steps given below:

  1. Creating an instance and setting up Docker Compose
  2. Writing Docker Compose file
  3. Adding Configuration files
  4. Running the Docker Compose File
  5. Edit the Security Group
  6. Verifying Functionality

1. Creating an instance and setting up Docker Compose

Here we will do all our work on an AWS EC2 instance running the Ubuntu operating system.

Let's start by creating an instance and giving it a name.

Choose the Ubuntu AMI for an Ubuntu server.

Choose an instance type and a key pair.

Edit network settings.

Click on launch instance.

Connect to the instance using EC2 Instance Connect.

To install Docker Compose, you first need to install Docker on your instance.

Run the commands given below.

sudo su
apt update
apt upgrade -y
apt install -y docker.io
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

To check whether Docker and Docker Compose were installed successfully, run the commands given below.

docker --version
docker-compose --version

Now Docker and Docker Compose have been installed successfully!

2. Writing Docker Compose file

Docker Compose can automatically pull the container images specified in the docker-compose.yml file, so there is no need to pull the Docker images manually.

We will name the Docker Compose file docker-compose.yml.

I am adding this code in a directory named DockerCompose on our EC2 server, using the following commands.

mkdir DockerCompose
cd DockerCompose
touch docker-compose.yml
vi docker-compose.yml

An editor will open; paste the Docker Compose code given below, then press Esc and type :wq to save and exit.


version: '3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    ports:
      - 9200:9200
    networks:
      - elastic

  logstash:
    image: docker.elastic.co/logstash/logstash:7.10.0
    container_name: logstash
    ports:
      - 5044:5044
    volumes:
      - ./logstash-config/:/usr/share/logstash/pipeline/
    depends_on:
      - elasticsearch
    networks:
      - elastic

  kibana:
    image: docker.elastic.co/kibana/kibana:7.10.0
    container_name: kibana
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    networks:
      - elastic

  prometheus:
    image: prom/prometheus:v2.26.1
    container_name: prometheus
    volumes:
      - ./prometheus-config/:/etc/prometheus/
    ports:
      - 9090:9090
    networks:
      - elastic

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - 3000:3000
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
    depends_on:
      - prometheus
    networks:
      - elastic

networks:
  elastic:
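
Optionally, you can teach Docker to report whether Elasticsearch is actually answering requests, not just whether its process started, by adding a healthcheck to the service. This is a minimal sketch, assuming the official 7.10.0 image ships with curl (it does); the interval and retry values here are arbitrary choices, not requirements:

```yaml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
    # Mark the container healthy only once the cluster health API responds
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
```

With this in place, `docker ps` shows a `(healthy)` or `(unhealthy)` status next to the container, which makes the verification step at the end easier.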

3. Adding Configuration files

To customize our stack, we need to add configuration files for Logstash and Prometheus.

The Logstash configuration file needs to go in a directory named logstash-config, in a file named logstash.conf. We can create the folder and file with the commands below.

Note: Make the folder inside the DockerCompose folder

mkdir logstash-config
cd logstash-config
touch logstash.conf
vi logstash.conf

The vi editor will open; paste the logstash.conf configuration given below, then press Esc and type :wq to exit the editor.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
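
The pipeline above forwards events from Beats to Elasticsearch unchanged. If you later want Logstash to actually parse the incoming data, a `filter` block goes between `input` and `output`. The sketch below is an optional illustration, not part of this project's setup; the grok pattern is an assumption that your logs are in combined Apache format, and the index name is just an example:

```
input {
  beats {
    port => 5044
  }
}

filter {
  # Assumes Apache-style access logs; swap the pattern for your own format
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    # Write to a date-stamped index so old logs are easy to delete
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```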

Similarly, for Prometheus we need a directory named prometheus-config containing a file named prometheus.yml; we can create the folder and file with the commands below.

Note: Make the folder inside the DockerCompose folder

cd ..
mkdir prometheus-config
cd prometheus-config
touch prometheus.yml
vi prometheus.yml

The vi editor will open; paste the prometheus.yml configuration given below, then press Esc and type :wq to exit the editor.

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
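
This minimal configuration only makes Prometheus scrape itself. Because Prometheus shares the `elastic` Docker network with the other containers, it can also scrape them by service name. For example, Grafana exposes its own metrics at `/metrics` on port 3000 by default, so you could extend `scrape_configs` like this (an optional addition; the stack works without it):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Optional: scrape Grafana's built-in /metrics endpoint over the
  # shared Docker network, addressing the container by service name
  - job_name: 'grafana'
    static_configs:
      - targets: ['grafana:3000']
```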

4. Running the Docker Compose File

To start our stack, we need just one Docker Compose command, and that's the beauty of Docker Compose.

Note: Run the command inside the DockerCompose folder

docker-compose up -d

The -d flag above means the containers will run in "detached" mode, i.e., in the background.

The CLI will be as shown in the below image.

After the command finishes executing, the CLI will look something like the below given image.

Let's check that all the containers are running with the below given command.

docker ps 
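
Before opening any ports to the internet, you can also sanity-check each service from inside the instance itself. The probes below are a sketch that assumes the stack from the compose file above is up on the same host; each of these endpoints is a documented health or status API of the respective tool:

```shell
# Elasticsearch: should return a JSON banner including "cluster_name"
curl -s http://localhost:9200

# Elasticsearch cluster health: "green" or "yellow" is fine for a single node
curl -s http://localhost:9200/_cluster/health

# Kibana status API
curl -s http://localhost:5601/api/status

# Prometheus readiness endpoint: replies "Prometheus is Ready." when up
curl -s http://localhost:9090/-/ready

# Grafana health endpoint: should report "database": "ok"
curl -s http://localhost:3000/api/health
```

If any of these hang or fail, check the container logs with `docker-compose logs <service-name>` before moving on to the security group changes.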

5. Edit the Security Group

Now, to access the services via the public IP address of the server, we need to edit the inbound rules of the security group associated with the instance.

Add the ports that we exposed for our containers to the inbound rules, with the source set to Anywhere.

The inbound rules will be as shown in the below image.

6. Verifying Functionality

Let's verify that Docker Compose has successfully brought up our logging and monitoring stack.

We will do this by browsing to the public IP of the instance on each port to which we exposed a container.

For Elasticsearch, open <your-public-ip>:9200; it should display something like the below given image.

For Kibana, open <your-public-ip>:5601; it should display something like the below given image.

For Grafana, open <your-public-ip>:3000; it should display something like the below given image.

For Prometheus, open <your-public-ip>:9090; it should display something like the below given image.

HURRAY!!! We have successfully created a monitoring and logging stack using Docker Compose.

GITHUB REPOSITORY LINK FOR REFERENCE — https://github.com/KelviManavadaria-17/Monitoring-and-Logging-Stack-with-Docker-Compose.git
