Set Up Prometheus in Amazon EC2 via ElasticBeanstalk and Docker

Michael Nikitochkin
The Incident Commander
8 min read · Jun 8, 2017

What is ElasticBeanstalk? It is one of the Amazon services that helps developers deploy applications. These days every cloud host wants to be modern and support Docker, and ElasticBeanstalk is no exception. Developers replace Chef recipes with a Dockerfile and don’t even think twice about why.

Docker best practice is to run only one process per container. We often need more than one process: an application, a database, background workers, and so on. Before March 24, 2015, ElasticBeanstalk did not support multiple containers on one instance. The Multi-container Docker platform differs from the other platforms ElasticBeanstalk provides: it replaces the custom bash scripts with Elastic Container Service commands. And this is awesome!

Prometheus is a monitoring system. Why have I chosen Prometheus to demonstrate how ElasticBeanstalk works? Because it consists of multiple modules built with different technologies.

Let’s play.

  1. Run Prometheus on a local machine.
  2. Set up the ElasticBeanstalk Application and Environment.
  3. Customize Prometheus.
  4. Add a Prometheus Dashboard (Rails app and MySQL).
  5. Add Nginx with HTTP Basic Auth.
  6. Add the Prometheus Pushgateway.

Run Prometheus on a local machine

If you are not familiar with Docker, you can find more information in the Docker User Guide.

$ boot2docker start 
$ eval "$(boot2docker shellinit)"
$ docker run -d -p 9090:9090 prom/prometheus
$ open http://"$(boot2docker ip)":9090

You should see the Prometheus status page. Try playing with Prometheus graphs and queries. By default Prometheus scrapes its own metrics. That was really easy. Don’t forget to stop the process afterwards:

$ docker ps
$ docker stop <container id: first column>

Create ElasticBeanstalk Application

$ mkdir prometheus
$ cd prometheus
$ brew install aws-elasticbeanstalk
$ eb init
$ eb create dev-env
$ eb open

This creates a sample web application named prometheus, provided by ElasticBeanstalk. More information about eb commands can be found in the eb CLI reference and the installation guide. There are two tiers: Web and Worker. I prefer to use Worker, even for web applications, when they do not need a stable address. Creating a Web tier assigns an Elastic Load Balancer or an Elastic IP, depending on your choice.

After that you can configure the application via the web interface. Save the configuration from the environment settings menu. To access saved configurations, use the eb config command:

$ eb config list
$ eb config get <configname>
$ vim .elasticbeanstalk/saved_configs/<configname>.cfg.yml
$ eb config put <configname>

This is a useful feature if you want to keep the configuration in a repo and create new environments based on it. The local copy of the config is located in .elasticbeanstalk/saved_configs.

$ eb create other-env --cfg <configname>

Sample config:
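The saved configuration is a YAML file. A minimal sketch (the instance type, key name, and environment type here are placeholders, not requirements):

```yaml
# .elasticbeanstalk/saved_configs/<configname>.cfg.yml (sketch)
EnvironmentConfigurationMetadata:
  Description: Saved configuration for the prometheus environment
AWSConfigurationTemplateVersion: 1.1.0.0
OptionSettings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro   # placeholder
    EC2KeyName: my-key       # placeholder
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance
```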

OK, we now have an empty directory, an EB application, and an environment running the sample application. A Multi-container Docker application requires a Dockerrun.aws.json file, in which we describe all our containers and how they are linked. More info is in the Multicontainer Docker Configuration format docs.

This is a Dockerrun.aws.json that runs Prometheus the same way we did on the local machine:
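A minimal sketch (the container name and memory limit are my choices, not prescribed values):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "prometheus-app",
      "image": "prom/prometheus",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 9090, "containerPort": 9090 }
      ]
    }
  ]
}
```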

The config is self-explanatory, except for the essential option. It marks the container as important: if any container with essential enabled exits with a non-zero code during deploy, the deploy is aborted.

Let’s test this configuration locally. Yep, it is one of the coolest features of eb: running the containers in a local environment via eb local run, using the same docker and docker-compose.

It works like a charm. It creates a file .elasticbeanstalk/docker-compose.yml that is used to run the containers. See the Docker Compose docs for help understanding it.

This file will help us debug later. Next step: deploy.

$ eb deploy

Wait a few minutes and check that everything finished without an ERROR.

Customize Prometheus

We have set up the sample Prometheus with its default configuration and settings. Let’s add a custom config file that is stored outside the container.

Check the default CMD for prometheus at https://registry.hub.docker.com/u/prom/prometheus/dockerfile. In the current version it looks like:
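At the time of writing (pre-1.0 Prometheus, with single-dash flags), the relevant lines of the Dockerfile looked roughly like this:

```dockerfile
ENTRYPOINT [ "/bin/prometheus" ]
CMD        [ "-config.file=/etc/prometheus/prometheus.yml",
             "-storage.local.path=/prometheus",
             "-web.console.libraries=/etc/prometheus/console_libraries",
             "-web.console.templates=/etc/prometheus/consoles" ]
```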

We need to change only one parameter, -config.file, to point to our file; the others remain unchanged. Suppose our config file will be located at /opt/prometheus/prometheus.yml. With plain docker it would look like:

That file does not exist inside the container yet. To share a local folder with the docker container we need an extra option; more information is in Managing data in containers. The short version is: docker run -v /path/local/folder:/path/container/folder.
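Putting both pieces together — mount the local prometheus/ folder and point -config.file at it (the local folder layout is an assumption on my part):

```shell
# mount the local prometheus/ folder into the container and
# override the default CMD to use our config file
docker run -d -p 9090:9090 \
  -v "$PWD/prometheus":/opt/prometheus \
  prom/prometheus \
  -config.file=/opt/prometheus/prometheus.yml
```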

It should work. Let’s stop it via docker stop <container id: first column>, change the config, and start it again. We should see the new config on the status page. Next we need to change the -storage.local.path option to point to a host volume, so that all data is stored on the local machine.

Back to the documentation of Dockerrun.aws.json in the Multicontainer Docker Configuration. There are the volumes and mountPoints options, which we will use for mounting folders, and the command option to replace the container’s default CMD with ours.

Add to the root of the config a key volumes:
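A sketch of the fragment (volume names and the data path on the host are my choices):

```json
"volumes": [
  {
    "name": "prometheus-conf",
    "host": { "sourcePath": "/var/app/current/prometheus" }
  },
  {
    "name": "prometheus-data",
    "host": { "sourcePath": "/var/prometheus-data" }
  }
]
```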

In this example, I added two volumes/folders that will be shared with containers. One is for the Prometheus configuration, which we keep in the git repository; the data folder is stored on the EBS volume of the EC2 instance, so we will not lose data after a reboot or redeploy. EB uses /var/app/current as the current application working directory; our latest code is located there during deploy.

In the container section we need to mount the registered volumes into the correct container folders:
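Assuming the volume names above, the container definition gains a mountPoints fragment like:

```json
"mountPoints": [
  { "sourceVolume": "prometheus-conf", "containerPath": "/opt/prometheus" },
  { "sourceVolume": "prometheus-data", "containerPath": "/opt/prometheus-data" }
]
```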

Add the command attribute to the container definition to use the new config file and data folder:
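With the mount paths assumed above, the override could look like:

```json
"command": [
  "-config.file=/opt/prometheus/prometheus.yml",
  "-storage.local.path=/opt/prometheus-data"
]
```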

The result would be:
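A sketch of the combined file, under the same assumptions about names and paths:

```json
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "prometheus-conf",
      "host": { "sourcePath": "/var/app/current/prometheus" }
    },
    {
      "name": "prometheus-data",
      "host": { "sourcePath": "/var/prometheus-data" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "prometheus-app",
      "image": "prom/prometheus",
      "essential": true,
      "memory": 256,
      "portMappings": [
        { "hostPort": 9090, "containerPort": 9090 }
      ],
      "mountPoints": [
        { "sourceVolume": "prometheus-conf", "containerPath": "/opt/prometheus" },
        { "sourceVolume": "prometheus-data", "containerPath": "/opt/prometheus-data" }
      ],
      "command": [
        "-config.file=/opt/prometheus/prometheus.yml",
        "-storage.local.path=/opt/prometheus-data"
      ]
    }
  ]
}
```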

Test it on the local machine first via eb local run and watch the startup output.

Refresh the Prometheus status page, and it will use our custom config. There is one trick: EB detects volumes that point to /var/app/current and substitutes the current folder when generating the compose file, but it does not mount the data folder, which is why your data will be lost after each restart. To verify this, check .elasticbeanstalk/docker-compose.yml:
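The generated file is in the old single-level Compose format; a sketch of what it might contain (service name is generated by eb):

```yaml
# .elasticbeanstalk/docker-compose.yml (generated by eb local; sketch)
prometheusapp:
  image: prom/prometheus
  ports:
    - "9090:9090"
  volumes:
    # /var/app/current is replaced by the current working directory
    - ./prometheus:/opt/prometheus
    # note: no mapping for the data volume — it is silently dropped
  command: -config.file=/opt/prometheus/prometheus.yml -storage.local.path=/opt/prometheus-data
```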

So this file is useful for debugging your configuration, and you can use it directly with the plain docker-compose tool.

Let’s deploy with eb deploy and debug via eb logs.

Add Prometheus Dashboard (Rails app and MySQL)

We have a lot of data; now we need simple dashboards. PromDash is a tiny tool with nice features.

PromDash is a small Rails application that uses a MySQL database. It is a good example of how to set up two docker containers and link them.

Every Rails application requires the RAILS_ENV environment variable to be set to production. For PromDash we also need to specify the MySQL database URL. Add a new container specification for MySQL to our Dockerrun.aws.json:
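A sketch (container name, memory, password, and database name are placeholders of mine; MYSQL_ROOT_PASSWORD and MYSQL_DATABASE are the standard mysql image variables):

```json
{
  "name": "db",
  "image": "mysql",
  "essential": true,
  "memory": 256,
  "environment": [
    { "name": "MYSQL_ROOT_PASSWORD", "value": "secret" },
    { "name": "MYSQL_DATABASE", "value": "promdash" }
  ]
}
```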

Verify that it works with eb local run. Here I used a new option, environment, which is self-explanatory. Next, let’s add the PromDash container:
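A sketch, assuming the MySQL container above is named db (the DATABASE_URL credentials mirror those placeholders):

```json
{
  "name": "rails-app",
  "image": "prom/promdash",
  "essential": true,
  "memory": 256,
  "portMappings": [
    { "hostPort": 3000, "containerPort": 3000 }
  ],
  "links": ["db"],
  "environment": [
    { "name": "RAILS_ENV", "value": "production" },
    { "name": "DATABASE_URL", "value": "mysql2://root:secret@db:3306/promdash" }
  ]
}
```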

Here I have added a links key with an array of container names that this container uses. A docker container gets a new IP address on each run, and the only way to discover the IP address of a linked container is through the hostname and environment variables derived from the link name. More information is in Linking containers together. For an example of how to get the IP address, check the PromDash Dockerfile.

After eb local run there is a lot of output from the MySQL container, and now we can check the page:

$ open http://"$(boot2docker ip)":3000

It will return We’re sorry, but something went wrong, because the database is not configured and there are no tables. We need a migration script that runs the required commands on deploy. For Rails, the initial DB setup is bin/rake db:create db:migrate. Add a new container based on the same PromDash image, but instead of running the web server it will run this command.
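A sketch of the migration container, under the same naming assumptions as above:

```json
{
  "name": "rails-app-migration",
  "image": "prom/promdash",
  "essential": false,
  "memory": 128,
  "links": ["db"],
  "environment": [
    { "name": "RAILS_ENV", "value": "production" },
    { "name": "DATABASE_URL", "value": "mysql2://root:secret@db:3306/promdash" }
  ],
  "command": ["bin/rake", "db:create", "db:migrate"]
}
```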

I added the option essential: false to mark this container as one that should not kill all the other containers after it finishes. The next problem we run into: MySQL takes a few seconds to initialize, so our rake task cannot connect during setup and we see Mysql2::Error: Can't connect to MySQL server on '172.17.0.52' (111). That is OK. Let’s add a script that runs the rake task after a delay.

bin/delay.sh:
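A minimal sketch of the wrapper (the delay length, and making it overridable via DELAY, are my choices):

```shell
#!/bin/sh
# Wait for MySQL to start accepting connections, then replace this
# shell with the wrapped command so signals reach it directly.
sleep "${DELAY:-10}"
exec "$@"
```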

Next we mount our bin folder and wrap the rake command with delay.sh:
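Assuming a volume named bin registered for /var/app/current/bin, the migration container changes to something like:

```json
"mountPoints": [
  { "sourceVolume": "bin", "containerPath": "/opt/bin" }
],
"command": ["/opt/bin/delay.sh", "bin/rake", "db:create", "db:migrate"]
```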

Trying eb local run again, it looks like the migration starts, but everything quits afterwards: eb local does not support the essential: false option, because it runs through Docker Compose. Add a new wrapper bin/sleep.sh:
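A minimal sketch: run the wrapped command, then block forever so the container stays up and docker-compose does not tear down the others.

```shell
#!/bin/sh
# Run the wrapped command, then keep the container alive so that
# eb local (docker-compose) does not stop when this container exits.
"$@"
while true; do sleep 1000; done
```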

And update the command:
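With both wrappers in the mounted bin folder, the command becomes:

```json
"command": ["/opt/bin/sleep.sh", "/opt/bin/delay.sh", "bin/rake", "db:create", "db:migrate"]
```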

Start eb local run and it seems to work. Verify with: open http://"$(boot2docker ip)":3000. Then deploy to verify the configuration on EB. (PS: don’t forget to change the security groups to allow access to port 3000, and remove the delay.sh and sleep.sh wrappers.)

Add Nginx with HTTP Basic Auth

To add the nginx container, go through similar steps as for rails-app. Nginx should expose two ports: 80 for PromDash and 9090 for Prometheus, so that PromDash can access the Prometheus API.

Remove the ports from the prometheus-app and rails-app containers. Create the site config proxy/conf.d/default.conf:
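A sketch, assuming the container names used earlier and that the proxy/conf.d folder is mounted into the nginx container at /etc/nginx/conf.d:

```nginx
upstream promdash {
  server rails-app:3000;
}

upstream prometheus {
  server prometheus-app:9090;
}

server {
  listen 80;
  location / {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/conf.d/htpasswd;
    proxy_pass           http://promdash;
  }
}

server {
  listen 9090;
  location / {
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/conf.d/htpasswd;
    proxy_pass           http://prometheus;
  }
}
```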

As you can see in the upstream configuration, I use container names as hosts: during linking, docker creates a record in /etc/hosts for each linked container with its current IP address.

Also create the file proxy/conf.d/htpasswd to restrict access; see How To Set Up HTTP Authentication With Nginx.
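One way to generate it without installing apache2-utils is openssl (user admin and password secret are placeholders):

```shell
# create an htpasswd entry using openssl's apr1 digest, which nginx understands
mkdir -p proxy/conf.d
printf 'admin:%s\n' "$(openssl passwd -apr1 secret)" > proxy/conf.d/htpasswd
```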

Add Prometheus Pushgateway

Prometheus polls applications for metrics, but sometimes an application cannot be reached. For those cases there is the Pushgateway. Container example:
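A sketch (container name and memory are my choices; 9091 is the Pushgateway’s default port):

```json
{
  "name": "prometheus-gateway",
  "image": "prom/pushgateway",
  "essential": true,
  "memory": 128,
  "portMappings": [
    { "hostPort": 9091, "containerPort": 9091 }
  ]
}
```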

Restart the application on the local machine and it should expose the port: open http://"$(boot2docker ip)":9091 shows the current metrics from your applications.

The Prometheus application can communicate with the gateway via a link.

After that, modify prometheus/prometheus.yml and add to the scrape_configs section:
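A sketch in the old (pre-1.0) config syntax, assuming the gateway container is named prometheus-gateway; honor_labels is the usual setting for a Pushgateway job:

```yaml
scrape_configs:
  - job_name: pushgateway
    honor_labels: true
    target_groups:
      - targets:
          - 'prometheus-gateway:9091'
```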

Restart the local application and check the Prometheus status page. You should see a new endpoint http://prometheus-gateway:9091/metrics.

The final Dockerrun.aws.json can be found in the sample project on GitHub.

That’s all Folks

Michael Nikitochkin is a Lead Software Engineer. Follow him on LinkedIn or GitHub.
