Manage Docker Containers using CoreOS — Part 2

Mohit Arora
May 27, 2014

This is part 2 of the series on “How to manage Docker containers using CoreOS”. Please read Part 1 before reading this one and make sure you understand the basic concepts/ideas of CoreOS.

In this part we will launch five application docker containers and one Apache httpd container in our CoreOS cluster. Apache httpd will act as a load balancer in this case, using mod_proxy_balancer to spread requests across the application containers.

Assumptions:

  1. A CoreOS cluster (as explained in Part 1) is up and running.
  2. You have fleetctl installed and configured on a separate machine as explained in Part 1. We will be using fleetctl from this separate machine (called the controller machine from this point onwards) to manage our CoreOS cluster; a quick refresher follows this list.
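If you followed Part 1, fleetctl on the controller machine reaches the cluster over an SSH tunnel. As a refresher, the setup looks roughly like this; the IP below is a hypothetical example, so use whichever cluster machine you tunnel through:

# Point fleetctl at one of the CoreOS machines via an SSH tunnel
export FLEETCTL_TUNNEL=172.17.8.101

# Verify connectivity by listing the machines in the cluster
fleetctl --strict-host-key-checking=false list-machines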

Note:

  1. The application docker container image that we will use is already uploaded to the docker index. You can browse it here. The application is a very simple Java web application that exposes one REST endpoint.
  2. The HTTPD docker container image is also uploaded to the docker index. You can browse it here.

In the next part, I will talk about how to create container images.

Start the first application docker container in the cluster

Let’s start an application docker container in our CoreOS cluster using fleetctl. A guide to launching containers with fleet is available here. The basic idea is that we prepare systemd units, combine them with a few fleet-specific properties, and submit them to the CoreOS cluster using fleetctl. For this tutorial, I have checked the application systemd unit files into a github repository.

Step 1: Clone the github repo on the controller machine.

Step 2: Navigate to the application directory in the cloned repository.
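Together, these two steps look like this; the repository URL is behind the link above, so <repo-url> and <repo-dir> below are placeholders rather than real addresses:

# Clone the repository containing the unit files and enter the
# application directory (substitute the real URL and directory name)
git clone <repo-url>
cd <repo-dir>/application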

Step 3: Submit the "application systemd unit" to the CoreOS cluster

fleetctl --strict-host-key-checking=false submit sample.application.1@8080.service

Step 4: Start the service

fleetctl --strict-host-key-checking=false start sample.application.1@8080.service
Output: Job sample.application.1@8080.service launched on 4468cdab…/172.17.8.103

Step 5: Check status

fleetctl --strict-host-key-checking=false list-units
UNIT STATE LOAD ACTIVE SUB DESC MACHINE
sample.application.1@8080.service launched loaded active running sample-application 4468cdab…/172.17.8.103

Step 6: Once you see the service is active, hit it on the appropriate host. In this case, hit the url http://172.17.8.103:8080/sample and you should see the following output.

{"id":1,"content":"Hello, Stranger!"}
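You can also exercise the endpoint from the controller machine with curl:

# Query the REST endpoint exposed by the container; the host and port
# come from the fleetctl output above
curl http://172.17.8.103:8080/sample
# {"id":1,"content":"Hello, Stranger!"}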

Impressive, right? Our application is up and running inside a docker container on one of the machines in our CoreOS cluster. In case you face any issues, the following commands can provide better insight:

fleetctl --strict-host-key-checking=false cat sample.application.1@8080.service
fleetctl --strict-host-key-checking=false status sample.application.1@8080.service
fleetctl --strict-host-key-checking=false journal sample.application.1@8080.service (you can pass the -f flag to stream the output of the service to the terminal)

Systemd Unit File Details

Before we move forward and launch a second application instance, it is very important to understand the following service unit file that we cloned from the git repo and submitted. A detailed tutorial on systemd is available here. I have annotated seven lines below and will explain each in detail.

[Unit]
Description=sample-application
[Service]
EnvironmentFile=/etc/environment #1
ExecStartPre=/usr/bin/docker pull mohitarora/sample-app:v1.0.1 #2
ExecStart=/usr/bin/docker run --name %p --expose 8080 -p %i:8080 mohitarora/sample-app:v1.0.1 /opt/launch.sh #3
ExecStartPost=/usr/bin/etcdctl set /applications/sample/%p ${COREOS_PUBLIC_IPV4}:%i #4
ExecStop=/usr/bin/docker stop %p #5
ExecStopPost=/usr/bin/etcdctl rm /applications/sample/%p #6
TimeoutSec=120min
[X-Fleet]
X-Conflicts=*@%i.service #7

#1: EnvironmentFile= allows you to expose the environment variables defined in a file to the current unit.

#2: ExecStartPre= allows you to specify commands that will run before ExecStart. In this case we are pulling the application docker image from the docker index onto the CoreOS machine.

#3: ExecStart= allows you to specify any command that you’d like to run when this unit is started. In this case we are starting the application docker container. In most cases, starting the docker container will happen as part of ExecStart.

#4: ExecStartPost= allows you to specify commands that will run after ExecStart. In this case we are creating a key-value pair in etcd. We will use the same key-value details when we launch the Apache server.

#5: ExecStop= allows you to specify commands that will run when this unit is considered failed or is stopped. In this case we are stopping the docker container.

#6: ExecStopPost= allows you to specify commands that will run after ExecStop. In this case we are removing the entry from etcd.

#7: X-Conflicts= tells fleet that two units matching the given pattern can’t run on the same machine.

Another thing you may be wondering: what are the %i and %p that we are using? They are a pretty neat feature of systemd and are explained here. The basic idea is that if you create multiple symlinks to the same unit file, the following variables become available to you.

  1. %p (prefix name) refers to any string before the @ in the unit name.
  2. %i (instance name) refers to the string between the @ and the suffix.

This gives us the flexibility to use a single unit file to launch multiple copies of the same container, both on a single machine (no port overlap) and on multiple machines (no hostname overlap). The X-Conflicts trick above ensures that only one application service runs on a given port on the same machine.
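As a concrete example, here is how these variables expand for the unit we started earlier:

# For the unit name sample.application.1@8080.service:
#   %p (prefix)   = sample.application.1  (string before the @)
#   %i (instance) = 8080                  (string between the @ and .service)
#
# After expansion, the ExecStart line above effectively runs:
#   /usr/bin/docker run --name sample.application.1 --expose 8080 \
#     -p 8080:8080 mohitarora/sample-app:v1.0.1 /opt/launch.sh
# and X-Conflicts=*@8080.service keeps two units using port 8080 off the same machine.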

Without further delay, let’s launch the second application docker container.

fleetctl --strict-host-key-checking=false submit sample.application.2@8080.service
fleetctl --strict-host-key-checking=false start sample.application.2@8080.service
Output: Job sample.application.2@8080.service launched on 6348bfb0…/172.17.8.101

Keep checking the status of the second unit; when the unit is active, test the service.
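Concretely:

# Wait for the second unit to report active/running...
fleetctl --strict-host-key-checking=false list-units

# ...then hit it on its host (172.17.8.101 per the output above)
curl http://172.17.8.101:8080/sample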

Let’s validate #4 (ExecStartPost) as explained above. From any of the machines in the cluster, execute the following command.

etcdctl ls /applications/sample
Output: /applications/sample/sample.application.1
/applications/sample/sample.application.2
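You can also read an individual key; its value is the host:port pair written by ExecStartPost (#4) above:

etcdctl get /applications/sample/sample.application.1
Output: 172.17.8.103:8080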

At this point you can launch two more units, sample.application.3@8080.service and sample.application.4@8081.service, as shown below.
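These follow exactly the same submit/start pattern; note that X-Conflicts pushes the second port-8080 unit onto a machine that isn’t already running one:

fleetctl --strict-host-key-checking=false submit sample.application.3@8080.service
fleetctl --strict-host-key-checking=false start sample.application.3@8080.service
fleetctl --strict-host-key-checking=false submit sample.application.4@8081.service
fleetctl --strict-host-key-checking=false start sample.application.4@8081.service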

Now when you run the list-units command, you will see that we have four application containers running in our CoreOS cluster. Pretty impressive, right?

Start the Apache docker container in the cluster

Navigate to the httpd directory in the cloned repository and execute the following commands from the controller machine.

fleetctl --strict-host-key-checking=false submit sample.application.httpd.1@80.service
fleetctl --strict-host-key-checking=false start sample.application.httpd.1@80.service
Output: Job sample.application.httpd.1@80.service launched on 0092d846…/172.17.8.101

Now hit http://172.17.8.101/sample, and you will see a response from the application. At this point you have an Apache server running inside a docker container in our CoreOS cluster, routing requests (in round robin) to four application instances, each running in its own docker container in the same CoreOS cluster.
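A quick way to watch the round-robin behaviour is to hit the endpoint a few times in a row:

# Each request goes through mod_proxy_balancer, which rotates across the
# four registered application instances
for i in 1 2 3 4; do curl -s http://172.17.8.101/sample; echo; done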

At this point, you must be thinking about service discovery:

  1. How did the Apache HTTP server figure out which instances the application is running on?
  2. What if I launch one more application docker container?
  3. What if I remove one application docker container?

To answer the above questions, I would like to introduce one more open source project called confd. confd is a configuration management tool focused on:

  • keeping local configuration files up-to-date by polling etcd and processing template resources, and
  • reloading applications to pick up new config file changes.

If you remember, whenever we launched or stopped an application docker container in our cluster, we added or removed an entry in etcd as a post-start/post-stop step. Hence, at any point in time, if anyone needs to know how many application instances are alive, etcd has up-to-date information. The Apache HTTP server uses confd and the information stored in etcd to update its proxy configuration file; that is how it knows which instances the application is running on. Inside the HTTP server docker container, confd runs in the background and polls etcd at regular intervals. If any entry is added or removed, confd updates the Apache configuration file and restarts Apache.
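To make that concrete, here is a minimal, illustrative sketch of what a confd template resource and template for this setup might look like; the file names, paths, balancer name, and reload command are assumptions on my part, and the real files are linked below:

# /etc/confd/conf.d/httpd.toml -- the template resource (illustrative)
[template]
src        = "httpd.conf.tmpl"
dest       = "/etc/apache2/conf.d/proxy.conf"
keys       = ["/applications/sample"]
reload_cmd = "/usr/sbin/apachectl -k graceful"

# /etc/confd/templates/httpd.conf.tmpl -- balancer members (illustrative)
<Proxy balancer://sample-cluster>
{{range getvs "/applications/sample/*"}}  BalancerMember http://{{.}}
{{end}}</Proxy>
ProxyPass /sample balancer://sample-cluster/sample

Each time the key set under /applications/sample changes, confd re-renders the template with the current host:port members and reloads Apache.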

I am not going into the very low-level details here because they are easy to work out. Please go through the Ansible file I used to create the Apache HTTP server docker image. Also pay some attention to the confd files here and here. Don’t forget the http server docker container boot script.

Now start sample.application.5@8081.service and make sure Apache has started routing traffic to the newly started container, as shown below.
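Same pattern as before; once the unit is active, the confd process inside the httpd container should pick up the new etcd entry within one polling interval:

fleetctl --strict-host-key-checking=false submit sample.application.5@8081.service
fleetctl --strict-host-key-checking=false start sample.application.5@8081.service

# The new instance should appear under the etcd prefix...
etcdctl ls /applications/sample

# ...and requests through Apache should now also reach the fifth container
curl http://172.17.8.101/sample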

I am pretty impressed by all this, and it fits perfectly into my continuous delivery vision. In the next article of this series, I will write more about continuous delivery based on docker and CoreOS.
