For some time now we have been delivering projects to our clients built on an architecture very similar to micro-services. We primarily separated the presentation layer (web site), the backend system (REST API) and the database.
We treated every part as an individual unit: the website contains only the presentation layer, while the REST API (sometimes several REST APIs) handles all the business logic. The database: sometimes one, sometimes many, depending on how we executed the separation of concerns.
During development and testing we use Docker. Docker gives us confidence that we are testing on the same “environment” as in production.
For deployments to our central TEST environment we use docker-compose to quickly start up the whole environment with all services. This method is pretty straightforward: we specify what services to run (images, volume mappings, etc.) and we define what ports to expose from the docker containers to the host system. Then we map it in our internal DNS and we have nice working access to our TEST environment.
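A minimal sketch of such a docker-compose file (the service names, image tags and ports here are illustrative, not our actual setup):

```yaml
version: "3"

services:
  website:
    image: mycompany/website:1.0   # presentation layer only
    ports:
      - "80:80"                    # exposed to the host, mapped in internal DNS
  restapi:
    image: mycompany/restapi:1.0   # all business logic
    ports:
      - "8080:8080"
  db:
    image: postgres:10
    volumes:
      - dbdata:/var/lib/postgresql/data   # persist data between restarts

volumes:
  dbdata:
```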
Multiple sub-domains with different deployed versions
On our latest project, the requirement was to have (after the first phase):
- multiple environments running with different versions;
- every environment running on its own sub-domain;
- all of those environments covered by a proper SSL/TLS certificate.
For example: env1.coolapp.com would contain the release from sprint 10, while env2.coolapp.com would contain the release from sprint 11. At some point env1 would be updated to sprint 11 (or even 12, depending on the need).
Step 1: docker-compose with .env file
When using docker-compose you can create a .env file (that is just the default name; you can use a different one, but then you have to specify it when executing the docker-compose command).
The .env file can contain “variables” that will be loaded by docker-compose (very similar to environment variables).
This gives us the option to keep the same docker-compose file: just by having a different .env file we can point to a different image version and give a different name to the environment.
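For example, assuming hypothetical variables ENV_NAME and API_VERSION, a .env file and the matching docker-compose fragment could look like this:

```
# .env
ENV_NAME=env1
API_VERSION=sprint10
```

```yaml
# docker-compose.yml (fragment)
services:
  restapi:
    image: mycompany/restapi:${API_VERSION}   # image tag comes from .env
    container_name: ${ENV_NAME}_restapi       # unique name per environment
```

Starting another environment is then just a matter of running docker-compose with a different .env file next to it.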
Step 2: Reverse proxy
We needed some kind of reverse proxy that would accept traffic coming to a specific sub-domain and route that traffic to the appropriate docker environment.
A reverse proxy can be set up in many ways: we could write a custom service, or we could use NginX. But it would be really cool if there were some already existing tool that gives us easy configuration, dynamic discovery of new sub-domains, and that could also automatically obtain an SSL/TLS certificate for every new environment … oh, and while I am writing my wish list, I would also like to include load balancing. Yeah, that would be nice…
Traefik (traefik.io) is a wonderful piece of software written in Go that gives us everything we need, and it can do much, much more.
In essence it is a dynamic reverse proxy. It can connect to many popular deployment platforms (Docker, Swarm, Mesos, Kubernetes, etc.) and obtain information about services (containers). It uses a .toml file (a simple text config file) for configuration.
Traefik is composed of rules that connect a “Frontend” with a “Backend”. In Traefik terms, a “Frontend” is an internet domain like api.myapp.com. A “Backend”, on the other hand, is our deployed web service. In this case we can set a rule in Traefik that traffic for Host:api.myapp.com should be routed to our api service container.
All those rules can be set in the .toml file. But here comes the very interesting part: they don’t have to be! Rules can be defined as labels on docker containers and Traefik will pick them up dynamically.
Set docker service labels to “push” rules into Traefik
In our case we have a service defined in our docker-compose file, and that service will have docker labels with content like this (there are a lot more labels, this is just a small sample):
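A sketch of what those labels can look like in a Traefik v1 setup (the backend name, host and port are of course project-specific):

```yaml
services:
  restapi:
    image: mycompany/restapi:${API_VERSION}
    labels:
      - "traefik.enable=true"                        # expose this service to Traefik
      - "traefik.backend=restapi"                    # backend name in Traefik
      - "traefik.frontend.rule=Host:api.coolapp.com" # which domain routes here
      - "traefik.port=8080"                          # container port to forward to
```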
This marks the docker service as a backend with the name “restapi” and creates a rule in Traefik that all traffic coming to api.coolapp.com should be routed to this docker service on port 8080.
In some cases you don’t need to expose a docker service through Traefik (for example, a backend api that doesn’t need to be reachable from the outside); in that case you just omit the “enable” label.
Start up Traefik
We start up Traefik as a docker container separate from our environments. When we start it up we map docker.sock so Traefik can communicate with Docker, and we also give it a very simple .toml file:
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]

# WEB interface of Traefik - it will show a web page with an overview
# of frontend and backend configurations
[web]
address = ":8080"

# Connection to the docker host system (docker.sock)
[docker]
domain = "mycoolapp.com"
watch = true
# This will hide all docker containers that don't have an explicitly
# set "enable" label
exposedbydefault = false

# Force HTTPS
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

# Let's Encrypt configuration
The beginning of the file is mostly self-explanatory: it sets the log level, enables both http and https, a couple of lines make it connect to the docker host, and finally it “forces https” (all traffic that comes to port 80 is redirected to 443).
But the last part is really neat: acme. This is the connection to the Let’s Encrypt service, and the best part of all is that it is completely dynamic. For every Host rule (domain/sub-domain) that appears in Traefik, it will go to Let’s Encrypt and obtain a key and certificate for that host configuration (and store them in the acme.json file).
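A sketch of what that acme section can look like in Traefik v1 (the e-mail address is a placeholder; the storage file and per-host behavior follow the description above):

```
[acme]
# contact e-mail for Let's Encrypt - placeholder, use your own
email = "admin@mycoolapp.com"
# keys and certificates are stored in this file
storage = "acme.json"
entryPoint = "https"
# obtain a certificate for every Host rule that appears in Traefik
onHostRule = true
```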
As you can see, the .toml file doesn’t contain any configuration about our sub-domains and docker-compose environments. It simply “waits” for any incoming rule that is “pushed” when docker-compose starts up a new environment. Then it generates all the rules “on-the-fly” and (if necessary) obtains an SSL/TLS certificate for every environment.
This one is simple: it is just there “out-of-the-box”. Load balancing works without any setup; you can specify a different load balancing algorithm, but if you don’t specify anything and simply scale up your docker-compose service, load balancing is up and running.
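For example, assuming the restapi service from earlier, scaling could look like this (docker-compose scale is the command in older docker-compose versions; newer ones use up --scale instead):

```shell
# start three instances of the restapi service;
# Traefik picks all of them up and balances traffic between them
docker-compose scale restapi=3
```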
Instead of conclusion
Traefik solves our problems really well. It is built with dynamic environments in mind, so lots of things just work out of the box without much configuration. Other features like Let’s Encrypt integration, load balancing, circuit breakers, etc. just make it even more appealing.
We still have a long way to go, to explore more features, try it out in a docker swarm environment, etc., but for now we are very satisfied.
Give it a try; if nothing else, then at least for the cute logo :)