English is not my first language, so this whole story may have some mistakes… corrections and fixes will be greatly appreciated. Also, I’m still a student, so please tweet me your advice/comments!
tl;dr: I’m using CoreOS on RunAbove, using fleet to manage my containers and https://github.com/jwilder/nginx-proxy for automatic proxy stuff.
I just want to start by saying that I love Archlinux. It’s a wonderful distribution with the right documentation, the right tooling, and the right packages. I’ve been using it on my server because I believe that an up-to-date server is more secure than an old one, and Archlinux provides me that, even if I sometimes need to dig around when something breaks. I’ve also been using Docker a lot, to play with and discover new tools. So my workflow looks like this:
- Find a great open-source tool that looks good on paper
- Use SSH to log in
- Upgrade server
- Fix server sometimes
- docker pull
- Write some nginx configuration for the reverse proxy, to have a sub-domain attached to the container
- Iterate over and over
And I was happy with that. But since I hear a lot about agile deployment at work, and from Quentin Adam about how production should be run, it just feels wrong now. We can’t stick to the old model of some Debian box running a few applications. We should be able to run automatic updates, deploy a container without doing any SSH, and have the right reverse-proxy configuration written automatically. I decided not to renew my Kimsufi, and to explore the CoreOS stack instead. I just wanted to discover new things, so I’m pretty sure that my architecture is not the best, but I think it’s a cool one at least, and I’m having fun using this stack!
Why CoreOS and containers?
CoreOS is a Linux distribution made for the cloud. It has some great features, but the two major ones are automatic updates and a distributed systemd called fleet, made especially to run containers. The best way to get started with CoreOS is to watch this awesome talk.
Let’s get started!
I’m currently an intern at RunAbove, so I’ll deploy my instance there. RunAbove provides what I call “OpenStack as a Service”, and that’s just what I need. First, create an account on RunAbove and retrieve your credentials: click on your name in the top right corner, go to OpenStack Horizon, then Access & Security, API Access, and download the RC file. It’ll handle all the authentication stuff for you. Just source the file before firing some glance or nova commands.
You also need some OpenStack clients, so you should run something like:
$ sudo pip install python-glanceclient python-novaclient
If it’s not working, brace yourself for hunting down the missing packages…
I also recommend generating a new SSH key (and putting it in a different file, like ~/.ssh/coreos):
$ ssh-keygen -t rsa -f ~/.ssh/coreos
And upload the public part into OpenStack Horizon.
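If you prefer staying in the terminal, the same upload can be done with the nova client. A quick sketch (the key name “coreos” is just my choice, pick whatever you like):

```shell
# Upload the public half of the key to OpenStack under the name "coreos".
# Assumes you sourced the RC file first so nova can authenticate.
nova keypair-add --pub-key ~/.ssh/coreos.pub coreos

# Check that it’s there:
nova keypair-list
```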
And you need to install fleetctl on your laptop to make deployments from there!
You’re now ready to follow the official procedure! You just need to remove “--is-public True” from the glance command, and change m1.medium to the flavor-id of a.intel.sb.m for example (nova flavor-list should be useful to list all the available flavor-ids). If you can SSH into the machine, you can go to the next part.
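For reference, the boot command from the official procedure ends up looking something like this. The image name, flavor-id, and cloud-config file are placeholders from my setup; adapt them to yours:

```shell
# Find the flavor-id matching a.intel.sb.m
nova flavor-list

# Boot a CoreOS instance with our key pair and a cloud-config file
# ("coreos" is the key name uploaded earlier; cloud-config.yaml comes
# from the official CoreOS-on-OpenStack procedure).
nova boot \
  --image CoreOS \
  --key-name coreos \
  --flavor <flavor-id-of-a.intel.sb.m> \
  --user-data cloud-config.yaml \
  my-coreos-node
```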
PS: if like me, you want to pull some private containers, you should have a look here.
Let’s talk about nginx-proxy & fleetctl
OK, so now you have your own CoreOS cluster! I have only one machine, but it should work the same way with more. The next task is to deploy some containers, and the most important part: nginx-proxy. I’m the type of person who loves using the terminal, but also some nice fancy web UI. The advantage of Docker is to deploy things easily, just to test a fancy new SaaS app for example. But I’m sick of writing reverse-proxy stuff to get a nice fancy URL like slides.pierrezemb.fr. That’s where nginx-proxy comes in. It watches Docker’s socket (in read-only mode), and regenerates the right nginx config file each time there’s a change, i.e. a new container comes up on the server. It handles HTTP/HTTPS and everything you need! We’ll now see how to deploy containers with it.
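To see the idea in isolation before we bring fleet into the picture, here is a minimal sketch using plain docker commands (the whoami image and hostname are just examples):

```shell
# Start the proxy, watching the Docker socket in read-only mode.
docker run -d --name proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Start any container with a VIRTUAL_HOST variable: nginx-proxy
# notices it and writes the matching nginx server block for you.
docker run -d -e VIRTUAL_HOST=whoami.example.com jwilder/whoami
```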
For the example, I’ll be using my own website. It’s available on the Docker Registry with some automated build stuff to automate the whole process of creating the image. I’m using my own web server: GoStatic. It’s a very light container (5MB) written in Go. So we need two containers: the proxy container and my portfolio container. I’ll also use a third container as a data container for my SSL certificates. I’ll use fleet, which is like systemd but distributed.
[Unit]
Description=nginx proxy
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill proxy
ExecStartPre=-/usr/bin/docker rm proxy
ExecStartPre=/usr/bin/docker pull jwilder/nginx-proxy:latest
ExecStart=/usr/bin/docker run --name proxy -p 80:80 -p 443:443 --volumes-from ssl-cert -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
ExecStop=/usr/bin/docker stop proxy
Nothing spectacular here: I’m listening on 443 and 80 to handle HTTPS/HTTP traffic. The --volumes-from is there because my SSL certificates live in another container, so I need to mount them in. Remove it along with -p 443:443 if you’re not using SSL.
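In case you wonder what that ssl-cert container looks like: a data container is just a stopped container built around a volume. A minimal sketch, assuming busybox as the base image and a host directory holding the files (nginx-proxy looks for certificates in /etc/nginx/certs):

```shell
# Create a data-only container exposing a volume for the certificates.
docker run --name ssl-cert -v /etc/nginx/certs busybox true

# Copy the certificate and key into the volume through a temporary
# container that mounts both the volume and a host directory.
docker run --rm --volumes-from ssl-cert -v /path/on/host:/host-certs:ro \
  busybox sh -c 'cp /host-certs/* /etc/nginx/certs/'
```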
[Unit]
Description=Portfolio web server
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill portfolio
ExecStartPre=-/usr/bin/docker rm portfolio
ExecStartPre=/usr/bin/docker pull pierrezemb/portfolio:latest
ExecStart=/usr/bin/docker run -e VIRTUAL_HOST=pierrezemb.fr,www.pierrezemb.fr -p 8043:8043 --name portfolio pierrezemb/portfolio --forceHTTP
ExecStop=/usr/bin/docker stop portfolio
Service for my awesome website
Here, the VIRTUAL_HOST option lets the proxy know the matching URL for this container. Our proxy container will see the docker run command, parse it, and write the right config file according to this option.
Before running this, you need to configure fleetctl. Add this to your .bashrc/.zshrc:
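It boils down to pointing fleetctl at your instance through an SSH tunnel, something like this (the IP is a placeholder for your instance’s public address):

```shell
# Placeholder IP: replace with your CoreOS instance's public address.
# fleetctl reads FLEETCTL_TUNNEL and tunnels every command over SSH.
export FLEETCTL_TUNNEL="203.0.113.10:22"
```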
You can now run some fleetctl commands! An additional tutorial about what fleetctl can do can be found here, but the basics are:
$ ssh-add ~/.ssh/coreos # Add the right SSH key for fleetctl
$ fleetctl list-machines # List all the machines
$ fleetctl start proxy.service # Start the proxy service
$ fleetctl start portfolio.service # Start the portfolio service
These commands will use SSH to connect, and then run the containers properly somewhere in your cluster. No need for SSH sessions anymore. With this config, I can now easily scale my containers without worrying about my reverse-proxy configuration anymore. Need a new machine? I wrote a bash script, so it’s just one command to add a new machine to my cluster. New container? I just need to write a fleet service, start it, update my DNS, and voilà.
I’m really happy with this stack, and it feels so much easier than before.
What could be better?
I know that etcd would be a nice way to register which container is using which URL, but I haven’t had time to work on it yet. Maybe another time!