Sharing Exoscale Docker Machine With Portainer

Luc Juggery
@lucjuggery
7 min read · Mar 6, 2018

TL;DR

I use Docker Machine almost every day: to spin up ephemeral Docker hosts for testing purposes, but also to manage hosts dedicated to serving long-running applications. Portainer.io makes it super easy to manage those different hosts through a fancy GUI, and it also provides access control so the hosts can easily be shared between team members.

About Docker Machine

If you do not know Docker Machine: in a nutshell, it is a binary that lets you create and manage Docker hosts. By Docker host, we mean a physical or virtual machine with the Docker platform installed on it.

Docker Machine is super handy as it provides a lot of drivers, so we can choose where we want a host to be created (locally, on a private cloud, on a cloud provider). The list of supported drivers is the following:

- Amazon Web Services
- Microsoft Azure
- Digital Ocean
- Exoscale
- Google Compute Engine
- Generic
- Microsoft Hyper-V
- OpenStack
- Rackspace
- IBM Softlayer
- Oracle VirtualBox
- VMware vCloud Air
- VMware Fusion
- VMware vSphere
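
For instance, the following one-liner (just a sketch, using the VirtualBox driver and a hypothetical machine name) would create a Docker host inside a local VM:

# Hypothetical example: create a Docker host in a local VirtualBox VM
$ docker-machine create --driver virtualbox local-test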

Among those drivers, we can see a lot of cloud providers. Let's focus on Exoscale, a European cloud provider that offers a great balance between simplicity of use and a rich set of features.

Create a Docker host on Exoscale

On their official website, there is an interesting comparison between the services offered by Exoscale and the ones offered by DigitalOcean and AWS. In terms of feature set, Exoscale is often compared with DigitalOcean.

Once we have created an account, we just need to add some credits and we can start playing right away. Great thing: we do not even need a credit card; the account can simply be provisioned with 5€ using PayPal.

Once the account is set up, the next step is to get a pair of API keys (the process is very intuitive) and use them in the Docker Machine command line.
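
One way to pass the keys to the command below is to export them as environment variables first (the values here are placeholders, to be replaced with the keys generated in the Exoscale console):

# Placeholder values: replace with your own API key pair
$ export EXOSCALE_API_KEY="your-api-key"
$ export EXOSCALE_API_SECRET_KEY="your-api-secret-key"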

In the example below, we create a Docker host named dev.

$ docker-machine create \
--driver exoscale \
--exoscale-api-key=$EXOSCALE_API_KEY \
--exoscale-api-secret-key=$EXOSCALE_API_SECRET_KEY \
--exoscale-availability-zone=ch-dk-2 \
--exoscale-image=ubuntu-16.04 \
--exoscale-instance-profile=Tiny \
dev

As with the other cloud provider drivers, we need to specify:

  • the authentication tokens (used for billing); as you can see, mine are set in the EXOSCALE_API_KEY and EXOSCALE_API_SECRET_KEY environment variables
  • the name of the machine we want to create

The other parameters are optional, and each has its own default value.

Here we create a Tiny machine (5€/month) based on Ubuntu 16.04. We also specify the datacenter to use: one located in Switzerland, among the 4 datacenters available (2 in Switzerland, 1 in Austria and 1 in Germany).
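
To see which other flags the driver supports and what their default values are, Docker Machine can print the driver-specific help (the grep is only there to narrow the output down to the Exoscale options):

# List the options supported by the exoscale driver
$ docker-machine create --driver exoscale --help | grep exoscale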

We get an output like the following one, which shows all the steps performed by Docker Machine.

Running pre-create checks…
Creating machine…
(dev) Querying exoscale for the requested parameters…
(dev) Generate an SSH keypair…
(dev) Spawn exoscale host…
(dev) Waiting for job to complete…
Waiting for machine to be running, this may take a few minutes…
Detecting operating system of created instance…
Waiting for SSH to be available…
Detecting the provisioner…
Provisioning with ubuntu(systemd)…
Installing Docker…
Copying certs to the local machine directory…
Copying certs to the remote machine…
Setting Docker configuration on the remote daemon…
Checking connection to Docker…
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env dev

Basically it:

  • instantiates a VM on Exoscale
  • installs Docker on it (daemon and client)
  • configures the local client so it communicates with the remote daemon over mutual TLS (mTLS)

Once this is done, the new Docker host can be seen on Exoscale's UI.
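
It should also show up in the local list of machines managed by Docker Machine, with the exoscale driver, a Running state and a tcp:// URL:

# List the machines known to Docker Machine
$ docker-machine ls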

During the creation of the VM, Docker Machine generates a certificate authority (CA) and uses it to sign the server's and the client's certificates. Those files are created in the ~/.docker/machine/machines/MACHINE_NAME folder, as we can see below.

$ ls -lrt ~/.docker/machine/machines/dev
total 56
-rw-------  1 luc  staff   887 Mar  5 21:10 id_rsa
-rw-r--r--  1 luc  staff  1021 Mar  5 21:11 ca.pem
-rw-r--r--  1 luc  staff  1046 Mar  5 21:11 cert.pem
-rw-------  1 luc  staff  1675 Mar  5 21:11 key.pem
-rw-r--r--  1 luc  staff  1094 Mar  5 21:11 server.pem
-rw-------  1 luc  staff  1675 Mar  5 21:11 server-key.pem
-rw-------  1 luc  staff  2765 Mar  5 21:11 config.json

They are needed to change the configuration of the local Docker client so it can communicate securely with the Docker daemon running on the newly created machine.

On the command line, those certs/keys are used under the hood when the configuration of the local client is changed with the following command:

$ eval $(docker-machine env dev)
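
For reference, running the env command on its own simply prints the variables pointing at the remote daemon and at the certificate folder (the IP below is a placeholder and the exact paths will differ):

$ docker-machine env dev
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://<VM_PUBLIC_IP>:2376"
export DOCKER_CERT_PATH="/Users/luc/.docker/machine/machines/dev"
export DOCKER_MACHINE_NAME="dev"
# Run this command to configure your shell:
# eval $(docker-machine env dev)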

We will now see how those files can be used to manage our new host in Portainer.

Let’s talk about Portainer

When I deliver Docker trainings, I usually dedicate a chapter to presenting projects from the container ecosystem. Portainer is one of those projects, some of the other ones being openfaas, dockprom / swarmprom, Portus, … I can be sure that when I talk about Portainer, the participants stop listening to me and start playing with its great interface right away… (not a tool to present in the middle of the training though :) ).

It is officially defined as:

an open-source lightweight management UI which allows you to easily manage your docker hosts or Swarm clusters

To run it, we start by creating a volume which will be used to persist the configuration.

$ docker volume create portainer_data

We can then start the Portainer container with the following command:

$ docker run -d \
-p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer

Note: you can find some more information about the reasons why /var/run/docker.sock is bind-mounted in the container in this blog post.
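
Before heading to the browser (http://localhost:9000 by default), a quick way to check that the container is up is to filter the running containers on the Portainer image:

# The Portainer container should be listed with port 9000 published
$ docker ps --filter ancestor=portainer/portainer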

Once we have created an admin account and are logged in, we get this great interface showing all the Docker resources managed on the local host, the one used to run Portainer.

Let’s now say we want to manage the Docker host we created above with Docker Machine (the one we named dev). No problem, we just need to go to the Endpoints menu and enter the requested pieces of information:

  • the Endpoint URL, which can be retrieved with docker-machine url dev
  • the CA certificate, client TLS certificate and client TLS key (the commands below show where to find them)
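
These are the files Docker Machine generated earlier, so assuming the machine is named dev they can be retrieved as follows:

# Endpoint URL to paste in the form
$ docker-machine url dev

# TLS files to upload in the form (CA cert, client cert, client key)
$ cd ~/.docker/machine/machines/dev
$ ls ca.pem cert.pem key.pem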

Once the form is submitted, we can see both Endpoints listed: the local one and the one of our new Docker Machine.

Some interesting things to note here:

  • the local Endpoint uses the /var/run/docker.sock Unix socket, as in this case the Portainer container communicates with the API of the local Docker host
  • the dev Endpoint, the one for our new Docker host, uses a TCP socket, as in this case the Portainer container communicates with the API of the remote Docker host

From the menu on the left, we can then change the Active Endpoint and select the one named dev. We then see the status of our dev Docker host.

Note: no containers, images or volumes are listed here, as nothing has been deployed on the host yet. Only the 3 networks created by default by Docker are listed (bridge, host, none).
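
The same thing can be double-checked from the command line, assuming the local shell has been pointed at dev with the eval command seen above (the network IDs below are placeholders):

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
xxxxxxxxxxxx   bridge    bridge    local
xxxxxxxxxxxx   host      host      local
xxxxxxxxxxxx   none      null      local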

Heading to the User Management menu, we create a team named dev and 2 users (Calvin & Hobbes) in this team.

Coming back to the Endpoints menu and using the Manage Access link, we can give those users access to the dev host.

Once we have authorized the users of the dev team, we can log out as the admin and log back in as one of our users; let's go with Calvin.

As expected, Calvin now has access to the dev host.

From now on, both users in the dev team can launch and manage containerized applications on the dev host. A pretty handy way to share a Docker Machine!

On top of this, and because we did not change the default security policies, both users:

  • cannot run containers with a bind-mount of a local file/folder
  • cannot run privileged containers

Summary

In this quick example, we saw how Docker Machine can be used to create a Docker host on a cloud provider. We selected Exoscale as it offers a great set of features and has a very ergonomic web interface. In order to share the Docker host, we used Portainer, as it makes adding Docker Endpoints very easy. Its team / user management makes it super simple to manage access to the Docker hosts.

Luc Juggery
@lucjuggery

Docker & Kubernetes trainer (CKA / CKAD), 中文学生, Learning&Sharing