Docker containers on Joyent’s Triton Cloud

Luc Juggery
@lucjuggery
14 min read · Dec 5, 2016

A couple of weeks ago, Joyent announced the finalists of the “Node.js & Docker innovator” program. At TRAXxs, we are really glad to be part of it, as Node.js and Docker are two technologies we love working with. Being involved in a program led by Joyent is just… wow!

In short, TRAXxs is a startup developing wearable technology for shoes, building both hardware products and software platforms. Its solutions combine geolocation, telecommunications and activity monitoring embedded within comfort insoles, turning any type of shoe into a real-time GPS tracking solution.

Innovator workshop

The onboarding program started a couple of days ago with a workshop dedicated to the creation of an application composed of Node.js microservices and its deployment to Joyent’s Triton Cloud using Docker Compose. It was a really great way to discover Triton. The workshop is publicly available on Joyent’s GitHub repository.

The application is built across 14 challenges. Basically, several services collect data (temperature, humidity, motion) from a Samsung SmartThings gateway and send it to the serializer service. The serializer exposes HTTP endpoints to persist and read data from an underlying InfluxDB database. On top of this, a front-end service is in charge of displaying the data in real time (through WebSockets), retrieving it from the serializer service. Each of the defined services runs within a Docker container. The following picture shows what the final application looks like.

Building this application helped us understand a lot about Triton and also introduced us to Node.js libraries we had never used before. We’ll come back to it when we talk about the Autopilot Pattern and ContainerPilot later in this article.

Triton

Triton is an open source cloud management platform defined as “an end-to-end solution that makes running containers at scale easy”. It’s a CaaS (Container as a Service) product that can be compared to solutions like Docker Datacenter, CoreOS Tectonic or AWS Elastic Container Service.

It enables containers to run on bare metal, eliminating the overhead of the hardware hypervisor used by most cloud providers (GCE, AWS, DigitalOcean, …), where containers run inside VMs. Applications therefore run faster and more efficiently on Triton. This might not be noticeable for applications with a “regular” workload, but it is definitely valuable for resource-intensive applications.

Triton is divided into 3 layers:

Triton SmartOS

SmartOS is a lightweight container hypervisor that inherited the Zones technology from its Solaris / illumos ancestors. It can run OS virtualization (containers) and hardware virtualization (through KVM) securely in a multi-tenant environment. Containers run on bare metal, ensuring great performance, and each one gets its own secure, isolated filesystem and network stack while sharing the kernel of the host it runs on.

Triton DataCenter

On top of Triton SmartOS, Triton DataCenter provides cloud orchestration and container orchestration in a single solution. It offers a web portal and an API endpoint (CloudAPI). The API is used by the Triton CLI to manage infrastructure (instances (containers or KVM-based VMs), networks, object storage, …) in data centers.

Triton ContainerPilot

ContainerPilot is an application orchestrator that makes it easier for an application to follow the Autopilot Pattern. The application’s containers can be scheduled by any scheduler (Docker Compose, …); ContainerPilot is in charge of the orchestration tasks:

  • service discovery
  • making sure containers are healthy (through health checks)
  • reconfiguration in case dependent containers are scaled up/down
  • recovery from failure

This makes the application portable, as it does not rely on an external orchestrator. We’ll see an example of ContainerPilot usage later in this article.

Joyent’s Triton Cloud

If you do not feel like installing Triton on your own infrastructure, you can use Triton Cloud. On top of Triton, you also benefit from the Manta object storage solution (comparable to Amazon S3).

Once logged in, we land on the dashboard, from which we can manage and monitor our instances (Docker containers on bare metal, VMs, bundled applications), manage objects within the Manta object store, set up networks, DNS entries, …

All the operations that can be done through the web interface can also be done using Triton’s CLI, which is installed as an npm module (make sure to use the Node.js LTS version, currently v6.9.1).

$ npm i -g triton
(node:35141) fs: re-evaluating native module sources is not supported. If you are using the graceful-fs module, please update it to a more recent version.
npm WARN deprecated node-uuid@1.4.3: use uuid module instead
/usr/local/bin/triton -> /usr/local/lib/node_modules/triton/bin/triton
/usr/local/lib
└─┬ triton@4.14.2
├─┬ restify-clients@1.1.0
│ └─┬ restify-errors@4.3.0 (git+https://git@github.com/restify/errors.git#a759b94d57eee5dfd554b40da4a6599f521505f2)
│ └── verror@1.9.0
└── smartdc-auth@2.3.0 (git+https://github.com/joyent/node-smartdc-auth.git#05d9077180c4f28dbe48e0561368c75d87003d5b)

Note: for the local CLI to be able to communicate with Triton Cloud, we need to upload an SSH public key, or create a new one from the UI and download it to our machine.
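For example, if no key pair exists locally yet, one can be generated with ssh-keygen and its public part pasted into the Triton portal (ssh-keygen defaults to ~/.ssh/id_rsa, which is the key referenced in the profile created below):

$ ssh-keygen -t rsa -b 4096
$ cat ~/.ssh/id_rsa.pub   # paste this public key into the Triton portal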

Once this step is done, the first thing to do is to create a profile that defines the target data center (through its API endpoint). Let’s create one.

$ triton profile create
A profile name. A short string to identify a CloudAPI endpoint to the
`triton` CLI.
name: luc
The CloudAPI endpoint URL.
url: https://us-sw-1.api.joyent.com
Your account login name.
account: myaccount
The fingerprint of the SSH key you have registered for your account.
Alternatively, You may enter a local path to a public or private SSH key to
have the fingerprint calculated for you.
keyId: ~/.ssh/id_rsa
Fingerprint: e9:51:d3:e3:ae:6e:c6:1b:61:2d:80:56:23:17:fc:e3
Saved profile "luc".
Setting up profile "luc" to use Docker.
Setup profile "luc" to use Docker (v1.12.3). Try this:
eval "$(triton env --docker luc)"
docker info

The command to use the newly created profile is displayed at the end of the output. Similar to what we would do with a Docker host created with Docker Machine, this command sets the Docker environment variables needed to run Docker-related workloads on Triton.

# Let's inspect the Docker related environment
$ triton env --docker luc
export DOCKER_CERT_PATH=/Users/luc/.triton/docker/myaccount@us-sw-1_api_joyent_com
export DOCKER_HOST=tcp://us-sw-1.docker.joyent.com:2376
export DOCKER_TLS_VERIFY=1
export COMPOSE_HTTP_TIMEOUT=300
# Run this command to configure your shell:
# eval "$(triton env -d)"
# Source those variables
$ eval "$(triton env --docker luc)"

Docker on Triton

Now that we are in the Docker environment of Triton, let’s check the Docker client and engine versions.

$ docker version
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Thu Oct 27 00:09:21 2016
OS/Arch: darwin/amd64
Experimental: true
Server:
Version: 1.9.0
API version: 1.21
Go version: node0.10.48
Git commit: f3cbcf6
Built: Thu Oct 27 00:58:06 2016
OS/Arch: solaris/i386

The client’s version is 1.12.3 on darwin/amd64, as the Docker CLI is running on a MacBook Pro. On the server side, something looks strange: the Go version is reported as node0.10.48. In fact, Triton DataCenter does not run a Go Docker engine at all; instead, it implements the Docker Remote API.

What does that mean? It means the Docker CLI (running locally) sends the same HTTP requests to Triton’s implementation of the Docker Remote API as it would send to a real Docker engine, but under the hood, Triton creates containers using its own technology (SmartOS Zones) instead of Docker containers (created through the libcontainer library).

When issuing Docker commands from the local CLI, Triton behaves like a regular Docker host; from the outside, the whole data center looks like one big Docker host. And since everything is abstracted by Triton and SmartOS, there is no need to manage several hosts to set up a cluster.
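To make this concrete, the request behind docker ps can be reproduced with curl against Triton’s Docker API endpoint. This is a hedged sketch: it assumes the environment set by triton env --docker above, the usual ca.pem / cert.pem / key.pem files under DOCKER_CERT_PATH, and the API version reported by the server.

$ curl -s \
    --cacert "$DOCKER_CERT_PATH/ca.pem" \
    --cert "$DOCKER_CERT_PATH/cert.pem" \
    --key "$DOCKER_CERT_PATH/key.pem" \
    "https://us-sw-1.docker.joyent.com:2376/v1.21/containers/json"

The /containers/json route is the standard Docker Remote API endpoint used by docker ps; Triton answers it, but the “containers” it returns are SmartOS zones under the hood.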

Notes:

  • some features of the Docker Remote API are not implemented in Triton because they are handled by its own underlying mechanisms (services, networks)
  • check this great presentation from Bryan Cantrill on this Docker Killer Feature

Let’s see, with an example, how to run a Docker workload on Triton.

Running a simple Docker container

Everything is now set up to start deploying applications on Joyent’s Triton Cloud. As we’ve seen above, the commands we issue from our local CLI will target Triton Cloud’s implementation of the Docker Remote API.

The list of running containers is empty as we have not run any yet.

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Let’s run our first web server (nginx) in detached mode.

$ docker run -d nginx
6f16c831420c4b9baa7771786b18ac65f07c2ebebc784339ba600296bd95d8d2
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6f16c831420c nginx "nginx -g 'daemon off" 35 seconds ago Up 23 seconds 80/tcp, 443/tcp adoring_cray

Judging by these simple commands, there is no difference between Triton and a Docker host running the Docker engine.

The dashboard shows an instance is running and provides a link to get additional details as well as CPU/RAM/Network monitoring.

From this panel, it’s possible to perform a lot of actions on this instance: add labels / metadata, change the network settings, change the size (RAM/CPU), create snapshots, …
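From the CLI, a standard docker inspect also gives the instance’s IP address (the container name comes from the docker ps output above; whether the address is reachable from the Internet depends on the network the instance is attached to):

$ IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' adoring_cray)
$ curl -I "http://$IP"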

Running a Docker Compose application

We’ve just seen that running a container on Triton is really easy; let’s now try to deploy the well-known voting-app (used in a lot of meetups and conferences, usually for demo purposes).

This application is made up of 5 services:

  • vote: web interface used to vote between 2 items
  • redis: key-value store where votes are saved
  • worker: process that retrieves the votes from the redis service and saves them into the db service
  • db: postgres database populated by worker
  • result: web interface displaying the results of the votes retrieved from db

The Docker Compose file is the following:

$ cat docker-compose.yml
version: "2"
services:
  vote:
    image: docker/example-voting-app-vote:latest
    ports:
      - "5000:80"
    networks:
      - front-tier
      - back-tier
  result:
    image: docker/example-voting-app-result:latest
    ports:
      - "5001:80"
    networks:
      - front-tier
      - back-tier
  worker:
    image: docker/example-voting-app-worker:latest
    networks:
      - back-tier
  redis:
    image: redis:alpine
    container_name: redis
    ports: ["6379"]
    networks:
      - back-tier
  db:
    image: postgres:9.4
    container_name: db
    volumes:
      - "db-data:/var/lib/postgresql/data"
    networks:
      - back-tier
volumes:
  db-data:
networks:
  front-tier:
  back-tier:

What happens when we run this Compose file against Triton?

$ docker-compose up
Creating network "voting_front-tier" with the default driver
ERROR: (ResourceNotFound) /v1.22/networks/create does not exist (d42281f0-ada1-11e6-8a91-950e35936ea4)

As we briefly discussed above, this message comes from the fact that some features of the Docker Remote API are not implemented. We need to modify the Compose file a little so it can be deployed on Triton.

Voting app on Triton

Below are the changes we need to make to the voting-app’s original Docker Compose file.

  • remove the top-level services, networks and volumes keys (falling back to the version 1 Compose format)
  • remove publication of ports of both front-ends (vote and result) as Triton will take care of that for us
  • add links so an instance can communicate with the instances it depends on (links inject entries in the /etc/hosts file)

vote:
  image: docker/example-voting-app-vote:latest
  links:
    - redis:redis
  ports:
    - "80"
result:
  image: docker/example-voting-app-result:latest
  links:
    - db:db
  ports:
    - "80"
worker:
  image: docker/example-voting-app-worker:latest
  links:
    - redis:redis
    - db:db
redis:
  image: redis:alpine
db:
  image: postgres:9.4

Compose does not complain anymore, and we can see in the logs that all the instances start correctly and can communicate with each other, as a vote gets reflected in the result interface (I prefer cats).
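To find out where the two front-ends are listening, the portal shows each instance’s IP, and the usual Compose commands work as well (the container name below assumes Compose’s default <project>_<service>_<index> naming for a project named voting):

$ docker-compose ps
$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' voting_vote_1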

We’ve just seen, through an example, how Docker Compose can be used to deploy a multi-container application on Triton. Compose is in charge of scheduling (starting / stopping containers), but it does not orchestrate the whole thing. Coming back to the application we built during the workshop, we’ll now explain how the Autopilot Pattern and ContainerPilot come into the picture for orchestration.

Autopilot Pattern / ContainerPilot

As we’ve seen above, using the Autopilot Pattern means moving the orchestration tasks into the application itself. Changes need to be made in each container so it can:

  • advertise itself against a distributed key-value store
  • define the backend services it depends on
  • define tasks to be triggered if dependencies are scaled or if they fail

In challenge 12, all the services except the ones receiving data from the sensors and the InfluxDB database have been put on Autopilot (using ContainerPilot). Let’s see how this is done.

Consul service

From the docker-compose file in challenge 12, we can see there is an additional service named consul.

consul:
  image: autopilotpattern/consul:latest
  restart: always
  dns:
    - 127.0.0.1
  labels:
    - triton.cns.services=consul
  ports:
    - "8500:8500"
  command: >
    /usr/local/bin/containerpilot
    /bin/consul agent -server
      -config-dir=/etc/consul
      -bootstrap-expect 1
      -ui-dir /ui

This service is based on the autopilotpattern/consul image (the original consul image with additional pieces so that it already follows the Autopilot Pattern). Consul is developed by HashiCorp (the great company behind other amazing products such as Vagrant and Terraform); it offers several features such as a distributed key/value store, service discovery and health checking. This service is used to keep the global state of the application.
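Because the service publishes port 8500, Consul’s standard HTTP API (and its web UI) can be used to peek at this global state, for example by replacing <consul-ip> with the instance’s IP shown in the portal:

# List every service registered in Consul
$ curl "http://<consul-ip>:8500/v1/catalog/services"
# List only the healthy instances of the serializer service
$ curl "http://<consul-ip>:8500/v1/health/service/serializer?passing"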

Moving a service into ContainerPilot

Each service that we put into Autopilot needs to be connected to the consul service so it can periodically:

  • report its status, IP, exposed ports, …
  • check the status of the backends it depends on, which are also registered in consul (so it can reconfigure itself when changes occur)

ContainerPilot takes care of all this through a Go application that wraps the main process running in the container, a containerpilot.json configuration file, and additional methods implemented within the service that ensure automatic reconfiguration when needed. On top of that, additional npm modules are provided: consulite (service discovery through Consul) and piloted (service discovery using ContainerPilot). Let’s take the temperature service as an example.

1- Service declaration in Docker Compose file

The way the temperature service is declared in the Docker Compose file is quite standard. The interesting bit to note here is the link to the consul service, which enables communication using the service name (“consul”) since it is added to the /etc/hosts of the temperature container.

temperature:
  build: ./temperature
  expose:
    - "8080"
  links:
    - consul:consul
  env_file:
    - sensors.env
  restart: always

2- ContainerPilot configuration file

In order to perform the orchestration tasks, ContainerPilot needs a configuration file that provides information related to the service. The containerpilot.json file defined for the temperature service is the following:

{
  "consul": "localhost:8500",
  "services": [
    {
      "name": "temperature",
      "health": "/usr/bin/curl -o /dev/null --fail -s http://localhost:{{.PORT}}/heartbeat",
      "poll": 3,
      "ttl": 10,
      "port": {{.PORT}}
    }
  ],
  "coprocesses": [
    {
      "command": ["/usr/local/bin/consul", "agent",
        "-data-dir=/data",
        "-config-dir=/config",
        "-rejoin",
        "-retry-join", "{{ if .CONSUL_HOST }}{{ .CONSUL_HOST }}{{ else }}consul{{ end }}",
        "-retry-max", "10",
        "-retry-interval", "10s"],
      "restarts": "unlimited"
    }
  ],
  "backends": [
    {
      "name": "serializer",
      "poll": 3,
      "onChange": "pkill -SIGHUP node"
    }
  ]
}

Let’s detail the keys defined in this file:

  • consul

This indicates the location of the consul service the container connects to. It is set to localhost because it targets a Consul agent running locally; this agent is in charge of the health check of the local service and communicates with the consul service defined previously.

  • services

The list of services this container provides to other containers. In this case, only the temperature service is exposed. It also defines the health check command to run to verify the status of the service, the polling frequency, and the number of seconds (ttl) to wait before considering the service unhealthy.

Note: looking at the code of our temperature service, it is not obvious that a /heartbeat endpoint is exposed. In fact, this is done by the npm module brule, which adds this endpoint to the hapi web server (a rough sketch of what this amounts to is given after this list).

  • coprocesses

Processes that run alongside the current service. In our example, a Consul agent runs inside the container. The command to run this agent uses the “consul” hostname (unless the CONSUL_HOST environment variable defines another value); this name resolves thanks to the link declared in the Docker Compose file. The agents running in each container are connected to the consul service of our application.

  • backends

Other services the current one depends on. The temperature service depends on the serializer service, which is in charge of persisting the data into InfluxDB (defined in another service not covered in this example). When the serializer changes, the onChange command sends a SIGHUP signal to the node process, as sketched below.
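To make the services and backends entries more concrete, here is a minimal, hedged sketch (not the workshop’s actual code) of a service wrapped by ContainerPilot: a hapi server (hapi 16-era API) exposing the /heartbeat route that brule provides, plus a SIGHUP handler triggered by the onChange hook above.

'use strict';

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: process.env.PORT || 8080 });

// Roughly what the brule module registers: a trivial heartbeat route
// that the ContainerPilot health command can curl.
server.route({
  method: 'GET',
  path: '/heartbeat',
  handler: (request, reply) => reply('ok')
});

// Hypothetical helper: re-resolves the serializer backend (e.g. by querying
// the local Consul agent). ContainerPilot's onChange hook ("pkill -SIGHUP node")
// lands here whenever the serializer service is scaled or fails.
const refreshSerializer = () => {
  // lookup logic omitted in this sketch
};

process.on('SIGHUP', refreshSerializer);

server.start((err) => {
  if (err) {
    throw err;
  }
  refreshSerializer();
});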

3- ContainerPilot binary in the service image

In order for everything to work, the Consul agent and ContainerPilot binaries need to be installed in the temperature service image. These additional pieces of software are installed through the Dockerfile.

Consul

# Install consul (started as an agent within containerpilot.json)
RUN export CONSUL_VERSION=0.7.0 \
&& export CONSUL_CHECKSUM=b350591af10d7d23514ebaa0565638539900cdb3aaa048f077217c4c46653dd8 \
&& curl --retry 7 --fail -vo /tmp/consul.zip "https://releases.hashicorp.com/consul/${CONSUL_VERSION}/consul_${CONSUL_VERSION}_linux_amd64.zip" \
&& echo "${CONSUL_CHECKSUM} /tmp/consul.zip" | sha256sum -c \
&& unzip /tmp/consul -d /usr/local/bin \
&& rm /tmp/consul.zip \
&& mkdir /config

ContainerPilot

# Install ContainerPilot
ENV CONTAINERPILOT_VERSION 2.4.1
RUN export CP_SHA1=198d96c8d7bfafb1ab6df96653c29701510b833c \
&& curl -Lso /tmp/containerpilot.tar.gz \
"https://github.com/joyent/containerpilot/releases/download/${CONTAINERPILOT_VERSION}/containerpilot-${CONTAINERPILOT_VERSION}.tar.gz" \
&& echo "${CP_SHA1} /tmp/containerpilot.tar.gz" | sha1sum -c \
&& tar zxf /tmp/containerpilot.tar.gz -C /bin \
&& rm /tmp/containerpilot.tar.gz

# COPY ContainerPilot configuration
ENV CONTAINERPILOT_PATH=/etc/containerpilot.json
COPY containerpilot.json ${CONTAINERPILOT_PATH}
ENV CONTAINERPILOT=file://${CONTAINERPILOT_PATH}

An important thing to note is the way this image is run: ContainerPilot wraps the node process of the service.

# With ContainerPilot => /bin/containerpilot node /opt/app
ENTRYPOINT ["/bin/containerpilot", "node"]
CMD ["/opt/app/"]

# Without ContainerPilot => node /opt/app
CMD ["node", "/opt/app/"]

4- Service source code

The last piece of the puzzle is to understand how this is taken into account in the source code of the temperature service.

Note: as briefly discussed above, piloted is a Node.js module that makes it easier to work with ContainerPilot from the application code. While not mandatory, this module is really helpful.

Within the Node.js code, piloted reads the consul and backends configuration (it could read the containerpilot.json file instead of being given the configuration directly). The main method (“readData”, which collects data from the sensor service) is called once both consul and the backends are available. Within “readData”, the availability of the serializer service is checked before actually pushing data.
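As an illustration of what such a lookup amounts to (this is not piloted’s actual API), here is a hedged Node.js sketch that asks the local Consul agent, over its standard HTTP API, for a healthy serializer instance before pushing data:

'use strict';

const http = require('http');

// Assumption: a Consul agent listens on localhost:8500, as configured
// in containerpilot.json.
const getSerializer = (callback) => {
  http.get('http://localhost:8500/v1/health/service/serializer?passing', (res) => {
    let body = '';
    res.on('data', (chunk) => {
      body += chunk;
    });
    res.on('end', () => {
      const entries = JSON.parse(body);
      if (!entries.length) {
        return callback(new Error('no healthy serializer instance'));
      }
      // Each entry contains Node, Service and Checks objects.
      const service = entries[0].Service;
      const address = service.Address || entries[0].Node.Address;
      callback(null, { address: address, port: service.Port });
    });
  }).on('error', callback);
};

// Usage: resolve the backend right before sending a measurement.
getSerializer((err, serializer) => {
  if (err) {
    return console.error(err.message);
  }
  console.log(`push data to http://${serializer.address}:${serializer.port}`);
});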

5- Workflow

When the temperature service starts, the ContainerPilot binary starts and wraps the Node.js code of the service. ContainerPilot reads the containerpilot.json configuration file and regularly sends the service’s health status to the local Consul agent (which is in charge of forwarding this information to the consul service). If the serializer backend service changes, ContainerPilot sends a SIGHUP signal to the node process.

This is a summary of the flow within the temperature service; I still need to dig deeper to fully understand the whole thing.

Conclusion

In this article, I hope I managed to give a comprehensive overview of Triton and its cloud version, and of how they can be used to run Docker-related workloads.

Looking at part of the application developed during the innovator workshop, we also got an overview of the Autopilot Pattern using ContainerPilot. Porting an app to this pattern is not a difficult task once the underlying architecture is understood. I definitely need to take a closer look at this approach soon in order to better understand the details, though.

An interesting exercise would be to move Docker’s voting-app to Autopilot using ContainerPilot. I have just started with the result service, but a lot still needs to be done (https://github.com/lucj/example-voting-app/tree/containerpilot). Pull requests are welcome.

Do you use the Autopilot Pattern with ContainerPilot in your distributed containerized applications?
