A basic developer workflow with Docker-EE

TL;DR

Getting started with Docker-EE does not require much from a developer or operator. The UCP GUI provides all the information needed to configure everything.

The cluster

In a previous post we set up a simple UCP cluster composed of 3 Ubuntu nodes. We will use this same cluster to go deeper into our exploration of Docker-EE.

As noted in that post, the configuration is not production ready, as that would require at least 3 managers for the UCP cluster and 3 replicas for DTR.

Create teams

We start by creating a Developers team from the User Management tab. From the Permissions panel we can also create labels to allow fine-grained access to resources carrying the same labels. We will not create any labels at this stage, though.

Creation of a Developer team

Following the same process, we create a team dedicated to Operators. We end up with 2 teams, with no users assigned to them yet.

Developers and Operators teams created

Create users

Let’s create a couple of users:

  • Moby with View Only permission
  • Gordon with Full Access permission
Moby and Gordon users created

Once the users are created, we assign each one to their respective team:

  • Moby to the Developers team
  • Gordon to the Operators team
Moby added to Developers team / Gordon added to Operators team

The difference in permissions between the two users reflects their respective roles:

  • a developer usually does not need administrative options
  • an operator needs permissions to fine-tune applications

Client Bundles

To manage applications, a user can use the UCP GUI or the local Docker CLI. For the second option, a public/private key pair and the root CA certificate of the Swarm first need to be downloaded and provided to the client so it can securely communicate with the Docker daemon running on the Swarm manager.

From the user’s details, the administrator can create and download the client bundle containing the root CA and the user’s keys, and then provide the bundle to the user.

Creation of a client bundle for Moby and Gordon users
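A user can also fetch their own bundle without going through the GUI, using UCP's REST API. Below is a minimal sketch, assuming a UCP 2.x-era API, jq installed, and $UCP_HOST pointing at a manager node; -k skips TLS verification, acceptable only for the self-signed certificates of a test cluster.

```shell
# Authenticate against UCP and retrieve a session token
# (endpoint names assume the UCP 2.x API; adjust for your version)
AUTHTOKEN=$(curl -sk -d '{"username":"moby","password":"<password>"}' \
  "https://$UCP_HOST/auth/login" | jq -r .auth_token)

# Download the client bundle as a zip file
curl -sk -H "Authorization: Bearer $AUTHTOKEN" \
  "https://$UCP_HOST/api/clientbundle" -o ucp-bundle-moby.zip
```

This is handy for scripting the on-boarding of new users, since each user can retrieve a bundle with their own credentials.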

Bundle usage

Let’s consider the user moby and see how the client bundle is used. As the commands below illustrate, on a Linux environment the env.sh shell script needs to be sourced; it defines the Docker host to target and specifies the location of the certificates.

# Unzip the client bundle
$ unzip ucp-bundle-moby.zip
Archive: ucp-bundle-moby.zip
extracting: ca.pem
extracting: cert.pem
extracting: key.pem
extracting: cert.pub
extracting: env.sh
extracting: env.ps1
extracting: env.cmd
# Let's inspect what is inside the shell script
$ cat env.sh
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH="$(pwd)"
export DOCKER_HOST=tcp://node-01:443
#
# Bundle for user moby
# UCP Instance ID HDOK:24LN:JIRJ:TECT:TRBK:UZ4G:CP6J:VL2I:EPYF:FW75:CR6U:GSNC
#
# Run this command from within this directory to configure your shell:
# eval $(<env.sh)
# Set the environment variable defined in env.sh
$ eval $(<env.sh)

Once the local Docker CLI is configured to target the Swarm cluster, each Docker command is issued against node-01 (the Swarm Leader).

$ docker image ls
REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
46.101.7.50/admin/www      <none>   46102226f2fd   6 days ago    109MB
docker/dtr-jobrunner       2.2.4    cc30992c847a   2 weeks ago   573MB
docker/dtr-nginx           2.2.4    9fd5b8c16b46   2 weeks ago   117MB
docker/dtr                 2.2.4    734d732895fc   2 weeks ago   124MB
docker/dtr-rethink         2.2.4    3929fa9df10b   2 weeks ago   99.8MB
docker/dtr-notary-server   2.2.4    0d581eb8ac3b   2 weeks ago   122MB
docker/dtr-notary-signer   2.2.4    f23316ab4219   2 weeks ago   121MB
docker/dtr-registry        2.2.4    c1145457a464   2 weeks ago   119MB
docker/dtr-api             2.2.4    f928c75ce284   2 weeks ago   448MB
docker/dtr-garant          2.2.4    8d87da5597ab   2 weeks ago   114MB
docker/dtr-postgres        2.2.4    63410769785c   2 weeks ago   56.7MB

The images listed above are the ones available on the Swarm Manager.

Developer view

Let’s log in to the UCP GUI as the moby user.

Developer (Read Only) view of the UCP cluster

From the screenshot above, we can see that moby, who is in the Developers team, does not have the same view of the system as the admin user we used before.

Next we will see how moby can push images to the registry (remember the DTR we set up in the previous post?), but we first need to set up an organization and create a repository inside it.

Set up an organization in DTR

From the DTR GUI we create an organization named whalecorp and add 2 teams, Developers and Operators, inside it. The interface makes those steps really intuitive.

Creation of the whalecorp organization and two teams within it

The Moby and Gordon users we created earlier are visible from within DTR. We can now add them to whalecorp’s Developers and Operators teams respectively.

Adding Moby and Gordon users to Developers and Operators teams within Whalecorp organization

Let’s now create a repository within this organization.

Create a repository

The organization’s new project is to create an API, so we need a repository to store all the images related to it. From the organization menu, we create a new repository named api.

Creation of the whalecorp/api repository

For the users in the Developers and Operators teams to have access to the repository, it needs to be added at the team level.

Adding whalecorp/api repository for the Developers team

Putting everything into action

The moby user develops the API on his development machine.

Development of the API

This first (and very simple) version of the API needs to:

  • implement an HTTP POST endpoint on /data
  • listen on port 1337 unless the PORT environment variable is provided
  • reply with a 201 HTTP status code (creation)
  • display the received data on the standard output

It is developed in Node.js and is composed of several files:

  • app.js: defines the HTTP POST endpoint
  • index.js: wrapper calling the app
  • test/functional.js: simple test
  • package.json: lists the dependencies of the application
// app.js
// Load dependencies
const express = require('express'),
      bodyParser = require('body-parser'),
      winston = require('winston');

// Create express application
let app = module.exports = express();

// Body parser configuration
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

// Handle incoming data
app.post('/data', (req, res, next) => {
  winston.info(req.body);
  return res.sendStatus(201);
});
// index.js
// Load dependencies
const util = require('util'),
      winston = require('winston'),
      app = require('./app');

// Define API port
let port = process.env.PORT || 1337;

// Run API
app.listen(port, function(){
  winston.info(util.format("server listening on port %s", port));
});
// test/functional.js
const request = require('supertest'),
      app = require('../app'),
      winston = require('winston'),
      util = require('util'),
      port = 3000,
      baseURL = util.format('http://localhost:%s', port);

before(function(){
  app.listen(port, function(){
    winston.info(util.format("server listening on port %s", port));
  });
});

describe('Creation', function(){
  it('should create dummy data', function(done){
    request(baseURL)
      .post('/data')
      .set('Content-Type', 'application/json')
      .send({"ts": "2017-03-11T15:00:53Z", "type": "temp", "value": 34, "sensor_id": 123 })
      .expect(201)
      .end(function(err, res){
        if (err) throw err;
        done();
      });
  });
});
// package.json
{
  "name": "iot",
  "version": "1.0.0",
  "description": "IoT example project",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "test": "./node_modules/.bin/mocha test/functional.js"
  },
  "author": "Luc Juggery",
  "license": "MIT",
  "dependencies": {
    "body-parser": "^1.17.1",
    "express": "^4.15.2",
    "winston": "^2.3.1"
  },
  "devDependencies": {
    "mocha": "^3.2.0",
    "supertest": "^3.0.0"
  }
}

The API can be run with the following commands:

# Install the dependencies (read from package.json)
$ npm install
# Run the web server
$ npm start
info: server listening on port 1337

To test the API manually, we send a POST request using curl, providing a JSON payload that simulates a temperature reading sent by an IoT device.

$ curl -XPOST -H "Content-Type: application/json" -d '{"ts": "20170501T231254", "type": "temp", "value": 23, "sensor_id": 123 }' http://localhost:1337/data
Created

Tests can be run in a more automated way with the following command:

$ npm test
> iot@1.0.0 test /Users/luc/UCP/api
> mocha test/functional.js
Creation
info: server listening on port 3000
info: ts=2017-03-11T15:00:53Z, type=temp, value=34, sensor_id=123
✓ should create dummy data (50ms)
1 passing (64ms)

Creation of the image

To containerize the simple Node.js API, the following Dockerfile is used:

FROM mhart/alpine-node:7.7.1
ENV LAST_UPDATED 20170501T231500

# Copy list of server side dependencies
COPY package.json /tmp/package.json

# Install dependencies
RUN cd /tmp && npm install

# Copy dependencies libraries
RUN mkdir /app && cp -a /tmp/node_modules /app/

# Copy src files
COPY . /app/

# Use /app working directory
WORKDIR /app

# Expose http port
EXPOSE 1337

# Run application
CMD ["npm", "start"]

The image is built with the following command:

$ docker image build -t api:1.0 .
Sending build context to Docker daemon 5.048MB
Step 1/9 : FROM mhart/alpine-node:7.7.1
7.7.1: Pulling from mhart/alpine-node
0a8490d0dfd3: Pull complete
d4c7568ed38f: Pull complete
Digest: sha256:be8543a6ff29c78b69fda79034d60a9ed3171bd29df3a420cdf387312f1b1df7
Status: Downloaded newer image for mhart/alpine-node:7.7.1
---> e1a533c514f2
Step 2/9 : ENV LAST_UPDATED 20170501T231500
---> Running in 0b3da4e83e02
---> c0cb69c4b816
Removing intermediate container 0b3da4e83e02
Step 3/9 : COPY package.json /tmp/package.json
---> ef704b83a50d
Removing intermediate container 5c7743dadd5f
Step 4/9 : RUN cd /tmp && npm install
---> Running in 53effdf575b9
...
---> a846edc98905
Removing intermediate container 53effdf575b9
Step 5/9 : RUN mkdir /app && cp -a /tmp/node_modules /app/
---> Running in e95127d3795b
---> 441dd410b751
Removing intermediate container e95127d3795b
Step 6/9 : COPY . /app/
---> 409a902f658f
Removing intermediate container dd9591370775
Step 7/9 : WORKDIR /app
---> 7006e50f281f
Removing intermediate container 23b470437e94
Step 8/9 : EXPOSE 1337
---> Running in e0e8d9195b9c
---> 26351fdc1a17
Removing intermediate container e0e8d9195b9c
Step 9/9 : CMD npm start
---> Running in 1c50279b8aee
---> a15644ebbd7f
Removing intermediate container 1c50279b8aee
Successfully built a15644ebbd7f

Let’s check that the image was correctly created.

$ docker image ls api
REPOSITORY   TAG   IMAGE ID       CREATED         SIZE
api          1.0   a15644ebbd7f   2 minutes ago   72.8MB

Push the image to DTR

As the image will be stored in DTR, the tag needs to follow the format:

REGISTRY_URL/REPOSITORY:VERSION
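For our setup, these pieces assemble as follows (a quick sketch; 46.101.7.50 is node-02’s address, where DTR was installed):

```shell
# Build the fully qualified image reference expected by DTR
REGISTRY_URL=46.101.7.50     # DTR address (node-02)
REPOSITORY=whalecorp/api     # organization/repository
VERSION=1.0
IMAGE_REF="${REGISTRY_URL}/${REPOSITORY}:${VERSION}"
echo "${IMAGE_REF}"          # 46.101.7.50/whalecorp/api:1.0
```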

We comply with this format by creating a new tag for the image created above (api:1.0).

$ docker tag api:1.0 $NODE2_IP/whalecorp/api:1.0

Before pushing an image to the repository, moby needs to log in to DTR (installed on node-02).

$ docker login $NODE2_IP
Username (moby): moby
Password:
Login Succeeded

The image can now be pushed:

$ docker image push $NODE2_IP/whalecorp/api:1.0
The push refers to a repository [46.101.7.50/whalecorp/api]
19799f270776: Pushed
883c042359f6: Pushed
631467961ca8: Pushed
a7c0b56247c7: Pushed
8e254b51dfd6: Pushed
60ab55d3379d: Pushed
1.0: digest: sha256:5a45bf92f557febb3ad34346404d31bfd981915bc93aa6379298a3db735fc859 size: 1580

If we log in to DTR as the moby user, we can see that the new image has been pushed correctly.

Running the application in UCP

Let’s connect to UCP as the Gordon user and run version 1.0 of the API (the one we just pushed). We create a service, specify the image tagged 46.101.7.50/whalecorp/api:1.0, and publish the API’s port 1337 to a port of the cluster (automatically selected).

Creation of a service based on whalecorp/api:1.0

Once the service is created, we can see that the assigned port is 30000 (the first port within the 30000–32767 range).

Service successfully deployed and exposed on port 30000

Note: instead of using the UCP web interface to create the service, Gordon could have used his local Docker CLI with the UCP client bundle to target the Swarm manager.
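As a sketch, the CLI equivalent could look like this (the service name api is our choice; publishing only the target port lets Swarm pick the published port automatically, which is how port 30000 was assigned above):

```shell
# Run from a shell where Gordon's client bundle has been sourced
# (eval $(<env.sh)), so the CLI targets the Swarm manager
docker service create \
  --name api \
  --publish 1337 \
  46.101.7.50/whalecorp/api:1.0
```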

Thanks to the ingress routing mesh, a request to the API can be sent to any node of the cluster. At this stage, the API is running a single task, scheduled on node-01. As we can see below, the API endpoint can be consumed from each node of the cluster.

# From node-01
$ curl -XPOST -H "Content-Type: application/json" -d '{"ts": "20170501T231254", "type": "temp", "value": 23, "sensor_id": 123 }' http://46.101.7.54:30000/data
Created
# From node-02
$ curl -XPOST -H "Content-Type: application/json" -d '{"ts": "20170501T231254", "type": "temp", "value": 23, "sensor_id": 123 }' http://46.101.7.50:30000/data
Created
# From node-03
$ curl -XPOST -H "Content-Type: application/json" -d '{"ts": "20170501T231254", "type": "temp", "value": 23, "sensor_id": 123 }' http://46.101.7.53:30000/data
Created

The logs from UCP show the 3 calls above.

Summary

I hope this overview of a simple Docker-EE setup helped you grasp some basic concepts. Obviously, UCP and DTR have many more features than we have covered in this article.
