Docker Application Packages in a Nutshell

Alexander Akhmetov
Published in Elements blog · Jan 22, 2019

A few months ago, at DockerCon US 2018, an experimental tool was announced: docker-app. This tool, as Docker describes it, is built to make docker-compose files reusable and shareable.

In this article, we will create a Docker application package with a back-end service and a database, share it on Docker Hub, and look at how it works internally. As an example, I use the flask-graphql-neo4j application created by my colleagues Charles and Yahia.

docker-compose

First, let’s have a look at a simple docker-compose file.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services.

Here is a configuration file for the example application:
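A minimal version of it might look like this (a sketch based on the application described below; the actual file in the repository may differ):

version: "3.6"
services:
  app:
    image: elementsinteractive/flask-graphql-neo4j:latest
    ports:
      - "8080:8080"
    depends_on:
      - neo4j
  neo4j:
    image: neo4j:3.5.0
    volumes:
      - neo4j-data:/data
volumes:
  neo4j-data: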

The file describes two services: an application and a database. To start these services on another computer, you must have a copy of the file on it.

If you need different parameters for different environments or want to customize a service, you have to extend the configuration file and maintain multiple compose files. Alternatively, you could use Ansible, but that deserves an article of its own.

Another problem is that once an application consists of several containers, we can no longer just run docker pull or docker push. docker-compose.yml describes the relations between the containers and stores the configuration, and you need a copy of the file to start the services. This can be done much more easily, and that is where docker-app comes in.

docker-app

This example uses Docker Swarm and assumes that you have installed and configured it.

docker-app (installation instructions) can build an image from a docker-compose file and push it to a registry, so you can share the whole application as a single image URL.

It also lets you define variables and metadata for the application, and it works with both Docker Swarm and Kubernetes. It can even create Helm charts for your Kubernetes cluster!

docker-app goals:

  • Make Compose-based applications shareable on Docker Hub and Docker Trusted Registry
  • Support a stronger separation of concerns between the application description and per-environment settings

Let’s create our docker application package:

$ docker-app init flask-graphql-neo4j-app

The command above creates a directory flask-graphql-neo4j-app.dockerapp with three files inside:

$ tree flask-graphql-neo4j-app.dockerapp

flask-graphql-neo4j-app.dockerapp
├── docker-compose.yml
├── metadata.yml
└── settings.yml

The first one is a regular docker-compose file; let’s write our configuration there:

flask-graphql-neo4j-app.dockerapp/docker-compose.yml
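For our application it looks roughly like this (a sketch: the environment variable names passed to the app container are placeholders, and the real file in the repository may differ):

version: "3.6"
services:
  app:
    image: elementsinteractive/flask-graphql-neo4j:latest
    ports:
      - "${app.port}:${app.port}"
    environment:
      HOST: ${app.host}
      PORT: ${app.port}
      NEO4J_PORT: ${neo4j.port}
    depends_on:
      - neo4j
  neo4j:
    image: neo4j:3.5.0
    environment:
      NEO4J_AUTH: ${neo4j.auth}
    volumes:
      - neo4j-data:/data
volumes:
  neo4j-data: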

As you can see, this docker-compose file has variables: ${app.host}, ${app.port}, etc. They look almost like the environment variables you can use in any docker-compose file, but docker-app takes their values from the settings.yml file.

Since docker-app deploys your application to Docker Swarm or Kubernetes, you can use the deploy attribute in the compose file to set limits for your containers.
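For example, the app service can set its replica count and resource limits through these variables (a sketch; the attribute names follow the Compose file reference):

    deploy:
      replicas: ${app.deploy.replicas}
      resources:
        limits:
          cpus: "${app.limits.cpu}"
          memory: ${app.limits.memory}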

A default value for each variable must be defined in settings.yml (later we will see how to override them):

flask-graphql-neo4j-app.dockerapp/settings.yml
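Judging by the values that docker-app inspect prints below, it looks roughly like this (a sketch; the grouping in the real file may differ):

app:
  name: flask-graphql-neo4j-app
  namespace: elementsinteractive
  version: 0.1.0
  host: 0.0.0.0
  port: 8080
  deploy:
    replicas: 3
  limits:
    cpu: 0.25
    memory: 50M
  maintainers:
    - name: Alexander Akhmetov
      email: aleksandr.akhmetov@elements.nl
neo4j:
  auth: none
  port: 7687
  limits:
    cpu: 0.25
    memory: 50M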

The metadata.yml file contains basic information about the application:

flask-graphql-neo4j-app.dockerapp/metadata.yml
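For this package it contains something like the following (a sketch; the exact fields generated by docker-app init may differ slightly):

version: 0.1.0
name: flask-graphql-neo4j-app
description: ""
namespace: elementsinteractive
maintainers:
  - name: Alexander Akhmetov
    email: aleksandr.akhmetov@elements.nl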

Validation

So, we have a flask-graphql-neo4j-app.dockerapp directory with three files inside. It’s our application package description, and we can check if it has any errors and inspect what’s inside:

$ docker-app validate; echo $?
0
$ docker-app inspect
flask-graphql-neo4j-app 0.1.0
Maintained by: Alexander Akhmetov <aleksandr.akhmetov@elements.nl>

Services (2) Replicas Ports Image
------------ -------- ----- -----
app          3        8080  elementsinteractive/flask-graphql-neo4j:latest
neo4j        1              neo4j:3.5.0

Volume (1)
----------
neo4j-data

Settings (13)        Value
-------------        -----
app.deploy.replicas  3
app.host             0.0.0.0
app.limits.cpu       0.25
app.limits.memory    50M
app.maintainers.0    map[name:Alexander Akhmetov email:aleksandr.akhmetov@elements.nl]
app.name             flask-graphql-neo4j-app
app.namespace        elementsinteractive
app.port             8080
app.version          0.1.0
neo4j.auth           none
neo4j.limits.cpu     0.25
neo4j.limits.memory  50M
neo4j.port           7687

Deployment

Now let’s start the application and make some requests to it:

$ docker-app deploy
Creating network flask-graphql-neo4j-app_default
Creating service flask-graphql-neo4j-app_app
Creating service flask-graphql-neo4j-app_neo4j

The deploy command creates all the necessary resources for the app: networks, services, volumes, etc.

The orchestrator parameter of the docker-app deploy command lets you choose between Kubernetes and Swarm for the deployment process.

The application is running!

You can see this with docker stack ls and docker service ls commands:

$ docker stack ls
NAME                      SERVICES            ORCHESTRATOR
flask-graphql-neo4j-app   2                   Swarm

We just created a stack with two services and four containers: three back-end containers and one DB container, as described in docker-compose.yml (you can check this with docker service ls). Now we can make an HTTP request to the application to be sure that it works:

$ curl -X POST http://127.0.0.1:8080/graphql \
-H "Content-Type: application/json" \
--data '{ "query": "mutation {c1:create_customer(name: \"aleksandr\" email: \"aleksandr.akhmetov@elements.nl\") {customer{name} success}}"}' | python -m json.tool

{
    "data": {
        "c1": {
            "customer": {
                "name": "aleksandr"
            },
            "success": true
        }
    }
}

Settings

docker-app allows us to override default settings. Just create a new yml file and put new values inside, for example, production.yml:

$ cat > production.yml
app:
  port: 9090
$ docker-app -f production.yml inspect | grep app.port
app.port    9090
$ docker-app inspect | grep app.port
app.port    8080

You don’t need to copy everything from settings.yml; just override the variables you need.
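The same settings file can be passed when deploying, so per-environment overrides stay out of the package itself (mirroring the inspect example above; depending on the docker-app version, the flag may also be accepted after the subcommand):

$ docker-app -f production.yml deploy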

Sharing

It’s time to push the application to Docker Hub. Run the docker-app push command, which uses the namespace, name and version from metadata.yml to build and push an image.

In our case it’s elementsinteractive/flask-graphql-neo4j-app:0.1.0:

$ docker-app push
sha256:c4c602cd06c34c6b440897d49abc489d015e96b3cca05e68139dd5557f57456e

You can use docker-app without Kubernetes or Swarm too. Just run the docker-app render command and it will output the rendered content of the docker-compose.yml:

$ docker-app render | docker-compose -f - up
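For Kubernetes users there is also a helm command that turns the same package into a Helm chart (a sketch of the invocation; the flags and the generated chart layout depend on the docker-app version):

$ docker-app helm flask-graphql-neo4j-app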

How does it work?

OK, we’ve just created a new Docker application package and pushed it to the registry. We have only one image, yet it starts several containers, and even with default parameters. To see how this is possible, let’s dig a little deeper.

Image layers

ubuntu:18.04 layers inspected by dive

A Docker image consists of a set of layers. For each instruction in the Dockerfile that changes the filesystem, Docker generates a layer with the new files, so each layer is a set of differences from the previous one. The docker push command then uploads the layers to a registry if it does not have them yet. You can read more about layers in the documentation.
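For example, in a Dockerfile like this one each filesystem-changing instruction produces its own layer, and you can list them with docker history (a toy example, not the article’s application):

FROM python:3.7-slim                         # layers of the base image
COPY . /app                                  # one layer with the application files
RUN pip install -r /app/requirements.txt     # one layer with the installed packages

$ docker history <image-name>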

Let’s download our new image and see what’s inside.

$ docker pull elementsinteractive/flask-graphql-neo4j-app.dockerapp:0.1.0
0.1.0: Pulling from elementsinteractive/flask-graphql-neo4j-app.dockerapp
386325240c45: Pulling fs layer
operating system is not supported

Oops, something is wrong! If you enable experimental features (put "experimental": "enabled" in ~/.docker/config.json), you can list all the supported operating systems and architectures for the image:

$ docker manifest inspect -v elementsinteractive/flask-graphql-neo4j-app.dockerapp:0.1.0
{
    ...
    "Descriptor": {
        ...
        "platform": {
            "architecture": "config",
            "os": "config"
        }
    },
    "SchemaV2Manifest": {...}
}

We see that the value of the os and architecture keys is config. That’s why docker can’t pull this image and returns the operating system is not supported error.

To download an image, the docker pull command first retrieves its manifest, which contains information about the image and its layers. Using this information, Docker downloads and unpacks all the necessary data.
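For comparison, the same command on a regular image shows a real platform (output trimmed; the exact fields depend on the image):

$ docker manifest inspect -v ubuntu:18.04
...
"platform": {
    "architecture": "amd64",
    "os": "linux"
}
...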

The same mechanism is used to store your application package. When you run a docker-app deploy {image_name} command, it retrieves the manifest for the image. In contrast to a regular image, this one is incompatible with docker commands and works only with docker-app. It also has a special layer, which is a tar.gz archive with your application package files (docker-compose.yml, metadata.yml, settings.yml), and docker-app uses these files to start your services.

You can download the first layer of the image with this script and check it yourself:
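The core of such a script is just a few requests against the Docker Hub registry API (a minimal sketch; the linked script may differ, and error handling is omitted):

import requests

IMAGE = "elementsinteractive/flask-graphql-neo4j-app.dockerapp"
TAG = "0.1.0"

# Get an anonymous pull token for the repository from Docker Hub's auth service
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io", "scope": f"repository:{IMAGE}:pull"},
).json()["token"]
headers = {"Authorization": f"Bearer {token}"}

print(f"Getting content for {IMAGE}:{TAG}")

# Fetch the image manifest, which lists the layers and their digests
manifest = requests.get(
    f"https://registry-1.docker.io/v2/{IMAGE}/manifests/{TAG}",
    headers={**headers, "Accept": "application/vnd.docker.distribution.manifest.v2+json"},
).json()

# Download the first layer blob and save it to disk
digest = manifest["layers"][0]["digest"]
blob = requests.get(f"https://registry-1.docker.io/v2/{IMAGE}/blobs/{digest}", headers=headers)
with open("blob.tar.gz", "wb") as f:
    f.write(blob.content)

print("File downloaded: blob.tar.gz")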

$ python3.7 download_blob.py
Getting content for elementsinteractive/flask-graphql-neo4j-app.dockerapp:0.1.0
File downloaded: blob.tar.gz
$ tar -tf blob.tar.gz
metadata.yml
docker-compose.yml
settings.yml

That’s it! You can try to deploy the example application using docker-app right now from the Docker Hub :-)

$ docker-app deploy elementsinteractive/flask-graphql-neo4j-app:0.1.0
