Rita Vrataski: We should just reset.
— Edge of Tomorrow

Life and death of a container

Self-healing resets, health-check probes, and live restore

Luis Herrera Benítez
Published in DevOpsion · 15 min read · Jul 11, 2016


Neo: And why would a program be deleted?
Oracle: Maybe it breaks down. Maybe a better program is created to replace it — happens all the time, and when it does, a program can either choose to hide here, or return to The Source.
— The Matrix

Docker containers are prepared to die at any time: you can stop, kill and destroy them quickly, and when you do, all data created during their existence is wiped out by default. But, like Agent Smith in The Matrix, containers can be reloaded within milliseconds of their termination. It is in this sense that we can say containers, like Agent Smith, are transient and disposable. It's no surprise, then, that code and data pipelines, which require the execution of short-lived tasks, are among the most common use cases for containers. This doesn't mean they are only suitable for running short-lived commands: they are perfectly capable of running long-lived daemons like web servers or application servers, and they can also host databases, persisting data with native I/O performance through volumes. In fact, MongoDB, MySQL and Postgres are among the most popular images on Docker Hub.

A container's life story

So, what causes differences in the life expectancy of containers? Let's take a closer look at a container's existence to find out. How do we do that? The Docker Engine records the events of a container's lifetime, among other crucial information, in /var/log/docker.log. But we can also use the lesser-known and cleaner client command "docker events" to get a glimpse at the life of a container. This command queries the Docker Engine for the main events since or until a particular point in time, and that's exactly what we need to understand a container's life story. Let's see it in action by firing up a container first:

$ docker -v
Docker version 1.12.0-rc3, build 91e29e8, experimental
$ t0=$(date "+%Y-%m-%dT%H:%M:%S")
$ docker run --name=ephemeral -t lherrera/cowsay 'I am ephemeral'

Unable to find image 'lherrera/cowsay:latest' locally
Pulling repository docker.io/lherrera/cowsay
 ________________
< I am ephemeral >
 ----------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

and then, running “docker events” to see what events we can capture:

$ t1=$(date "+%Y-%m-%dT%H:%M:%S") 
$ docker events --since $t0 --until $t1
2016-07-11T14:00:21.860900801+02:00 image pull lherrera/cowsay:latest (name=lherrera/cowsay)
2016-07-11T14:00:21.947182894+02:00 container create 177c8c368a6ad874d7de65fbdc2bb79d508b2b64345228e1617159829ec810c1 (image=lherrera/cowsay, name=ephemeral)
2016-07-11T14:00:21.949485645+02:00 container attach 177c8c368a6ad874d7de65fbdc2bb79d508b2b64345228e1617159829ec810c1 (image=lherrera/cowsay, name=ephemeral)
2016-07-11T14:00:21.960760784+02:00 network connect 43d00f1672ec045e393f278d50e897c9ed8c6a0ddb03db5c5bbbf3b431d332b2 (container=177c8c368a6ad874d7de65fbdc2bb79d508b2b64345228e1617159829ec810c1, name=bridge, type=bridge)
2016-07-11T14:00:22.175532881+02:00 container start 177c8c368a6ad874d7de65fbdc2bb79d508b2b64345228e1617159829ec810c1 (image=lherrera/cowsay, name=ephemeral)
(height=27, image=lherrera/cowsay, name=ephemeral, width=144)
2016-07-11T14:00:22.338433205+02:00 container die 177c8c368a6ad874d7de65fbdc2bb79d508b2b64345228e1617159829ec810c1 (exitCode=0, image=lherrera/cowsay, name=ephemeral)
2016-07-11T14:00:22.544153008+02:00 network disconnect 43d00f1672ec045e393f278d50e897c9ed8c6a0ddb03db5c5bbbf3b431d332b2 (container=177c8c368a6ad874d7de65fbdc2bb79d508b2b64345228e1617159829ec810c1, name=bridge, type=bridge)

…and voilà! Now we know what's happening behind the scenes when we launch a container from the command line. First, the Docker client uses the Docker Remote API to ask the Engine to pull the image from Docker Hub, since it couldn't be found locally. Second, the Engine creates the container and attaches the stdout/stderr streams to our terminal. Next, the new container is connected to the default bridge network and the Engine proceeds to start it. When the primary process within the container finishes its work ("cowsay" prints our message), the container finishes too, and dies. With its last breath, the Docker Engine disconnects the container from the default bridge network.
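
Incidentally, you don't have to go through the client to watch these events: the same stream is exposed by the Remote API's /events endpoint. A quick sketch, assuming a local Unix socket and a curl build with --unix-socket support:

$ curl --no-buffer --unix-socket /var/run/docker.sock http://localhost/events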

The container is dead, long live the container!

Ready or not, our container is dead. But, if we type the command "docker ps -a", we can still learn the last words of our deceased friend:

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
177c8c368a6a lherrera/cowsay "/entrypoint.sh 'I am" About a minute ago Exited (0) About a minute ago ephemeral

In our example, it seems that our container died peacefully, with an OK (zero) exit code. Troubled containers end their existence with non-zero exit codes. These exit codes make debugging a bit easier, since you can inspect the final state of the container and its primary process.
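
For instance, a quick way to pull out the exit code and any error message the engine recorded for our container (a small sketch using inspect's format template, which prints 0 for our peaceful "ephemeral" container):

$ docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' ephemeral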

Even when our friend is no longer alive, we can make sure it leaves a lasting legacy, since all container data persists by default until the container is finally destroyed with "docker rm". The command "docker export" lets you save a container's filesystem as a tar archive. Later on, you can create another image from it, on the same Docker host or on a new one, using its counterpart, "docker import".

$ docker export -o ephemeral.tar ephemeral
$ tar tvf ephemeral.tar
....drwxr-xr-x 0 0 0 0 8 jun 18:28 var/spool/
lrwxrwxrwx 0 0 0 0 8 jun 18:28 var/spool/mail -> ../mail
drwxrwxrwt 0 0 0 0 11 jul 14:00 var/tmp/
-rw-r--r-- 0 0 0 178 11 jul 14:00 var/tmp/legacy
$ tar xvf ephemeral.tar var/tmp/legacy
$ cat var/tmp/legacy

Please note that "docker export" will not preserve the history of the container. In fact, when we import the tar file back as an image, it is flattened into a single layer, which also shrinks the resulting image.

$ docker history lherrera/cowsay
IMAGE CREATED CREATED BY SIZE COMMENT
47e12946765b 5 hours ago /bin/sh -c #(nop) ENTRYPOINT ["/entrypoint.s 0 B
<missing> 5 hours ago /bin/sh -c #(nop) COPY file:4150d31823cecdea0 185 B
<missing> 5 hours ago /bin/sh -c apt-get update && apt-get inst 60.43 MB
<missing> 4 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0 B
<missing> 4 weeks ago /bin/sh -c #(nop) ADD file:76679eeb94129df23c 125.1 MB
$ docker import ephemeral.tar lherrera/cowsay:2.0
sha256:866e2c1515a9b35d19a6c44b6a5b7a755b47878c96733acdf8900b4c275ddb8f
$ docker history lherrera/cowsay:2.0
IMAGE CREATED CREATED BY SIZE COMMENT
866e2c1515a9 59 seconds ago 184.1 MB Imported from -

If you are running lots of short-lived foreground containers, all this data can pile up and become a problem. The "docker run --rm" flag automatically removes the container, including its read-write filesystem layer, when it exits:

$ t3=$(date "+%Y-%m-%dT%H:%M:%S")
$ docker run --rm --name=ephemeral2 -t lherrera/cowsay 'I am going to disappear'
$ t4=$(date "+%Y-%m-%dT%H:%M:%S")
$ docker events --since $t3 --until $t4
2016-07-11T21:55:02.217178856+02:00 container create 29c5a483bc3b64292add69be2f2b544801a7669aa76fad44fc5962261246cc1c (image=lherrera/cowsay, name=ephemeral2)
2016-07-11T21:55:02.219351261+02:00 container attach 29c5a483bc3b64292add69be2f2b544801a7669aa76fad44fc5962261246cc1c (image=lherrera/cowsay, name=ephemeral2)
2016-07-11T21:55:02.234063488+02:00 network connect 43d00f1672ec045e393f278d50e897c9ed8c6a0ddb03db5c5bbbf3b431d332b2 (container=29c5a483bc3b64292add69be2f2b544801a7669aa76fad44fc5962261246cc1c, name=bridge, type=bridge)
2016-07-11T21:55:02.472410696+02:00 container start 29c5a483bc3b64292add69be2f2b544801a7669aa76fad44fc5962261246cc1c (image=lherrera/cowsay, name=ephemeral2)
2016-07-11T21:55:02.655638142+02:00 container die 29c5a483bc3b64292add69be2f2b544801a7669aa76fad44fc5962261246cc1c (exitCode=0, image=lherrera/cowsay, name=ephemeral2)
2016-07-11T21:55:02.862385394+02:00 network disconnect 43d00f1672ec045e393f278d50e897c9ed8c6a0ddb03db5c5bbbf3b431d332b2 (container=29c5a483bc3b64292add69be2f2b544801a7669aa76fad44fc5962261246cc1c, name=bridge, type=bridge)
2016-07-11T21:55:02.944440485+02:00 container destroy 29c5a483bc3b64292add69be2f2b544801a7669aa76fad44fc5962261246cc1c (image=lherrera/cowsay, name=ephemeral2)
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ceff5ee74cef lherrera/cowsay "/entrypoint.sh 'I am" 43 minutes ago Exited (0) 43 minutes ago ephemeral

Is there life after death?

Researchers have discovered that an animal's genes can 'live on' for up to four days after its body has died; some genes even become more active after death. OK, there's little hope right now that this will help us become immortal. But, if it's any consolation, with containers we can go well beyond four days and bring them back to life:

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ceff5ee74cef lherrera/cowsay "/entrypoint.sh 'I am" 43 minutes ago Exited (0) 4 days ago ephemeral
$ docker start -a ephemeral
 ________________
< I am ephemeral >
 ----------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

A container's entire life flashes in front of its eyes the second before it dies

The command "docker events" also streams events in real time, giving us not only a lot of visibility into the inner workings of the Docker Engine but also the possibility of reacting to container events automatically. Tools like "Registrator" use this mechanism to register and de-register services with Consul as they come online or go offline, for example, making it easier to manage container-based setups. Why is this streaming a big deal? Check the following article to get a sense of how you could leverage it, directly or through third-party tools: Graham Jenson describes how to use "Registrator", "Consul" and "Consul-template" to alert the load balancer that new instances are available or no longer providing service.
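
As a minimal sketch of that idea (not Registrator itself, just an illustration), you can pipe the live event stream into a shell loop and trigger your own registration hooks on every container start and die:

$ docker events --filter 'type=container' \
    --filter 'event=start' --filter 'event=die' |
  while read -r event; do
    # hypothetical hook: register or deregister the service here
    echo "reacting to: $event"
  done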

This is your life

Matt Good from GliderLabs has put together a helpful chart illustrating the main events triggered throughout the lifecycle of a Docker container and the commands associated with them:

Docker Events Explained (image from GliderLabs)

You may feel like you've seen this chart before. Indeed, if you have worked with Linux signals, you'll agree that it resembles the Linux process lifecycle. This similarity should not be a surprise, because containers are essentially Linux processes: the Docker Engine simply uses Linux kernel features to build fences around key OS resources, so that processes running inside containers can interact with the filesystem or the network as if they were the only processes in the system. Signals provide a way of handling asynchronous events and can be used by processes running in Docker containers. If you want to learn more about this, check "Trapping signals in Docker Containers" by Grigoriy Chudnov.
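
For example, a minimal entrypoint script can trap the SIGTERM that "docker stop" sends and shut down cleanly instead of waiting to be SIGKILLed. A sketch, with a hypothetical entrypoint.sh:

#!/bin/sh
# hypothetical entrypoint.sh: exit gracefully when "docker stop" sends SIGTERM
trap 'echo "SIGTERM received, shutting down"; exit 0' TERM
echo "running as PID $$"
while true; do
  sleep 1
done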

The unbearable lightness of being a container

Like Linux, Docker comes with a kill command to terminate stalled or unwanted containers without having to log out or restart the underlying server. By default, the kill command sends a SIGKILL signal to the main process running in the container (and to that process only!). The stop command, on the other hand, sends a SIGTERM and, after a grace period, follows up with a SIGKILL. The "kill" and "stop" commands and their parameters are well covered in this article from Brian DeHamer, so we'll focus instead on the remaining, lesser-known commands and events shown in the chart above: pause, OOM, restart and destroy.
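
Both commands take parameters worth knowing about. For instance (assuming a running container named web, like the one in the next section), you can stretch the stop grace period or send a different signal entirely:

$ docker stop --time 30 web        # SIGTERM, then SIGKILL after 30 seconds
$ docker kill --signal SIGHUP web  # send a custom signal instead of SIGKILL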

Making a pause

Why "pause" a container? Well, you may need to suspend a container that is slowing things down, or you may want to take a backup without a live process writing files and getting in the way:

$ docker run -d -p 80:80 --name web nginx:alpine
e10c71e23fc7fb3b876d43f4cb2ac732c2f82887d3b13237a180b65154efb13d
$ docker pause web
web
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e10c71e23fc7 nginx:alpine "nginx -g 'daemon off" 40 seconds ago Up 39 seconds (Paused) 0.0.0.0:80->80/tcp, 443/tcp web
$ docker export -o web.tar web
$ curl -v 0.0.0.0:80
Rebuilt URL to: 0.0.0.0:80/
* Trying 0.0.0.0...
* Connected to 0.0.0.0 (0.0.0.0) port 80 (#0)
> GET / HTTP/1.1
> Host: 0.0.0.0
> User-Agent: curl/7.43.0
> Accept: */*
(in another terminal type in 'docker unpause web')
< HTTP/1.1 200 OK
< Server: nginx/1.11.1
< Date: Mon, 11 Jul 2016 12:24:19 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Thu, 23 Jun 2016 20:12:30 GMT
< Connection: keep-alive
< ETag: "576c42ae-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host 0.0.0.0 left intact

Some people predict that “docker pause” could be used in the future to support “live” migration of containers between Docker Engines. In principle, talking about live migration makes little sense with containers since they are stateless and disposable, and you could fire up a new one in milliseconds but…

Defending yourself from being choked

By default, all containers are created equal: they all get the same proportion of CPU cycles and block I/O, and they can use as much memory as they want. But we can, and under certain circumstances should, change this treatment with the runtime constraint parameters, for example to prevent a buggy application from going haywire in production and requesting memory like there's no end, choking the server. Docker comes to the rescue again: as the previous diagram implies, OOM (Out Of Memory) events can cause the container to die. But we need to set the memory limits first.

Let’s simulate OOM scenarios to show how the Docker Engine could help us if we set memory hard limits:

$ docker run -it -m 4m ubuntu:14.04 bash
root@cffc126297e2:/# python3 -c 'open("/dev/zero").read(5*1024*1024)'
Killed
root@cffc126297e2:/# exit
$
$ docker run -m 4m ubuntu:14.04 python3 -c 'open("/dev/zero").read(5*1024*1024)'

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
95a939421187 ubuntu:14.04 "python3 -c open(\"/de" 3 seconds ago Exited (137) 2 seconds ago dreamy_rosalind

(This example is based primarily on technique #94, page 301, of Docker in Practice.) Notice how, in the first case, it's a process (but not the main one) that is killed, whereas in the second case, it's the container itself! Why? In the first run, bash is the container's main process and the Python interpreter it spawned is the one the kernel's OOM killer terminates, so the container keeps going; in the second run, Python is the main process, so when it is killed the container dies with it, reporting exit code 137 (128 + 9, i.e. SIGKILL).
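
If you want to confirm that a container was indeed killed for memory reasons, the inspect output records it; for the auto-named container from the second run, something like this should report the OOMKilled flag alongside the 137 exit code we saw above:

$ docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' dreamy_rosalind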

Health checks that could save your container

But why wait until it's too late? Why not check proactively whether the application is misbehaving before it hits some hard limit? Bear in mind that the Linux kernel will only kill a process under exceptional circumstances, such as extreme resource starvation, and by then it may be too late in the game.

Well, as of Docker 1.12, rather than relying on runtime constraints alone, you can also launch containers with user-defined health-check probes. For example, we could use these health checks to validate periodically that a web server can still handle new connections, rather than merely guarding against a race condition or a memory overflow.

When we specify a health check as a "docker run" option or in the Dockerfile, the "docker ps" command will show a health status in addition to the regular container status. As you'll see in the following command sequence, when we first issue "docker ps" the health probe is still "starting"; once the first health-check probe passes, the container becomes "healthy". Rather than using "docker ps", we'll use "docker inspect" this time to show the container's health status. Finally, we'll simulate "illness" by deleting a configuration file. After a certain number of probe failures, we'll see how the container is marked as "unhealthy".

$ docker run --name=web -d \
--health-cmd='stat /etc/nginx/nginx.conf || exit 1' \
--health-interval=2s \
nginx:alpine
623ced89802a1b5d3701fc37b9b19d30c934190914febf46f58225bf376dcf75
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
623ced89802a nginx:alpine "nginx -g 'daemon off" 2 seconds ago Up 1 seconds (health: starting) 80/tcp, 443/tcp web
$ sleep 2; docker inspect --format='{{.State.Health.Status}}' web
healthy
$ docker exec web rm /etc/nginx/nginx.conf
$ sleep 2; docker inspect --format='{{json .State.Health}}' web
{"Status":"unhealthy","FailingStreak":13,"Log":[{"Start":"2016-07-11T12:52:17.309171219Z","End":"2016-07-11T12:52:17.393844997Z","ExitCode":1,"Output":"stat: can't stat '/etc/nginx/nginx.conf': No such file or directory\n"},{"Start":"2016-07-11T12:52:19.395771191Z","End":"2016-07-11T12:52:19.498027171Z","ExitCode":1,"Output":"stat: can't stat '/etc/nginx/nginx.conf': No such file or directory\n"},{"Start":"2016-07-11T12:52:21.503197391Z","End":"2016-07-11T12:52:21.595534917Z","ExitCode":1,"Output":"stat: can't stat '/etc/nginx/nginx.conf': No such file or directory\n"},{"Start":"2016-07-11T12:52:23.600639914Z","End":"2016-07-11T12:52:23.684597248Z","ExitCode":1,"Output":"stat: can't stat '/etc/nginx/nginx.conf': No such file or directory\n"},{"Start":"2016-07-11T12:52:25.688110246Z","End":"2016-07-11T12:52:25.77508238Z","ExitCode":1,"Output":"stat: can't stat '/etc/nginx/nginx.conf': No such file or directory\n"}]}
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
623ced89802a nginx:alpine "nginx -g 'daemon off" 3 minutes ago Up 3 minutes (unhealthy) 80/tcp, 443/tcp web
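
The same probe can be baked into the image instead of being passed on the command line. A minimal sketch of the equivalent Dockerfile, assuming you are building your own nginx-based image:

FROM nginx:alpine
# equivalent of the --health-cmd/--health-interval flags used above
HEALTHCHECK --interval=2s --timeout=3s --retries=3 \
  CMD stat /etc/nginx/nginx.conf || exit 1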

As of the current release (1.12.0-rc3), the Docker Engine does not automatically restart the container when its health status changes to "unhealthy", so you have to watch its health and restart the container yourself.
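
A crude way to close that loop, until your orchestration layer does it for you, is a small watchdog on the host that periodically restarts anything reported as unhealthy (a sketch, not production code, assuming the health filter available in this release):

$ while true; do for c in $(docker ps --filter health=unhealthy --quiet); do docker restart "$c"; done; sleep 10; done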

Reboot your life or your container

The panacea was supposed to be a remedy that would cure all diseases and prolong life indefinitely. If we ask any engineer whether there's such a thing as a universal cure for application woes, a "reset or reboot" will be at the top of the list. Why does this work most of the time? Because a reboot wipes away the current state of the application, including whatever code is stuck in a misbehaving state.

With Docker, we can automate the relaunch of containers in trouble using the run command's restart flag, turning it into a self-healing mechanism (like Tom Cruise in "Edge of Tomorrow"). But be warned: this approach is not a panacea. The restart flag lets you apply a set of rules for what happens when the container's main process terminates. By default, containers don't resuscitate; they are not restarted when their main process exits. But we can instruct the Docker Engine to relaunch the container no matter what, or to restart it only when the primary process exits with an error code. Not only that: when we use the "always" policy, the Docker Engine will also bring the container back up after the Docker Engine itself restarts. Let's see this with an example:

$ docker-machine create --driver virtualbox sandbox
Running pre-create checks...
Creating machine...
...
Docker is up and running!
$ eval $(docker-machine env sandbox)
$ docker run -d -p 80:80 nginx:alpine
Unable to find image 'nginx:alpine' locally
alpine: Pulling from library/nginx
6c123565ed5e: Pull complete
8380043c1909: Pull complete
bc193245541a: Pull complete
f6058f41c33e: Pull complete
Digest: sha256:23f809e7fd5952e7d5be065b4d3643fbbceccd349d537b62a123ef2201bc886f
Status: Downloaded newer image for nginx:alpine
24c6dfe7947324472ec17147721d885192d343df96f37543b99d99112bb67d49
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
24c6dfe79473 nginx:alpine "nginx -g 'daemon off" 22 seconds ago Up 22 seconds 0.0.0.0:80->80/tcp, 443/tcp awesome_payne
$ docker-machine restart sandbox
Restarting "sandbox"...
(sandbox) Check network to re-create if needed...
(sandbox) Waiting for an IP...
Waiting for SSH to be available...
Detecting the provisioner...
Restarted machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
$ eval $(docker-machine env sandbox)
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker run -d -p 80:80 --restart always nginx:alpine
b1c6f8105519f46b8d31d6dfd2ab184dbd86115a6a484a7819c67aa39b77923d
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b1c6f8105519 nginx:alpine "nginx -g 'daemon off" 33 seconds ago Up 32 seconds 0.0.0.0:80->80/tcp, 443/tcp zen_golick
$ docker-machine restart sandbox
...
$ eval $(docker-machine env sandbox)
$ docker-machine ssh sandbox uptime

13:35:07 up 1 min, 1 users, load average: 0.60, 0.43, 0.17
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b1c6f8105519 nginx:alpine "nginx -g 'daemon off" 3 minutes ago Up About a minute 0.0.0.0:80->80/tcp, 443/tcp zen_golick
$ docker-machine rm sandbox

We should just reset

As Rita Vrataski would say, sometimes we should just reset. Changes or temporary glitches in our environment can cause applications to die. The on-failure restart policy can help us here, as it only relaunches a container when it returns a non-zero exit code (which typically signals distress in our application).

$ docker run -d --restart=on-failure:5 alpine ash -c 'sleep 2 && /bin/false'
5205db79fdba912b98c9aa7ae923fa3bcdf63487e677e75afaade222ae00fb6d
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5205db79fdba alpine "ash -c 'sleep 2 && /" 3 seconds ago Up Less than a second boring_cray
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5205db79fdba alpine "ash -c 'sleep 2 && /" 5 seconds ago Restarting (127) Less than a second ago boring_cray
$ now=$(date "+%Y-%m-%dT%H:%M:%S")
$ docker events --until $now --filter 'container=boring_cray' | egrep 'die'

2016-07-13T11:44:11.508580478+02:00 container die 5205db79fdba912b98c9aa7ae923fa3bcdf63487e677e75afaade222ae00fb6d (exitCode=127, image=alpine, name=boring_cray)
2016-07-13T11:44:14.110147436+02:00 container die 5205db79fdba912b98c9aa7ae923fa3bcdf63487e677e75afaade222ae00fb6d (exitCode=127, image=alpine, name=boring_cray)
2016-07-13T11:44:16.811377480+02:00 container die 5205db79fdba912b98c9aa7ae923fa3bcdf63487e677e75afaade222ae00fb6d (exitCode=127, image=alpine, name=boring_cray)
2016-07-13T11:44:19.704255278+02:00 container die 5205db79fdba912b98c9aa7ae923fa3bcdf63487e677e75afaade222ae00fb6d (exitCode=127, image=alpine, name=boring_cray)
2016-07-13T11:44:22.968814678+02:00 container die 5205db79fdba912b98c9aa7ae923fa3bcdf63487e677e75afaade222ae00fb6d (exitCode=127, image=alpine, name=boring_cray)
2016-07-13T11:44:27.052078561+02:00 container die 5205db79fdba912b98c9aa7ae923fa3bcdf63487e677e75afaade222ae00fb6d (exitCode=127, image=alpine, name=boring_cray)

Notice above how the Docker Engine keeps increasing the delay between restarts until it hits the maximum number of restarts set by the on-failure policy.
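
You can also ask the engine how many restarts it has already burned through, and what the configured maximum is (a small sketch using inspect's format template):

$ docker inspect --format '{{.RestartCount}} / {{.HostConfig.RestartPolicy.MaximumRetryCount}}' boring_cray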

Untroubled containers

As of Docker 1.12, you can once again run "daemonless" containers. That is, you can stop, upgrade or restart the Docker Engine without affecting or restarting the containers on the system, and without service interruption. This capability existed before under the name "standalone mode", but was apparently dropped because it created some confusion among early Docker adopters.

To enable this functionality, you need to add the “live-restore” flag when launching the Docker Engine, to ensure that Docker does not kill running containers on graceful shutdown or during a restart. Let’s see this with an example, using docker-machine to pass the live-restore flag:

$ docker-machine create sandbox --driver virtualbox --engine-opt live-restore
Running pre-create checks...
Creating machine...
...
Checking connection to Docker...
Docker is up and running!
$ eval $(docker-machine env sandbox)
$ docker run -d -p 80:80 nginx:alpine
Unable to find image 'nginx:alpine' locally
alpine: Pulling from library/nginx
6c123565ed5e: Pull complete
8380043c1909: Pull complete
bc193245541a: Pull complete
f6058f41c33e: Pull complete
Digest: sha256:23f809e7fd5952e7d5be065b4d3643fbbceccd349d537b62a123ef2201bc886f
Status: Downloaded newer image for nginx:alpine
79b2437867a0c30b91eb087d86ec735e2224d51cebd8545e98abce6a6fec4110
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
79b2437867a0 nginx:alpine "nginx -g 'daemon off" 16 seconds ago Up 15 seconds 0.0.0.0:80->80/tcp, 443/tcp sleepy_saha
$ docker-machine ssh sandbox
docker@sandbox:~$ ps -ef | grep -E 'docker-|dockerd|nginx'
root 2654 1 0 13:41 ? 00:00:02 dockerd -D -g /var/lib/docker -H unix:// -H tcp://0.0.0.0:2376 --label provider=virtualbox --live-restore --tlsverify --tlscacert=/var/lib/boot2docker/ca.pem --tlscert=/var/lib/boot2docker/server.pem --tlskey=/var/lib/boot2docker/server-key.pem -s aufs
root 2661 2654 0 13:41 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc --debug
root 2799 2654 0 13:42 ? 00:00:00 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.2 -container-port 80
root 2804 2661 0 13:42 ? 00:00:00 docker-containerd-shim 79b2437867a0c30b91eb087d86ec735e2224d51cebd8545e98abce6a6fec4110 /var/run/docker/libcontainerd/79b2437867a0c30b91eb087d86ec735e2224d51cebd8545e98abce6a6fec4110 docker-runc
root 2813 2804 0 13:42 ? 00:00:00 nginx: master process nginx -g daemon off;
dockrem+ 2834 2813 0 13:42 ? 00:00:00 nginx: worker process
docker 2941 2889 0 13:46 pts/0 00:00:00 grep -E docker-|dockerd|nginx
docker@sandbox:~$ pgrep dockerd
2654
docker@sandbox:~$ sudo kill -9 2654
docker@sandbox:~$ docker ps
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
docker@sandbox:~$ pgrep nginx
2813
2834
docker@sandbox:~$ exit
$ curl $(docker-machine ip sandbox):80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
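
On a host you manage directly (rather than through docker-machine), the same behaviour can typically be enabled in the daemon configuration file instead of a command-line flag. A minimal sketch, assuming your platform reads /etc/docker/daemon.json:

$ cat /etc/docker/daemon.json
{
  "live-restore": true
}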

A eulogy

Containers can have a challenging and eventful life. But with features like restart policies, health-check probes and live restore, you shouldn't fear it. Paraphrasing Mark Twain: "The fear of death follows from the fear of life. A container who lives fully is prepared to die at any time."

Luis Herrera Benítez
DevOpsion

AI & Big Data aficionado. Redis enthusiast. Xoogler. Former Docker Captain and AWS Ambassador. Everybody has a plan until they get punched in the face.