Create a Docker-based Service on DigitalOcean with Docker Compose

Quickly Build and Deploy your Web Application environment with Docker and Compose


Last year, I wrote a piece about manually putting together a Web environment with Docker-ized LAMP applications. It was pretty well received, but like many things these days, it quickly fell out of date, and despite my best efforts to update it, I’m prepared to declare it a lost cause and, for my loyal readers, put together a more modern, fun, and useful version of that process.

Incidentally, this process will be a lot faster, a lot more efficient, and probably something you can do at parties to impress your friends (please don’t).

What you need

1. You’ll need a Docker host. I recommend using DigitalOcean’s One-Click Docker Image:

2. You’ll need Docker Compose on said Docker host:

3. You’ll need to install Go to build the sample project. Alternatively, you can just create a simple HTML file with whatever you want (and skip those portions of the guide), but I’ll be using Go for this example, for reasons that will become clear later.

You can install it on your Docker host using this guide:

Or build it locally, and just upload the binary to your Docker host (or whatever Docker client you plan to use to build your Docker images, etc.). If you build locally, note that a binary destined for an Alpine-based container should be built for Linux with CGO_ENABLED=0 (or otherwise statically linked), so it doesn’t depend on libraries the container lacks.

Building an Application and Docker-izing it

For the sake of simplicity, you’re just going to be hosting a very simple Go-based web application.

package main

import (
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "<h1>Welcome, %s</h1>", r.URL.Path[1:])
	fmt.Fprintf(w, "\n<strong>Served by %s</strong>", r.Host)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}

The app takes a single URI argument, a string, that just gets printed out onto the page, so if you make a request for `http://localhost:8080/dr%20steve%20brule`, you’ll see:

<h1>Welcome, dr steve brule</h1>
<strong>Served by localhost:8080</strong>

To build the application, save the above Go code into a file called `greetings.go`, and then run:

go build greetings.go

to run the application:

./greetings

then make a request (via curl, or in your browser):

curl http://localhost:8080/medium_user

Perhaps you noticed that your response had the following line in it:

<strong>Served by localhost:8080</strong>%

Back in your greetings.go file, we interpolated `r.Host` into that print statement so the sample app prints the address the request was made to (in this case, localhost). The reason we did this is so that, once we build the service in a moment, we can demonstrate which container served each request.

Create a file called `Dockerfile`, and populate it with:

FROM alpine
MAINTAINER Medium Reader "<medium_reader@gourmet.biz>"
ADD ./greetings /usr/bin/greetings
ENTRYPOINT ["greetings"]

So, what you’re doing here is adding the application you built (greetings) to your executable path inside the container, and then specifying your ENTRYPOINT, or the command the container will run by default (starting your application inside the container).

You can proceed to build your image now:

jmarhee@iampizza ~/repos/greetings $ docker build -t greetings . 
Sending build context to Docker daemon 7.641 MB
Step 1 : FROM alpine:3.1
 ---> 1b088884749b
Step 2 : MAINTAINER Joseph D. Marhee <>
 ---> Using cache
 ---> 65e84c353c2c
Step 3 : ADD ./greetings /usr/bin/greetings
 ---> Using cache
 ---> 3d30e9ccf6fc
Step 4 : ENTRYPOINT greetings
 ---> Using cache
 ---> 72d3ac464a1c
Successfully built 72d3ac464a1c

Grab your successfully built image ID, and so you don’t have to remember that long string, tag it with an image alias:

docker tag 72d3ac464a1c greetings

Now, rather than creating each instance of this container using a long, boring command like:

docker run -d -p 8081:8080 --restart=always --name greetings1 greetings
docker run -d -p 8082:8080 --restart=always --name greetings2 greetings
...
docker run -d -p 8089:8080 --restart=always --name greetings9 greetings

We can do this with Docker Compose in a `docker-compose.yml` file:

greetings1:
  image: greetings
  expose:
    - 8080
greetings2:
  image: greetings
  expose:
    - 8080
greetings3:
  image: greetings
  expose:
    - 8080

You’ll see each container we’re creating is identified by a name, in this case greetings{1..3}, but this can be extended as far as you want, using any names you’d like. We specified an image (the greetings image we just created), and we exposed port 8080 for the application; each container will be accessible as http://greetings{1..3}:8080, or via the container IP address on port 8080.

Next, run:

docker-compose up -d

to bring the whole service up at once. You can do this as many times as you’d like, managing the containers in the Compose file as a unit rather than instantiating them one by one; you can manage them as a fleet with many of the docker-compose commands available, such as docker-compose ps, docker-compose logs, and docker-compose stop.

Expanding your service

You have a few options moving forward to expose the service and make it usable. One example is exposing your application containers as a web service, and there are a few pretty simple, quick ways to do that:

You can use your port mappings to put the containers behind a load balancer or a proxy, for example, or create a service for your proxy and link the application containers to it. In the latter case, you ingest requests through the proxy container and respond from your fleet of containers.

To demonstrate using docker-compose to spin-up such an environment, the latter example will be used here:

1. Let’s create a simple HTTP router to receive and distribute requests pseudo-randomly (you can, of course, implement your own more sophisticated distribution methods later, particularly if you later employ Docker Swarm, etc.) that we’ll call Gourmet:

https://gist.githubusercontent.com/jmarhee/b3ba1332e18c1c499326d2ce5512dc36/raw/cdc10d294279299e31ed7c04ddcc71a482b52033/gourmet.go

2. Use the same build process to build this binary as you did with your Greetings application:

go build gourmet.go

3. Create a text file in the same directory; you can call it whatever you’d like (I named mine `gourmet.backends`, for example), but make note of the file path. Populate it with a list of your backends (in this case, the container names):

http://greetings1:8080
http://greetings2:8080
http://greetings3:8080

and so on for each of the containers you plan to link to the router.

4. You can create a Docker image just like you did with the greetings container:

FROM alpine
MAINTAINER Joseph D. Marhee
ADD gourmet.backends /root/gourmet.backends
ADD gourmet /root/gourmet
WORKDIR /root/
ENTRYPOINT ["./gourmet", "gourmet.backends"]

I named my backends file `gourmet.backends`; update the above Dockerfile to reflect the path to your text file to copy it into the container, and change the ENTRYPOINT to reflect starting the router with your backends loaded.

This will build an image with your list of backends, and start the router as the default command. Proceed to build and tag the image, just as we did earlier:

docker build -t gourmet .
docker tag <ID> gourmet

then add the new container to your compose file:

greetings1:
  image: greetings
  expose:
    - 8080
greetings2:
  image: greetings
  expose:
    - 8080
greetings3:
  image: greetings
  expose:
    - 8080
gourmet:
  image: gourmet
  ports:
    - "3000:3000"
  links:
    - greetings1
    - greetings2
    - greetings3

We have the original set of application containers, and now one we’re calling `gourmet`, which will be outward-facing on port 3000 so it’s reachable from the host. To ensure that our `gourmet.backends` file makes sense to the router application, we LINK our greetings containers to gourmet, so that a name like greetings1 resolves to the Docker network address of that container.

Once you’re ready to go, spin up your compose file:

docker-compose up -d

Checking your Work

Once you run the compose command above, you should see output like:

jmarhee@iampizza ~/repos/greetings $ docker-compose up -d 
Starting greetings_greetingsb_1
Starting greetings_greetingsc_1
Starting greetings_greetingsa_1
Starting greetings_gourmet_1
jmarhee@iampizza ~/repos/greetings $ docker ps 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2cc7942c08bb gourmet "./gourmet" 55 minutes ago Up 1 seconds 3000/tcp greetings_gourmet_1
a16c77678d43 greetings "greetings" 55 minutes ago Up 1 seconds 8080/tcp greetings_greetingsa_1
d2ca702c10c0 greetings "greetings" 55 minutes ago Up 2 seconds 8080/tcp greetings_greetingsc_1
8a75857fddba greetings "greetings" 55 minutes ago Up 2 seconds 8080/tcp greetings_greetingsb_1

and in your `ps` output, see all of your containers running.

To test that the gourmet container is serving your requests, try `curl`ing (or requesting in your browser) once more:

$ curl http://localhost:3000/joseph%20marhee 
<h1>Welcome, joseph marhee</h1>
<strong>Served by greetings1:8080</strong>%
$ curl http://localhost:3000/joseph%20marhee 
<h1>Welcome, joseph marhee</h1>
<strong>Served by greetings3:8080</strong>%
$ curl http://localhost:3000/joseph%20marhee 
<h1>Welcome, joseph marhee</h1>
<strong>Served by greetings2:8080</strong>%

You’ll notice in the response that your “Served by” line will change depending upon which of your containers on the host was called to serve the response.

When you’re done, you can tear down the environment to make changes, scale, etc., using various Docker Compose commands.

Further Adventures

Many scheduling and orchestration systems, like Kubernetes, Docker Swarm, and Mesos, have native load balancer and router services built into them, so I do not recommend using this method in production. Hopefully, though, you’ve seen how versatile a tool Compose can be for your application environment.

If you use tools like Rancher or Docker Swarm, your Compose files will usually work with parity, or at least serve as a big stepping stone toward scripting and automating your environment, since these tools also use the Docker API for their extended feature sets (the Swarm API extends the Docker API and has almost complete parity in the instructions you can use; Rancher Compose can take a docker-compose config and supercharge it with the orchestration services Rancher’s APIs offer via rancher-compose).

Serverlessness, a phrase I think betrays exactly how cool the technology is, can also be approached, at a base level, using a method not unlike the one you looked at here; using a router package like the one I jimmied together for the purpose of this tutorial, you can manage container life cycles pretty flexibly, spinning up containers as requests need to be served, or with more granularity to automate resource management in the same manner.

For a short video of the tasks covered in this guide, you can view one right here: