Running a Go API with Hot Reloading and Docker

Zach Johnson
4 min read · Jul 29, 2018


This is a quick discussion of how to set up a local development environment for a Go API running inside of a Docker container with hot reloading. I’ve found this to be an effective way to develop locally. The full source code can be found on GitHub.

API

We’ll first set up a dummy API in cmd/api/main.go:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello from the api!")
}

func main() {
	http.HandleFunc("/", handler)
	log.Println("listening on 5000")
	log.Fatal(http.ListenAndServe(":5000", nil))
}

This API simply listens on the root path and returns a hello world back out. Good enough for this example!
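As a quick sanity check, the handler can be exercised without Docker at all using the standard library's httptest package. This is a minimal sketch; the fetchGreeting helper is mine and not part of the example repository:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
)

// handler is the same hello-world handler from cmd/api/main.go.
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello from the api!")
}

// fetchGreeting spins up an in-memory test server around the handler
// and returns the response body, so no real port binding is needed.
func fetchGreeting() string {
	srv := httptest.NewServer(http.HandlerFunc(handler))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Println(fetchGreeting())
}
```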

Dockerfile

The next thing we need is a Dockerfile to define how we want to build our container for this Go API:

FROM golang:1.10

WORKDIR /go/src/github.com/Zach-Johnson/go-docker-hot-reload-example

COPY . .

RUN ["go", "get", "github.com/githubnemo/CompileDaemon"]

ENTRYPOINT CompileDaemon -log-prefix=false -build="go build ./cmd/api/" -command="./api"
  • First we set the base image for the Dockerfile to Go 1.10.
  • WORKDIR sets the current working directory within the container to the path for this example repository.
  • Then the COPY command copies everything in the current context directory to the container. We’re going to use a docker-compose file for this, so the context directory will be whatever we define it to be in the docker-compose file or the directory we run the docker-compose command from by default.
  • I’m using a hot reloader called CompileDaemon. There are a variety of hot reloaders for Go apps; I chose CompileDaemon for its simplicity and because it played nicely with Docker out of the box.
  • The ENTRYPOINT defines the command that will be run when the container starts up. Here we run the CompileDaemon command; it has a variety of optional flags, but runs a go build by default. I explicitly specify a go build command here to build from the directory I want, then execute the binary so that the server starts up. CompileDaemon handles the rest: any time a .go file in the directory changes, it terminates the server and re-runs the build steps. The file types it watches for can also be modified as needed with the -pattern flag. Sweet!
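For example, to also watch HTML templates alongside Go files, the watch pattern could be overridden. This is a sketch only; the regex below is my assumption, so check CompileDaemon's documentation for the exact default pattern:

```shell
# Hypothetical invocation: watch .go files and .html templates
CompileDaemon \
  -log-prefix=false \
  -pattern="(.+\.go|.+\.html)$" \
  -build="go build ./cmd/api/" \
  -command="./api"
```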

docker-compose

A docker-compose file can simplify orchestration between Docker containers. For this example it’s a bit contrived since we’re only running one service, but oftentimes it makes things much nicer when running multiple microservices and perhaps a database locally.

version: '3.6'

services:
  api:
    image: api:latest
    ports:
      - 5000:5000
    volumes:
      - ./:/go/src/github.com/Zach-Johnson/go-docker-hot-reload-example

We first specify the image to use: the latest image of the API, which will be created when we build the Docker image from our Dockerfile. We expose port 5000 and map it to port 5000 in the Docker container so that we can reach our API from outside of the container. The last line is a volume mount, and it is what makes hot reloading work inside of a Docker container! It links our current working directory to the directory inside of the container, so that any file changes made on our local machine also happen inside of the container. Since CompileDaemon is running inside of the container, when it sees a file change, it can re-build and re-run the server.
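This is where compose pays off when you add more services. As an illustration, here is a hypothetical extension of the file with a MySQL database; the db service name, image tag, credentials, and named volume below are mine, not from the example repository:

```yaml
version: '3.6'

services:
  api:
    image: api:latest
    ports:
      - 5000:5000
    volumes:
      - ./:/go/src/github.com/Zach-Johnson/go-docker-hot-reload-example
  db:
    image: mysql:5.7
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```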

Makefile

Lastly we write a little Makefile to simplify server start up and shutdown:

default:
	@echo "=============building Local API============="
	docker build -f cmd/api/Dockerfile -t api .

up: default
	@echo "=============starting api locally============="
	docker-compose up -d

logs:
	docker-compose logs -f

down:
	docker-compose down

test:
	go test -v -cover ./...

clean: down
	@echo "=============cleaning up============="
	rm -f api
	docker system prune -f
	docker volume prune -f

Our default command builds the Docker image for the API. The up command starts the API and runs it in the background. logs tails the logs on the Docker container. down shuts down the server. test runs any tests in the current directory tree. clean shuts down the API and then prunes unused Docker images and volumes from your machine. This can be useful when running another image like MySQL, which writes data to your local machine and doesn’t clean it up when the container shuts down.
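Day to day, the workflow with these targets looks something like the following sketch (assuming Docker and docker-compose are installed and running locally):

```shell
make up      # build the image and start the API in the background
make logs    # tail the container logs; watch CompileDaemon rebuild on save
make test    # run the test suite against the running infrastructure
make down    # stop everything when finished
```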

Closing Thoughts

I’ve found this to be an effective way to develop locally when running multiple APIs that are interacting with a DB of some sort. It can be a simple way to run integration tests as well: by keeping your infrastructure running locally inside of containers, you can run make test and execute all of your integration tests against your local infrastructure. Then any code changes you make get hot reloaded, but the Docker images don’t have to rebuild, so it is much quicker than having a separate test suite that has to build and run all of your infrastructure each time you want to change something.

Originally published at www.zachjohnsondev.com.
