My baby steps with Go — Creating and Dockerizing a REST API

Mahjoub Saifeddine · Published in CodeShake · 10 min read · Sep 7, 2020

Dockerizing my first REST API written with Go

Postgres’ Slonik, Docker’s Moby Dock, and Go’s Gopher

This is my second post about Go, where I share my experience learning it. This time, we will create a REST API on top of a Postgres database and dockerize it to run within a Docker container.

The Goal

Through this post, we will start by creating an Event management API system, with seven endpoints covering the basic operations: creating, listing, rescheduling, updating, canceling, and deleting events. Then, we will take care of the configuration needed to dockerize it.

Prerequisites:

Same as my first post, this one is accessible to beginners, and I’m assuming that you have basic knowledge of SQL or a PostgreSQL database, REST APIs and, of course, Docker!

If it’s not the case, I encourage you to check these learning resources first:

The REST application:

From this point on, I will assume that you have installed all necessary tools on your computer.

So, let’s begin!

Project structure:

Let’s start by creating the structure of our project. Set up a new directory for our project, let’s name it events-api, and change into it.

> mkdir events-api
> cd events-api

Now, we need to initialize a new Go module to manage the dependencies of the project.

You can choose any module path you want, even if it doesn’t use the naming convention “github.com/<username>/<reponame>”.
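For example, keeping the path short (the code sketches later in this post assume this module path; adjust the import paths if you pick another one):

> go mod init events-api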

We are all set to start coding our application. But before doing that, let’s divide our project into small components.

Basically, our API requires some route handlers to handle the HTTP requests, a domain layer that represents our events and a persistence layer that helps us interact with the database.

So, our solution should look like the following by the end of this post:

.
├── bin/
├── errors/
│   └── errors.go
├── handlers/
│   └── handlers.go
├── objects/
│   ├── event.go
│   └── requests.go
├── store/
│   ├── postgres.go
│   └── store.go
├── .gitignore
├── docker-compose.yml
├── Dockerfile
├── go.mod
├── LICENSE
├── README.md
├── main.go
├── main_test.go
└── server.go

Now, we have 4 sub-packages, a directory for binaries, and our root package. Obviously, bin/ will be git ignored.

  1. The errors package will contain all the errors encountered while processing any request.
  2. The handlers package is straightforward: it will contain the code for all API route handlers, which will process the requests.
  3. The objects package will define our Event object along with some other objects.
  4. The store package will hold our database interaction code. You can see we have 2 files in it: store.go defines the interface of all the methods required for interacting with the database, or any other storage unit we would like to use, like an in-memory or a Redis implementation. postgres.go will then implement this store interface for Postgres.

Also, we have 2 files in the root directory, main.go and server.go. The first one is the entry point of our project and hence has the main() function, which invokes the server runner implemented in the other. The second one creates the server and registers the routing handlers for the application endpoints.

As you might guess, the Dockerfile and docker-compose.yml will be used to dockerize our API; they are discussed in a later section.

Error handling

You might not be expecting this, but we’re going to start by adding some tools to our arsenal. We’re going to create some error objects that we will use later in our application. Basically, an error object is a readable message paired with an HTTP status code.

./errors/errors.go
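Here is a minimal sketch of what this file could look like; the struct fields and the predefined errors (like ErrEventNotFound, which we rely on later) are assumptions about how the package is used:

package errors

import "net/http"

// Error is a readable message paired with an HTTP status code.
type Error struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

// Error makes *Error satisfy the standard error interface.
func (e *Error) Error() string {
	return e.Message
}

// Errors used throughout the API.
var (
	ErrEventNotFound = &Error{Code: http.StatusNotFound, Message: "event not found"}
	ErrBadRequest    = &Error{Code: http.StatusBadRequest, Message: "invalid request"}
	ErrInternal      = &Error{Code: http.StatusInternalServerError, Message: "internal server error"}
)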

The API Specification

As we discussed in ‘The Goal’ section, the idea is simple, and so is the specification:

……an Event management API system, with seven endpoints……like creating, listing/getting, rescheduling, updating, canceling, and deleting events.

The first thing we’re going to do is create the Event object.

./objects/event.go
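A sketch of what the Event object could look like; the exact fields (Description, Status) are assumptions, but Id, Slot, CreatedOn and UpdatedOn match how they are used later in the post:

package objects

import "time"

// TimeSlot holds the start and end time of an event.
type TimeSlot struct {
	StartTime time.Time `json:"start_time"`
	EndTime   time.Time `json:"end_time"`
}

// Event is the domain object stored in the events table.
type Event struct {
	Id          string    `json:"id" gorm:"primaryKey"`
	Name        string    `json:"name"`
	Description string    `json:"description"`
	Slot        TimeSlot  `json:"slot" gorm:"embedded"`
	Status      string    `json:"status"`
	CreatedOn   time.Time `json:"created_on"`
	UpdatedOn   time.Time `json:"updated_on"`
}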

For now, please ignore the gorm tags on the Id and Slot fields; we will discuss them in the next sections.

The next thing we’re going to do is to create the first version of the handler object that implements the IEventHandler interface.

./handlers/handlers.go — v1
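A sketch of this first version; the method names are assumptions based on the seven endpoints, and the method bodies only come later, once the store layer exists:

package handlers

import "net/http"

// IEventHandler groups the HTTP handlers for the seven event endpoints.
type IEventHandler interface {
	Get(w http.ResponseWriter, r *http.Request)
	List(w http.ResponseWriter, r *http.Request)
	Create(w http.ResponseWriter, r *http.Request)
	UpdateDetails(w http.ResponseWriter, r *http.Request)
	Reschedule(w http.ResponseWriter, r *http.Request)
	Cancel(w http.ResponseWriter, r *http.Request)
	Delete(w http.ResponseWriter, r *http.Request)
}

// handler will implement IEventHandler; the store field and the method
// bodies are added once the persistence layer is in place.
type handler struct{}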

Store implementation

Before going through the implementation of the IEventHandler, we will need to have a store layer first. So, let’s create an IEventStore interface with a Postgres implementation. Each method in this interface will take the execution Context and a request object.

So let’s have a look into the request/response objects that we’re going to use in the IEventStore.

./objects/requests.go
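A sketch of what these request/response objects could look like; the struct and field names are assumptions inferred from the store methods described below:

package objects

// GetRequest / CancelRequest / DeleteRequest carry the id of the target event.
type GetRequest struct {
	ID string `json:"id"`
}
type CancelRequest struct {
	ID string `json:"id"`
}
type DeleteRequest struct {
	ID string `json:"id"`
}

// ListRequest carries optional filters for listing events.
type ListRequest struct {
	Name  string `json:"name"`
	Limit int    `json:"limit"`
}

// CreateRequest wraps the event to persist.
type CreateRequest struct {
	Event *Event `json:"event"`
}

// UpdateDetailsRequest carries the editable detail fields.
type UpdateDetailsRequest struct {
	ID          string `json:"id"`
	Name        string `json:"name"`
	Description string `json:"description"`
}

// RescheduleRequest carries the new time slot for an event.
type RescheduleRequest struct {
	ID      string   `json:"id"`
	NewSlot TimeSlot `json:"new_slot"`
}

// EventResponse / ListResponse are what the handlers write back.
type EventResponse struct {
	Event *Event `json:"event"`
}
type ListResponse struct {
	Events []Event `json:"events"`
}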

Now, let’s define the IEventStore.

./store/store.go
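A sketch of the interface and helper; the exact method signatures and the id format are assumptions:

package store

import (
	"context"
	"strconv"
	"time"

	"events-api/objects"
)

// IEventStore abstracts the persistence layer so that a Postgres, in-memory
// or Redis implementation can be swapped in.
type IEventStore interface {
	Get(ctx context.Context, in *objects.GetRequest) (*objects.Event, error)
	List(ctx context.Context, in *objects.ListRequest) ([]objects.Event, error)
	Create(ctx context.Context, in *objects.CreateRequest) error
	UpdateDetails(ctx context.Context, in *objects.UpdateDetailsRequest) error
	Reschedule(ctx context.Context, in *objects.RescheduleRequest) error
	Cancel(ctx context.Context, in *objects.CancelRequest) error
	Delete(ctx context.Context, in *objects.DeleteRequest) error
}

// GenerateUniqueId returns a time-based, sortable id with nanosecond precision.
func GenerateUniqueId() string {
	return strconv.FormatInt(time.Now().UTC().UnixNano(), 10)
}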

As you can see, we have a helper method GenerateUniqueId that creates a time-based, sortable unique ID with nanosecond precision; we will use this method to set the Id of the event.

Now, let’s implement the store interface for the Postgres database. For this we will be using GORM, an ORM library for Go, with its Postgres driver, so let’s install our first dependencies.

> go get gorm.io/gorm
> go get gorm.io/driver/postgres

Now, in the below file, we will implement the interface IEventStore over a struct pg, which holds a *gorm.DB connection pool.

Also, we have a NewPostgresEventStore constructor that takes the Postgres connection string and sets up the GORM connection pool, with a logger attached that logs all executed queries. It returns the PostgreSQL implementation as an IEventStore rather than the pg struct; this is a good way to abstract the logic behind the interface, so that only the store contract is exposed.

Earlier, we had seen that the Id field has a gorm tag specifying primaryKey, which instructs GORM that our Id field is the primary key in our Events schema. And the Slot field has a gorm:"embedded" tag specifying that the StartTime and EndTime fields of the TimeSlot object should be used directly as columns of the Events schema in the database.

./store/postgres.go
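A sketch of the struct and constructor; the logger configuration and the AutoMigrate call are assumptions about how the connection is set up:

package store

import (
	"gorm.io/driver/postgres"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"

	"events-api/objects"
)

// pg implements IEventStore on top of a GORM connection pool.
type pg struct {
	db *gorm.DB
}

// NewPostgresEventStore opens the GORM connection pool (with query logging)
// and returns it behind the IEventStore interface.
func NewPostgresEventStore(conn string) (IEventStore, error) {
	db, err := gorm.Open(postgres.Open(conn), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Info), // log every executed query
	})
	if err != nil {
		return nil, err
	}
	// Assumption: keep the events table in sync with the Event struct.
	if err := db.AutoMigrate(&objects.Event{}); err != nil {
		return nil, err
	}
	return &pg{db: db}, nil
}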

pg — Methods

In Get method:

p.db.WithContext(ctx).Take(evt, "id = ?", in.ID).Error

This statement extracts the event with the identifier provided in in.ID and maps it to the provided object evt. If no row matches, it returns the custom error ErrEventNotFound, defined earlier in the errors package (see the import).

In the List method, we build a custom query using the Where and Limit clauses, and Find all the matching events, mapped into the list variable.

The Create method is pretty straightforward: it takes the pre-filled event object and adds its entry to the database, with CreatedOn set to the current time using the database’s NowFunc.

UpdateDetails updates the general detail fields specified in Select, using the Id field of the object, and also sets the UpdatedOn field to the current time.
Similarly, Cancel and Reschedule update the Event object accordingly.

Delete works similarly to Update; the only difference is that it removes the entry from the database, using the Id field.
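As a sketch, Get and Delete could look roughly like this (continuing ./store/postgres.go, whose imports are assumed to include context, gorm, and our objects and errors packages; the error mapping details are assumptions):

// Get fetches a single event by id and maps gorm.ErrRecordNotFound
// to our custom ErrEventNotFound.
func (p *pg) Get(ctx context.Context, in *objects.GetRequest) (*objects.Event, error) {
	evt := &objects.Event{}
	if err := p.db.WithContext(ctx).Take(evt, "id = ?", in.ID).Error; err != nil {
		if err == gorm.ErrRecordNotFound {
			return nil, errors.ErrEventNotFound
		}
		return nil, err
	}
	return evt, nil
}

// Delete removes the entry matching the given id.
func (p *pg) Delete(ctx context.Context, in *objects.DeleteRequest) error {
	return p.db.WithContext(ctx).Delete(&objects.Event{}, "id = ?", in.ID).Error
}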

Server and Routes

Now, let’s set up our main server and register all its routes. Of course, before that, we will have to add our next dependency, gorilla/mux.
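> go get github.com/gorilla/mux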

Let’s check this.

./server.go
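A sketch of how this might fit together; the exact route paths (beyond /api/v1/events for listing) and the Args fields are assumptions:

package main

import (
	"net/http"

	"github.com/gorilla/mux"

	"events-api/handlers"
	"events-api/store"
)

// Args carries what main.go parses: the Postgres connection string and the port.
type Args struct {
	conn string
	port string
}

// Run wires the store, the handlers and the router, then starts the HTTP server.
func Run(args Args) error {
	st, err := store.NewPostgresEventStore(args.conn)
	if err != nil {
		return err
	}
	h := handlers.NewEventHandler(st)

	// All endpoints live under the /api/v1/ prefix.
	router := mux.NewRouter().PathPrefix("/api/v1/").Subrouter()
	RegisterAllRoutes(router, h)

	return http.ListenAndServe(":"+args.port, router)
}

// RegisterAllRoutes maps each IEventHandler method to a route.
func RegisterAllRoutes(router *mux.Router, h handlers.IEventHandler) {
	router.HandleFunc("/events", h.List).Methods(http.MethodGet)
	router.HandleFunc("/events", h.Create).Methods(http.MethodPost)
	router.HandleFunc("/events", h.Delete).Methods(http.MethodDelete)
	router.HandleFunc("/events/details", h.UpdateDetails).Methods(http.MethodPut)
	router.HandleFunc("/events/reschedule", h.Reschedule).Methods(http.MethodPatch)
	router.HandleFunc("/events/cancel", h.Cancel).Methods(http.MethodPatch)
	router.HandleFunc("/event", h.Get).Methods(http.MethodGet)
}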

The Run function creates a mux router with the /api/v1/ path prefix, which defines the version of our API; if we want to bump the version in the future, we can do it here instead of changing it everywhere.

Also, we have created a new store using the constructor in store/postgres.go and a new handler from the constructor in handlers/handlers.go. And then all the routes for the methods in IEventHandler are registered in the function RegisterAllRoutes.

./main.go
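A sketch of what main.go might look like, assuming flags with environment-variable fallbacks (the exact parsing mechanism is an assumption):

package main

import (
	"flag"
	"log"
	"os"
)

func main() {
	// Flags with environment-variable fallbacks (PORT and DB_CONN are the
	// names used later in docker-compose.yml).
	conn := flag.String("conn", os.Getenv("DB_CONN"), "Postgres connection string")
	port := flag.String("port", os.Getenv("PORT"), "port the API server listens on")
	flag.Parse()

	if err := Run(Args{conn: *conn, port: *port}); err != nil {
		log.Fatal(err)
	}
}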

main.go defines the arguments and environment variables required for our project: the Postgres connection string conn and the port the server will run on, port. These are eventually passed to the runner Run(args Args) error in server.go.

Now, let’s get back to the Handler implementation part. Each of the methods of IEventHandler performs a set of simple operations involving at most 4 to 5 steps:

  1. Extract data from the request body or query parameters
  2. Validate the request objects.
  3. Check if the event exists in case of an update or a delete.
  4. Final database store call regarding the method.
  5. And at last returning the response.

In order to achieve this, we’re going to need some helper functions to validate the requests and write responses.

./handlers/helpers.go
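A sketch of the kind of helpers used here; the function names and the error mapping are assumptions consistent with the errors package sketched earlier:

package handlers

import (
	"encoding/json"
	"net/http"

	"events-api/errors"
)

// WriteResponse serializes any payload as JSON with the given status code.
func WriteResponse(w http.ResponseWriter, code int, payload interface{}) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	if payload != nil {
		_ = json.NewEncoder(w).Encode(payload)
	}
}

// WriteError maps our custom error type to its HTTP status code,
// falling back to a 500 for anything unexpected.
func WriteError(w http.ResponseWriter, err error) {
	if e, ok := err.(*errors.Error); ok {
		WriteResponse(w, e.Code, e)
		return
	}
	WriteResponse(w, http.StatusInternalServerError, errors.ErrInternal)
}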

This completes our API implementation.

./handlers/handlers.go
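As a sketch of the final version, here is the store-backed handler and its Create method (continuing ./handlers/handlers.go, with imports assumed to include encoding/json, net/http, and our objects, store and errors packages); the remaining six methods follow the same pattern of the steps listed above:

// handler implements IEventHandler on top of the store layer.
type handler struct {
	store store.IEventStore
}

// NewEventHandler wires the store into the handlers.
func NewEventHandler(st store.IEventStore) IEventHandler {
	return &handler{store: st}
}

// Create decodes the request body, validates it, stores the event
// and writes the created event back in the response.
func (h *handler) Create(w http.ResponseWriter, r *http.Request) {
	req := &objects.CreateRequest{}
	if err := json.NewDecoder(r.Body).Decode(req); err != nil || req.Event == nil {
		WriteError(w, errors.ErrBadRequest)
		return
	}
	if err := h.store.Create(r.Context(), req); err != nil {
		WriteError(w, err)
		return
	}
	WriteResponse(w, http.StatusCreated, &objects.EventResponse{Event: req.Event})
}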

Testing

We will be using the standard Go HTTP testing package, net/http/httptest.

For readability, and considering that most tests are similar, we’re going to focus on the most important ones. Feel free to look at the complete file in my GitHub repository.

./handlers_test.go
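A minimal sketch of the shape of these tests; the connection string, routes and payload are illustrative assumptions, so adjust them to your environment:

package main

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"

	"github.com/gorilla/mux"

	"events-api/handlers"
	"events-api/store"
)

func TestCreateEvent(t *testing.T) {
	// Assumes a reachable Postgres instance; adjust the connection string.
	st, err := store.NewPostgresEventStore("host=localhost port=5432 user=postgres password=postgres dbname=postgres sslmode=disable")
	if err != nil {
		t.Fatal(err)
	}
	router := mux.NewRouter().PathPrefix("/api/v1/").Subrouter()
	RegisterAllRoutes(router, handlers.NewEventHandler(st))

	body := `{"event":{"name":"gophercon","slot":{"start_time":"2020-10-01T10:00:00Z","end_time":"2020-10-01T18:00:00Z"}}}`
	req := httptest.NewRequest(http.MethodPost, "/api/v1/events", strings.NewReader(body))
	rec := httptest.NewRecorder()

	// Execute the request against our router and assert on the recorded response.
	router.ServeHTTP(rec, req)
	if rec.Code != http.StatusCreated {
		t.Fatalf("expected status %d, got %d", http.StatusCreated, rec.Code)
	}
}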

As you can see, we have a set of test cases covering different possibilities, a setup function that returns our new http.Request, and we use httptest.NewRecorder() to execute it against our API code. You can try running the tests yourself with an active Postgres instance.

Dockerization

Before we start, let’s first answer: why Docker, instead of setting up Postgres and Go on our machine and using and testing our application directly? Well, the question itself contains the answer: for anyone to use or try our API, they would have to set up their machine accordingly, which can lead to configuration or setup issues. To avoid such problems, Docker comes into play.

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package.

Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. So, let’s dive into the Dockerfile for our API deployment.

./Dockerfile
  1. Each Dockerfile starts from some base image; as we need Go for our API, we start from the golang:alpine image and name it with the alias builder. (Lines 1-2)
  2. To set an environment variable in a Dockerfile we use the ENV name=value syntax; here we enable Go modules in our image. (Lines 4-5)
  3. The golang:alpine image doesn’t come with git installed, and we need git to download our dependencies, so we add it to the image using RUN apk update && apk add --no-cache git (the RUN instruction runs any command in the image). (Lines 7-8)
  4. We change the current working directory to the /app directory in the image. (Lines 10-11)
  5. To avoid downloading dependencies every time we build our image, we cache them by first copying the go.mod and go.sum files and downloading the modules; this layer is reused on every build as long as the dependencies don’t change. (Lines 13-24)
  6. Then we copy the complete source code. (Lines 26-27)
  7. We create the binary for our API using the go build command. Note: we disable CGO using the CGO_ENABLED flag for cross-system compilation, which is also a common best practice. The binary is created in the ./bin/ directory as main. (Lines 29-33)
  8. To keep the image small we use a Docker multi-stage build, which involves starting a new image from scratch (an explicitly empty image) and just copying our binary into it from the builder stage named on line 2.
  9. And we execute it using the CMD instruction. A sketch of the complete Dockerfile follows below.
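Here is a minimal sketch reflecting the steps above (its line numbers will not match the line references used in the list exactly):

# Build stage
FROM golang:alpine AS builder

# Enable Go modules
ENV GO111MODULE=on

# git is required to download dependencies
RUN apk update && apk add --no-cache git

WORKDIR /app

# Cache dependencies: re-downloaded only when go.mod / go.sum change
COPY go.mod go.sum ./
RUN go mod download

# Copy the rest of the source code
COPY . .

# Build a statically linked binary into ./bin/
RUN CGO_ENABLED=0 GOOS=linux go build -o ./bin/main .

# Final stage: start from an empty image and copy just the binary
FROM scratch
COPY --from=builder /app/bin/main /main
CMD ["/main"]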

Now, we have our Docker image ready, but our image needs a Postgres database service, so let’s create a compose file for our deployment.

docker-compose.yml

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, we can use a YAML file (default name of the file is docker-compose.yml) to configure our application’s services. Then, with a single command, you create and start all the services from your configuration.

So, let’s start with our compose file.

  1. Specify the docker-compose version. (Line 1)
  2. Our deployment requires two services, hence our file is divided into two services, one named app and the other db. (Lines 3, 5, and 21)
  3. The app service starts by naming its container events_api (Line 6), followed by using our current directory as the build context, which will eventually use our Dockerfile (Line 7).
  4. We expose port 8080 from our service to our local machine in lines 8 and 9, where the syntax for specifying ports is HOST:CONTAINER.
  5. We set a restart policy for our API on any failure in line 10.
  6. We set up the environment variables PORT and DB_CONN as required by our main.go. Note that instead of the IP of our Postgres instance we use the name of the Postgres service; the service name is already mapped to the service container’s IP by Docker. (Lines 11-13)
  7. Though not strictly necessary, we mount a volume for our API in lines 14-15; it is one of the best practices.
  8. As our API service depends on and is linked with our Postgres service, we specify the connection in lines 16-19.
  9. Moving on to our second service’s configuration: we will be using the default postgres image (it will be automatically downloaded if you don’t have it on your system). (Lines 21-22)
  10. Line 23 names our database container events-db.
  11. Line 24 exposes port 5432 to our host machine so that we can inspect the database.
  12. And lines 26 to 31 set the environment variables for the database configuration. TZ and PGTZ are the default time zone variables for our database, set to UTC. A sketch of the complete compose file follows below.
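Here is a sketch reconstructed from the description above; the database credentials and the volume path are assumptions, and the layout only roughly follows the line numbers referenced in the list:

version: "3"

services:

  app:
    container_name: events_api
    build: .
    ports:
      - "8080:8080"
    restart: on-failure
    environment:
      - PORT=8080
      - DB_CONN=host=db port=5432 user=postgres password=postgres dbname=postgres sslmode=disable
    volumes:
      - api:/usr/src/app/
    depends_on:
      - db
    links:
      - db

  db:
    image: postgres
    container_name: events-db
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
      - TZ=UTC
      - PGTZ=UTC

volumes:
  api: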

Now, our API is completely ready to be built and tested. You can build and run everything with a single command from our root directory, events-api:

docker-compose up

Note: the up sub-command automatically builds the image if it is not already present; to force a rebuild, pass the build flag: docker-compose up --build

Testing the application:

Now that we have everything set up, let’s hit our list-events endpoint, which should return an empty result: http://localhost:8080/api/v1/events.

Not very convincing, right? Let’s try the following requests:
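For example, something along these lines (the exact routes and payload fields depend on how you registered them; these are purely illustrative):

> curl -X POST http://localhost:8080/api/v1/events \
       -d '{"event":{"name":"gophercon","description":"a Go conference","slot":{"start_time":"2020-10-01T10:00:00Z","end_time":"2020-10-01T18:00:00Z"}}}'
> curl http://localhost:8080/api/v1/events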

We also have our Postgres instance running at this stage, so we can run our test file. Run the following command to test our package:

go test -v server.go main.go handlers_test.go -covermode=count  -coverprofile=./bin/coverage.out

Conclusion

This post completes the journey of creating and dockerizing an API system, with a concrete example and step-by-step instructions, using Gorilla Mux and GORM with Postgres, along with Docker to set up and run our service.

Thank you for your time. Feel free to leave a reply or to check the source code on my GitHub.
