Dockerizing my first REST API written with Go
This is my second post about Go, where I share my experience learning the language. This time, we will build a REST API on top of a Postgres database, and we will dockerize it to run inside a Docker container.
In this post, we will start by creating an event management API with seven endpoints, covering the basic operations involved: creating, listing, rescheduling, updating, canceling, and deleting events. Then, we will take care of the configuration necessary to dockerize it.
As with my first post, this one is accessible to beginners, and I'm assuming that you have basic knowledge of SQL or PostgreSQL, REST APIs and, of course, Docker!
If that's not the case, I encourage you to check out these learning resources first:
- The Go Programming Language
- Learn PostgreSQL - Best PostgreSQL Tutorials | Hackr.io
- Docker: Orientation and setup
- What is REST
The REST application:
From this point on, I will assume that you have installed all necessary tools on your computer.
So, let's begin!
Let's start by creating the structure of our project. Set up a new directory for our project, let's name it events-api, and change the working directory.
> mkdir events-api
> cd events-api
Now, we need to initialize a new Go module to manage the dependencies of the project.
You can choose any module path you want, even if it doesn’t use the naming convention “github.com/<username>/<reponame>”.
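For instance, following the conventional module path (the username below is a placeholder, swap in your own):

```shell
> go mod init github.com/<username>/events-api
```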
We are all set to start coding our application. But before doing that, let’s divide our project into small components.
Basically, our API requires some route handlers to handle the HTTP requests, a domain layer that represents our events and a persistence layer that helps us interact with the database.
So, our solution should look like the following by the end of this post:
events-api/
├── bin/
├── errors/
│   └── errors.go
├── handlers/
│   ├── handlers.go
│   └── handlers_test.go
├── objects/
│   ├── event.go
│   └── requests.go
├── store/
│   ├── postgres.go
│   └── store.go
├── Dockerfile
├── docker-compose.yml
├── go.mod
├── go.sum
├── main.go
└── server.go
Now, we have 4 sub-packages, a directory for binaries, and our root package. Obviously, bin/ will be git-ignored.
The errors package will contain all the errors returned while processing any request.
The handlers package is straightforward: it will contain the code for all the API route handlers, which process the requests.
The objects package will define our Event object along with some other objects.
The store package will have our database interaction code. You can see we have 2 files in this package: store.go defines the interface of all the methods required for interacting with the database, or any other storage backend we would like to use, like an in-memory or a Redis implementation. And hence, postgres.go will implement the store interface.
Also, we have 2 files in the root directory: main.go and server.go. The first one will be the entry point of our project and hence will have the main() function, which will invoke the server runner implemented in the other one. The second one will create the server and the routing handlers for the application endpoints.
As you might guess, docker-compose.yml will be used to dockerize our API, as discussed in the next section.
You might not be expecting this, but we're going to start by adding some tools to our arsenal: a few error objects that we will use later in our application. Basically, an error object is a readable message paired with an HTTP status code.
The API Specification
As we discussed in ‘The Goal’ section, the idea is simple, and so is the specification:
……an Event management API system, with seven endpoints……like creating, listing/getting, rescheduling, updating, canceling, and deleting events.
The first thing we're going to do is create the Event object in the objects package. For now, please ignore the Slot fields; we will discuss them in the next sections.
The next thing we're going to do is create the first version of the handler object that implements the IEventHandler interface. Before going through the implementation of IEventHandler, we will need a store layer first. So, let's create an IEventStore interface with a Postgres implementation. Each method in this interface will take the execution context and a request object.
So, let's have a look at the request and response objects that we're going to use in the IEventStore methods. Now, let's define the Event object's methods. As you can see, we have a helper method GenerateUniqueId that creates a time-based, sortable unique id with precision up to a fraction of a nanosecond; we will use this method to set the Id of the event.
Now, let's implement the store interface for the Postgres database. For this, we will be using GORM, an ORM library for Golang, with its Postgres driver, so let's install our first dependencies.
GORM: The fantastic ORM library for Golang
> go get gorm.io/gorm
> go get gorm.io/driver/postgres
Now, in the file below, we will implement the IEventStore interface over a struct pg, which holds a *gorm.DB connection pool. We also have a NewPostgresEventStore constructor that takes the Postgres connection string and sets up the GORM connection pool, with a logger attached that logs all the executed queries. It returns the PostgreSQL implementation as an IEventStore instead of the pg struct: this is the best way to abstract the logic behind the interface, so that only the store is exposed.
Earlier, we saw that the Id field has a gorm tag specifying primaryKey, which instructs GORM that our Id field is the primary key of our Events schema. And the Slot field has a gorm:"embedded" tag, specifying that the StartTime and EndTime fields of the TimeSlot object should be used directly as fields of the Events schema in the database.
pg — Methods
p.db.WithContext(ctx).Take(evt, "id = ?", in.ID).Error
This statement extracts the event with the identifier provided in in.ID and maps it to the provided object evt. If no event matches, it returns a custom-defined error, ErrEventNotFound, defined in the errors package (see the import).
In the List method, we build a custom query using the Limit clause and Find all the matching events, mapped into a slice. The Create method is pretty straightforward: it takes the pre-filled event object and adds its entry to the database, with CreatedOn set to the current time using the database's clock. UpdateDetails updates the general detail fields specified in the Select clause, using the Id field specified in the object, along with the UpdatedOn field, which is set to the current time. Reschedule will update the time slot of the Event object accordingly. Delete works similarly to Update; the only difference is that it removes the entry from the database, using the Id.
Server and Routes
Now, let's set up our main server and register all its routes. Of course, before that, we will have to add our next dependency, gorilla/mux (https://www.gorillatoolkit.org/pkg/mux), a package that implements a request router and dispatcher:
> go get github.com/gorilla/mux
Let's check this. The Run function creates a mux router with the /api/v1/ path prefix, which defines the version of our API, so that if we want to upgrade the version in the future, we can do it directly from here instead of changing it everywhere. We also create a new store using the constructor in store/postgres.go, and a new handler from the constructor in handlers/handlers.go. Then, all the routes for the methods in IEventHandler are registered in this function.
main.go defines the arguments and environment variables required for our project: the Postgres connection string conn and the port over which the server will be running, port. These are eventually passed to the runner Run(args Args) error in server.go.
Now, let's get back to the handler implementation. Each method of IEventHandler performs a set of simple operations involving at most 4 to 5 steps:
- Extract data from the request body or query parameters.
- Validate the request object.
- Check whether the event exists, in the case of an update or a delete.
- Make the store call corresponding to the method.
- And at last, return the response.
In order to achieve this, we're going to need some helper functions to validate requests and write responses, thus completing our API implementation.
We will be using Golang's default HTTP testing package, httptest. For the sake of readability, and considering that most of the tests are similar, we're going to focus on the most important ones. Feel free to look at the complete file in my Github repository.
As you can see, we have a set of test cases covering different possibilities, a setup function that returns our new http.Request, and we use httptest.NewRecorder() to execute it against our API code. You can try running the tests yourself with an active Postgres instance.
Before we start, let's first answer: why Docker, instead of setting up Postgres and Golang on our machine and testing our application directly? Well, the question itself contains the answer: for anyone to use or try our API, they would have to set up their machine accordingly, which might lead to configuration or setup issues. To avoid such problems, Docker comes into play.
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package.
Empowering App Development for Developers | Docker
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. So, let's dive into the Dockerfile for our API deployment.
- Each Dockerfile starts with some base image; as we need Golang for our API, we start with the golang:alpine image and name it with the alias builder.
- To set an environment variable in a Dockerfile, we use the ENV name=value syntax; here, we enable Go modules in our image. (Line: 4–5)
- The golang:alpine image doesn't come with git installed, and we need git to download our dependencies, so we install it with RUN apk update && apk add --no-cache git (the RUN command executes any terminal command inside our image). (Line: 7–8)
- We change the current working directory to the /app directory in the image. (Line: 10–11)
- To avoid downloading the dependencies every time we build our image, we cache them by first copying the go.mod and go.sum files and downloading the modules; the cached layer is reused on every build as long as the dependencies haven't changed. (Line 13–24)
- And now, we copy our complete source code. (Line 26–27)
- We create the binary for our API using the go build command. Note: we disable the CGO_ENABLED flag for cross-system compilation, which is also a common best practice. The binary is created in the ./bin/ directory as the main file. (Line 29–33)
- To keep the image size small, we use a Docker multi-stage build, which involves starting a new image from scratch (an explicitly empty image) and just copying our binary into it from the builder image tag specified on line 2.
- And finally, we execute the binary using the ENTRYPOINT instruction.
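Putting those steps together, the Dockerfile might look like the following; exact line numbers in the walkthrough above may differ slightly:

```dockerfile
# Stage 1: build the binary on top of the Go toolchain image.
FROM golang:alpine AS builder

# Enable Go modules.
ENV GO111MODULE=on

# git is needed to download dependencies and is absent from alpine.
RUN apk update && apk add --no-cache git

WORKDIR /app

# Cache dependencies: re-download only when go.mod/go.sum change.
COPY go.mod .
COPY go.sum .
RUN go mod download

# Copy the full source code.
COPY . .

# Build a static binary (CGO disabled for cross-system compilation).
RUN CGO_ENABLED=0 go build -o ./bin/main .

# Stage 2: start from an explicitly empty image and copy the binary in.
FROM scratch
COPY --from=builder /app/bin/main /main
ENTRYPOINT ["/main"]
```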
Now, we have our Docker image ready, but our image needs the Postgres database service, so let's create a compose file for our deployment.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, we use a YAML file (named docker-compose.yml by default) to configure our application's services. Then, with a single command, we create and start all the services from our configuration.
So, let’s start with our compose file.
- Specify the docker-compose version. (Line 1)
- Our deployment requires two services, and hence, our file is divided into two services: one named app and the other db. (Lines 3, 5, and 21)
- The app service starts by naming its container events_api (Line 6), followed by using our current directory as the build context, which will eventually use our Dockerfile.
- Next, we expose port 8080 from our service to our local machine in lines 8 and 9, where the syntax for specifying ports is "host:container".
- We set a restart policy for our API on any failure in line 10.
- We set up the environment variable DB_CONN, as required by our main.go. Note that instead of the IP of our Postgres instance, we use the name of the Postgres service; the service name is already mapped to the service container's IP in Docker. (Lines 11–13)
- Though not strictly necessary, we link a volume for our API in lines 14 and 15; it is one of the best practices.
- As our API service depends on and is linked with our Postgres service, we specify the connection in lines 16–19.
- Moving on to our second service's configuration: we use the default postgres image (it will be downloaded automatically if you don't have it on your system). (Lines 21–22)
- Line 23 specifies our database container's name.
- Line 24 exposes port 5432 to our host machine so that we can inspect the database.
- Lines 26 to 31 set the environment variables for the database configuration; TZ and PGTZ are the default time-zone variables for our database, set to UTC.
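A docker-compose.yml matching that walkthrough might look like this; the Postgres credentials, database name, and volume name are placeholder assumptions:

```yaml
version: "3"

services:
  app:
    container_name: events_api
    build: .
    ports:
      - "8080:8080"
    restart: on-failure
    environment:
      # "db" (the service name) resolves to the Postgres container's IP.
      DB_CONN: "host=db port=5432 user=postgres password=postgres dbname=events sslmode=disable"
    volumes:
      - api:/usr/src/app/
    depends_on:
      - db
    links:
      - db

  db:
    image: postgres
    container_name: events_db
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: events
      TZ: "UTC"
      PGTZ: "UTC"

volumes:
  api:
```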
Now, our API is completely ready to be built and tested. You can build and run everything using the following command in our root directory. Note that the up sub-command will automatically build the image if it is not already present, even if you don't specify the --build flag:
> docker-compose up --build
Testing the application:
Now that we have everything set up, let's hit our list events endpoint: http://localhost:8080/api/v1/events. It returns an empty result. Not very convincing, right? Let's try the following requests:
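For example, using curl (the exact routes and request bodies here are assumptions based on the handler sketch above):

```shell
# Create an event:
> curl -X POST http://localhost:8080/api/v1/events \
    -H "Content-Type: application/json" \
    -d '{"name":"Go meetup","description":"Monthly community meetup","slot":{"start_time":"2021-06-01T18:00:00Z","end_time":"2021-06-01T20:00:00Z"}}'

# List events again; this time the response should not be empty:
> curl http://localhost:8080/api/v1/events
```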
Also, our Postgres instance is running at this stage, so we can run our test file as well. Run the following command to test our package:
go test -v server.go main.go handlers_test.go -covermode=count -coverprofile=./bin/coverage.out
This post completes the journey of creating and dockerizing an API system, with a concrete example and complete step-by-step instructions, using Gorilla Mux and GORM with Postgres, along with Docker to set up and run our service.
Thank you for your time. Feel free to leave a reply or to check out the source code on my Github.