How I write (Go) backend systems

Yuri Brito
Wildlife Studios Tech Blog
Mar 4, 2020 · 8 min read

At Wildlife, we have an incredible variety of backend systems with all sorts of complexities. We are also heavy users of Golang, and over time I had the chance to be exposed to brilliant ideas from several of my coworkers. What you’ll read in the following lines is a subset of the ones I like the most regarding project organization and code style.

The title of this post has the word “Go” in parentheses because most of these ideas are language agnostic.

Blank slate

When creating something new, the best thing to do is not to get too concerned about most of the best practices you know. Mainly because the way you arrange your entities, files, packages, and pretty much everything else has a high chance of changing drastically. There’s one good practice that is valid at probably any time: keep it simple, stupid, or KISS.

So I collected several insights and came up with a simple boilerplate and abstractions that I’m comfortable with for pretty much any kind of project. It’s simple, lean, and expandable. Essential concepts are borrowed from Robert C. Martin’s Clean Architecture.

The principle behind these layers is to define a single data flow direction, so no cyclic dependencies are allowed, and to separate concerns, meaning that each layer is responsible for a subset of operations on each known entity. For example, reading and understanding user intents through an HTTP API (delivery layer), and storing or retrieving entity data from a storage device (data access layer).

Each layer can be comprised of one or more Go packages. I tend not to name the packages after the layers to prevent the potential of getting stuck with a massive package later on.

Boilerplate organization

Packages and their layers
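The original post shows a diagram here. Roughly, the layout described in the following sections might look like this (package names follow the sections below; the tree itself is an illustrative reconstruction, not the post’s original figure):

```text
project/
├── cmd/         # main packages: initialization and wiring
├── api/         # delivery layer: HTTP/gRPC servers and handlers
├── usecases/    # business logic layer: feature interfaces
├── models/      # business entities shared across layers
├── storage/     # data access interfaces
│   ├── pg/      # Postgres implementation
│   └── ram/     # in-memory implementation
└── Makefile
```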

You don’t need all of these packages, or maybe you’re better off with even more. It all depends on which stage of development your project is in and on its inherent complexity.

What usually goes in each package

  • API

If you expect to receive external requests, often there’s an HTTP or gRPC server in place listening to them. These go here.

The delivery layer is the closest one to clients. It receives inputs, knows how to format them in the best way possible, and passes them on to the correct handler in the business logic layer (BLL).

In our abstraction, what that means is receiving an almost raw input, parsing it to an entity on models, and calling some function from the BLL.

  • CMD

All main packages are here. Yes, you can, and sometimes ought to, have more than one.

Main packages should handle initialization for executables. They parse all configurations, like command-line flags, and build entities for the delivery, business logic, and data access layers that will be present at runtime.

  • Models

Models hold business objects that are frequently used by other packages. They not only define the entities’ members but also provide methods to ease, or forbid, access to their internal structure and to apply transformations.

Even though models are part of the data access layer, I don’t ever make calls to storage devices from here, and this is why I use a storage package.
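A sketch of a models-layer entity under those rules. The `File` type, its fields, and its methods are illustrative assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"path"
)

// File is a business object shared across layers. The raw size is kept
// private; callers go through methods instead of touching internals.
type File struct {
	Name      string
	Owner     string
	sizeBytes int64
}

// NewFile validates input at construction time, forbidding invalid state.
func NewFile(name, owner string, size int64) (File, error) {
	if name == "" || owner == "" {
		return File{}, errors.New("file needs a name and an owner")
	}
	return File{Name: name, Owner: owner, sizeBytes: size}, nil
}

// Ext is a transformation: derived data lives with the entity.
func (f File) Ext() string { return path.Ext(f.Name) }

// HumanSize eases access to the internal structure.
func (f File) HumanSize() string {
	return fmt.Sprintf("%.1f KiB", float64(f.sizeBytes)/1024)
}

func main() {
	f, _ := NewFile("report.pdf", "u1", 2048)
	fmt.Println(f.Ext(), f.HumanSize()) // .pdf 2.0 KiB
}
```

Note there is no storage call anywhere in the package; persistence is the storage package’s job.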

  • Storage

Define interfaces to run data operations over storage engines. Their implementations go in sub-packages named after the storage engine used.

Examples: storage/pg, storage/s3, storage/ram.
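A sketch of the pattern: one interface, implementations named after the engine. Everything is inlined in one package here so the example is self-contained; the `Files` interface shape is an assumption:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Files is the storage interface; pg, s3, and ram would each implement it
// in their own sub-package.
type Files interface {
	Save(owner, name string, data []byte) error
	List(owner string) ([]string, error)
}

// RAM is what storage/ram would hold: an in-memory implementation,
// handy for tests and local runs.
type RAM struct {
	mu    sync.Mutex
	files map[string]map[string][]byte
}

func NewRAM() *RAM { return &RAM{files: map[string]map[string][]byte{}} }

func (r *RAM) Save(owner, name string, data []byte) error {
	if name == "" {
		return errors.New("empty file name")
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.files[owner] == nil {
		r.files[owner] = map[string][]byte{}
	}
	r.files[owner][name] = data
	return nil
}

func (r *RAM) List(owner string) ([]string, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	var names []string
	for name := range r.files[owner] {
		names = append(names, name)
	}
	return names, nil
}

func main() {
	var s Files = NewRAM() // callers depend on the interface, not on RAM
	_ = s.Save("u1", "a.txt", []byte("hi"))
	names, _ := s.List("u1")
	fmt.Println(names)
}
```

Swapping ram for pg is then a one-line change in the main package, not in the business logic.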

  • Usecases

Define interfaces for desired operations over entities. These are the features of your software.

Interfaces are used both in storage and here for two purposes: mainly mocking in unit tests, and extensibility in case you need a wrapper over an existing use-case. This is an idiomatic way of future-proofing yourself against the unknown without paying a considerable development-time cost.

  • Makefile

The purpose of a Makefile is to serve as an easy and flexible collection of commands for all relevant development life-cycle steps.

What should be covered by a Makefile? Testing, complex tooling operations, building, and packaging.

Conventions I follow:

Namespace commands per goal or tool. Example: build, build/binaries, build/mocks.

Use of environment variables to add flexibility where it’s due, instead of duplicating commands with just minor changes.

Simple and powerful Makefile
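The post embeds a Makefile at this point. A minimal sketch following the two conventions above, namespaced targets and environment variables for flexibility (target and variable names are assumptions), might look like:

```makefile
GO        ?= go
BUILD_DIR ?= ./bin

.PHONY: test build/binaries build/mocks

test:
	$(GO) test ./...

build/binaries:
	$(GO) build -o $(BUILD_DIR)/backend ./cmd/backend

build/mocks:
	$(GO) generate ./...
```

Overriding is then `BUILD_DIR=/tmp/out make build/binaries`, with no duplicated target needed.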

Actually writing code

With all that out of the way, I’ll tell you how I approach writing code. The essential thing productivity-wise is to have the feature itself in place. From the blank file to the moment of grace, you’ll probably make mistakes, regret decisions, rewrite many things many times.

The less you have to regret, the faster you can push a functional feature branch to git. Until this point, do repeat yourself, cut corners, write magic numbers, hard code configurations.

Is it working now? Reflect, refactor, don’t repeat yourself, add more tests, commit, push, smile.

Let’s write a file storage service called “drove,” just because we have to call it something. Our users should be able to upload, list, and download their files. Our client app interacts with an HTTP server.

api/http_server.go
api/http_server_test.go

For now, all code lives in those two files: one with the actual types and implementations, the other for unit testing. There are a couple of HTTP handlers and an in-memory store for users’ files, all still in the api package, which belongs to the delivery layer.

You can imagine that handling the download part would be pretty similar to what we did already, so I’ll leave it for your imagination.

We have two passing tests, but no executable to start an HTTP server to listen for requests. So, the next step is creating a cmd package with an http_server executable that will initialize all entities to achieve this goal.

cmd/backend/main.go

When I need some part of the code to be configurable, I prefer to use flags. To make it work in local and testing environments, it’s just a matter of setting good defaults, which many times end up still being the ones used in production.

The best part is that their keys are typo-proof: a wrong flag means a quick crash, while a wrong environment variable name means you’re running something different than you think you are.

Another way to handle configurations is by using files, whose values can often be overridden by environment variables with keys following a specific pattern. Now you have multiple places to look for defaults: potentially many files, plus environment variables.

Flags are simple and provide the same benefits as the alternatives.
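A sketch of that trade-off using the standard `flag` package. A separate `FlagSet` with `ContinueOnError` makes the crash-on-typo behavior visible in code; the flag names and defaults are illustrative:

```go
package main

import (
	"flag"
	"fmt"
)

// Config holds everything the binary needs at runtime.
type Config struct {
	Addr  string
	PGURL string
}

// parseConfig reads configuration from args. The defaults work locally
// and often survive into production unchanged.
func parseConfig(args []string) (Config, error) {
	fs := flag.NewFlagSet("backend", flag.ContinueOnError)
	var c Config
	fs.StringVar(&c.Addr, "addr", ":8080", "HTTP listen address")
	fs.StringVar(&c.PGURL, "pg-url", "postgres://localhost/drove", "Postgres URL")
	if err := fs.Parse(args); err != nil {
		return Config{}, err // a mistyped flag fails fast, at startup
	}
	return c, nil
}

func main() {
	c, err := parseConfig([]string{"-addr", ":9000"})
	if err != nil {
		fmt.Println("bad flags:", err)
		return
	}
	fmt.Println(c.Addr, c.PGURL)
}
```

A mistyped `-adr` here is a parse error at startup, not a silently ignored environment variable.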

Back to our code: if that was all we needed, we could live with just those two packages, and reorganizing this would be a waste of time.

New requirements

What if users love our client app so much, because it’s so responsive and easy to use, that they want to access files from several other file storage services through “drove”?

We’ll have to support listing, uploading, and downloading files from different sources, each with its own API. It looks like our handlers are going to get more complex. Users also need to be able to connect to these sources and provide us with some authentication mechanism; let’s assume they all work with tokens.

Now let me pause for a minute. For educational purposes, I’ll have to take a leap from the code we have so far, and for the sake of brevity, many important details are ignored. Usually, as long as we’re still following a unidirectional flow of data, I’m happy to add packages and complexity incrementally, as I see fit, instead of many at once.

The current code will be ported to the api-models-storage-usecases format. Then Sources are added as a new interface in usecases. Our Files interface needs to accommodate the integration with many sources, so the current code becomes our file storage, SourceDrove, and Files will be an all-new bridge between user operations and all sources.

models/sources.go
storage/storage.go
storage/connections.go
usecases/usecases.go
usecases/sources.go
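Those gists are not reproduced in this extraction. A compressed sketch of the refactor described above, with Sources as a new abstraction, the original in-memory code becoming SourceDrove, and Files bridging user operations to every connected source (all names and signatures are assumptions layered over the post’s outline):

```go
package main

import "fmt"

// Source (models) describes a connected file storage service.
type Source struct {
	ID    string
	Token string // authentication token provided by the user
}

// SourceClient is implemented once per external service, plus SourceDrove
// for our own storage.
type SourceClient interface {
	List(owner string) ([]string, error)
}

// SourceDrove is the original in-memory implementation, now one source
// among many.
type SourceDrove struct{ files map[string][]string }

func (s SourceDrove) List(owner string) ([]string, error) {
	return s.files[owner], nil
}

// Files is the bridge between user operations and all sources.
type Files struct{ sources map[string]SourceClient }

// List aggregates a user's files across every connected source.
func (f Files) List(owner string) ([]string, error) {
	var all []string
	for id, src := range f.sources {
		names, err := src.List(owner)
		if err != nil {
			return nil, fmt.Errorf("listing source %s: %w", id, err)
		}
		for _, n := range names {
			all = append(all, id+"/"+n)
		}
	}
	return all, nil
}

func main() {
	drove := SourceDrove{files: map[string][]string{"u1": {"a.txt"}}}
	files := Files{sources: map[string]SourceClient{"drove": drove}}
	names, _ := files.List("u1")
	fmt.Println(names)
}
```

Adding a new external service is then one more SourceClient implementation; Files and the HTTP handlers stay untouched.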

Documentation

When your code is done, and you’re ready to push a feature branch, always document your public entities. There should be no ambiguity in how to use them, and if they need any assumptions on the input or output, it should be clear to other engineers.

Back to the code

As you can see, this would be too much to live under the umbrella of a single api package. A project like this could end up having many new features, and at some point, it could even make sense to turn sources into its own package in the BLL.

Since HTTP handlers in the API depend only on usecases.FilesI, changes to this package are minimal.

Where to go from here

There are important things lacking in the code we went through above. Some of them I have the chance to cover below, but we could still go as broad or as deep as we wanted on the art of backend systems engineering, with topics like monitoring, deployments, subtleties of organizational dilemmas, and others.

  • Error Handling

There are two things of significant importance in error handling. One: only handle the error once in a code block. Either pass it along, log it, or crash your application when it’s unrecoverable.

You don’t need to log it and then still return it, because returning should be understood as a signal that the error will be taken care of by the caller. If you decide to crash, be sure that the error is unrecoverable, and do it as close as possible to the cmd package.

Two: write meaningful error messages for your users, and add context to them whenever possible. There is a good read on the official Go blog about errors in Go 1.13.

  • Distributed tracing

At Wildlife, we have many systems that play a part in the player’s journey throughout our games. With this level of dynamism, many things can happen, and it’s not an easy task to pinpoint issues.

Distributed tracing with Jaeger and the OpenTracing libraries has become part of the way we do backend work.

  • Case against ORMs

Being consistent with the KISS principle is not an easy thing. ORMs are usually a magical way of talking to persistent storage. Each has its idiosyncrasies and a slightly different API, different enough in naming and inner workings to make using them problematic.

They add complexity and remove flexibility, which is not the trade-off I look for when acquiring complexity. If I’m using a SQL database like Postgres, I pick barebones, well-tested libraries and go with raw queries. In Golang, that means lib/pq and database/sql.
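A sketch of the raw-query approach with the standard database/sql API. The schema and the `listFiles` signature are assumptions; in a real binary the `*sql.DB` would be opened with the lib/pq driver (`sql.Open("postgres", url)`):

```go
package main

import (
	"database/sql"
	"fmt"
)

// listFilesQuery keeps the SQL explicit and parameterized; nothing hides
// what actually runs against Postgres.
func listFilesQuery(owner string) (string, []interface{}) {
	return `SELECT name FROM files WHERE owner = $1 ORDER BY name`, []interface{}{owner}
}

// listFiles runs the query through the plain database/sql API.
func listFiles(db *sql.DB, owner string) ([]string, error) {
	query, args := listFilesQuery(owner)
	rows, err := db.Query(query, args...)
	if err != nil {
		return nil, fmt.Errorf("listing files: %w", err)
	}
	defer rows.Close()

	var names []string
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return nil, err
		}
		names = append(names, name)
	}
	return names, rows.Err()
}

func main() {
	query, args := listFilesQuery("u1")
	fmt.Println(query, args)
}
```

The cost is a little boilerplate per query; the gain is that the SQL in the code is the SQL the database sees.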

Wrapping up

Code form and behavior evolve over time. If you follow KISS, as we did in this simple example, not being overly concerned with all the clever abstractions you could put in place, and just focusing on achieving whatever goal you set for yourself at each point in time, you’ll get there faster, with code that will more often than not be easier to reason about and evolve.

When starting projects, you’ll hardly need more than one or a couple of packages. When things get intricate, it’s OK to take some time to rearrange them. Literature usually shows the end result of many iterations, often from many years of experience in large codebases. It’s OK to pick your subset of good ideas and ignore many of the rest, revisiting them from time to time.
