Go Best Practices: How to code comfortably

Kenta Kudo
Thirdfort
11 min read · Mar 28, 2022


In this post, I’d like to introduce three Go best practices based on my experience of writing Go over the last 3–4 years.

This is a quick peek at what I’m going to write about:

  • What’s the “Best” Practice?
  • Practice 1: Package Layout
  • Practice 2: Get Familiar with context.Context
  • Practice 3: Know Table Driven Test
  • Try Them Out!

Before jumping into the main topic, perhaps it’s useful to clarify the criteria I used to pick the practices I’m going to introduce.

What’s the “Best” Practice?

There are a lot of practices: ones you come up with on your own, find on the internet, or bring from other languages. But it’s not always easy to say which one is better than another because of the subjective nature of the question. The meaning of “best” differs from one person to another, and it also depends on context; for example, the best practice for a web application might not be the best practice for middleware.

To write this post, I looked at Go idioms and practices with one question in mind: “how much did this make me feel comfortable writing Go?” After all, the time I ask “what are this language’s best practices?” is when I’m new to a language and not yet fully comfortable writing it, like when I started learning Go 4 years ago.

Of course, there are many more idioms and practices that I do NOT introduce here but are still very useful to know when writing Go; these three, however, were the most impactful ones in making me confident in Go.

That’s how I chose the “best” practices. Now it’s time to get into them.

Practice 1: Package Layout

One of the most surprising things I found when I started learning Go was that there’s no de facto standard web framework for Go the way Laravel is for PHP and Express is for Node. This means it’s completely up to you how to organise your code and packages when writing web apps. While having freedom in how to organise code is a good thing in general, without guidelines it’s easy to get lost on how to go about it.

Also, this is one of the hardest topics to reach agreement on; the meaning of “best” can easily change depending on the business logic the programme deals with or the size and maturity of the code base. Even for the same code base, the current package organisation might not be the best in 6 months’ time.

While there’s no single practice that rules them all, I’m going to introduce some guidelines in the hope that they make the decision-making process easier.

Guideline 1: Start from Flat Layout

Unless you know the code base is going to be big and will need some kind of package layout upfront, it’s good to start with a flat layout, which simply places all the Go files in the root folder.

This is a file structure from the github.com/mitchellh/go-ps package.

$ tree
.
├── LICENSE.md
├── README.md
├── Vagrantfile
├── go.mod
├── process.go
├── process_darwin.go
├── process_darwin_test.go
├── process_freebsd.go
├── process_linux.go
├── process_solaris.go
├── process_test.go
├── process_unix.go
├── process_unix_test.go
└── process_windows.go

It has only one domain concern: listing the currently running processes. For packages like this, a package layout is not even needed; a flat structure fits best.

But as the code base grows, the root folder is going to get busy, and you’ll start feeling that the flat structure no longer works. That’s when it’s time to move some files into their own packages.

Guideline 2: Create Sub Packages

There are mainly three patterns as far as I know: (a) directly in the root, (b) under the pkg folder, and (c) under the internal folder.

(a) Directly in the root folder
Create a folder with the package name in the root directory and move all the related files under that folder. The advantages of this are (i) no deep hierarchy or nested directories, and (ii) no clutter in the import path; I’ll give a bit more detail on this shortly. A disadvantage is that the root folder gets a bit messy, especially when there are other folders like scripts, bin, and docs.

(b) Under the pkg folder
Create a directory named pkg and put sub packages under it. The good points are (i) the name clearly suggests the directory contains sub packages, and (ii) you can keep the top level clean. The downside is that you need to have pkg in your import path, which adds no information, because it’s evident that you are importing a package. I personally don’t care much about having /pkg in the import path, but it’s not ideal.

However, there’s a bigger issue with this pattern and also with the previous one: it’s possible to access sub packages from outside the repository.

That may be acceptable for private repositories, as unintended use would be noticed during a review process, but it’s important to be aware of what’s publicly available, especially in the context of open source where backward compatibility matters. Once you make an API public, you can’t easily change it.

There’s a third option to handle this situation.

(c) Under the internal folder
If /internal is in the import path, Go handles the package a bit differently: packages placed under an /internal folder can only be imported by packages that share the path prefix before /internal.

For example, if the package path is /a/b/c/internal/d/e/f, only packages under the /a/b/c directory can access the packages under the /internal directory. That means if you put internal in the root directory, only packages inside that repository can use the sub packages, and no other repository can access them. This is useful if you want to have sub packages while keeping their APIs internal.
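
For example, with a hypothetical layout like the one below, the storage package can be imported from note.go and anything else inside this module, but not from other repositories:

$ tree
.
├── go.mod
├── internal
│   └── storage
│       └── storage.go
└── note.go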

Guideline 3: Move main Package under cmd Directory

It’s also a common practice to put the main package under the cmd/<command name> directory.

Let’s say we have an API server written in Go that manages personal notes; with this pattern, it would look like this:

$ tree
.
├── cmd
│   └── personal-note-api
│       └── main.go
...
├── Makefile
├── go.mod
└── go.sum

The cases where you might consider using this pattern are:

  • You may want to have multiple binaries in one repository; you can create as many folders under cmd as you want.
  • Sometimes it’s necessary to move the main package somewhere else to avoid a circular dependency.

Guideline 4: Organise package by its responsibility

We’ve looked at when and how to make sub packages, but a big question remains: how should they be grouped? I think this is the trickiest part, and it takes some time to get used to, mostly because it’s hugely affected by the application’s domain concerns and functionality. A deep understanding of what the code does is necessary to make the decision.

The most common advice for this is to organise them by responsibility.

For those who are familiar with MVC frameworks, it may feel natural to have packages like “model”, “controller”, “service”, etc., but such names are advised against in Go.

Instead, it’s recommended to use more responsibility/domain-oriented package names like “user” or “transaction”.

Guideline 5: Group sub packages by dependency

Naming packages based on the dependency they wrap, for example “redis”, “kafka” or “pubsub”, gives a clear abstraction in some situations.

Imagine you have an interface like this:

Definition of UserService interface
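
As a minimal sketch (the app package name, the User type and its fields are assumptions made for illustration), such an interface might look like this:

// Package app is a hypothetical root package for the service.
package app

import "context"

// User is the domain entity the service deals with.
type User struct {
    ID    string
    Email string
}

// UserService abstracts how users are looked up, so the backing
// store can be swapped without touching the consumer.
type UserService interface {
    GetUser(ctx context.Context, id string) (User, error)
}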

And you have a service in the redis sub package that implements it like so:

Implementation of UserService with Redis backend
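
A sketch of what that implementation could look like, assuming the go-redis client and a hypothetical module path; the JSON encoding and key scheme are illustrative choices, not prescribed here:

// Package redis implements app.UserService on top of Redis.
package redis

import (
    "context"
    "encoding/json"

    "github.com/go-redis/redis/v8" // assumed client library

    "example.com/personal-note-api/app" // hypothetical module path
)

// UserService stores each user as a JSON blob keyed by its ID.
type UserService struct {
    Client *redis.Client
}

// GetUser loads and decodes the user stored under "user:<id>".
func (s *UserService) GetUser(ctx context.Context, id string) (app.User, error) {
    var u app.User
    raw, err := s.Client.Get(ctx, "user:"+id).Result()
    if err != nil {
        return u, err
    }
    err = json.Unmarshal([]byte(raw), &u)
    return u, err
}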

If the consumer (presumably the main function) depends only on the interface, the implementation can easily be replaced with an alternative such as postgres or inmemory.

Additional tip 1: Give the package a short name

A couple of points on naming packages:

  • Short but representative name
  • Use one word
  • Use abbreviations, but don’t make them cryptic

What if you want to use multiple words (e.g. billing_account)? The options I could come up with are:

  1. have a nested package for each word: billing/account,
  2. simply name it account if there’s no confusion, or
  3. use abbreviation: billacc.

Additional tip 2: Avoid repetition

This is about how to name the contents (structs/interfaces/functions) inside a package. Go’s advice is to try to avoid repetition when consuming the package. For example, if we have a package with contents like this:

A package with repetitive API
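
A minimal sketch of such a package (the User fields and the function body are placeholders):

// Package user handles user lookup.
package user

import "context"

// User is a placeholder domain type.
type User struct {
    ID   string
    Name string
}

// GetUser returns the user with the given ID. Note how "User"
// repeats the package name at the call site: user.GetUser(...).
func GetUser(ctx context.Context, id string) (User, error) {
    // Storage access omitted in this sketch.
    return User{ID: id}, nil
}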

A consumer of this package is going to call this function like this: user.GetUser(ctx, u.ID)

The word user appears twice in the function call. Even if we remove the word user from the function name, making it user.Get, it’s still clear that it returns a user, as that’s indicated by the package name. Go prefers the simpler name.

I hope these guidelines are helpful when making decisions on package layout.

Let’s move on to the second practice about context.

Practice 2: Get familiar with context.Context

95% of the time, the only thing you need to do is pass the context provided by the caller on to the subroutine calls that take a context as an argument.

A typical use-case of context.Context
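
For instance, a data access method in a hypothetical note package might look like this; the Store type, table name and query are made up for illustration:

package note

import (
    "context"
    "database/sql"
)

// Store wraps database access for personal notes.
type Store struct {
    db *sql.DB
}

// CountNotes simply forwards the caller's context to the query;
// it neither creates nor modifies a context itself.
func (s *Store) CountNotes(ctx context.Context, owner string) (int, error) {
    var n int
    err := s.db.QueryRowContext(ctx,
        "SELECT COUNT(*) FROM notes WHERE owner = $1", owner).Scan(&n)
    return n, err
}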

Still, because context is used everywhere in Go programmes, it’s very important to understand (1) when it’s needed, and (2) how to use it.

Three Usages of context.Context

First and foremost, it’s important to be aware that a context can serve three different purposes:

  1. Send cancel signal
  2. Set timeout/deadline
  3. Store/retrieve request associated values

1. Send Cancel Signal

context.Context provides a mechanism to send a signal to tell processes that receive the context to stop.

e.g. Graceful Shutdown

When a server receives a shutdown signal, it needs to stop “gracefully”: if it’s in the middle of handling a request, it needs to finish serving it before shutting down. The context package provides the context.WithCancel API, which returns a new cancellable context and a function to cancel it. If you call the cancel function, the signal is sent to every process that receives the context.

In the example below, the main function calls context.WithCancel and passes the resulting context to the server when spinning it up. cancel is called when the programme receives an OS signal.

main function of server programme with cancel context
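
A sketch of such a main function; NewServer and Serve belong to the pseudo server shown in the next sketch:

package main

import (
    "context"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())

    // Call cancel when the programme receives an OS signal.
    go func() {
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
        <-sig
        cancel()
    }()

    // The server keeps running until the context is cancelled.
    NewServer().Serve(ctx)
}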

Let’s have a look at the “pseudo” server implementation; it doesn’t actually do anything useful, but it has enough behaviour for the sake of demonstration.

“pseudo” server process
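
A sketch of that pseudo server; the Println and Sleep calls stand in for real clean-up and request handling:

package main

import (
    "context"
    "fmt"
    "time"
)

// Server is a stand-in that pretends to handle requests.
type Server struct{}

func NewServer() *Server { return &Server{} }

// Serve loops forever; on every iteration it first checks whether
// the context has been cancelled before handling the next request.
func (s *Server) Serve(ctx context.Context) {
    for {
        select {
        case <-ctx.Done():
            // Cancel signal received: clean up and return.
            fmt.Println("shutting down gracefully")
            return
        default:
            // No signal yet: handle one (pretend) request,
            // then come back round and check the context again.
            s.handleRequest()
        }
    }
}

func (s *Server) handleRequest() {
    time.Sleep(100 * time.Millisecond)
}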

It first goes into an infinite loop. Inside the loop, it checks whether the context has already been cancelled using select on the ctx.Done() channel. If it has, it cleans up and returns. If not, it handles a request. Once the request is handled, it comes back round the loop and checks the context again.

The point here is that by using context.Context you allow processes to return whenever they are ready.

2. Set Timeout/Deadline

The second usage is to set a timeout on an operation. Imagine you’re sending an HTTP request to a third party. If the request takes longer than expected for some reason, such as a network disruption, you may want to cancel it to prevent the entire process from hanging. With context.WithTimeout you can set a timeout for these cases.

main function of HTTP request with timeout context
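
A sketch of such a main function; the Client type and its SendRequest method are stand-ins, fleshed out in the sketch below:

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    // Give the outgoing request at most two seconds in total.
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    resp, err := (&Client{}).SendRequest(ctx)
    if err != nil {
        fmt.Println("request failed:", err) // e.g. context deadline exceeded
        return
    }
    fmt.Println("response:", resp)
}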

In the SendRequest method, after sending the request in a different goroutine, it waits on both the ctx.Done() channel and the response channel. When the timeout happens, you get a signal from the ctx.Done() channel, so you can exit the function without waiting for the response.

“pseudo” HTTP request
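
A sketch of that pseudo request; the five-second Sleep stands in for a slow third party, so the two-second timeout above always wins:

package main

import (
    "context"
    "time"
)

// Client is a stand-in for an HTTP client talking to a third party.
type Client struct{}

// SendRequest fires the request in its own goroutine, then waits on
// whichever comes first: the response or the context being done.
func (c *Client) SendRequest(ctx context.Context) (string, error) {
    respCh := make(chan string, 1)

    go func() {
        // Pretend the third party is slow to respond.
        time.Sleep(5 * time.Second)
        respCh <- "ok"
    }()

    select {
    case <-ctx.Done():
        // Timeout or cancellation: exit without waiting for the response.
        return "", ctx.Err()
    case resp := <-respCh:
        return resp, nil
    }
}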

The context package also has context.WithDeadline(); the difference is that while context.WithTimeout takes a time.Duration, context.WithDeadline() takes a time.Time.
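
A small illustration of the equivalence; both contexts below end up with effectively the same deadline:

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    parent := context.Background()

    ctxA, cancelA := context.WithTimeout(parent, 2*time.Second)
    defer cancelA()

    ctxB, cancelB := context.WithDeadline(parent, time.Now().Add(2*time.Second))
    defer cancelB()

    deadlineA, _ := ctxA.Deadline()
    deadlineB, _ := ctxB.Deadline()
    fmt.Println(deadlineA, deadlineB)
}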

3. Store/Retrieve Request Associated Values

The last usage of context is to store and retrieve request-associated values. For example, when the server receives a request, you may want all log lines produced while handling it to carry request information such as the path and method. In that case you can create a logger, attach the request-associated information to it, and store it in the context using context.WithValue.

A request handler that attaches a logger to the context
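
A sketch of such a handler, assuming the zerolog library (which matches the JSON log line shown further down); the loggerKey type and the handleTodo and listTodos names are made up for illustration:

package api

import (
    "context"
    "net/http"
    "os"

    "github.com/rs/zerolog" // assumed logging library
)

// loggerKey is an unexported key type, so no other package can collide with it.
type loggerKey struct{}

// handleTodo builds a request-scoped logger and stores it in the
// context before calling the next layer down.
func handleTodo(w http.ResponseWriter, r *http.Request) {
    logger := zerolog.New(os.Stdout).With().
        Timestamp().
        Str("method", r.Method).
        Str("path", r.URL.Path).
        Logger()

    ctx := context.WithValue(r.Context(), loggerKey{}, logger)

    listTodos(ctx) // database access layer, shown in the next sketch
    w.WriteHeader(http.StatusOK)
}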

Somewhere down the line, you can take the logger back out of the context using the same key. For example, if you want to leave a log line in the database access layer, you can do so like this:

A subroutine that takes the logger out of the context
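
A sketch of that database access layer, continuing the assumptions above:

package api

import (
    "context"

    "github.com/rs/zerolog"
)

// listTodos is a stand-in for the database access layer.
func listTodos(ctx context.Context) {
    // Take the request-scoped logger back out, using the same key.
    logger, ok := ctx.Value(loggerKey{}).(zerolog.Logger)
    if !ok {
        // Nothing was stored: fall back to a no-op logger.
        logger = zerolog.Nop()
    }

    logger.Debug().Msg("accessing database")

    // ... run the actual query here.
}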

This produces the following log line, which contains the request method and path.

{"level":"debug","method":"GET","path":"/v1/todo","time":"2022-01-18T15:44:53Z","message":"accessing database"}

Like I said, the situations where you need to use these context APIs don’t come up very often, but it’s really important to understand what a context does, so that you know when you actually need to pay attention to it.

Let’s move on to the last practice.

Practice 3: Table Driven Test

Table driven testing is a technique for organising tests that focuses more on the input data/mocks/stubs and the expected output than on the assertions, which can sometimes get repetitive.

The reason I chose this is not only that it’s a commonly used practice, but also that it made writing tests much more fun for me. Being motivated to write tests goes a long way towards a happy coding life, not to mention towards reliable code.

Let’s take a look at an example.

Let’s say we have a restaurant data type, and it has a method that returns true if it’s open at a given time.

Test target struct & method
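
The exact type isn’t important; for the tests below, assume a sketch along these lines (the minutes-from-midnight representation is just one possible choice):

// Package restaurant holds the type under test.
package restaurant

import "time"

// Restaurant is open between OpensAt and ClosesAt,
// both expressed as minutes from midnight.
type Restaurant struct {
    Name     string
    OpensAt  int // e.g. 10 * 60 for 10:00
    ClosesAt int // e.g. 22 * 60 for 22:00
}

// IsOpenAt reports whether the restaurant is open at the given time.
// The opening time is inclusive and the closing time is exclusive.
func (r Restaurant) IsOpenAt(t time.Time) bool {
    m := t.Hour()*60 + t.Minute()
    return m >= r.OpensAt && m < r.ClosesAt
}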

Let’s write some tests for this method.

If we visit the restaurant at the time it opens, we expect it to be open.

Test for opening time
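
A sketch of that first test, using the Restaurant sketch above; the restaurant name and hours are arbitrary:

package restaurant

import (
    "testing"
    "time"
)

func TestIsOpenAt_OpeningTime(t *testing.T) {
    r := Restaurant{Name: "Gopher Diner", OpensAt: 10 * 60, ClosesAt: 22 * 60}

    visit := time.Date(2022, 3, 28, 10, 0, 0, 0, time.UTC) // exactly 10:00

    if !r.IsOpenAt(visit) {
        t.Errorf("expected the restaurant to be open at %v", visit)
    }
}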

So far so good. Let me add more tests for boundary conditions:

Test for boundary conditions
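
Sketches of those boundary tests, following the same pattern:

package restaurant

import (
    "testing"
    "time"
)

func TestIsOpenAt_JustBeforeOpening(t *testing.T) {
    r := Restaurant{Name: "Gopher Diner", OpensAt: 10 * 60, ClosesAt: 22 * 60}

    visit := time.Date(2022, 3, 28, 9, 59, 0, 0, time.UTC) // one minute early

    if r.IsOpenAt(visit) {
        t.Errorf("expected the restaurant to be closed at %v", visit)
    }
}

func TestIsOpenAt_ClosingTime(t *testing.T) {
    r := Restaurant{Name: "Gopher Diner", OpensAt: 10 * 60, ClosesAt: 22 * 60}

    visit := time.Date(2022, 3, 28, 22, 0, 0, 0, time.UTC) // exactly 22:00

    if r.IsOpenAt(visit) {
        t.Errorf("expected the restaurant to be closed at %v", visit)
    }
}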

You may have noticed that the differences between these tests are very small, and I see this as a typical use case for a table driven test.

An Introduction to Table Driven Test

Now let’s see how it looks if the tests are written in the table driven style.

Table driven test
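
A sketch of the same three scenarios rewritten in the table driven style:

package restaurant

import (
    "testing"
    "time"
)

func TestIsOpenAt(t *testing.T) {
    // Test target, shared by all cases.
    r := Restaurant{Name: "Gopher Diner", OpensAt: 10 * 60, ClosesAt: 22 * 60}

    // Each case holds the input and the expected output; the map key is the test name.
    cases := map[string]struct {
        visit time.Time
        want  bool
    }{
        "open at opening time": {
            visit: time.Date(2022, 3, 28, 10, 0, 0, 0, time.UTC),
            want:  true,
        },
        "closed just before opening": {
            visit: time.Date(2022, 3, 28, 9, 59, 0, 0, time.UTC),
            want:  false,
        },
        "closed at closing time": {
            visit: time.Date(2022, 3, 28, 22, 0, 0, 0, time.UTC),
            want:  false,
        },
    }

    for name, tc := range cases {
        t.Run(name, func(t *testing.T) {
            got := r.IsOpenAt(tc.visit)
            if got != tc.want {
                t.Errorf("IsOpenAt(%v) = %v, want %v", tc.visit, got, tc.want)
            }
        })
    }
}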

Firstly, I declared the test target. Depending on the situation, it could instead live inside each test case.

Next, I defined the test cases. I used a map here so I can use the test name as the map key. The test case struct contains the input and expected output for each scenario.

Lastly, I looped over the test cases and ran a sub test for each one. The assertions are the same as in the previous examples, but here I’m taking the input and expected values from the test case struct.

Tests written in the table driven style are compact and less repetitive, and if you want to add more tests, you just need to add a new test case; no extra assertions are needed.

Try them out!

On one hand, it’s important to know the idioms and practices shared in the community. The Go community is big enough that you can find them easily: blog posts, talks, YouTube videos and so on. Also, when it comes to Go, many practices come from the standard library itself; table driven tests are a good example. Go is open source, so it’s a good idea to read the standard package code.

On the other hand, just knowing them doesn’t make you comfortable. By far the best way to learn best practices is to use them in the real code base you are working on now and see how well they fit; that’s how I actually learned these Go practices. So write more Go, and don’t be afraid of making mistakes.

I hope this post helps you have a happy Go coding life. Enjoy!

At Thirdfort, we’re on a mission to make moving house easier for everyone involved. We use tech, design, and data to make working with clients secure and friction-free for lawyers, property, and finance professionals. Since launching in 2017, we’ve grown from a single London office to an international team with sites in Manchester and Sri Lanka. We’re backed by leading investors like Alex Chesterman (Founder of Zoopla and Cazoo) and HM Land Registry.

Want to help shape Thirdfort’s story in 2022? We’d love to hear from you. Find your next role on our Careers Hub or reach out to careers@thirdfort.com.
