When to use buffered channels in Go

Sergei Loginov
4 min read · Sep 7, 2020


Channels, together with goroutines, are the core of Go’s concurrency mechanism, which is based on CSP (Communicating Sequential Processes). While these mechanisms offer many convenient ways to handle concurrency, they can also be used sub-optimally and cause problems for developers.

If the channel is unbuffered, the sender blocks until the receiver has received the value. If the channel has a buffer, the sender blocks only until the value has been copied to the buffer; if the buffer is full, this means waiting until some receiver has retrieved a value. So it can be very tempting to use a buffered channel with a false sense of security that you won’t get a deadlock.
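A minimal illustration of the difference (the channel names and sizes are just for demonstration):

```go
package main

import "fmt"

func main() {
	// Unbuffered: a send blocks until another goroutine receives the value.
	unbuf := make(chan int)
	go func() { unbuf <- 1 }() // this send waits for the receive below
	fmt.Println(<-unbuf)

	// Buffered: a send blocks only when the buffer is full.
	buf := make(chan int, 2)
	buf <- 1 // does not block, the buffer has room
	buf <- 2 // does not block, the buffer is now full
	fmt.Println(<-buf, <-buf)
}
```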

When not to use buffered channels, even if you really want to.

First, let’s consider the following example:
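A minimal sketch of the kind of code we are talking about (the Message type, the message source, and the timings are assumptions for illustration):

```go
package main

import (
	"fmt"
	"time"
)

type Message struct {
	ID int
}

func main() {
	ch := make(chan Message)

	// Pretend messages arrive from an external source every 10ms.
	go func() {
		for i := 0; ; i++ {
			msg := Message{ID: i}
			// A new goroutine per message tries to push it into ch.
			go func() {
				ch <- msg
			}()
			time.Sleep(10 * time.Millisecond)
		}
	}()

	// Main loop: read the next message and pretend to do slow work.
	for msg := range ch {
		fmt.Println("processing message", msg.ID)
		time.Sleep(100 * time.Millisecond) // slow operation
	}
}
```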

In this example we pretend to receive messages from an external source. For each new message, a new goroutine tries to push the message into our ch channel.

In the main loop, we read each new message and simulate a slow operation with time.Sleep().

What is wrong with this approach? Written like this, it is easy to see that the number of goroutines in the application will keep rising. Of course, in the real world messages won’t arrive so regularly, so the problem would not be as obvious. The programmer might hope that the number of messages per second will eventually drop and the main loop will be able to work through the backlog.

Let’s say the team that owns such code finally notices that something is wrong with their application. It could be a constantly rising number of goroutines on a metrics graph, steadily growing CPU usage and memory allocations, or the fact that they lose many messages whenever they restart the app.

So one of the developers is tasked with fixing the problem, and it’s very tempting to do something like this:
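The only change to the example above is giving the channel a large buffer (the size here is arbitrary):

```go
// The one-line "fix": now sends into ch stop blocking...
// at least until the buffer fills up.
ch := make(chan Message, 10000)
```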

With just one line, the problem seems to be fixed. The application works happily for some time, but eventually, once it reaches a certain uptime, the problem starts to appear again. This time, however, it’s not so easy for the developer to reproduce it on their local machine or even in a test environment.

Do you see why using a buffered channel was not the best idea in this case? The only way to know how many items are in the buffer is len(channel), but since we are working in a concurrent environment this number can change very quickly. That means we can’t really know the state of the buffer, and even if we did, we couldn’t change it, since a channel’s buffer capacity can’t be extended.

So what would be a better solution? We make the receiver non-blocking.
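One possible sketch (again, the Message type and timings are assumptions):

```go
package main

import (
	"fmt"
	"time"
)

type Message struct {
	ID int
}

func main() {
	ch := make(chan Message)

	// Messages still arrive from an external source.
	go func() {
		for i := 0; ; i++ {
			ch <- Message{ID: i}
			time.Sleep(10 * time.Millisecond)
		}
	}()

	// The receiver never blocks on slow work: it hands each message
	// off to its own goroutine and immediately reads the next one.
	for msg := range ch {
		go func(m Message) {
			fmt.Println("processing message", m.ID)
			time.Sleep(100 * time.Millisecond) // slow operation
		}(msg)
	}
}
```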

Now, instead of doing all the work in the main loop, we hand the load off to multiple goroutines. The channel is never blocked.

We could also add graceful shutdown to the application so we don’t lose work that has already started. It’s also good practice to add a timeout to slow operations so we don’t wait for them forever.
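A possible sketch of both ideas, using context and a sync.WaitGroup (the use of signal.NotifyContext, the Ctrl+C handling, and the timeout value are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"sync"
	"time"
)

func main() {
	// Cancel the context on Ctrl+C so the app can shut down gracefully.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	ch := make(chan int)

	// The producer stops and closes the channel when shutdown begins.
	go func() {
		defer close(ch)
		for i := 0; ; i++ {
			select {
			case ch <- i:
				time.Sleep(10 * time.Millisecond)
			case <-ctx.Done():
				return
			}
		}
	}()

	var wg sync.WaitGroup
	for msg := range ch {
		wg.Add(1)
		go func(m int) {
			defer wg.Done()
			// Give each slow operation its own timeout, independent of the
			// shutdown signal, so work that has started can still finish.
			opCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			defer cancel()
			select {
			case <-time.After(100 * time.Millisecond): // the slow work itself
				fmt.Println("finished", m)
			case <-opCtx.Done():
				fmt.Println("abandoned", m, opCtx.Err())
			}
		}(msg)
	}

	// Wait for the work that has already started before exiting.
	wg.Wait()
}
```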

If you are worried that there will be too many slow-operation goroutines, you can use a worker pool.
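For example, a sketch of a worker pool (the number of workers and the workload are assumptions):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	const numWorkers = 8 // assumption: tune this for your workload
	ch := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Each worker drains the channel until it is closed.
			for msg := range ch {
				time.Sleep(100 * time.Millisecond) // slow operation
				fmt.Printf("worker %d handled message %d\n", id, msg)
			}
		}(w)
	}

	for i := 0; i < 100; i++ {
		ch <- i
	}
	close(ch)
	wg.Wait()
}
```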

Of course, in the real world, pipelines can consist of more than two stages, and slow operations can be less obvious.

When to use buffered channels

A good rule of thumb with buffered channels is: “Only add a buffer to a channel if you know what buffer size you should use.” If you are unsure whether the buffer should be 100 or 1000, you are probably better off with an unbuffered channel that blocks fast, so you can quickly find and fix the problem.

But sometimes you know exactly what buffer size you need. For example, when you are testing and have exactly X test cases, you can create a channel with buffer X, push the cases into it, and close the channel.
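A sketch of that idea (the test cases here are placeholders):

```go
package main

import "fmt"

func main() {
	cases := []string{"a", "b", "c"}

	// We know exactly how many items there will be, so a buffer of
	// len(cases) lets us push them all without a receiver and then
	// close the channel.
	ch := make(chan string, len(cases))
	for _, c := range cases {
		ch <- c
	}
	close(ch)

	for c := range ch {
		fmt.Println("running test case", c)
	}
}
```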

Another interesting case is when you only want X goroutines running simultaneously. For example, you have a 16-core processor and your task is heavy, so it only makes sense to run 16 goroutines in parallel. In cases like this you can use the semaphore pattern to throttle goroutines:
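A sketch of the semaphore pattern, limiting concurrency to the number of CPU cores (the limit, task count, and task duration are assumptions):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	// A buffered channel used as a counting semaphore: at most
	// runtime.NumCPU() heavy tasks run at the same time.
	sem := make(chan struct{}, runtime.NumCPU())

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			sem <- struct{}{}                 // acquire a slot; blocks if all slots are taken
			defer func() { <-sem }()          // release the slot when done
			time.Sleep(50 * time.Millisecond) // the heavy work
			fmt.Println("finished task", n)
		}(i)
	}
	wg.Wait()
}
```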

There is also the leaky buffer example from Effective Go. It is used when you want to reuse already allocated buffers to reduce the garbage collector’s work.

The tools of concurrent programming can even make non-concurrent ideas easier to express. Here’s an example abstracted from an RPC package. The client goroutine loops receiving data from some source, perhaps a network. To avoid allocating and freeing buffers, it keeps a free list, and uses a buffered channel to represent it. If the channel is empty, a new buffer gets allocated. Once the message buffer is ready, it’s sent to the server on serverChan.
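The corresponding code, closely following the Effective Go example (Buffer, load, and process are stubbed out here so the sketch compiles):

```go
package main

// Buffer, load, and process are placeholders for the real message type and I/O.
type Buffer struct{ data []byte }

func load(b *Buffer)    {} // read the next message from the network into b
func process(b *Buffer) {} // handle the message on the server side

var freeList = make(chan *Buffer, 100)
var serverChan = make(chan *Buffer)

func client() {
	for {
		var b *Buffer
		// Grab a buffer from the free list if available; allocate if not.
		select {
		case b = <-freeList:
			// Got one; nothing more to do.
		default:
			// None free, so allocate a new one.
			b = new(Buffer)
		}
		load(b)         // Read next message from the net.
		serverChan <- b // Send to server.
	}
}

func server() {
	for {
		b := <-serverChan // Wait for work.
		process(b)
		// Reuse the buffer if there's room on the free list.
		select {
		case freeList <- b:
			// Buffer is back on the free list; nothing more to do.
		default:
			// Free list full, just carry on; the buffer is dropped
			// and will be garbage collected.
		}
	}
}

func main() {
	go client()
	server()
}
```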

Closing statement

While it’s very tempting to put a buffer in every channel, chances are your application does not need it. When working with channels, remember the rule of thumb:

Only add a buffer to a channel if you know what buffer size you should use.

Special thanks to Kabisovvaleriy for reviewing this article.
