Get water out of the go-pool

A little while ago I wrote about go-pool, and here I am again to write about it some more.

It works well enough when you just need to perform the work and don’t need to get any data back out: when the result of each work unit is a side effect, like saving to the filesystem or uploading to a remote server.

Look who’s come crawling back

But what happens when you want to get the water back out of the pool?

Let’s say our time-consuming work is that we’re generating a bunch of random numbers and we want to get a sorted list of them. We make our work unit and our pool:

type RandomWorkUnit struct {
	Max int
}

func (u RandomWorkUnit) Perform() {
	log.Println(rand.Intn(u.Max)) // side effect only
}

p := pool.NewPool(20, 20)

for i := 0; i < 100; i++ {
	p.Add(RandomWorkUnit{ // Add assumed as go-pool's queueing method
		Max: 100,
	})
}
That generates all those random numbers in parallel, but we don’t have a list.

The way to get the data back out is with two channels:

  • One for each individual result
  • One for the final list of all results

The work units send their results to the first channel, and we need a separate goroutine to consume that channel and then, once it’s finished, send the final list to the second channel so we can get it back on the main thread.

type RandomWorkUnit2 struct {
	Max int
	Ch  chan int
}

func (u RandomWorkUnit2) Perform() {
	u.Ch <- rand.Intn(u.Max) // send the result back to the channel
}

p2 := pool.NewPool(20, 20)

randCh := make(chan int, 20)
finalCh := make(chan []int, 1) // there's going to be just 1 result

go func() { // separate goroutine to consume the individual results
	all := make([]int, 0)
	for n := range randCh { // consume everything on that channel
		all = append(all, n)
	}
	sort.Slice(all, func(i, j int) bool {
		return all[i] < all[j]
	})
	finalCh <- all // send the total list back to main thread
}()

for i := 0; i < 100; i++ {
	p2.Add(RandomWorkUnit2{ // Add assumed as go-pool's queueing method
		Max: 100,
		Ch:  randCh, // each work unit gets the first channel
	})
}

close(randCh) // no more results are coming (assumes all work has finished; wait for the pool to drain first if it queues asynchronously)

sortedInts := <-finalCh // block here until all results are ready

We start a separate goroutine that consumes the first channel and builds up a list of all the results. Once that channel is exhausted (this is why we need to close it further down), we’re able to sort it and send it back to the second channel, where we now have access to it on the main thread. That secondary goroutine compiles the total list in parallel as individual results become available, and once it sends its data back it finishes and goes away. And because you’re sending data through channels, you’re still not sharing memory anywhere.

You don’t always need to get data back out of the pool, but when you do, this is how.
