An Early Attempt at Concurrency in Go

Durant Schoon
6 min read · Dec 17, 2018


This is the background article for Testing Batch Sizes for Concurrency in Go (Parameter Searching). Because sometimes the lessons learned from failure are as important as the resulting success.

The Problem: How do you learn about / explore concurrency?

I recently learned Go (the programming language, specifically from the Go Tour). I’ve been interested in getting a software engineering job in Blockchain, and two favored languages seem to be Go and Rust¹ ². I learned Go because I was applying for a job at a particular Blockchain company, and there seemed to be plenty of other Blockchain companies using Go if this one didn’t pan out.

I come from years of programming in Python, and although I love Python, I now also really like Go³. The way I think about Go is that it’s stripped down to the essentials so you can focus on one really difficult paradigm to master: concurrency⁴.

When I received the take home problem, I had only taken the Go Tour and watched two of the recommended videos on concurrency at the end: Go Concurrency Patterns and Advanced Go Concurrency Patterns. Both of these videos were fascinating, excellent presentations. But they didn’t help me prepare specifically for optimizing the problem I was about to face.

The interviewer gave me a take home test⁵ that they give everyone at the company and told me that most finish it in three days and occasionally someone will finish it in two days. It was a really juicy problem. I’m not going to give specifics because it would be a spoiler. I wouldn’t want job candidates to just search for the solution online. Nor would I want the company to have to come up with a new, similar problem when this one is so great. I’ll just call it a “MerkleTree with a twist” and any references to it here or in my public repo will just call it a MerkleTree. You can read about MerkleTrees here and why they are so important to Blockchain.

I submitted a serial version, but now that I was thinking about a concurrent version, I decided to try to write one. Over Thanksgiving with family visiting, I first wrote a total concurrency failure, but I soon realized what I’d done wrong and was able to come up with a faster version than my original serial version. Along the way I also had to organize my work into a series of tests and trials.

As I mentioned before, I don’t want to spoil the juicy problem, but if you’re a company (Blockchain or not) hiring Go programmers and want to look at my code, contact me on GitHub or LinkedIn and I’ll send you a link to my private repo.

My First Attempt: Failure with a Lesson

To make my program concurrent, I started off thinking vaguely about creating batches of jobs and putting all my processing into channels using lots of goroutines. I had the benefit of test cases, so I would know whether I still got the same output after refactoring my code. My original code could handle a tree up to a specific small test size, so the first thing I did was increase my test tree size to 1M leaves, or about 2M total nodes, reasoning that a longer run time would give more accurate results (less sensitive to perturbations). At this larger size, each run took between 4 and 5 seconds.

It was actually hard to convert everything over to channels and goroutines because I was new to thinking in these concepts. I changed my idea of “batches” a couple of times. The first time I got everything working, I was thinking that I could put each of my smallest operations into its own goroutine, a gazillion of them: like many, many horses racing across many, many bridges, each bringing a parcel to the other side, finishing faster than one horse trying to slog it all over at once.

I came across a copy of The Go Programming Language and looked up the section on concurrency. I decided to try something similar to an example in it, where a “batch” meant capping the maximum number of goroutines (say, at most N running at a time) so that the code wouldn’t create too many simultaneous goroutines and spiral out of control. In the analogy, I’d be limiting the code to 100 bridges and sending individual horses over them. I was thinking my challenge would be to not build too many bridges.

After considerable time refactoring (avoiding deadlocks and race conditions one painful case at a time — remember, I’m new at this), I finally had the code working again … only to find that it was taking 20 to 30 seconds to produce the same result. That’s 4 to 5 times SLOWER!

“How could that be?” I wondered. “All these operations are running in their own goroutine, how could that be slower?” But then it dawned on me: each of those goroutines has some overhead. So basically, I had created a lot of extra cost with no extra benefit. I soon decided the right way to do it would be to get that ratio going the other way: pay as little overhead as possible while getting as much performance gain as possible per unit of overhead.

I decided I should create multiple big-not-small batches, all running at once! In this new concept of “batches”, I’m splitting up a bunch of work: say there are 110 jobs and my batch size is 20, then I’ll have five batches of 20 jobs and one leftover batch of 10. Now I’d only have five bridges with a 20-horse caravan marching across each (and a 10-horse caravan on a sixth bridge). The idea is that once the first caravan of 20 starts across the first bridge, we can start setting up the second bridge, and so on to the third. This is in contrast to setting up all those bridges and sending a single horse across each.

Despite my failure, I was intrigued enough to find out whether my new idea would work. And there was another tantalizing question on the horizon: assuming I got this refactor working with my new notion of batches, how does one decide the optimal size of those batches? That sounded like it was worth exploring!

If you want to read about the technique that led me to improved performance times, see Testing Batch Sizes for Concurrency in Go (Parameter Searching).

Footnotes

  1. Quora: “Why is Go a good language for developing blockchain technology?” If you’re looking for a Rust/Go comparison, the 2nd half of this article shows code comparisons and has further links.
  2. I also liked this tidbit as a pithy summary: “I think of Rust as a safe C++, and Go as a fast Python (or a vastly simplified Java).” source
  3. Here’s a great video by Peter Bourgon comparing Go and Python. But the real reason I mention this video is that it’s awesome. The actual comparison doesn’t persuade me that much away from Python, but the second half of the video is gold for functional programming at the architecture level if you haven’t seen this pattern: Onion Model.
  4. There is another cool merit to Go, which is its approach to interfaces; it’s really interesting if you’re into that kind of thing.
  5. I actually think take home interview problems are a great first filter because 1) the project can be made specific to what the employee will actually do and 2) the perform-on-demand pressure of whiteboard problem solving is off. The drawback is that take home problems require more work from the candidate, so unless s/he really wants the job and has the time to spare, s/he might move on to the next company. Also, if you’re a candidate and you don’t get the job, you’ve used up a lot of free time … but in this case sometimes you can turn that into something else beneficial ;) Obviously, there are lots of opinions on the internet about this.
