A quick case study.
Special thanks and credit to Dan McClure for the implementation of RunAsync.
First and foremost, I’d like to say that working with Go for the past six months at Compass has been nothing short of a joy. The language’s uncompromisingly simple feature set and familiar syntax have made it the most legible, learnable, and maintainable language I’ve had the pleasure of working with. From type inference, to first-class functions, to duck-typed interfaces, Go picks its design battles wisely. While Go has many compelling features, today I’m going to touch on Go’s concurrency model and one example of how we’re harnessing its power to build better software at Compass. As a disclaimer, this article assumes an introductory-level knowledge of Go.
Before getting into the how and why, let’s briefly cover the what. There are two primary elements of Go’s concurrency model:
A Goroutine is a concurrent thread of execution managed by the Go runtime. Compared to traditional OS threads, goroutines are much cheaper to create and run: they start faster and begin with very small stacks that grow only as needed. To spawn a new goroutine, you simply precede a function call with the go keyword.
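As a minimal illustration of the go keyword (the sync.WaitGroup here just keeps the program alive long enough for the goroutine to run):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	// Prefixing the call with "go" runs it in a new goroutine.
	go func() {
		defer wg.Done()
		fmt.Println("hello from a goroutine")
	}()
	// Without this wait, main could return before the goroutine runs,
	// ending the program early.
	wg.Wait()
}
```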
A Channel is the conduit through which goroutines communicate with one another via message passing. Channel operations typically block, although non-blocking operations are possible as well. Channels are how goroutines share data with one another without the complexity and safety concerns that come with sharing memory. In terms of usage, a channel can essentially be thought of as a typed queue with optional capacity.
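A quick sketch of the send/receive mechanics on an unbuffered channel:

```go
package main

import "fmt"

// sum sends the total of nums over a channel instead of returning it.
func sum(nums []int, out chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	out <- total // send blocks until a receiver is ready (unbuffered channel)
}

func main() {
	out := make(chan int) // a typed, unbuffered channel
	go sum([]int{1, 2, 3}, out)
	fmt.Println(<-out) // receive blocks until the value arrives; prints 6
}
```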
A Case for Concurrency
In a microservice architecture such as Compass’, each microservice owns its own data exclusively and provides an API for other services to query this data. While defining the scope of a microservice is often a point of contention, for us it boils down to a service that does one thing, and one thing well. In practice, this often takes the form of feature-based microservices.
Our team was recently tasked with building the web API for an analytics application — one that would require data pertaining to customer usage of roughly a dozen different features on a per-customer basis, in real time. In terms of technical implications, this meant that to serve a request, we’d need to query the APIs of many disparate microservices, process their data into a consistent format, and aggregate all of this data into a single response payload. With performance at the front of our minds, we knew that the only way for this map-reduce operation to be feasible would be to make the microservice API calls concurrently.
Luckily for our team, our need to execute functions concurrently was hardly a novel one. In fact, the gophers at Compass had already implemented an extremely helpful utility function named RunAsync for this exact purpose (huge shoutout to Dan McClure, the author of this very clever utility). RunAsync had a simple and elegant objective: to execute any list of functions concurrently, exiting and returning on the first encountered error.
RunAsync alone got us 95% of the way there. However, our use case did not require such a strict approach to error handling. For this reason, we created RunAsyncAllowErrors. As opposed to exiting early and returning the first error, RunAsyncAllowErrors returns an indexed list of errors encountered, always waiting to return until all functions have finished executing.
Enough talk, let’s look at some code. Note that early exit conditions and other optimizations have been removed for readability’s sake.
There’s quite a bit to unpack in this code. While the core concurrency logic is fairly straightforward (spawning goroutines in a for loop, capturing their errors, joining them), there are some interesting things taking place with regard to error handling.
This utility is powerful in that it handles both explicitly returned errors and runtime panics. For the latter, the helper function formatStack parses the ugly stack trace produced by the panic and extracts only the line that we care about.
While many gophers would simply write their concurrent logic inline, we’ve found that a single generic concurrency utility like RunAsyncAllowErrors makes for safer concurrency. Of course, the tradeoff is that this utility becomes a single point of failure, so you’d better get it right (it took us a couple of patches). Now that we’ve walked through the code and discussed its functionality, let’s take a look at a simple usage example.
That’s pretty cool, but only useful for executing asynchronous tasks with no associated state. If you refer back to the code for RunAsyncAllowErrors, you’ll recall that a GenericFunction is a function that takes no arguments and returns an error (or nil). But what if we want to concurrently execute functions that don’t conform to the signature of a GenericFunction? Let’s start by seeing how we can concurrently execute functions that take different arguments.
Go’s support for closures allows us to wrap a GenericFunction in the context of a scope that has access to values not explicitly passed into the GenericFunction as arguments. In this example, we’re getting our GenericFunction as the return value of a higher-order function that wraps it, but the same effect can be achieved inline as well. Now that we can pass arguments in, let’s see how we can capture return values.
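For instance (makeFetchTask and its parameters are hypothetical names for illustration, not our actual API):

```go
package main

import "fmt"

type GenericFunction func() error

// makeFetchTask is a higher-order function: it takes arguments and returns
// a GenericFunction whose closure captures them.
func makeFetchTask(customerID string, feature string) GenericFunction {
	return func() error {
		// customerID and feature are available here via the closure,
		// even though the GenericFunction itself takes no arguments.
		fmt.Printf("fetching %s usage for customer %s\n", feature, customerID)
		return nil
	}
}

func main() {
	task := makeFetchTask("c-42", "listings")
	_ = task() // in practice, task would be handed to RunAsyncAllowErrors
}
```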
Once again, closures save the day. Just as closures allow us to capture values, they also allow us to capture references to objects outside the current scope using pointers. To scale our example, we’d probably want to define a structure to hold all of the state we want to pass around to each function. We’d also want to handle and propagate errors, of course.
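A sketch of that pattern (the results struct and its field names are invented for illustration):

```go
package main

import "fmt"

type GenericFunction func() error

// results holds the state each concurrent function writes into.
type results struct {
	ListingViews int
	SearchCount  int
}

// makeCountTask captures a pointer to the field it should populate,
// letting a no-argument, no-return GenericFunction still produce output.
func makeCountTask(dest *int, value int) GenericFunction {
	return func() error {
		*dest = value // write through the captured pointer
		return nil
	}
}

func main() {
	var r results
	tasks := []GenericFunction{
		makeCountTask(&r.ListingViews, 128),
		makeCountTask(&r.SearchCount, 57),
	}
	for _, t := range tasks { // in practice: RunAsyncAllowErrors(tasks)
		_ = t()
	}
	fmt.Printf("%+v\n", r) // prints {ListingViews:128 SearchCount:57}
}
```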
Go Forth and Prosper
In the context of a larger paradigm shift towards concurrent programming, Go’s concurrency features are a great tool for both learning about concurrency and using it to optimize your codebase. If you enjoy Go as much as we do, make sure to keep an eye out for more content from Compass. Oh, and we’re hiring! We hope this article was helpful, thanks for reading.