dm03514
May 22 · 6 min read

Channel-based synchronization vs mutex-based synchronization is a polarizing topic within the Go community. This post discusses a pattern named the “monitor goroutine”, which guards access to memory, or an operation, and exposes a channel as its interface. It explains what monitor goroutines are and the scenarios where the pattern is useful. Since monitor goroutines are built on channels, they are a compelling choice for engineers who are new to concurrent programming. The post then compares monitor goroutines to mutexes and explores situations where one approach may be more favorable than the other.

Guarding Access Through Monitors

A monitor goroutine guards access to memory or an operation through a channel. A simple example is shown below:

func NewCounterMonitor(ctx context.Context) chan<- int {
	ch := make(chan int)

	go func() {
		counter := 0
		for {
			select {
			case i, ok := <-ch:
				if !ok {
					return
				}
				counter += i
			case <-ctx.Done():
				fmt.Printf("final_count: %d\n", counter)
				return
			}
		}
	}()

	return ch
}

(There are a couple of useful patterns for making monitor routines deterministic to test, but for now we’ll keep it as an async operation.) We’ll create a “test” just to drive the examples for this post:

func TestNewCounterMonitor(t *testing.T) {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	in := NewCounterMonitor(ctx)
	in <- 1
}

And to see it in action:

$ go test ./... -v
=== RUN TestNewCounterMonitor
--- PASS: TestNewCounterMonitor (0.00s)
PASS
final_count: 1
ok github.com/dm03514/grokking-go/monitor-goroutines 1.093s

There’s a lot going on in the NewCounterMonitor routine, which is one of the reasons channels are so controversial. The memory that is “monitored” (or guarded) is:

counter := 0

The rest of the code is boilerplate around setting up the interface (the channel):

ch := make(chan int)
...
case i, ok := <-ch:
...
return ch

kicking off the goroutine, and consuming the channel:

go func() {
	for {
		select {
		case i, ok := <-ch:
		}
	}
}()

and ensuring that the monitor completes:

case <-ctx.Done():

And when the channel is closed:

i, ok := <-ch

The interface into the monitor is exposed through a channel: a client can only access the counter by sending on the channel. Go ensures that channels are synchronized and thread-safe by default.
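Because the channel is the entire interface, concurrent callers need nothing more than the channel itself. A minimal usage sketch (assuming the NewCounterMonitor above) might look like this:

ctx, cancel := context.WithCancel(context.Background())
in := NewCounterMonitor(ctx)

var wg sync.WaitGroup
for i := 0; i < 10; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		// each goroutine only holds the channel; none of them can
		// touch counter directly
		in <- 1
	}()
}
wg.Wait()
cancel() // ask the monitor to log the final count and exit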

Guarding Access Through Mutexes

One approach that may initially seem like an alternative is to synchronize access using a mutex. A mutex is a primitive that lets an engineer ensure that only a single goroutine executes a critical section at a time. The API requires acquiring a lock around the critical section of code that requires mutual exclusion. In our case that is the incrementing of the counter variable:

type SafeCounter struct {
	mu    sync.Mutex
	count int
}

func (s *SafeCounter) Inc() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.count++
}

This example encapsulates the synchronization (through the mutex mu) and the operation that needs to be guarded (incrementing count) in a single data structure to help protect the end user. Mutexes differ from the monitor pattern in that, instead of just a channel being shared, the full data structure is shared and guarded with a mutex. The interface to the counter IS the full data structure. This means that in order to use a SafeCounter, a reference to it is passed to each goroutine:

sc := &SafeCounter{}

var wg sync.WaitGroup
for i := 0; i < 10; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		sc.Inc()
	}()
}
wg.Wait()
// sc.count == 10

Topology

Although both examples accomplish the same thing their structures differ significantly. First consider the monitor, which only shares a reference to a channel between goroutines:

Monitor Goroutine Topology

Monitors provide strict isolation of the counter at the level of a goroutine. This means that only a single goroutine (the monitor) has access to the counter variable. This significantly minimizes the chances of concurrent synchronization errors around accessing it.

Consider the mutex which shares a reference to memory:

Mutex Based Counter

The mutex requires sharing a much larger segment of memory (compared to only a channel). This means that the mutex, by default, exposes shared mutable state, which is one of the core challenges of concurrent programming. The above examples illustrate how Go channels share memory by communicating, versus how mutexes communicate by sharing memory.

Sequence

The monitor as implemented has no way to return data to a caller; it functions purely as an asynchronous data sink.

Since the mutex guards shared memory it can trivially implement a synchronous (request/response, dispatch/return) API, as seen above.
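For example, a synchronous read of the count can be added with a couple of lines (a sketch; a Value method is not part of the original example):

// Value returns the current count. The lock makes this read safe to
// call from any goroutine, and the caller gets the result back
// synchronously as a plain return value.
func (s *SafeCounter) Value() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.count
}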

Performance

Each solution was benchmarked in order to compare its performance under a pool of goroutines. The code and benchmarks are available on GitHub. Each solution was benchmarked with 0, 1, 10, 100, 1000, and 10000 goroutines, and in each benchmark the counter was incremented a total of b.N times. The mutex-based approach significantly outperformed the monitor goroutine.
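The repository contains the real benchmarks; a rough sketch of their shape (the helper name and division of work here are my own, and numGoroutines would be varied per benchmark) is:

func benchmarkSafeCounter(b *testing.B, numGoroutines int) {
	sc := &SafeCounter{}
	var wg sync.WaitGroup
	for g := 0; g < numGoroutines; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// each goroutine performs its share of the b.N increments
			for i := 0; i < b.N/numGoroutines; i++ {
				sc.Inc()
			}
		}()
	}
	wg.Wait()
}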

This specific test does not heavily exercise contention among the goroutines. I’m assuming that the mutex’s performance is due, at least in part, to not having to context switch between goroutines. Though out of scope for this post, pprof could be used to find the source of latency in each approach.

When To Use

Are there any strengths or insights that can be derived from the analysis above to help inform the decision of which approach to use?

async/sync

Monitor goroutines (and channel-based concurrency in general) are best suited for asynchronous computation. Channel-based communication and program flow can quickly become unwieldy (concurrent spaghetti) when multiple routines need to communicate synchronously across channels (think sending data on one channel and reading the response from another, as sketched below).
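To make the synchronous case concrete, reading the count back out of a monitor typically means threading a reply channel through a request. A hypothetical sketch (the names here are not part of the example above):

type readReq struct {
	resp chan int // the monitor replies with the current count here
}

func NewReadableCounterMonitor(ctx context.Context) (chan<- int, chan<- readReq) {
	incs := make(chan int)
	reads := make(chan readReq)

	go func() {
		counter := 0
		for {
			select {
			case i := <-incs:
				counter += i
			case req := <-reads:
				req.resp <- counter
			case <-ctx.Done():
				return
			}
		}
	}()

	return incs, reads
}

A single synchronous read now costs two channel operations (send the request, receive the reply), and every new synchronous method adds another request type and another case to the select; this is where the concurrent spaghetti starts.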

Mutexes can trivially support synchronous call/response style APIs. I have found that much of the benefit of channel safety is lost when trying to orchestrate complex flows.

buffering/timing/thresholds

Monitor goroutines are well suited for flushing data, buffering, and timing, because a for/select loop is at their foundation. Consider the case where the count should be logged at a certain interval (every 10 seconds). Using a monitor routine this becomes trivial:

func NewCounterMonitor(ctx context.Context) chan<- int {
	ch := make(chan int)

	go func() {
		counter := 0
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case i, ok := <-ch:
				if !ok {
					return
				}
				counter += i
			case <-ticker.C:
				log.Printf("NewCounterMonitor.counter: %d\n", counter)
			case <-ctx.Done():
				fmt.Printf("final_count: %d\n", counter)
				return
			}
		}
	}()

	return ch
}

Since the monitor owns counter, it can be accessed anywhere in the loop without any concern for thread-safety. Imagine adding ticker logic to the mutex-based counter: another goroutine would have to be spawned, which complicates the implementation, as sketched below.
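A sketch of what that might look like, assuming the SafeCounter above plus a Value method like the one shown earlier, and a ctx from context.WithCancel:

sc := &SafeCounter{}

// a second goroutine exists only to do the periodic logging
go func() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			log.Printf("SafeCounter.count: %d\n", sc.Value())
		case <-ctx.Done():
			return
		}
	}
}()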

Correctness over Performance

Because the monitor-based approach guards memory or an operation and never even exposes it for concurrent access, it may be the better choice for default correctness when the constraints above are met.

Pipelines

Monitor goroutines are extremely well suited for pipelines because they can trivially publish to another channel as output, and by default they act as a sink stage. Because mutexes rely on shared memory, they are better suited for enhancing an individual stage within a pipeline.
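For example, turning the counter monitor into an intermediate pipeline stage is mostly a matter of giving it an output channel (a sketch using hypothetical names):

// RunningTotal consumes increments from in and publishes the running
// total downstream after every update.
func RunningTotal(ctx context.Context, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		counter := 0
		for {
			select {
			case i, ok := <-in:
				if !ok {
					return
				}
				counter += i
				out <- counter
			case <-ctx.Done():
				return
			}
		}
	}()
	return out
}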

Conclusion

For trivial concurrent memory access either solution is more than suited to the task. Go provides the built-in race detector (shown below) to help identify race conditions with either approach. The choice of mutex vs channel begins to split along the performance dimension, with mutexes outperforming channels. Async vs sync is another important dimension of divergence: monitors are much better suited for asynchronous processing, whereas mutexes are much better suited for synchronous processing. Finally, if a pipeline architecture is being used, monitors fit very well as sinks or intermediate stages (filters, transformers, etc.).
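Either way, running the tests with the race detector enabled is cheap insurance:

$ go test -race ./...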
