Handmade mutex in 24 lines of Go

Alexander Borisov
Sep 20, 2018

We’re going to create our own mutex using CAS (compare-and-swap) and runtime.Gosched. Why? Doesn’t matter. Fun is a good enough reason to do strange things.

Mutexes and other lockers are used to protect data from concurrent access by different threads of execution. Without protection you can end up with inconsistent state caused by a race condition. So if we want an application that works correctly in a multithreaded environment, we need an instrument to synchronize access to the data.
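To make the problem concrete, here is a minimal sketch (not from the original post; the names are arbitrary) of what happens without any locker: two goroutines increment the same counter, go run -race flags the access as a data race, and the final value is unpredictable.

package main

import (
	"fmt"
	"sync"
)

func main() {
	var counter int
	var wg sync.WaitGroup

	// Two goroutines perform an unsynchronized read-modify-write on the
	// same variable, so their increments can overwrite each other.
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				counter++
			}
		}()
	}

	wg.Wait()
	fmt.Println(counter) // often prints less than 2000
}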

Should we create our own mutex instead of using the one from the standard library? Yes! (Of course not.)

Where do we start?

Our mutex must satisfy the sync.Locker interface, so that it can replace any existing mutex or other locker.

type Locker interface {
	Lock()
	Unlock()
}
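
Because any sync.Locker works through the same two methods, code written against the interface can take our lock as a drop-in replacement for sync.Mutex. A small hypothetical helper (withLock is not part of the post) illustrates the idea:

import "sync"

// withLock is a hypothetical helper written against sync.Locker,
// so it accepts a *sync.Mutex and our *Lock alike.
func withLock(l sync.Locker, fn func()) {
	l.Lock()
	defer l.Unlock()
	fn()
}

// Both calls compile once our Lock type (defined below) exists:
//   withLock(&sync.Mutex{}, func() { /* critical section */ })
//   withLock(&Lock{}, func() { /* critical section */ })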

Ok, here it is:

type Lock struct{}

func (l *Lock) Lock() {}

func (l *Lock) Unlock() {}

What’s next?

Goroutines that try to obtain the lock must “sleep” until it becomes available.

There are several ways to satisfy this requirement. One of them is the atomic CompareAndSwap (CAS) operation. Let’s take a closer look at it.

CAS compares the contents of a memory location with a given value and, only if they are the same, modifies the contents of that memory location to a new given value. This is done as a single atomic operation. © Wikipedia

Sounds like what we need.
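
Before building the lock on top of it, here is a tiny standalone illustration (not from the original post) of the sync/atomic CAS semantics: the swap happens, and the call returns true, only when the current value equals the expected old value.

package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var state int32 // zero value: 0

	// state == 0, so swapping 0 -> 1 succeeds.
	fmt.Println(atomic.CompareAndSwapInt32(&state, 0, 1)) // true, state is now 1

	// state == 1, not 0, so nothing changes and CAS reports failure.
	fmt.Println(atomic.CompareAndSwapInt32(&state, 0, 1)) // false, state is still 1
}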

First of all, we need the values that we will swap later.

const (
	unlocked int32 = iota
	locked
)

Then we need to swap them.

type Lock struct {
	state int32
}

func (l *Lock) Lock() {
	for !atomic.CompareAndSwapInt32(&l.state, unlocked, locked) {
	}
}

func (l *Lock) Unlock() {
	atomic.StoreInt32(&l.state, unlocked)
}

This approach is called a spinlock:

Spinlock is a lock which causes a thread trying to acquire it to simply wait in a loop (“spin”) while repeatedly checking if the lock is available. © Wikipedia

Our Lock method keeps trying to swap the state from unlocked to locked until it succeeds.

Are we finished? Let’s check with a simple test:

import (
	"testing"
	"time"
)

func TestLock(t *testing.T) {
	mx := &Lock{}
	resource := make(map[int]int)
	done := make(chan struct{})

	go func() {
		for i := 0; i < 10; i++ {
			mx.Lock()
			resource[i] = i
			time.Sleep(time.Millisecond)
			mx.Unlock()
		}

		done <- struct{}{}
	}()

	for i := 0; i < 10; i++ {
		mx.Lock()
		_ = resource[i]
		time.Sleep(time.Millisecond)
		mx.Unlock()
	}

	<-done
}

We do some concurrent reads and writes here. Let’s run it:

$ go test -cpu 1 -run Lock

But… wait. Why is the test taking so long? Stop. SIGQUIT? Killed due to timeout?

Since Go 1.14, which shipped with asynchronous preemption of goroutines, this issue is no longer a problem.

You probably asked yourself why we added time.Sleep to the test. It was done to illustrate how our solution will [not] work when everything runs on a single OS thread.

Let’s look closer at what’s happening here:

The first goroutine obtains the lock, updates the resource and then calls time.Sleep. The Go scheduler then runs the second goroutine, which tries to grab the lock in a tight loop. The problem is that the scheduler (before Go 1.14) cannot interrupt a goroutine at an arbitrary point, so control never returns to the first goroutine to unlock the resource. Instead we get an infinite loop.

We can fix this with the runtime.Gosched function.

Gosched tells the scheduler that it can schedule the next goroutine for execution (even if the current and the next goroutine turn out to be the same).

import (
	"runtime"
	"sync/atomic"
)

const (
	unlocked int32 = iota
	locked
)

type Lock struct {
	state int32
}

func (l *Lock) Lock() {
	for !atomic.CompareAndSwapInt32(&l.state, unlocked, locked) {
		runtime.Gosched()
	}
}

func (l *Lock) Unlock() {
	atomic.StoreInt32(&l.state, unlocked)
}

Run the test again. Everything works, even if we add the -race flag to check the code for data races.
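
For reference, the race-detector run is just:

$ go test -race -cpu 1 -run Lock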

And last but not least — benchmarks!

$ go test -bench=. -cpu 1,2,4
BenchmarkLock       100000000    15.0 ns/op
BenchmarkLock-2     100000000    15.0 ns/op
BenchmarkLock-4     100000000    15.0 ns/op
BenchmarkMutex      100000000    15.3 ns/op
BenchmarkMutex-2    100000000    15.1 ns/op
BenchmarkMutex-4    100000000    15.4 ns/op
BenchmarkWMutex      50000000    32.2 ns/op
BenchmarkWMutex-2    50000000    32.4 ns/op
BenchmarkWMutex-4    50000000    32.2 ns/op
BenchmarkRMutex     100000000    15.7 ns/op
BenchmarkRMutex-2   100000000    14.7 ns/op
BenchmarkRMutex-4   100000000    15.4 ns/op

Gist with complete example and benchmarks.
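
The benchmark bodies themselves are only in the gist; a minimal sketch of what BenchmarkLock and BenchmarkMutex presumably look like is an uncontended Lock/Unlock pair per iteration (the exact code in the gist may differ):

import (
	"sync"
	"testing"
)

// Sketch of the handmade-lock benchmark: one Lock/Unlock pair per iteration.
func BenchmarkLock(b *testing.B) {
	mx := &Lock{}
	for i := 0; i < b.N; i++ {
		mx.Lock()
		mx.Unlock()
	}
}

// Sketch of the sync.Mutex baseline measured the same way.
func BenchmarkMutex(b *testing.B) {
	var mx sync.Mutex
	for i := 0; i < b.N; i++ {
		mx.Lock()
		mx.Unlock()
	}
}

BenchmarkWMutex and BenchmarkRMutex presumably exercise the write and read sides of a sync.RWMutex in the same way.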

This post is an example of how you can use CompareAndSwap and Gosched. But don’t use this locker in production without a good reason.

Thanks for reading. Comments are welcome.
