Always manage your goroutines

Yinon Eliraz
2 min read · Feb 11, 2019


One of the great things about golang is goroutines. They are easy to use, lightweight, and managed by the scheduler. However, they come with a cost.

The graph above shows the memory utilization of a service written in golang that serves requests from clients. One of its endpoints triggers another request from the service to a second service. To speed things up (and since we can), the service fires that second request asynchronously. It looks something like this:

func Invoke(ctx context.Context, job *api.Job) (*pb.Empty, error) {
    // Fire-and-forget: every incoming request spawns a new, unmanaged goroutine.
    go func() {
        if err := ma.invoke(job); err != nil {
            logger.Errorf(ctx, "failed to invoke: %v", err)
        }
    }()
    return &pb.Empty{}, nil
}

This was a quick fix, and it made us overlook the consequences. When the service came under heavy request load, the number of goroutines got out of hand and the service crashed.
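As a side note (not part of the original setup), a quick way to confirm that goroutines are piling up is to log the goroutine count periodically with the standard library's runtime.NumGoroutine and watch it climb under load:

import (
    "log"
    "runtime"
    "time"
)

// monitorGoroutines logs the number of live goroutines at a fixed interval.
// If the count keeps climbing under load, goroutines are being created
// faster than they finish.
func monitorGoroutines(interval time.Duration) {
    go func() {
        for range time.Tick(interval) {
            log.Printf("goroutines: %d", runtime.NumGoroutine())
        }
    }()
}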

Once we realized where the problem was, the solution was clear: we were not managing the goroutines; we were creating them on the fly. To fix this, we simply send the job, along with the function that handles it, to a channel. On the other side of the channel, a worker pulls the job off and applies the function to it:

func Invoke(ctx context.Context, job *api.Job) (*pb.Empty, error) {
    // Instead of spawning a goroutine per request, hand the job
    // (and its handler) to a channel that a fixed pool of workers drains.
    ma.workerChannel <- &worker.Work{
        Task:           job,
        WorkerFunction: ma.invoke,
    }
    return &pb.Empty{}, nil
}
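The post never shows the Work struct or the worker channel that Invoke relies on. Here is a minimal sketch of what they might look like; Task and WorkerFunction match the usage above, everything else is an assumption:

// Hypothetical definitions matching the snippets in this post; only Task,
// WorkerFunction, tasks, and stop appear in the original code, the rest is guessed.
type Work struct {
    Task           *api.Job
    WorkerFunction func(*api.Job) error
}

type Worker struct {
    tasks chan *Work    // shared channel all workers read from
    stop  chan struct{} // signaled to shut the worker down
}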

Worker:

func (worker *Worker) Start() {
    go func() {
        for {
            select {
            case job := <-worker.tasks:
                if job != nil {
                    if err := job.WorkerFunction(job.Task); err != nil {
                        logger.Errorf("worker function failed: %v", err)
                    }
                }
            case <-worker.stop:
                // A bare break would only exit the select, not the loop,
                // so return to stop the worker goroutine for good.
                return
            }
        }
    }()
}

Creating 100 workers solved the memory issue and sped up response times under load. This design is simple and easy to implement: use it!
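The post does not show how those 100 workers get wired up. Here is a minimal sketch of that setup, assuming the Worker and Work types sketched above; only the pool size of 100 comes from the post, the buffered channel size is an arbitrary choice:

// StartPool is a hypothetical constructor: it creates one shared channel
// and a fixed number of workers that all consume from it.
func StartPool(size int) chan<- *Work {
    tasks := make(chan *Work, size) // buffered so Invoke rarely blocks
    for i := 0; i < size; i++ {
        w := &Worker{
            tasks: tasks,
            stop:  make(chan struct{}),
        }
        w.Start()
    }
    return tasks
}

// In the service's constructor, something like:
//     ma.workerChannel = worker.StartPool(100)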
