When and Where to Use Goroutines in Go

Gaurav Kapatia
Newton School
Jul 28, 2024 · 6 min read

In my previous blog, “Go-ing Concurrent: The Power of Goroutines and Channels”, we delved into the basics of goroutines and channels, exploring how they enable concurrent programming in Go. Now, let’s take a step further and discuss when and where to use goroutines effectively to maximise performance and efficiency in your Go applications.

When to Use Goroutines

Concurrent Tasks:

Goroutines are perfect for tasks that can run independently. For instance, if you need to perform multiple operations simultaneously, such as processing different data sets or executing independent calculations, goroutines are your go-to solution.

go processDataset1()
go processDataset2()
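The bare `go` statements above return immediately, so a real program also needs a way to wait for the work to finish. A minimal runnable sketch (the dataset contents and the `sum` logic are illustrative stand-ins for independent work):

```go
package main

import (
	"fmt"
	"sync"
)

// sum is a stand-in for independent per-dataset processing.
func sum(data []int) int {
	total := 0
	for _, v := range data {
		total += v
	}
	return total
}

func main() {
	dataset1 := []int{1, 2, 3}
	dataset2 := []int{4, 5, 6}

	var wg sync.WaitGroup
	results := make([]int, 2)

	wg.Add(2)
	go func() { defer wg.Done(); results[0] = sum(dataset1) }()
	go func() { defer wg.Done(); results[1] = sum(dataset2) }()
	wg.Wait() // Block until both goroutines finish.

	fmt.Println(results[0], results[1]) // 6 15
}
```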

I/O Bound Operations:

Network calls, file operations, and database queries often involve waiting for external resources, making them ideal candidates for goroutines. Using goroutines for these operations ensures that your application remains responsive while waiting for I/O operations to complete.

go fetchDataFromAPI()
go readFileFromDisk()
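The benefit is that the waits overlap: two 100 ms I/O operations done concurrently take roughly 100 ms total instead of 200 ms. A sketch with simulated latency standing in for a real network or disk call:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// fetch simulates an I/O-bound call (e.g. an API request or disk read)
// by sleeping for a fixed latency before returning a result.
func fetch(name string, latency time.Duration) string {
	time.Sleep(latency)
	return "response from " + name
}

func main() {
	start := time.Now()
	var wg sync.WaitGroup
	results := make([]string, 2)

	wg.Add(2)
	go func() { defer wg.Done(); results[0] = fetch("api", 100*time.Millisecond) }()
	go func() { defer wg.Done(); results[1] = fetch("disk", 100*time.Millisecond) }()
	wg.Wait()

	// The two waits overlap, so elapsed time is close to 100ms, not 200ms.
	fmt.Println(results[0])
	fmt.Println(results[1])
	fmt.Println("elapsed:", time.Since(start).Round(10*time.Millisecond))
}
```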

Parallelism:

For CPU-bound tasks, goroutines can leverage multi-core processors to perform parallel computations, significantly improving performance. This is especially useful in scenarios like image processing, data analysis, and scientific computations.

go computeTask1()
go computeTask2()
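A common pattern is to split one CPU-bound computation into chunks and let the runtime schedule the chunks across cores. A sketch (the sum-of-squares work is an illustrative stand-in):

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares is a stand-in for a CPU-bound computation over a slice.
func sumSquares(nums []int) int {
	total := 0
	for _, n := range nums {
		total += n * n
	}
	return total
}

func main() {
	nums := []int{1, 2, 3, 4, 5, 6, 7, 8}
	mid := len(nums) / 2

	var wg sync.WaitGroup
	partial := make([]int, 2)

	// Each half runs in its own goroutine; the Go scheduler spreads
	// runnable goroutines across available CPU cores.
	wg.Add(2)
	go func() { defer wg.Done(); partial[0] = sumSquares(nums[:mid]) }()
	go func() { defer wg.Done(); partial[1] = sumSquares(nums[mid:]) }()
	wg.Wait()

	fmt.Println(partial[0] + partial[1]) // 204
}
```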

Real-Time Data Processing:

In real-time applications, such as streaming data or event-driven systems, goroutines can process incoming data concurrently, ensuring that your application handles high-throughput data efficiently.

go processIncomingData(dataChannel)
go handleEvents(eventChannel)
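A consumer like `processIncomingData` typically ranges over its input channel until the producer closes it. A minimal sketch (the doubling is a stand-in for real per-event work):

```go
package main

import "fmt"

// processIncomingData consumes values from dataChannel until it is closed,
// sending a transformed result on out, then closes out.
func processIncomingData(dataChannel <-chan int, out chan<- int) {
	for v := range dataChannel {
		out <- v * 2
	}
	close(out)
}

func main() {
	dataChannel := make(chan int)
	out := make(chan int)

	go processIncomingData(dataChannel, out)

	// Simulate a stream of incoming events.
	go func() {
		for i := 1; i <= 3; i++ {
			dataChannel <- i
		}
		close(dataChannel)
	}()

	for result := range out {
		fmt.Println(result) // 2, then 4, then 6
	}
}
```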

Where to Use Goroutines

Web Servers:

Web servers handle many client requests simultaneously. Go's net/http server already runs each handler in its own goroutine, so concurrency across requests comes for free; within a handler, you can spawn additional goroutines for work that should not delay the response.

func handler(w http.ResponseWriter, r *http.Request) {
	// Caution: the request's context is canceled when the handler returns,
	// so copy any data you need from r before handing work to a background goroutine.
	go processRequest(r)
	fmt.Fprintf(w, "Request is being processed")
}

NOTE: Frameworks like Gin, Gorilla Mux, or Echo can greatly simplify writing web servers in Go. They build on net/http, so each incoming HTTP request is still handled in its own goroutine; you focus on application logic while the framework manages routing and middleware on top of Go's concurrency model.

Microservices:

In a microservices architecture, goroutines can isolate tasks within services, allowing for efficient resource utilisation and better fault isolation. Each microservice can handle its tasks concurrently, ensuring that the overall system remains robust and responsive.

go handleServiceA()
go handleServiceB()

NOTE: Using frameworks like Go Kit, Micro, or Gizmo can help streamline the development of microservices by providing essential tools and patterns for building and managing services, all while leveraging Go’s concurrency model.

Background Processing:

Running background tasks without blocking the main execution flow is a common requirement in many applications. Goroutines are ideal for executing tasks like sending emails, generating reports, or performing periodic maintenance.

go sendEmail(email)
go generateReport(reportData)

NOTE: Utilizing libraries such as Work, Asynq, or Taskq can help manage background tasks more effectively. These libraries provide robust task management features like retries, scheduling, and task prioritization, all while leveraging Go’s concurrency model. This allows you to offload complex background processing to specialised systems, ensuring your main application logic remains clean and efficient.

Pipelines:

Building data processing pipelines with goroutines allows for efficient handling of complex workflows. Each stage of the pipeline can run concurrently, passing data to the next stage via channels.

go fetchData(dataChannel)
go processData(dataChannel, processedDataChannel)
go storeData(processedDataChannel)

NOTE: Using frameworks like Go-Flow, Goka, or Kafka-Go can help manage the complexities of data pipelines, providing robust tools for processing, routing, and transforming data streams concurrently.
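The three pipeline stages above can be wired together with channels, each stage closing its output to signal the next stage that the stream has ended. A self-contained sketch (generated numbers and a simple transform stand in for real fetch/process/store logic):

```go
package main

import "fmt"

// fetchData emits raw values into the pipeline, then closes its output.
func fetchData(out chan<- int) {
	for i := 1; i <= 3; i++ {
		out <- i
	}
	close(out)
}

// processData transforms each value and forwards it to the next stage.
func processData(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * 10
	}
	close(out)
}

// storeData consumes the final values; here it just collects them.
func storeData(in <-chan int, done chan<- []int) {
	var all []int
	for v := range in {
		all = append(all, v)
	}
	done <- all
}

func main() {
	dataChannel := make(chan int)
	processedDataChannel := make(chan int)
	done := make(chan []int)

	go fetchData(dataChannel)
	go processData(dataChannel, processedDataChannel)
	go storeData(processedDataChannel, done)

	fmt.Println(<-done) // [10 20 30]
}
```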

Best Practices

Avoiding Goroutine Leaks:

Ensure that goroutines are properly terminated to avoid memory leaks. This often involves using channels or contexts to signal goroutines to exit.

/*
worker processes data from a channel. It uses a context for cancellation
so the goroutine is not leaked when the caller no longer needs it.
*/
func worker(ctx context.Context, ch <-chan int) {
	for {
		select {
		// This case handles context cancellation, which avoids goroutine leaks.
		case <-ctx.Done():
			// When the context is canceled, the goroutine exits, preventing a leak.
			return
		case data, ok := <-ch:
			if !ok {
				// The channel was closed; no more data will arrive.
				return
			}
			// Process the received data.
			process(data)
		}
	}
}

Error Handling:

Properly handle errors within goroutines to prevent silent failures. Use channels to propagate errors back to the main function.

/*
worker is a function that processes data from a channel and handles errors.
ch is a read-only channel that provides data to be processed.
errCh is a write-only channel used to report errors.
*/
func worker(ch <-chan int, errCh chan<- error) {
	// Loop over the data received from the channel.
	for data := range ch {
		// Attempt to process the data.
		if err := process(data); err != nil {
			// If an error occurs, send it to the error channel.
			errCh <- err
			// Exit the function to stop further processing.
			return
		}
	}
}

/*
process is a placeholder for a function that processes the data and
returns an error if something goes wrong.
Replace this with your actual processing logic.
*/
func process(data int) error {
	// Example processing logic (replace with actual logic).
	if data < 0 {
		return fmt.Errorf("negative data: %d", data)
	}
	return nil
}
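To see the propagation end to end, here is a runnable sketch that wires the error channel back to the main goroutine (`processItem` and `runWorker` are illustrative variants of the functions above, with a nil send added to signal clean completion):

```go
package main

import "fmt"

// processItem returns an error for invalid input; a stand-in for real work.
func processItem(data int) error {
	if data < 0 {
		return fmt.Errorf("negative data: %d", data)
	}
	return nil
}

// runWorker processes values from ch, reporting the first error on errCh,
// or nil once the channel is drained successfully.
func runWorker(ch <-chan int, errCh chan<- error) {
	for data := range ch {
		if err := processItem(data); err != nil {
			errCh <- err
			return
		}
	}
	errCh <- nil
}

func main() {
	ch := make(chan int, 3)
	errCh := make(chan error, 1)

	for _, v := range []int{1, 2, -3} {
		ch <- v
	}
	close(ch)

	go runWorker(ch, errCh)

	// The main goroutine receives the error instead of it failing silently.
	if err := <-errCh; err != nil {
		fmt.Println("worker failed:", err) // worker failed: negative data: -3
	}
}
```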

Synchronisation:

Use synchronisation primitives like sync.WaitGroup and sync.Mutex to coordinate goroutines and protect shared resources.

var wg sync.WaitGroup

// Loop to start 5 goroutines.
for i := 0; i < 5; i++ {
	wg.Add(1) // Increment the WaitGroup counter before starting each goroutine.

	// Launch a goroutine to process the item.
	go func(i int) {
		defer wg.Done() // Ensure Done() is called when the goroutine completes.
		process(i)      // Call the process function with the current index.
	}(i) // Pass the current value of 'i' to the goroutine.
}

wg.Wait() // Wait for all goroutines to complete.

Resource Management:

Manage resources efficiently by ensuring that channels are closed properly and resources like file handles are released.

func producer(ch chan<- int) {
	// Loop to produce 10 integers and send them to the channel.
	for i := 0; i < 10; i++ {
		ch <- i // Send the integer 'i' to the channel.
	}
	// Close the channel to signal that no more values will be sent.
	close(ch)
}

Common Pitfalls and How to Avoid Them

Race Conditions:

Race conditions occur when multiple goroutines access shared data concurrently without proper synchronisation. Use sync.Mutex to protect shared data.

var mu sync.Mutex // Declare a Mutex to protect shared data.
var counter int   // Shared data that needs protection.

func increment() {
	mu.Lock()   // Lock the Mutex before accessing the shared data.
	counter++   // Safely increment the shared counter.
	mu.Unlock() // Unlock the Mutex after accessing the shared data.
}

Deadlocks:

Deadlocks happen when goroutines wait indefinitely for each other to release resources. Avoid circular dependencies and use channels carefully to prevent deadlocks.

/*
Both goroutines block: each is waiting to send on an unbuffered channel
whose only receiver is itself blocked. Note that main must also wait,
otherwise the program exits before the deadlock is ever observed.
*/
func main() {
	ch1 := make(chan int) // Create the first channel.
	ch2 := make(chan int) // Create the second channel.

	// First goroutine: sends a value to ch1, then waits to receive from ch2.
	go func() {
		ch1 <- 1 // Blocks until another goroutine receives from ch1.
		<-ch2    // Wait to receive a value from ch2.
	}()

	// Second goroutine: sends a value to ch2, then waits to receive from ch1.
	go func() {
		ch2 <- 1 // Blocks until another goroutine receives from ch2.
		<-ch1    // Wait to receive a value from ch1.
	}()

	select {} // Block main; the runtime reports "all goroutines are asleep - deadlock!".
}

Overhead:

While goroutines are lightweight, creating too many can still cause overhead. Use worker pools to manage the number of active goroutines.

/*
workerPool creates a pool of worker goroutines to process tasks concurrently,
reducing overhead.
*/
func workerPool(tasks []func(), numWorkers int) {
	var wg sync.WaitGroup
	taskCh := make(chan func()) // Channel to distribute tasks to workers.

	// Create worker goroutines.
	for i := 0; i < numWorkers; i++ {
		wg.Add(1) // Increment the WaitGroup counter for each worker.
		go func() {
			defer wg.Done()            // Decrement the counter when the worker finishes.
			for task := range taskCh { // Continuously receive tasks from the channel.
				task() // Execute the received task.
			}
		}()
	}

	// Send tasks to the worker pool.
	for _, task := range tasks {
		taskCh <- task // Send each task to the channel.
	}
	close(taskCh) // Close the channel to signal no more tasks will be sent.

	wg.Wait() // Wait for all workers to finish processing tasks.
}

Understanding when and where to use goroutines can significantly enhance the performance and efficiency of your Go applications. By leveraging goroutines for concurrent tasks, I/O operations, parallelism, real-time data processing, and more, you can build robust, high-performance applications that handle complex workflows seamlessly. Remember to follow best practices and be mindful of common pitfalls to make the most of Go’s powerful concurrency model.


Tech Lead at Newton School. Python, Django, Kubernetes enthusiast. Leading tech at a $100M+ ed-tech startup. Passionate about innovation and scalable solutions.