Anatomy of Channels in Go - Concurrency in Go

What are channels?

A channel is a communication object through which goroutines can talk to each other. Technically, a channel is a data transfer pipe into which data can be sent and from which data can be read. Hence one goroutine can send data into a channel, while another goroutine can read that data from the same channel.

Declaring a channel

Go provides the chan keyword to declare a channel. A channel can transport data of only one type; no other data type can be transported through that channel.

The above program declares a channel c which can transport values of type int. It prints <nil> because the zero value of a channel is nil. But a nil channel is not useful: you cannot send data to or read data from a nil channel. Hence, we have to use the make function to create a ready-to-use channel.

We have used the shorthand := syntax to create the channel with the make function. The above program yields the following result:

type of `c` is chan int
value of `c` is 0xc0420160c0

Notice the value of channel c. It looks like a memory address. Channel values behave like references: a channel created by make refers to an underlying data structure, and when you pass the channel as an argument to a function or method, the receiving goroutine gets a copy that refers to the same underlying channel. Hence the goroutine doesn't need to dereference anything to push or pull data from that channel.

Data read and write

Go provides the easy-to-remember left-arrow syntax <- to read and write data from a channel.

c <- data

The above syntax means we want to push, or write, data to the channel c. Look at the direction of the arrow: it points from data into channel c. Hence we can imagine that we are pushing data into c.

<- c

The above syntax means we want to read some data from channel c. Look at the direction of the arrow: it starts from the channel c. This statement does not push the data into anything, but it is still a valid statement. If you have a variable that can hold the data coming from the channel, you can use the syntax below:

var data int
data = <- c

Now the data coming from channel c, which is of type int, can be stored in the variable data of type int.

The above syntax can be rewritten using the shorthand syntax below:

data := <- c

Go will figure out the data type being transported by channel c and give data a valid data type.

All the channel operations above are blocking by default. In the previous lesson, we saw time.Sleep blocking a goroutine; channel operations are blocking in nature too. When some data is written to a channel, the goroutine is blocked until some other goroutine reads it from that channel. At the same time, as we saw in the concurrency chapter, channel operations tell the scheduler to schedule another goroutine, which is why the program doesn't block forever on the same goroutine. These features of channels are very useful in goroutine communication, as they save us from writing manual locks and hacks to make goroutines work with each other.

Channels in practice

Enough talking among us, let’s talk to a goroutine.

Let’s walk through the execution of the above program step by step.

  • We declared a greet function which accepts a channel c whose element type is string. In that function, we read data from channel c and print it to the console.
  • In the main function, the program prints main started to the console, as it is the first statement.
  • Then we made a channel c of type string using the make function.
  • We passed channel c to the greet function, but executed it as a goroutine using the go keyword.
  • At this point, the process has 2 goroutines, the active one being the main goroutine (check the previous lesson to see what that means). Then control goes to the next line.
  • We pushed the string value John to channel c. At this point, the main goroutine is blocked until some goroutine reads that value. The Go scheduler schedules the greet goroutine, and its execution starts as mentioned in the first point.
  • Then the main goroutine becomes active again and executes the final statement, printing main stopped.


As discussed, when we write or read data from a channel, that goroutine is blocked and control is passed to an available goroutine. What if no other goroutine is available, or all of them are sleeping? That’s where the deadlock error occurs, crashing the whole program.

If you try to read data from a channel but no value is available on it, the current goroutine blocks and other goroutines are unblocked, in the hope that some goroutine will push a value to the channel. Hence, this read operation is blocking. Similarly, if you send data to a channel, the current goroutine blocks and others are unblocked until some goroutine reads the data from it. Hence, this send operation is blocking.

A simple example of deadlock would be only main goroutine doing some channel operation.

The above program throws the following error at runtime.

main() started
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan send]:
program.go:10 +0xfd
exit status 2

fatal error: all goroutines are asleep — deadlock!. All goroutines are asleep, or simply put, no other goroutine is available to be scheduled.

☛ Closing a channel

A channel can be closed so that no more data can be sent through it. A receiver goroutine can find out the state of the channel using the val, ok := <-channel syntax, where ok is true if the channel is open (read operations can be performed) and false if the channel is closed (no more values will arrive). A channel is closed using the built-in close function with the syntax close(channel). Let’s see a simple example.
Just to help you understand the blocking concept: the first send operation c <- "John" is blocking, and some goroutine has to read the data from the channel, hence the greet goroutine is scheduled by the Go scheduler. The first read operation <-c is then non-blocking because data is present in channel c. The second read operation <-c is blocking because channel c has no more data, hence the Go scheduler activates the main goroutine and the program resumes execution from the close(c) call.

From the above error, we can see that we are trying to send data on a closed channel. Note that reading from a closed channel does not panic: it immediately returns the zero value of the channel’s data type. To better understand the usability of closed channels, let’s see the for loop.

☛ For loop

An infinite for loop (for {}) can be used to read multiple values sent through a channel.

In the above example, we create a squares goroutine which sends the squares of the numbers 0 to 9, one by one. In the main goroutine, we read those numbers inside an infinite for loop.

Since we need a condition to break the infinite for loop at some point, we read a value from the channel with the syntax val, ok := <-c. Here, ok gives us additional information about whether the channel is closed. Hence, in the squares goroutine, after writing all the data, we close the channel using close(c). While ok is true, the program prints the value in val and the channel status ok. When it becomes false, we break out of the loop using the break keyword. Hence, the above program yields the following result:

main() started
0 true
1 true
4 true
9 true
16 true
25 true
36 true
49 true
64 true
81 true
0 false <-- loop broke!
main() stopped
When the channel is closed, the value read by the goroutine is the zero value of the channel’s data type. In this case, since the channel transports the int data type, it is 0, as we can see in the result.

To avoid the pain of manually checking for the channel-closed condition, Go provides the easier for range loop, which terminates automatically once the channel is closed. Let’s modify our previous program.

In the above program, we used for val := range c instead of for {}. range reads one value at a time from the channel until it is closed. Hence, the above program yields the following result:

main() started
0
1
4
9
16
25
36
49
64
81
main() stopped
If you don’t close the channel in a for range loop, the program will throw a fatal deadlock error at runtime.

☛ Buffer size or channel capacity

As we saw, every send operation on a channel blocks the current goroutine. But so far we used the make function without a second parameter. This second parameter is the capacity of the channel, or buffer size. By default, a channel’s buffer size is 0; such a channel is called an unbuffered channel. On an unbuffered channel, whatever is written to the channel is handed directly to a receiver.

When the buffer size is a non-zero n, the sending goroutine is not blocked until the buffer is full. Once the buffer is full, any further send operation blocks the goroutine until a receiver frees up a slot in the buffer; values are never thrown away. There is a catch on the other side too: a read operation on a buffered channel is non-blocking as long as the buffer holds at least one value. Technically, that means a goroutine reading from a buffered channel does not block until the buffer is empty.

We can define buffered channel with following syntax.

c := make(chan Type, n)

This will create a channel of data type Type with buffer size n. The first n send operations on the channel won’t block the current goroutine; the (n+1)th send will.

Let’s prove that goroutine doesn’t block until buffer is full and overflows.

In the above program, channel c has a buffer capacity of 3. That means it can hold 3 values, which it does after the three sends, but since the buffer does not overflow (we didn’t push a fourth value), the main goroutine never blocks and the program exits.

Let’s send one extra value.

As stated earlier, once the buffer is full, the c <- 4 send operation blocks the main goroutine, and the squares goroutine drains out all the values.

Length and capacity of a channel

Similar to a slice, a buffered channel has a length and a capacity. The length of a channel is the number of values queued (unread) in the channel buffer, while the capacity is the buffer size. To get the length, we use the len function, and to find the capacity, we use the cap function, just like with a slice.

If you are wondering why the above program runs fine and no deadlock error was thrown: since the channel capacity is 3 and only 2 values are in the buffer, Go did not have to block the main goroutine and schedule another goroutine. You can simply read these values in the main goroutine if you want, because a buffer that isn’t full does not prevent you from reading values from the channel.

Here is another cool example

I have a brain teaser for you.

Using a buffered channel and for range, we can read from closed channels. Since the data of a closed channel still lives in its buffer, we can still extract it.
Buffered channels are like Pythagoras Cup, watch this interesting video on Pythagoras Cup.

Working with multiple goroutines

Let’s write 2 goroutines, one for calculating square of integers and other for cube of integers.

Let’s walk through the execution of the above program step by step.

  • We created 2 functions, square and cube, which we run as goroutines. Both receive a channel of type int as argument c; we read a number from it into the variable num and then write the result back to channel c on the next line.
  • In the main goroutine, we create 2 channels, squareChan and cubeChan, of type int using the make function.
  • Then we run the square and cube goroutines.
  • Since control is still inside the main goroutine, the testNum variable gets the value 3.
  • Then we push data to squareChan and cubeChan. The main goroutine is blocked until these channels are read from. Once they are read, the main goroutine continues execution.
  • When we then try to read data from those channels in the main goroutine, control is blocked until the goroutines write some data to them. Here, we used the shorthand := syntax to receive data from multiple channels.
  • Until these goroutines write some data to the channels, the main goroutine stays blocked.
  • When the channel write operations are done, the main goroutine starts executing again. Then we calculate the sum and print it to the console.

Hence the above program yields the following result:

[main] main() started
[main] sent testNum to squareChan
[square] reading
[main] resuming
[main] sent testNum to cubeChan
[cube] reading
[main] resuming
[main] reading from channels
[main] sum of square and cube of 3 is 36
[main] main() stopped

☛ Unidirectional channels

So far, we have seen channels that can transmit data in both directions, or in simple words, channels on which we can do both read and write operations. But we can also create channels that are unidirectional in nature: receive-only channels, which allow only read operations, and send-only channels, which allow only write operations.

A unidirectional channel is also created using the make function, but with additional arrow syntax.

roc := make(<-chan int)
soc := make(chan<- int)

In the above program, roc is a receive-only channel, as the arrow in the make call points away from the chan keyword, while soc is a send-only channel, where the arrow points towards the chan keyword. They also have different types.

But what is the use of a unidirectional channel? Using unidirectional channels increases the type safety of a program, making it less prone to errors.

But let’s say you have a goroutine where you only need to read data from a channel, while the main goroutine needs to both read and write data on the same channel. How will that work?

Luckily, Go provides an easy syntax to convert a bidirectional channel to a unidirectional one.

We modified the greet goroutine example to convert the bidirectional channel c to the receive-only channel roc inside the greet function. Now we can only read from that channel; any write operation on it results in the compile-time error "invalid operation: roc <- "some text" (send to receive-only type <-chan string)".

☛ Anonymous goroutine

In the goroutines chapter, we learned about anonymous goroutines. We can also use channels with them. Let’s modify the earlier simple example to use a channel in an anonymous goroutine.

This was our earlier example

Below is the modified example, where we made the greet goroutine an anonymous goroutine.

☛ Channel as the data type of a channel

Yes, channels are first-class values and can be used anywhere other values can: as struct fields, function arguments, function return values, and even as the element type of another channel. Here, we are interested in using a channel as the data type of another channel.

☛ Select

select is just like switch, but without an input argument, and it is used only for channel operations. A select statement is used to perform an operation on only one channel out of many, conditionally selected by a case block.

Let’s first see an example, then discuss how it works.

From the above program, we can see that a select statement is just like switch, but instead of boolean expressions, the cases contain channel operations: reads, writes, or a mix of both. The select statement is blocking, except when it has a default case (we will see that later). Once one of the case conditions is fulfilled, it unblocks. So when is a case condition fulfilled?

If all the case statements (channel operations) are blocking, the select statement waits until one of them (one of the channel operations) unblocks, and that case is executed. If some or all of the channel operations are non-blocking, then one of the non-blocking cases is chosen at random and executed immediately.

To explain the above program: we started 2 goroutines with independent channels, then introduced a select statement with 2 cases. One case reads a value from chan1, the other from chan2. Since these channels are unbuffered, the read operations are blocking (and so are the write operations). So both cases of the select are blocking, hence select waits until one of them unblocks.

When control reaches the select statement, the main goroutine blocks and the goroutines involved in the select statement, service1 and service2, get scheduled (one at a time). service1 waits for 3 seconds and then unblocks by writing to chan1. Similarly, service2 waits for 5 seconds and then unblocks by writing to chan2. Since service1 unblocks earlier than service2, case 1 unblocks first; that case is executed and the other cases (here, case 2) are ignored. Once the case execution is done, the main function proceeds further.

Above program simulates real world web service where a load balancer gets millions of requests and it has to return a response from one of the services available. Using goroutines, channels and select, we can ask multiple services for a response, and one which responds quickly can be used.

To simulate the situation where all the cases are blocking and a response becomes available at nearly the same time, we can simply remove the time.Sleep calls.

Above program yields following result (you may get different result)

main() started 0s
service2() started 481µs
Response from service 2 Hello from service 2 981.1µs
main() stopped 981.1µs

but sometimes, it can also be

main() started 0s
service1() started 484.8µs
Response from service 1 Hello from service 1 984µs
main() stopped 984µs

This happens because the chan1 and chan2 operations happen at nearly the same time, but there is still some difference in execution and scheduling timing.

To simulate the situation where all the cases are non-blocking and a response is available immediately, we can use buffered channels.

Above program yields following result

main() started 0s
Response from chan2 Value 1 0s
main() stopped 1.0012ms

In some cases, it can also be

main() started 0s
Response from chan1 Value 1 0s
main() stopped 1.0012ms

In the above program, both channels have 2 values in their buffers. Since we are sending only 2 values into channels with buffer capacity 2, these channel operations don’t block and control reaches the select statement. Reading from a buffered channel is non-blocking until the buffer is entirely empty, and we are reading only one value in each case condition, so all the case operations are non-blocking. Hence, the Go runtime selects a case statement at random.

default case

Like the switch statement, the select statement also has a default case. A default case is non-blocking, but that’s not all: the default case makes the select statement itself always non-blocking. That means a send or receive operation on any channel (buffered or unbuffered) no longer waits; if it cannot proceed immediately, the default case runs instead.

If a value is available on any channel then select will execute that case. If not then it will immediately execute the default case.

In the above program, since the channels are unbuffered and no value is immediately available on either channel operation, the default case is executed. If the above select statement didn’t have a default case, the select would have been blocking and the response would have been different.

Since a select with default is non-blocking, the scheduler never gets a call from the main goroutine to schedule the available goroutines. But we can trigger that manually by calling time.Sleep. This way, all goroutines get scheduled, execute, and block on their sends, returning control to the main goroutine, which wakes up after some time. When the main goroutine wakes up, a value is immediately available on the channels.

Hence, above program yields following result

main() started 0s
service1() started 0s
service2() started 0s
Response from service 1 Hello from service 1 3.0001805s
main() stopped 3.0001805s

In some case, it could also be

main() started 0s
service1() started 0s
service2() started 0s
Response from service 2 Hello from service 2 3.0000957s
main() stopped 3.0000957s


The default case is useful when no channel is ready to send or receive data. To avoid a deadlock, we can use the default case: because the default case makes all channel operations non-blocking, Go does not have to schedule another goroutine to provide data when it is not immediately available.

Similar to receive, in send operation, if other goroutines are sleeping (not ready to receive value), default case is executed.

nil channel

As we know, the default value of a channel is nil, and we cannot perform send or receive operations on a nil channel. When a nil channel is used in a select statement, it throws one (or both) of the errors below.

From the above result, we can see that select (no cases) means the select statement is effectively empty, because cases with a nil channel are ignored. But since an empty select{} statement blocks the main goroutine and the service goroutine is scheduled in its place, the channel operation on the nil channel throws the chan send (nil chan) error. To avoid this, we can use a default case.

The above program not only ignores the case block but also executes the default statement immediately. Hence the scheduler never gets time to schedule the service goroutine. But this is really bad design: you should always check a channel for a nil value.

☛ Adding timeout

The above program is not very useful, since only the default case gets executed. But sometimes what we want is for any available service to respond within a desirable time; if it doesn’t, then the default case should be executed. This can be done using a case with a channel operation that unblocks after a defined duration. Such a channel operation is provided by the time package’s After function. Let’s see an example.

Above program yields following result after 2 seconds

main() started 0s
No response received 2.0010958s
main() stopped 2.0010958s

In the above program, <-time.After(2 * time.Second) unblocks after 2 seconds, returning the time at which it unblocked, though here we are not interested in its return value. Since it also acts like a goroutine, we have 3 goroutines, of which this one unblocks first. Hence the case corresponding to that channel operation gets executed.

This is useful because you don’t want to wait too long for a response from the available services; otherwise the user has to wait a long time before getting anything back. If we used 10 * time.Second in the above example, the response from service1 would be printed. I guess that’s obvious by now.

☛ Empty select

Like the empty loop for{}, the empty select{} syntax is also valid, but there is a gotcha. As we know, a select statement is blocked until one of its cases unblocks, and since there are no case statements available to unblock it, the main goroutine blocks forever, resulting in a deadlock.

In the above program, since select blocks the main goroutine, the scheduler schedules the only other available goroutine, service. But after that goroutine dies, the scheduler has to schedule another one; since the main goroutine is blocked and no other goroutines are available, we get a deadlock.

main() started
Hello from service!
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [select (no cases)]:
program.go:16 +0xba
exit status 2

☛ WaitGroup

Let’s imagine a situation where you need to know whether all goroutines finished their job. This is somewhat the opposite of select, where you needed only one condition to be true; here you need all conditions to be true to unblock the main goroutine, the condition being the completion of a goroutine.

WaitGroup is a struct with a counter that tracks how many goroutines were spawned and how many have completed their job. When this counter reaches zero, all goroutines have done their job.

Let’s dive into an example and see the terminology.

In the above program, we created an empty struct (with zero-value fields) wg of type sync.WaitGroup. A WaitGroup struct has unexported fields like noCopy, state1 and sema, whose internal implementation we don’t need to know. This struct has three methods: Add, Wait and Done.

The Add method expects one int argument, which is the delta for the WaitGroup counter. The counter is nothing but an integer with default value 0, holding how many goroutines are running. When a WaitGroup is created, its counter is 0, and we increment it by passing a delta via the Add method. Remember, the counter does not automatically understand when a goroutine is launched; we have to increment it manually.

The Wait method blocks the goroutine from which it is called. Once the counter reaches 0, that goroutine unblocks. Hence, we need something to decrement the counter.

The Done method decrements the counter. It accepts no arguments, so it always decrements the counter by 1.

In the above program, after creating wg, we ran a for loop 3 times. In each iteration, we launched 1 goroutine and incremented the counter by 1. That means we now have 3 goroutines waiting to be executed and the WaitGroup counter is 3. Notice that we passed a pointer to wg to each goroutine. This is because inside the goroutine, once it has done whatever it was supposed to do, it needs to call the Done method to decrement the counter; if wg were passed by value, the wg in main would not get decremented. This is pretty obvious.

After the for loop finishes executing, we still haven’t passed control to the other goroutines. This is done by calling the Wait method on wg, like wg.Wait(). This blocks the main goroutine until the counter reaches 0. Once the counter reaches 0, because the 3 goroutines called the Done method on wg 3 times in total, the main goroutine unblocks and starts executing further code.

Hence the above program yields the result below:

main() started
Service called on instance 2
Service called on instance 3
Service called on instance 1
main() stopped

The above result might be different for you, as the order of execution of goroutines may vary.

The Add method accepts an int, which means the delta can also be negative. To learn more about this, visit the official documentation.

☛ Worker pool

As the name suggests, a worker pool is a collection of goroutines working concurrently to perform a job. In WaitGroup, we saw a collection of goroutines working concurrently, but they did not have a specific job. Once you throw channels into the mix, they have work to do and become a worker pool.

So the concept behind a worker pool is to maintain a pool of worker goroutines which receive tasks and return results. Once they are all done with their jobs, we collect the results. All of these goroutines share the same channels for this purpose.

Let’s see a simple example with two channels viz. tasks and results.

Don’t worry, I am going to explain what’s happening here.

  • sqrWorker is a worker function which takes a tasks channel, a results channel and an id. The job of this goroutine is to send squares of the numbers received from the tasks channel to the results channel.
  • In the main function, we create the tasks and results channels with buffer capacity 10. Hence any send operation on them is non-blocking until the buffer is full; setting a large buffer value here is not a bad idea.
  • Then we spawn multiple instances of sqrWorker as goroutines with the above two channels and an id parameter, so we can see which worker is executing a task.
  • Then we pass 5 tasks to the tasks channel, which is non-blocking.
  • Since we are done with the tasks channel, we close it. This is not strictly necessary, but it will save a lot of time in the future if some bugs creep in.
  • Then, using a for loop with 5 iterations, we pull data from the results channel. Since a read operation on an empty buffer is blocking, a goroutine from the worker pool gets scheduled. Until that goroutine returns some result, the main goroutine stays blocked.
  • Since we simulate a blocking operation in the worker goroutine, that tells the scheduler to schedule other available goroutines. When a worker goroutine becomes available again, it writes to the results channel. As writing to a buffered channel is non-blocking until the buffer is full, writing to the results channel here is non-blocking. Also, while one worker goroutine was unavailable, other worker goroutines executed and consumed values from the tasks buffer. After all worker goroutines consume the tasks, their for range loops finish once the tasks channel buffer is empty. No deadlock error is thrown, because the tasks channel was closed.
  • Sometimes all worker goroutines can be sleeping, so the main goroutine wakes up and works until the results channel buffer is empty again.
  • After all worker goroutines die, the main goroutine regains control, prints the remaining results from the results channel and continues its execution.

The above example is a mouthful, but it brilliantly explains how multiple goroutines can feed on the same channel and get a job done elegantly. Goroutines are powerful when the worker’s job is blocking. If you remove the time.Sleep() call, then only one goroutine will perform all the jobs, as no other goroutine gets scheduled until the for range loop is done and the goroutine dies.

You may get a different result than in the previous example depending on how fast your system is, because if all worker goroutines are blocked, even for a microsecond, the main goroutine wakes up as explained.

Now, let’s use the WaitGroup concept to synchronize the goroutines. Reworking the previous example with WaitGroup, we can achieve the same results more elegantly.

The above result looks neat because the read operations on the results channel in the main goroutine are non-blocking: the results channel is already populated with results while the main goroutine is blocked by the wg.Wait() call. Using a WaitGroup, we can prevent a lot of (unnecessary) context switches (scheduling): 7 here compared to 9 in the previous example. But there is a sacrifice: you have to wait until all jobs are done.

☛ Mutex

Mutex is one of the simplest concepts in Go. But before I explain it, let’s first understand what a race condition is. Goroutines have independent stacks and hence don’t share stack data between them. But there can be cases where some data on the heap is shared between multiple goroutines. Then multiple goroutines try to manipulate data at the same memory location, which leads to unexpected results. I will show you one simple example.

In the above program, we spawn 1000 goroutines which increment the value of a global variable i, which starts at 0. Since we are using a WaitGroup, we want all 1000 goroutines to increment the value of i one by one, making the final value of i 1000. When the main goroutine starts executing again after the wg.Wait() call, we print i. Let’s see the final result.

value of i after 1000 operations is 937

What? Why did we get less than 1000? It looks like some goroutines did not do their work. But actually, our program has a race condition. Let’s see what might have happened.

The i = i + 1 computation has 3 steps:

  • (1) get value of i
  • (2) increment value of i by 1
  • (3) update value of i with new value

Let’s imagine a scenario where different goroutines were scheduled in between these steps. For example, let’s consider 2 goroutines out of the pool of 1000 goroutines viz. G1 and G2.

G1 starts first when i is 0 and runs the first 2 steps; in its view, i is now 1. But before G1 updates the value of i in step 3, a new goroutine G2 is scheduled, and it runs all 3 steps. For G2, the value of i is still 0, hence after it executes step 3, i is 1. Now G1 is scheduled again to finish step 3, and it updates i with the value 1 from its step 2. In a perfect world where goroutines are scheduled only after completing all 3 steps, 2 successful operations by 2 goroutines would have produced a value of 2, but that’s not the case here. Hence, we can pretty much see why our program did not yield a final value of 1000.

So far we learned that goroutines are cooperatively scheduled: unless a goroutine blocks on one of the conditions mentioned in the concurrency lesson, another goroutine won’t take its place. And since i = i + 1 is not blocking, why does the Go scheduler schedule another goroutine?

You should definitely check out this answer on Stack Overflow. In any case, you shouldn’t rely on Go’s scheduling algorithm; implement your own logic to synchronize different goroutines.

One way to make sure that only one goroutine completes all 3 steps above at a time is by implementing a mutex. A mutex (mutual exclusion) is a concept in programming where only one routine (thread) can perform an operation at a time. This is done by one routine acquiring a lock on the value, doing whatever manipulation it has to do, and then releasing the lock. While the value is locked, no other routine can read or write it.

In Go, the mutex data structure is provided by the sync package. Before performing any operation on a value which might cause a race condition, we acquire a lock using the mutex.Lock() method, followed by the code of the operation. Once we are done with the operation, here i = i + 1, we unlock it using the mutex.Unlock() method. When any other goroutine tries to read or write the value of i while the lock is held, that goroutine blocks until the first goroutine unlocks it. Hence only 1 goroutine at a time gets to read or write the value of i, avoiding race conditions. Remember, any variables accessed in the operation(s) between lock and unlock are unavailable to other goroutines until the whole operation is unlocked.

Let’s modify previous example with mutex.

In the above program, we created one mutex m and passed a pointer to it to all the spawned goroutines. Before beginning the operation on i, we acquire the lock with m.Lock(), and after the operation we release it with m.Unlock(). The above program yields the result below:

value of i after 1000 operations is 1000

From the above result, we can see that the mutex helped us resolve the race condition. But the first rule is: avoid shared resources between goroutines.

You can test for race conditions in Go using the race flag while running a program, like go run -race program.go. Read more about the race detector here.

Concurrency Patterns

There are tons of ways concurrency makes our day-to-day programming life easier. The following are a few concepts and methodologies we can use to make programs faster and more reliable.


Generator

Using channels, we can implement a generator in a much better way. If a generator is computationally expensive, we might as well generate the data concurrently; that way, the program doesn’t have to wait until all the data is generated. For example, generating the Fibonacci series.

From the fib function we get a channel over which we can iterate and use the data received. Inside fib, since we have to return a receive-only channel, we create a buffered channel and return it at the end; the function’s return type converts this bidirectional channel to a unidirectional receive-only channel. In an anonymous goroutine, we push Fibonacci numbers into this channel using a for loop, and once the loop is done, we close the channel from inside the goroutine. In the main goroutine, using for range on the fib function call, we get direct access to this channel.

fan-in & fan-out

fan-in is a multiplexing strategy where the inputs of several channels are combined to produce one output channel. fan-out is a demultiplexing strategy where a single channel is split into multiple channels.

I am not going to explain how the above program works, because I have added comments in the program explaining just that. The above program yields the following result:

Sum of squares between 0-9 is 285