Cristhian Villegas

Concurrency in Go: Goroutines and Channels — Go Course #7


Introduction: concurrency vs parallelism


Welcome to part 7 of our Go course for beginners. Concurrency is one of Go's most powerful features and what sets it apart from many other programming languages.

Before writing any code, it is essential to understand the difference between two commonly confused concepts:

  • Concurrency: the ability to handle multiple tasks at the same time by switching between them. Think of a chef who prepares several dishes simultaneously: while waiting for water to boil, they chop vegetables.
  • Parallelism: the actual simultaneous execution of multiple tasks. Think of multiple chefs, each preparing a different dish at the same time.
Key insight: Go was designed from the ground up for concurrency. Rob Pike, one of Go's creators, put it this way: "Concurrency is not parallelism. Concurrency is a way to structure a program; parallelism is a property of execution."

Go implements its concurrency model based on CSP (Communicating Sequential Processes), where independent processes communicate by sending and receiving messages through channels.

Goroutines: lightweight threads with the go keyword

A goroutine is a function that runs concurrently with other goroutines. They are extremely lightweight: while an OS thread may consume ~1 MB of memory, a goroutine starts with only ~2 KB. You can have thousands or even millions of goroutines running at the same time.

go
package main

import (
    "fmt"
    "time"
)

func greet(name string) {
    for i := 0; i < 3; i++ {
        fmt.Println("Hello from", name)
        time.Sleep(100 * time.Millisecond)
    }
}

func main() {
    // Launch a goroutine with the "go" keyword
    go greet("goroutine 1")
    go greet("goroutine 2")

    // The main function is also a goroutine
    greet("main")

    // Without the Sleep in greet, main might finish
    // before the goroutines complete their work
}

Notice that you only need to add the go keyword before a function call to run it as a goroutine. It is that simple.

Warning: When the main function returns, all goroutines that are still running are immediately terminated. Do not let main exit before your goroutines finish. Use sync.WaitGroup or channels to synchronize.

sync.WaitGroup for synchronization

The problem with the previous example is that we used time.Sleep to wait, which is unreliable. The correct solution is sync.WaitGroup, which lets you wait for a group of goroutines to finish.

go
package main

import (
    "fmt"
    "sync"
)

func worker(id int, wg *sync.WaitGroup) {
    // defer ensures Done() is called when the function exits
    defer wg.Done()
    fmt.Printf("Worker %d: starting\n", id)
    // Simulate work...
    for i := 0; i < 3; i++ {
        fmt.Printf("Worker %d: step %d\n", id, i+1)
    }
    fmt.Printf("Worker %d: done\n", id)
}

func main() {
    var wg sync.WaitGroup

    for i := 1; i <= 5; i++ {
        wg.Add(1) // Increment the counter
        go worker(i, &wg)
    }

    wg.Wait() // Block until the counter reaches 0
    fmt.Println("All workers finished")
}

The pattern is straightforward: call wg.Add(1) before launching each goroutine, wg.Done() when the goroutine finishes, and wg.Wait() to wait for all of them.

Channels: communication between goroutines

Channels are the primary mechanism for communication between goroutines in Go. Think of a channel as a pipe through which you can send and receive values.

Unbuffered channels

An unbuffered channel requires both the sender and receiver to be ready at the same time. The sender blocks until someone receives the value.

go
package main

import "fmt"

func main() {
    // Create a channel of type string
    messages := make(chan string)

    // Send a value in a goroutine
    go func() {
        messages <- "Hello from the goroutine"
    }()

    // Receive the value (blocks until it arrives)
    msg := <-messages
    fmt.Println(msg) // Hello from the goroutine
}

Buffered channels

A buffered channel has capacity to store a fixed number of values. The sender only blocks when the buffer is full.

go
package main

import "fmt"

func main() {
    // Buffered channel with capacity 3
    ch := make(chan int, 3)

    // We can send without anyone receiving yet
    ch <- 10
    ch <- 20
    ch <- 30

    // Receive values in order (FIFO)
    fmt.Println(<-ch) // 10
    fmt.Println(<-ch) // 20
    fmt.Println(<-ch) // 30
}

Rule of thumb: Use unbuffered channels when you need guaranteed synchronization between goroutines. Use buffered channels when you want to decouple the sender from the receiver and allow some degree of asynchrony.

Channel directions: send-only and receive-only

Go lets you restrict the direction of a channel in function parameters. This makes your code safer and clearer about intent.

go
package main

import "fmt"

// Can only send to the channel
func producer(ch chan<- int) {
    for i := 0; i < 5; i++ {
        ch <- i * 10
    }
    close(ch) // Close the channel when done
}

// Can only receive from the channel
func consumer(ch <-chan int) {
    for value := range ch {
        fmt.Println("Received:", value)
    }
}

func main() {
    ch := make(chan int, 5)
    go producer(ch)
    consumer(ch) // Runs in main, not as a goroutine
}

The syntax is: chan<- int for send-only and <-chan int for receive-only. If you try to read from a send-only channel, the compiler will give you an error.

Select: multiplexing channels

The select statement lets you wait on multiple channel operations simultaneously. It works like a switch but for channels.

go
package main

import (
    "fmt"
    "time"
)

func main() {
    ch1 := make(chan string)
    ch2 := make(chan string)

    go func() {
        time.Sleep(100 * time.Millisecond)
        ch1 <- "result from ch1"
    }()

    go func() {
        time.Sleep(200 * time.Millisecond)
        ch2 <- "result from ch2"
    }()

    // Wait for whichever responds first
    for i := 0; i < 2; i++ {
        select {
        case msg1 := <-ch1:
            fmt.Println("Received from ch1:", msg1)
        case msg2 := <-ch2:
            fmt.Println("Received from ch2:", msg2)
        }
    }
}

The select blocks until one of the channels is ready. If multiple channels are ready at the same time, it picks one at random. You can also add a default case for non-blocking selects, or use time.After to implement timeouts.

go
select {
case msg := <-ch:
    fmt.Println("Received:", msg)
case <-time.After(3 * time.Second):
    fmt.Println("Timeout: no response in 3 seconds")
}

Mutex and sync.RWMutex for shared state

Although channels are the preferred communication method in Go, sometimes you need to protect access to shared data. That is where mutexes (mutual exclusion locks) come in.

go
package main

import (
    "fmt"
    "sync"
)

// SafeCounter uses a mutex to protect the value
type SafeCounter struct {
    mu    sync.Mutex
    value int
}

func (c *SafeCounter) Increment() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}

func (c *SafeCounter) Get() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.value
}

func main() {
    counter := SafeCounter{}
    var wg sync.WaitGroup

    // 1000 goroutines incrementing the same counter
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter.Increment()
        }()
    }

    wg.Wait()
    fmt.Println("Final value:", counter.Get()) // Always 1000
}

If you have many reads and few writes, sync.RWMutex is more efficient. It allows multiple simultaneous readers but only one writer at a time.

go
type Cache struct {
    mu   sync.RWMutex
    data map[string]string
}

func (c *Cache) Read(key string) (string, bool) {
    c.mu.RLock() // Read lock (multiple readers OK)
    defer c.mu.RUnlock()
    value, ok := c.data[key]
    return value, ok
}

func (c *Cache) Write(key, value string) {
    c.mu.Lock() // Write lock (exclusive)
    defer c.mu.Unlock()
    c.data[key] = value
}

Go philosophy: "Don't communicate by sharing memory; share memory by communicating." In other words, prefer channels over mutexes whenever possible. Mutexes are appropriate when protecting a simple data structure; channels are better for coordinating complex workflows.

Common patterns: worker pool, fan-in, fan-out

Go has several well-established concurrency patterns. Let us look at the most important ones:

Worker Pool

A worker pool processes tasks from a shared channel. It is useful when you want to limit the number of active goroutines.

go
package main

import (
    "fmt"
    "sync"
)

func worker(id int, tasks <-chan int, results chan<- int, wg *sync.WaitGroup) {
    defer wg.Done()
    for task := range tasks {
        fmt.Printf("Worker %d processing task %d\n", id, task)
        results <- task * 2 // simulate processing
    }
}

func main() {
    const numWorkers = 3
    const numTasks = 10

    tasks := make(chan int, numTasks)
    results := make(chan int, numTasks)

    var wg sync.WaitGroup

    // Launch workers
    for i := 1; i <= numWorkers; i++ {
        wg.Add(1)
        go worker(i, tasks, results, &wg)
    }

    // Send tasks
    for i := 1; i <= numTasks; i++ {
        tasks <- i
    }
    close(tasks)

    // Wait for completion and close results
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect results
    for r := range results {
        fmt.Println("Result:", r)
    }
}

Fan-out / Fan-in

Fan-out means distributing work among multiple goroutines. Fan-in means combining results from multiple goroutines into a single channel.

go
// Fan-in: merge multiple channels into one
func fanIn(channels ...<-chan string) <-chan string {
    var wg sync.WaitGroup
    merged := make(chan string)

    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan string) {
            defer wg.Done()
            for val := range c {
                merged <- val
            }
        }(ch)
    }

    go func() {
        wg.Wait()
        close(merged)
    }()

    return merged
}

Race conditions and the race detector

A race condition occurs when two or more goroutines access the same variable simultaneously and at least one of them modifies it. The result is unpredictable.

bash
# Run with the race detector
go run -race main.go

# Also works with tests
go test -race ./...

Go's race detector is an invaluable tool. It analyzes your program at runtime and alerts you if it detects unsafe concurrent accesses.

go
// RACE CONDITION EXAMPLE (incorrect)
package main

import (
    "fmt"
    "sync"
)

func main() {
    counter := 0
    var wg sync.WaitGroup

    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter++ // RACE CONDITION: unprotected access
        }()
    }

    wg.Wait()
    // The result will be different every time
    fmt.Println("Value:", counter) // Could be 980, 995, 1000...
}

Always use -race during development: Run your tests and programs with the -race flag during development. Race conditions are silent bugs that can cause unpredictable behavior in production. The detector finds them for you.

Practical example: concurrent URL checker

Let us build a complete program that checks the status of multiple URLs concurrently, using goroutines, channels, and WaitGroup.

go
package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

// Result stores information about each check
type Result struct {
    URL      string
    Status   int
    Duration time.Duration
    Error    string
}

func checkURL(url string, results chan<- Result, wg *sync.WaitGroup) {
    defer wg.Done()

    start := time.Now()
    client := http.Client{Timeout: 5 * time.Second}

    resp, err := client.Get(url)
    duration := time.Since(start)

    if err != nil {
        results <- Result{
            URL:      url,
            Status:   0,
            Duration: duration,
            Error:    err.Error(),
        }
        return
    }
    defer resp.Body.Close()

    results <- Result{
        URL:      url,
        Status:   resp.StatusCode,
        Duration: duration,
        Error:    "",
    }
}

func main() {
    urls := []string{
        "https://www.google.com",
        "https://www.github.com",
        "https://go.dev",
        "https://www.rust-lang.org",
        "https://nonexistent-site.xyz",
    }

    results := make(chan Result, len(urls))
    var wg sync.WaitGroup

    fmt.Println("Checking URLs...")
    start := time.Now()

    for _, url := range urls {
        wg.Add(1)
        go checkURL(url, results, &wg)
    }

    // Close the channel when all goroutines finish
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect and print results
    success := 0
    failed := 0
    for r := range results {
        if r.Error != "" {
            fmt.Printf("  FAIL   %s (%v) - %s\n", r.URL, r.Duration.Round(time.Millisecond), r.Error)
            failed++
        } else {
            fmt.Printf("  OK %d  %s (%v)\n", r.Status, r.URL, r.Duration.Round(time.Millisecond))
            success++
        }
    }

    fmt.Printf("\nSummary: %d success, %d failed in %v\n",
        success, failed, time.Since(start).Round(time.Millisecond))
}

Notice the power of concurrency: If each URL takes ~500ms to respond and you have 5 URLs, sequentially it would take ~2.5 seconds. With goroutines, all URLs are checked simultaneously and the total is approximately ~500ms (the time of the slowest one).

Summary and next article

In this article you learned the fundamentals of concurrency in Go:

  • Goroutines: functions that run concurrently with go func()
  • sync.WaitGroup: waiting for a group of goroutines to finish
  • Channels: pipes for sending and receiving data between goroutines
  • Buffered and unbuffered channels: different synchronization strategies
  • Channel directions: chan<- and <-chan for added safety
  • Select: waiting on multiple channels simultaneously
  • Mutex: protecting shared data with sync.Mutex and sync.RWMutex
  • Patterns: worker pool, fan-in, fan-out
  • Race detector: go run -race to find bugs

In the next article (part 8) we will learn about packages, modules, and tooling in Go: how to organize your code into packages, use Go modules, and master the Go ecosystem tools.


Cristhian Villegas

Software Engineer specializing in Java, Spring Boot, Angular & AWS. Building scalable distributed systems with clean architecture.
