Leapcell
Go Concurrency: A Deep Dive to Expertise


Analysis of goroutine Concurrency Control Methods in Go Language

In Go, goroutines are lightweight threads whose low resource consumption and cheap context switching make efficient concurrent programming practical. Effectively controlling these concurrently running goroutines, however, is a problem every developer has to face.

When concurrency control comes up, locking is often the first mechanism considered. Go provides the mutual exclusion lock sync.Mutex and the read-write lock sync.RWMutex, as well as atomic operations in sync/atomic. Note, however, that these primitives protect shared data during concurrent access; they do not directly control the goroutines themselves.
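To make that distinction concrete, here is a minimal sketch (not one of this article's examples) showing both styles of data protection: a sync.Mutex guarding a plain counter, and sync/atomic updating another. This keeps the data safe, but it does not coordinate the goroutines themselves; that is what the tools below do.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// run increments one counter under a sync.Mutex and another via sync/atomic,
// from 100 goroutines, and returns both final values.
func run() (int, int64) {
	var mu sync.Mutex
	var wg sync.WaitGroup
	counter := 0            // protected by mu
	var atomicCounter int64 // updated with atomic operations

	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++
			mu.Unlock()
			atomic.AddInt64(&atomicCounter, 1)
		}()
	}
	wg.Wait()
	return counter, atomic.LoadInt64(&atomicCounter)
}

func main() {
	c, a := run()
	fmt.Println(c, a) // both 100: no increments were lost
}
```

Without the lock or the atomic call, some increments would be lost to data races.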

This article focuses on the common ways to control the concurrent behavior of goroutines. In Go, the three most commonly used tools are sync.WaitGroup, channel, and Context.

I. sync.WaitGroup

sync.WaitGroup is a very practical synchronization primitive in the Go language, and its main function is to assist developers in waiting for a group of goroutines to complete execution. Usually, sync.WaitGroup is used in the following scenarios:

  1. In the main function, ensuring a group of goroutines has finished before the program exits.
  2. A function that starts multiple goroutines and must not return a result until they have all completed.
  3. A function that starts multiple goroutines and needs to run a specific operation after they have all completed.
  4. A function that starts multiple goroutines and needs to close a specific resource after they have all completed.
  5. A function that starts multiple goroutines and cannot exit a loop until they have all completed.

Using sync.WaitGroup follows a fixed pattern: create a sync.WaitGroup object; call its Add method with the number of goroutines to wait for; start the goroutines with the go keyword, calling the Done method inside each one when it finishes; finally, call the Wait method to block until all goroutines have completed.

The following simple example starts 3 goroutines that sleep for 0, 1, and 2 seconds respectively; the main function exits only after all three have finished:

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var wg sync.WaitGroup

    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            fmt.Printf("sub goroutine sleep: %ds\n", i)
            time.Sleep(time.Duration(i) * time.Second)
        }(i)
    }

    wg.Wait()
    fmt.Println("main func done")
}

II. channel

In the Go language, channel is a powerful tool that can help developers better control the concurrency of goroutines. The following are several common methods of using channel to control the concurrency of goroutines:

(I) Using an Unbuffered Channel for Synchronization

An unbuffered channel can be used to implement the producer-consumer pattern: one goroutine produces data, and another consumes it. Because an unbuffered channel has no capacity, a send blocks until a receiver is ready, and a receive blocks until a sender provides a value. Each transfer is therefore a synchronization point, keeping the producer and the consumer in lockstep.

The following is a simple example code:

package main

import (
    "fmt"
    "sync"
    "time"
)

func producer(ch chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for i := 0; i < 10; i++ {
        ch <- i
        fmt.Println("produced", i)
        time.Sleep(100 * time.Millisecond)
    }
    close(ch)
}

func consumer(ch chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for i := range ch {
        fmt.Println("consumed", i)
        time.Sleep(150 * time.Millisecond)
    }
}

func main() {
    var wg sync.WaitGroup
    ch := make(chan int)

    wg.Add(2)
    go producer(ch, &wg)
    go consumer(ch, &wg)

    wg.Wait()
}

In this example, an unbuffered channel is created to transfer data between the producer goroutine and the consumer goroutine. The producer goroutine sends data to the channel, and the consumer goroutine receives data from the channel. In the producer goroutine, the time.Sleep function is used to simulate the time required to produce data; in the consumer goroutine, the time.Sleep function is also used to simulate the time required to consume data. Finally, sync.WaitGroup is used to wait for all goroutines to be completed.

(II) Using a Buffered Channel for Rate Limiting

A buffered channel can be used to limit the number of concurrently running goroutines. The approach is to set the channel's capacity to the desired maximum concurrency: each goroutine sends a value into the channel before doing its work and receives a value back when it finishes. Since a send blocks once the buffer is full, the number of simultaneously running goroutines can never exceed the channel's capacity.

The following is a simple example code:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    maxConcurrency := 3
    semaphore := make(chan struct{}, maxConcurrency)

    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(i int) { // pass i as a parameter so each goroutine gets its own copy
            defer wg.Done()
            semaphore <- struct{}{} // acquire a slot; blocks while 3 goroutines are running
            fmt.Println("goroutine", i, "started")
            // do some work
            fmt.Println("goroutine", i, "finished")
            <-semaphore // release the slot
        }(i)
    }

    wg.Wait()
}

In this example, a buffered channel with capacity 3 is created and 10 goroutines are started. Each goroutine sends an empty struct into the channel before doing its work; if 3 goroutines are already running, the send blocks until one of them finishes and receives from the channel. This caps the number of simultaneously running goroutines at 3.
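The same semaphore pattern can also acquire the slot before launching the goroutine, which avoids spawning all 10 goroutines up front when only 3 can run at a time. The sketch below (the function and variable names are illustrative) additionally tracks the peak observed concurrency, to show that the limit holds:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runLimited starts n tasks but lets at most limit run at once by acquiring
// a semaphore slot before launching each goroutine. It returns the highest
// number of tasks it ever observed running simultaneously.
func runLimited(n, limit int) int64 {
	var wg sync.WaitGroup
	sem := make(chan struct{}, limit)
	var current, peak int64

	for i := 0; i < n; i++ {
		sem <- struct{}{} // blocks here until a slot is free
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot on exit
			c := atomic.AddInt64(&current, 1)
			for { // record the peak concurrency with a CAS loop
				p := atomic.LoadInt64(&peak)
				if c <= p || atomic.CompareAndSwapInt64(&peak, p, c) {
					break
				}
			}
			// simulated work would go here
			atomic.AddInt64(&current, -1)
		}(i)
	}
	wg.Wait()
	return atomic.LoadInt64(&peak)
}

func main() {
	fmt.Println("peak concurrency:", runLimited(10, 3)) // never exceeds 3
}
```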

III. Context

In the Go language, Context is an important means of controlling the concurrency of goroutines. The following are several common methods of using Context to control the concurrency of goroutines:

(I) Timeout Control

In some cases, to avoid problems such as long-term blocking or deadlock, it is necessary to limit how long a goroutine may run. Context helps here: create a Context with a timeout and pass it to the goroutine. If the work is not finished within the timeout, the channel returned by the Context's Done method is closed, and the goroutine can observe this and stop.

The following is a simple example code:

package main

import (
    "context"
    "fmt"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    go func() {
        for {
            select {
            case <-ctx.Done():
                fmt.Println("goroutine finished")
                return
            default:
                fmt.Println("goroutine running")
                time.Sleep(500 * time.Millisecond)
            }
        }
    }()

    time.Sleep(3 * time.Second)
}

In this example, a Context with a 2-second timeout is created and passed to the goroutine. Inside the goroutine, a select statement listens on the channel returned by the Context's Done method; when the timeout expires, that channel is closed and the goroutine returns.

(II) Cancellation Operation

While a program is running, it is sometimes necessary to cancel the execution of certain goroutines. Context supports this directly: create a cancellable Context with context.WithCancel and pass the Context to the goroutine. To cancel the goroutine's execution, call the cancel function that WithCancel returned.

The following is a simple example code:

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func main() {
    ctx, cancel := context.WithCancel(context.Background())

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        for {
            select {
            case <-ctx.Done():
                fmt.Println("goroutine finished")
                return
            default:
                fmt.Println("goroutine running")
                time.Sleep(500 * time.Millisecond)
            }
        }
    }()

    time.Sleep(2 * time.Second)
    cancel()
    wg.Wait()
}

In this example, a cancellable Context is created and passed to the goroutine. Inside the goroutine, a select statement listens on the channel returned by the Context's Done method and returns when that channel is closed. In the main function, time.Sleep simulates the program running for a while before the cancel function is called.

(III) Resource Management

In some scenarios, the resources used by goroutines must be managed to prevent leaks or race conditions. Context helps here too: tie each resource's lifetime to a Context and pass that Context to the goroutine. When the Context is cancelled and the goroutine returns, its cleanup code can release the resources it holds.

The following is a simple example code:

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func worker(ctx context.Context, wg *sync.WaitGroup) {
    defer wg.Done()
    // Simulate acquiring a resource tied to this goroutine's lifetime;
    // the deferred call releases it whichever way the goroutine exits.
    fmt.Println("resource acquired")
    defer fmt.Println("resource released")
    for {
        select {
        case <-ctx.Done():
            fmt.Println("goroutine finished")
            return
        default:
            fmt.Println("goroutine running")
            time.Sleep(500 * time.Millisecond)
        }
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())

    var wg sync.WaitGroup
    wg.Add(1)
    go worker(ctx, &wg)

    time.Sleep(2 * time.Second)
    cancel()
    wg.Wait()
}

In this example, the worker's lifetime is bounded by the Context: when main calls the cancel function, the Done channel closes, the worker returns, and any cleanup registered with defer runs. Tying a goroutine's lifetime, and the resources it holds, to a Context in this way prevents leaks when the goroutine is cancelled.

Leapcell: The Next-Gen Serverless Platform for Golang app Hosting

Finally, I would like to recommend a platform well suited to deploying Go services: Leapcell


1. Multi-Language Support

  • Develop with JavaScript, Python, Go, or Rust.

2. Deploy unlimited projects for free

  • Pay only for usage — if there are no requests, there are no charges.

3. Unbeatable Cost Efficiency

  • Pay-as-you-go with no idle charges.
  • Example: $25 supports 6.94M requests at a 60ms average response time.

4. Streamlined Developer Experience

  • Intuitive UI for effortless setup.
  • Fully automated CI/CD pipelines and GitOps integration.
  • Real-time metrics and logging for actionable insights.

5. Effortless Scalability and High Performance

  • Auto-scaling to handle high concurrency with ease.
  • Zero operational overhead — just focus on building.


Explore more in the documentation!

Leapcell Twitter: https://x.com/LeapcellHQ
