What is a Workload? And why are we talking about it?
Could it be to better understand when to use concurrency in Go❤️?
The answer is yes 😁, and not just for Go: it applies to any technology that deals with parallelism and concurrency.
A workload refers to the amount of work an application needs to perform. In other words, a workload is the set of tasks or operations an application must execute to meet its requirements👊🏼.
Workloads can be distributed across multiple processes or threads to improve application performance. For example, if an application has a CPU-intensive workload, it can be broken down into smaller parts that run in parallel across multiple cores.
Workloads can also be distributed across multiple machines to enhance application scalability. For instance, if an application has an I/O-intensive workload, it can be distributed among several servers to improve user request handling capacity.
Therefore, we can say that a workload is the amount of work an application needs to process, and it plays a crucial role in choosing the right approach for concurrency or parallelism. Proper workload distribution can significantly improve performance and scalability.
Concurrency in Go is incredibly powerful yet simple; that was the intention of the engineers who designed the language. Many problems can be solved far more efficiently using concurrency, and this is where Go shines. That's why Go has become a go-to choice for concurrent backends.
Because of this, problems in this domain can be solved with high efficiency while using very few computational resources.🚀
Example of a Worker in Go:
Every Go program starts by declaring its package and importing the necessary libraries.

```go
package main

import (
	"fmt"
	"time"
)
```
This function is our Worker, which runs concurrently as a goroutine while the rest of the code continues execution.
Let's break down the parameters and understand channels, which are the mechanism for safely passing values between goroutines:

- jobs <-chan int: a receive-only channel; inside the function you can only read values from it.
- results chan<- int: a send-only channel; inside the function you can only write values to it.
- A channel declared without a direction arrow (for example, result2 chan string) is bidirectional, meaning that inside the function you can both send and receive values through it.
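The compiler enforces these directions for you. Here is a minimal sketch (the send and receive helper names are ours, not part of the worker example) showing how a bidirectional channel is implicitly converted to the restricted types when passed to a function:

```go
package main

import "fmt"

// send can only write to out: the chan<- type makes receiving
// from it inside this function a compile-time error.
func send(out chan<- string, msg string) {
	out <- msg
}

// receive can only read from in (<-chan).
func receive(in <-chan string) string {
	return <-in
}

func main() {
	// ch is bidirectional here; Go converts it implicitly to the
	// restricted types when it is passed to send and receive.
	ch := make(chan string, 1)
	send(ch, "hello")
	fmt.Println(receive(ch)) // prints "hello"
}
```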
Worker Function in Go
```go
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		fmt.Println("worker:", id, "started job", j)
		time.Sleep(time.Second)
		fmt.Println("worker:", id, "finished job", j)
		results <- j * 8
	}
}
```
Here, we initialize our program by declaring the channels.
We launch multiple goroutines of our worker to distribute the workload efficiently.
In this case, we are spawning three concurrent workers.
Main Function: Spawning Workers and Distributing Workloads
```go
func main() {
	jobs := make(chan int, 100)
	results := make(chan int, 100)

	// Launching 3 worker goroutines
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}

	// Sending 5 jobs to the workers
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	// Collecting results from the workers
	for a := 1; a <= 5; a++ {
		<-results
	}
}
```
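The collection loop above discards the received values. As a small variation (the runJobs helper is our addition, not part of the original example, and the sleeps are dropped for brevity), we can sum the results to confirm that all five jobs were processed:

```go
package main

import "fmt"

// worker is the same as above, minus the prints and sleeps.
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		results <- j * 8
	}
}

// runJobs spawns 3 workers, feeds them n jobs, and sums the results.
func runJobs(n int) int {
	jobs := make(chan int, 100)
	results := make(chan int, 100)

	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}
	for j := 1; j <= n; j++ {
		jobs <- j
	}
	close(jobs)

	total := 0
	for a := 1; a <= n; a++ {
		total += <-results
	}
	return total
}

func main() {
	fmt.Println(runJobs(5)) // 8*(1+2+3+4+5) = 120
}
```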
Go is designed to handle concurrency efficiently and safely.
Concurrency is achieved through goroutines: lightweight functions managed by the Go runtime that can run concurrently, though not necessarily in parallel.
Goroutines are lightweight in terms of resources, meaning that thousands of them can be created in a single program without degrading performance.
The Go runtime decides whether a goroutine runs concurrently or in parallel across multiple CPU cores, depending on resource availability and the number of physical cores available.
Communication Between Goroutines
Communication between goroutines is done through channels, synchronization mechanisms that allow safe data transfer between goroutines. This enables goroutines to work together, sharing data and cooperating to achieve a common goal.
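A channel is a synchronization point as well as a pipe. In this sketch, an unbuffered channel makes main wait for the goroutine to finish, with no locks needed:

```go
package main

import "fmt"

func main() {
	// On an unbuffered channel, a send blocks until a receiver is
	// ready, so the channel doubles as a synchronization mechanism.
	done := make(chan string)

	go func() {
		// ... do some work ...
		done <- "work finished"
	}()

	// main blocks here until the goroutine sends.
	fmt.Println(<-done)
}
```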
Concurrency in Go’s net/http Package
The net/http package works concurrently, serving each incoming request in its own goroutine and distributing computational resources effectively.
If you're interested in benchmarks on this topic and comparisons with different programming languages, check out this link:
👉 benchmark.
Here, you can see the power of net/http when abstracted by the Quick framework (quick), developed in Go and built on top of net/http.
It’s also compared against Elixir, a functional and concurrent programming language.
Understanding Workloads in Concurrency
When talking about concurrency, it is important to highlight different types of workloads, as they act as a compass to help us determine the right balance when solving concurrency-related problems.
A thread can handle two types of workloads:
✅ CPU-Bound
✅ I/O-Bound
CPU-BOUND
This type of workload never puts a thread into a waiting state: it is constantly performing calculations. A thread computing the digits of pi, for example, is CPU-bound.
(Fun fact: Emma Haruka Iwao is a Japanese computer scientist and Cloud Developer Engineer at Google. In 2019, she calculated the most precise value of pi ever recorded, reaching 31.4 trillion digits.)
For CPU-bound workloads, parallelism is needed to maximize simultaneous execution. However, simply increasing the number of goroutines will not help and may even degrade performance.
This happens because of the latency cost (the time spent) in moving goroutines in and out of the OS thread.
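Here is a tiny illustration of splitting a CPU-bound job across a small, fixed number of goroutines (the sumSquares helper and the choice of four workers are ours, for the sketch):

```go
package main

import (
	"fmt"
	"sync"
)

// sumSquares is pure computation: it never blocks, so the thread
// running it stays busy the whole time — a CPU-bound workload.
func sumSquares(lo, hi int) int {
	total := 0
	for i := lo; i < hi; i++ {
		total += i * i
	}
	return total
}

func main() {
	const n = 1_000_000
	const workers = 4 // roughly one goroutine per core for CPU-bound work

	parts := make([]int, workers)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			// Each worker computes one contiguous chunk in parallel.
			parts[w] = sumSquares(w*n/workers, (w+1)*n/workers)
		}(w)
	}
	wg.Wait()

	total := 0
	for _, p := range parts {
		total += p
	}
	fmt.Println(total)
}
```

Note that spawning many more goroutines than cores here would only add scheduling overhead, since none of them ever waits.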
I/O-BOUND
In this type of workload, threads enter waiting states.
A good example is network requests or system calls.
A thread that needs to:
- Access a database
- Perform synchronization events (mutex, atomic)
- Communicate over a network
…will have to wait, making it an I/O-bound workload.
For this type of workload, parallelism is not necessary to achieve concurrency—a single physical core is often enough to efficiently handle multiple goroutines.
Goroutines enter and exit waiting states as part of their workload.
Having more goroutines than physical cores can speed up execution, because the latency cost of moving goroutines between OS threads is small compared to the time the work spends waiting. When one goroutine blocks, another takes over the same physical core instead of letting it sit idle.
CONCLUSION
The Go programming language provides powerful and easy-to-use tools for handling concurrency, making it an excellent choice for high-performance and scalable applications.
Understanding when to use parallelism across multiple physical cores vs. when simple concurrency on fewer cores (or a single core) is sufficient is essential.
Go offers various patterns to balance these workloads, and in some cases, you may need a combination of both approaches.
Want to dive deeper into this topic? Check out this post:
👉 Simplifying Complexity: The Beginning
Hope you like it!! 🚀😃