Goroutines make concurrency in Go easy, but starting too many at once (say, 10,000 or more) is not safe: each goroutine consumes memory and scheduler resources, and an unbounded number of them can degrade performance, exhaust memory, or even cause a runtime panic.
How can we control and limit the total number of goroutines running at the same time? See the following example.
When waitChan is full (it has reached the limit of MAX_CONCURRENT_JOBS), the for loop blocks and waits for one of the running goroutines to complete. When a goroutine finishes, it executes <-waitChan, which frees a slot in waitChan, and the for loop can continue and start another goroutine.
As a result, at most MAX_CONCURRENT_JOBS goroutines run at the same time.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Change this for your situation; 20 or 30 is common, 1,000 or 10,000 may be too high
const MAX_CONCURRENT_JOBS = 2

func main() {
	// Buffered channel to limit concurrency
	waitChan := make(chan struct{}, MAX_CONCURRENT_JOBS)
	count := 0
	for {
		// Block if channel is full
		waitChan <- struct{}{}
		count++
		go func(count int) {
			job(count)
			// Release a slot
			<-waitChan
		}(count)
	}
}

func job(index int) {
	fmt.Println(index, "begin doing something")
	// Simulate work with random sleep
	time.Sleep(time.Duration(rand.Intn(10)) * time.Second)
	fmt.Println(index, "done")
}
Output:
2 begin doing something
1 begin doing something
2 done
3 begin doing something
1 done
4 begin doing something
3 done
5 begin doing something
5 done
6 begin doing something
4 done
7 begin doing something
6 done
8 begin doing something
8 done
9 begin doing something
Explanation
- Buffered Channel as Semaphore: The waitChan acts as a semaphore. Sending to the channel (waitChan <- struct{}{}) blocks when the buffer is full, limiting active goroutines.
- Releasing Slots: Each goroutine removes an item from the channel when done, allowing the next one to start.
- Infinite Loop: The example uses an infinite loop for demonstration; in practice, use a finite loop or signal to stop.
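The finite variant suggested above can be sketched as follows. This is a minimal sketch, not the original example: the function name runJobs, the peak-concurrency counter, and the 10-millisecond simulated work are illustrative additions used to show that the limit actually holds and that sync.WaitGroup lets main wait for all jobs before exiting.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// runJobs runs n jobs with at most limit goroutines at once and
// returns the peak number of jobs observed running simultaneously.
func runJobs(n, limit int) int {
	sem := make(chan struct{}, limit) // buffered channel as semaphore
	var wg sync.WaitGroup
	var mu sync.Mutex
	running, peak := 0, 0
	for i := 0; i < n; i++ {
		sem <- struct{}{} // blocks once limit goroutines are active
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			running++
			if running > peak {
				peak = running // record the highest concurrency seen
			}
			mu.Unlock()

			time.Sleep(10 * time.Millisecond) // simulated work

			mu.Lock()
			running--
			mu.Unlock()
			<-sem // release a slot
		}()
	}
	wg.Wait() // wait for every job, unlike the infinite-loop demo
	return peak
}

func main() {
	fmt.Println("peak concurrency:", runJobs(10, 2))
}
```

The printed peak never exceeds the limit of 2, because a goroutine only starts after a send on sem succeeds and only frees its slot after its work is done.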
Alternatives
- Worker Pool Pattern: Use a pool of workers with channels for jobs and results.
- sync.WaitGroup: For waiting on a fixed number of goroutines, but doesn’t limit concurrency.
- Third-Party Libraries: Like golang.org/x/sync/semaphore for more advanced control.
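The worker pool alternative can be sketched as below. This is a minimal illustration, not a library API: the worker function, the squaring job, and the channel sizes are all made up for the example. Instead of a semaphore channel, a fixed number of long-lived workers bounds concurrency.

```go
package main

import "fmt"

// worker reads job numbers from jobs and writes squared results to results.
func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		results <- j * j
	}
}

func main() {
	const numWorkers = 2 // plays the role of MAX_CONCURRENT_JOBS
	jobs := make(chan int, 5)
	results := make(chan int, 5)

	// A fixed number of workers bounds concurrency.
	for w := 1; w <= numWorkers; w++ {
		go worker(w, jobs, results)
	}

	// Send the jobs, then close the channel so workers exit
	// when it is drained.
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs)

	sum := 0
	for i := 0; i < 5; i++ {
		sum += <-results
	}
	fmt.Println("sum of squares:", sum) // 1+4+9+16+25 = 55
}
```

Closing the jobs channel is what lets each worker's range loop terminate cleanly once all work is handed out.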
Best Practices
- Tune MAX_CONCURRENT_JOBS: Based on your system’s CPU cores, memory, and I/O operations.
- Avoid Over-Subscription: Too many goroutines can lead to thrashing and poor performance.
- Monitor Resources: Use profiling tools to check CPU and memory usage.
- Graceful Shutdown: Implement ways to stop goroutines cleanly.
Conclusion
Limiting goroutines prevents resource exhaustion and improves stability. This pattern is simple yet effective for controlling concurrency in Go applications.