Why Unlimited Goroutines Are Risky
Goroutines are lightweight, but they are not free. Unbounded spawning can cause:
- Memory pressure.
- Scheduler overhead.
- Connection storms to downstream systems.
- Timeouts and cascading failures.
Concurrency should be controlled as a first-class design decision.
Pattern 1: Buffered Channel Semaphore
A buffered channel is the simplest concurrency limiter.
```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

const MaxConcurrentJobs = 4

func main() {
	rand.Seed(time.Now().UnixNano()) // unnecessary on Go 1.20+, harmless on older versions
	sem := make(chan struct{}, MaxConcurrentJobs)
	var wg sync.WaitGroup
	for i := 1; i <= 20; i++ {
		wg.Add(1)
		sem <- struct{}{} // block when full
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }() // release slot
			job(id)
		}(i)
	}
	wg.Wait()
}

func job(id int) {
	fmt.Printf("job %d start\n", id)
	time.Sleep(time.Duration(rand.Intn(500)+100) * time.Millisecond)
	fmt.Printf("job %d done\n", id)
}
```
This ensures at most MaxConcurrentJobs run simultaneously.
Pattern 2: Worker Pool
Worker pools are a better fit when jobs arrive continuously from a stream or queue.
```go
type Job struct {
	ID int
}

func worker(id int, jobs <-chan Job, results chan<- int) {
	for j := range jobs {
		time.Sleep(100 * time.Millisecond) // process: simulated work
		results <- j.ID
	}
}
```
Benefits:
- Fixed concurrency by number of workers.
- Clear backpressure via channel buffering.
- Easier shutdown control.
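To show how the pieces fit together, here is a minimal self-contained sketch that wires the worker function above into a pool; the `runPool` driver, its parameters, and the `WaitGroup` plumbing are illustrative additions, not part of the original snippet.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type Job struct {
	ID int
}

func worker(id int, jobs <-chan Job, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs { // loop ends when jobs is closed and drained
		time.Sleep(10 * time.Millisecond) // simulated work
		results <- j.ID
	}
}

// runPool starts numWorkers workers, feeds them numJobs jobs,
// and collects the results once all workers have exited.
func runPool(numWorkers, numJobs int) []int {
	jobs := make(chan Job)
	results := make(chan int, numJobs)
	var wg sync.WaitGroup
	for w := 1; w <= numWorkers; w++ {
		wg.Add(1)
		go worker(w, jobs, results, &wg)
	}
	for i := 1; i <= numJobs; i++ {
		jobs <- Job{ID: i} // blocks when all workers are busy: backpressure
	}
	close(jobs) // signal shutdown: workers exit their range loop
	wg.Wait()
	close(results)
	var out []int
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(len(runPool(3, 9))) // 9 jobs completed by 3 workers
}
```

Closing the `jobs` channel is the shutdown signal: each worker's `range` loop ends naturally once the remaining jobs are drained.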
Pattern 3: Weighted Semaphore (x/sync/semaphore)
Useful when tasks have different resource cost.
Example idea:
- small task weight = 1
- heavy task weight = 5
Weighted limits prevent one class of jobs from starving others.
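As a sketch of the idea, the following is a minimal, self-contained weighted semaphore built on `sync.Cond`; the `WeightedSem` type and its methods are illustrative, and production code should prefer `golang.org/x/sync/semaphore`, which also supports context-aware acquisition.

```go
package main

import (
	"fmt"
	"sync"
)

// WeightedSem is a minimal weighted-semaphore sketch: tasks acquire
// capacity proportional to their cost, so one heavy task counts as
// several small ones against the shared limit.
type WeightedSem struct {
	mu    sync.Mutex
	cond  *sync.Cond
	cur   int64 // capacity currently held
	limit int64 // total capacity
}

func NewWeightedSem(limit int64) *WeightedSem {
	s := &WeightedSem{limit: limit}
	s.cond = sync.NewCond(&s.mu)
	return s
}

func (s *WeightedSem) Acquire(w int64) {
	s.mu.Lock()
	for s.cur+w > s.limit {
		s.cond.Wait() // block until enough capacity is released
	}
	s.cur += w
	s.mu.Unlock()
}

func (s *WeightedSem) Release(w int64) {
	s.mu.Lock()
	s.cur -= w
	s.mu.Unlock()
	s.cond.Broadcast() // wake waiters; several small tasks may now fit
}

func main() {
	sem := NewWeightedSem(5)
	var wg sync.WaitGroup
	for i := 0; i < 6; i++ {
		weight := int64(1) // small task
		if i%3 == 0 {
			weight = 5 // heavy task consumes the whole budget
		}
		wg.Add(1)
		go func(w int64) {
			defer wg.Done()
			sem.Acquire(w)
			defer sem.Release(w)
		}(weight)
	}
	wg.Wait()
	fmt.Println("all tasks finished")
}
```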
Add Context for Cancellation
Concurrency limits are incomplete without cancellation.
Use context.Context to:
- stop new work.
- cancel in-flight operations where possible.
- enforce deadlines.
```go
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
```
Backpressure Is a Feature
If your input rate exceeds processing capacity, the system must apply backpressure instead of unlimited buffering.
Backpressure options:
- Block producer.
- Drop low-priority tasks.
- Spill to durable queue.
- Return `429 Too Many Requests` upstream.
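The last option can be sketched with a non-blocking semaphore check: reject immediately instead of queueing, and let the HTTP handler translate the rejection into a 429. The `tryAcquire` helper is an assumed name for illustration.

```go
package main

import "fmt"

// tryAcquire takes a slot if one is free and otherwise returns false
// immediately, so the caller can shed load instead of buffering it.
func tryAcquire(sem chan struct{}) bool {
	select {
	case sem <- struct{}{}:
		return true
	default:
		return false // full: apply backpressure upstream
	}
}

func main() {
	sem := make(chan struct{}, 2)
	for i := 1; i <= 3; i++ {
		if tryAcquire(sem) {
			fmt.Printf("request %d admitted\n", i)
		} else {
			fmt.Printf("request %d rejected with 429\n", i)
		}
	}
	// prints: requests 1 and 2 admitted, request 3 rejected with 429
}
```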
Choosing MaxConcurrentJobs
Set concurrency by bottleneck type:
- CPU-bound: close to CPU cores.
- I/O-bound: higher, tuned by downstream limits.
- External API-limited: align with provider rate limits.
Always benchmark and observe.
Observability Metrics to Track
For concurrency control, monitor:
- active goroutine count.
- queue depth.
- task latency percentiles.
- error/timeout rates.
- dropped/rejected job count.
Without metrics, tuning is guesswork.
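Two of these metrics fall out of the standard library and the semaphore itself, as in this minimal sketch (`snapshot` is an illustrative helper; real services would export these via a metrics library such as Prometheus):

```go
package main

import (
	"fmt"
	"runtime"
)

// snapshot reports the live goroutine count and how many semaphore
// slots are currently occupied (len of a buffered channel counts the
// tokens held, i.e. in-flight jobs).
func snapshot(sem chan struct{}) (goroutines, inFlight int) {
	return runtime.NumGoroutine(), len(sem)
}

func main() {
	sem := make(chan struct{}, 4)
	sem <- struct{}{} // one job in flight
	g, f := snapshot(sem)
	fmt.Printf("goroutines=%d in_flight=%d\n", g, f)
}
```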
Common Mistakes
- Infinite producer loop without stop condition.
- Missing `defer` release on semaphore slot.
- Ignoring context cancellation.
- Large unbounded buffered channels.
- No timeout around external calls.
Production Checklist
- Concurrency limit defined.
- Cancellation path implemented.
- Queue/backpressure policy documented.
- Metrics exported.
- Load test performed.
Conclusion
Goroutine limits are essential reliability controls. The buffered-channel semaphore pattern is excellent for simple workloads, and worker pools or weighted semaphores are better for complex systems.
Design concurrency intentionally, measure continuously, and tune based on actual bottlenecks.