Concurrency in Go often brings us to channels, but there’s another synchronization primitive that may be exactly what you need in some scenarios: sync.Cond. If you’ve ever wondered why you’d reach for a sync.Cond instead of using channels alone, this article is for you. By the end, you’ll see a simple custom implementation, understand how the real sync.Cond works under the hood, and know when to choose it in your own projects.
Why Use sync.Cond?
Most Go developers instinctively reach for channels to coordinate goroutines: sending values, waiting for results, and so on. However, channels also carry data. What if all you need is a simple “wake-up” signal, without any payload? That’s exactly where sync.Cond shines. It’s a lightweight way to block one or more goroutines until a condition becomes true, without transferring actual data.
Think of it like a broadcast system: goroutines can call Wait() and suspend until somebody calls Signal() (wake a single waiter) or Broadcast() (wake all waiters). Underneath, sync.Cond doesn’t allocate a channel for each goroutine; instead, it maintains a small linked list of waiting goroutines, making it more memory-efficient when you just need signaling.
To illustrate, let’s build our own “poor man’s” condition variable using channels. Once you see the analogy, switching to sync.Cond becomes straightforward.
Building a Custom Cond with Channels
Here’s a minimal struct that mimics sync.Cond by using a slice of channels and a mutex:
type MyCond struct {
	chs []chan struct{}
	mu  sync.Mutex
}
- chs holds one channel per waiting goroutine.
- mu ensures that appending to or removing from chs is safe.
Below are three methods that emulate the core behavior of sync.Cond: Wait(), Signal(), and Broadcast().
func (c *MyCond) Wait() {
	c.mu.Lock()
	ch := make(chan struct{})
	c.chs = append(c.chs, ch)
	c.mu.Unlock()
	// wait for a signal
	<-ch
}
func (c *MyCond) Signal() {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.chs) == 0 {
		return
	}
	// pick the first channel and send a signal
	ch := c.chs[0]
	ch <- struct{}{}
	close(ch)
	// remove that channel from the slice
	c.chs = c.chs[1:]
}
func (c *MyCond) Broadcast() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, ch := range c.chs {
		ch <- struct{}{}
		close(ch)
	}
	// reset the slice so no stale channels remain
	c.chs = make([]chan struct{}, 0)
}
What’s happening here?
- Wait():
  - Lock the mutex.
  - Create a new “signal” channel (ch).
  - Append it to c.chs.
  - Unlock, then block on <-ch.
  - When someone calls Signal() or Broadcast(), a value is sent on that channel (and it is then closed), letting this goroutine resume.
- Signal():
  - Lock the mutex.
  - If there’s at least one waiting channel, pick the first.
  - Send a dummy struct{}{} onto it, then close(ch) so that any extra <-ch receives don’t hang.
  - Remove that channel from the slice.
- Broadcast():
  - Lock the mutex.
  - Loop over every waiting channel: send a dummy signal and close it.
  - Reset the slice to empty, so future waiters start fresh.
This simple approach shows how condition variables signal “ready to go” without passing any payload, just notifications.
Testing Our Custom MyCond
To see MyCond in action, imagine spawning multiple worker goroutines that all wait for a signal. Then, from another goroutine, send one signal at a time. Finally, switch to broadcasting to wake everyone at once.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	cond := &MyCond{}
	wg := sync.WaitGroup{}
	tasks := 5
	wg.Add(tasks) // add the task count to the wait group
	for id := range tasks { // range over an int requires Go 1.22+
		// spawn a separate goroutine for each task
		go func() {
			defer wg.Done()
			fmt.Println("Waiting", id)
			cond.Wait()
			fmt.Println("Done", id)
		}()
	}
	go func() {
		for range tasks {
			time.Sleep(1 * time.Second)
			cond.Signal() // wake one goroutine every second
		}
	}()
	// wait for all goroutines to finish
	wg.Wait()
}
The output
When you run that, each goroutine blocks on cond.Wait(). Every second, Signal() wakes exactly one goroutine, until all 5 finish.
Switching to Broadcast
Instead of signaling one by one, you can broadcast after a delay to wake all of them at once:
// replace the per-second Signal() goroutine with a broadcasting one
go func() {
	time.Sleep(2 * time.Second)
	cond.Broadcast() // wake all waiting goroutines at once after 2 seconds
}()
The output
With this modification, all 5 goroutines sleep in cond.Wait(). After two seconds, a single Broadcast() wakes everybody, and you’ll see all “Done” messages in rapid succession.
Replacing MyCond with sync.Cond
Once you’ve verified the custom behavior, swapping in the real sync.Cond is straightforward. Anywhere you wrote:
// before: cond := &MyCond{}
cond := sync.NewCond(&sync.Mutex{})

// before: cond.Wait()
cond.L.Lock() // sync.Cond requires holding L around Wait
cond.Wait()
cond.L.Unlock()
You’ll get the same “Waiting … Done” behavior as before, but now backed by the official, optimized implementation.
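One caveat the toy version glosses over: a Signal or Broadcast delivered before a goroutine reaches Wait is simply lost. A sketch of the full demo against the real sync.Cond that avoids this by pairing the Cond with a guarded boolean (the ready flag and the runAll helper are additions of this sketch, not part of the example above):

```go
package main

import (
	"fmt"
	"sync"
)

// runAll starts n workers that block on a sync.Cond until a shared
// ready flag flips, then broadcasts once and reports how many finished.
// The ready flag guards against a Broadcast that fires before every
// worker has reached Wait.
func runAll(n int) int {
	mu := &sync.Mutex{}
	cond := sync.NewCond(mu)
	ready := false
	done := 0

	wg := sync.WaitGroup{}
	wg.Add(n)
	for id := 0; id < n; id++ {
		go func(id int) {
			defer wg.Done()
			mu.Lock()
			for !ready { // recheck the condition after every wakeup
				cond.Wait() // atomically unlocks mu, parks, relocks on wake
			}
			done++ // still holding mu here, so the increment is safe
			mu.Unlock()
		}(id)
	}

	mu.Lock()
	ready = true
	mu.Unlock()
	cond.Broadcast() // wake every waiter; the flag covers late arrivals

	wg.Wait()
	return done
}

func main() {
	fmt.Println(runAll(5), "workers finished") // prints "5 workers finished"
}
```

Because each worker checks ready in a loop while holding the mutex, the program terminates correctly no matter how goroutine startup and the broadcast interleave.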
Under the hood, sync.Cond doesn’t spin up a channel per waiter. Instead it uses an internal notifyList, a small linked list of waiting goroutines, together with low-level runtime primitives to park and wake them. Each call to Wait() enqueues the goroutine on that list. Signal() removes one entry and wakes its goroutine; Broadcast() traverses the whole list and wakes every waiter. Memory-wise, this is much cheaper than allocating a channel per waiter, especially if you have hundreds or thousands of goroutines occasionally blocking on the same condition.
For a deeper dive, check out the source code for sync.Cond.
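For reference, the core of Wait() in the standard library boils down to the following flow (paraphrased from the sync package source; the runtime_ functions are private runtime hooks, so this snippet is illustrative rather than compilable on its own):

```go
// Paraphrased flow of sync.Cond.Wait.
func (c *Cond) Wait() {
	t := runtime_notifyListAdd(&c.notifyList) // take a ticket while still holding c.L
	c.L.Unlock()                              // release the lock so others can Signal/Broadcast
	runtime_notifyListWait(&c.notifyList, t)  // park this goroutine until woken
	c.L.Lock()                                // reacquire the lock before returning
}
```

This is why the caller must hold c.L when calling Wait(): the unlock/park/relock sequence happens inside Wait itself.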
When to Choose sync.Cond Over Channels
Here are a few scenarios where sync.Cond makes sense:
- Simple Signaling: if goroutines only need a “go now” notification, with no data passed, sync.Cond provides a clearer, more intent-expressive API than channels filled with dummy values.
- Broadcast Semantics: channels lack a built-in “wake everyone” primitive. You could loop over a list of channels, but managing that list is extra boilerplate. sync.Cond.Broadcast() does exactly what it says: wake all waiters at once.
- Lower Memory Overhead: each Go channel has internal buffers, mutexes, and so on. If you merely need a “signal,” channels allocate more than necessary. A sync.Cond maintains a minimal linked list of waiters, which is especially noticeable if you have thousands of goroutines waiting occasionally.
- Condition-Based Waiting: often you combine sync.Cond with a separate shared value. For example:
mu.Lock()
for !conditionMet {
cond.Wait()
}
// now the condition is true; proceed
mu.Unlock()
This “wait in a loop” pattern is common in concurrent structures like pools, queues, or bounded buffers. Channels alone can’t express it cleanly; you’d have to juggle extra variables or use select, which can get messy.
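To make the wait-in-a-loop pattern concrete, here is a minimal bounded-queue sketch built on two condition variables sharing one mutex (the BoundedQueue type and its method names are illustrative, not from any library):

```go
package main

import (
	"fmt"
	"sync"
)

// BoundedQueue is a fixed-capacity FIFO of ints.
type BoundedQueue struct {
	mu       sync.Mutex
	notFull  *sync.Cond // signaled when space frees up
	notEmpty *sync.Cond // signaled when an item arrives
	items    []int
	capacity int
}

func NewBoundedQueue(capacity int) *BoundedQueue {
	q := &BoundedQueue{capacity: capacity}
	q.notFull = sync.NewCond(&q.mu)
	q.notEmpty = sync.NewCond(&q.mu)
	return q
}

// Put blocks while the queue is full.
func (q *BoundedQueue) Put(v int) {
	q.mu.Lock()
	for len(q.items) == q.capacity { // wait in a loop: recheck after every wakeup
		q.notFull.Wait()
	}
	q.items = append(q.items, v)
	q.notEmpty.Signal() // one consumer can now proceed
	q.mu.Unlock()
}

// Get blocks while the queue is empty.
func (q *BoundedQueue) Get() int {
	q.mu.Lock()
	for len(q.items) == 0 {
		q.notEmpty.Wait()
	}
	v := q.items[0]
	q.items = q.items[1:]
	q.notFull.Signal() // one producer can now proceed
	q.mu.Unlock()
	return v
}

func main() {
	q := NewBoundedQueue(2)
	go func() {
		for i := 1; i <= 5; i++ {
			q.Put(i) // blocks whenever the queue already holds 2 items
		}
	}()
	sum := 0
	for i := 0; i < 5; i++ {
		sum += q.Get()
	}
	fmt.Println("sum:", sum) // prints "sum: 15"
}
```

Two separate Conds on one mutex let producers and consumers wake only the side that can actually make progress, something a single channel cannot express as directly.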
In short, if your goroutines coordinate purely on a boolean or numerical condition, and you want to wake either one waiter or all waiters, sync.Cond shines. If you need to send actual data, channels remain the more idiomatic choice.
Top comments (5)
Super clear breakdown of sync.Cond!
This makes me rethink how I handle pure signaling. I didn't realize the memory impact of channels vs sync.Cond until now! Have you run into practical bugs from using channels instead of sync.Cond before?
I ran the MyCond implementation with 1M tasks, using the same code example mentioned above, to measure its maximum RSS (resident set size) usage. Command used (on Linux):
MyCond used: 2,737,792 KB
sync.Cond used: 2,609,184 KB
For simple signaling, nil channels are still a good choice. But when we need to deal with N goroutines waiting on a signal, sync.Cond would be the better choice.

I understand one is supposed to call cond.L.Lock() before cond.Wait(). That's not required by your original implementation and seems more error-prone. Do you know why they designed the API that way? Also, the official documentation suggests using channels "for simple cases"! Which also sounds odd to me, as I would find it easier to understand code using sync.Cond than an equivalent implementation using channels.

In my custom implementation, I used the internal mutex inside the Wait method in a different flow: since I used channels, I only needed to lock while pushing the channel into the wait list. I don't know why it feels error-prone to you; curious to know. As I understood it, if you look at their implementation, they call c.L.Unlock() first and then c.L.Lock() again after the runtime_notifyListWait call returns. The flow is: the caller locks c.L, the Wait method enqueues the goroutine and unlocks c.L, the goroutine parks until it is woken, and then Wait relocks c.L before returning. This is the reason we have to lock first and unlock last explicitly.