The sync library provides basic synchronization primitives. The types in the sync package are mostly intended for low-level library routines. Higher-level synchronization is better done via channels and communication.
1. sync.Mutex & sync.RWMutex
What is a mutex
A Mutex is a mutual exclusion lock. It is used to protect critical sections and shared resources in multi-threaded / multi-goroutine scenarios.
Why a mutex
Why do we need a mutex? Imagine you and your friends are sitting at a table with just one slice of pizza left in the box. Everyone is in a hurry and wants to grab the last slice, but without coordination, it could lead to chaos as you all reach for it simultaneously.
A mutex can solve this problem. Think of the mutex as a unique lock for the pizza box. Only the person holding the mutex can open the box and take the slice, while everyone else must wait. Once the slice is taken and the mutex is released, the next person can acquire the mutex and take their turn. This ensures order and prevents conflicts.
You can check out the pizza-grabbing example code under the `EatPizzaMutex` and `EatPizzaRace` methods in this repository: Pizza Example.
sync.Mutex methods
- `func (m *Mutex) Lock()`: Locks `m`. If the lock is already in use, the calling goroutine blocks until `m` is available.
- `func (m *Mutex) Unlock()`: Unlocks `m`. It is a run-time error if `m` is not locked on entry to `Unlock`.
- `func (m *Mutex) TryLock() bool`: Tries to lock `m` and reports whether it succeeded (rarely used).
example
In the example below, the mutex ensures that only one deposit or withdrawal operation can modify the balance at a time, preventing conflicts.
```go
func BankMutex() {
	fmt.Println("********** BankMutex **********")
	balance := 1000
	mutex := sync.Mutex{}
	ch := make(chan int, 3)
	defer close(ch)

	fmt.Printf("Initial balance: %d\n", balance)

	deposit := func(amount int, mutex *sync.Mutex, ch chan int) {
		fmt.Printf("Depositing %d\n", amount)
		mutex.Lock()
		balance += amount
		mutex.Unlock()
		ch <- amount
	}

	withdraw := func(amount int, mutex *sync.Mutex, ch chan int) {
		fmt.Printf("Withdrawing %d\n", amount)
		mutex.Lock()
		balance -= amount
		mutex.Unlock()
		ch <- -amount
	}

	// start 3 goroutines
	go deposit(500, &mutex, ch)
	go withdraw(200, &mutex, ch)
	go withdraw(600, &mutex, ch)

	// read the transaction amount of each operation
	for i := 0; i < 3; i++ {
		fmt.Printf("Transaction amount: %d\n", <-ch)
	}
	time.Sleep(2 * time.Second) // sleep to ensure all goroutines are done

	fmt.Printf("Final balance: %d\n", balance)
}
```
output:

```
********** BankMutex **********
Initial balance: 1000
Withdrawing 600
Transaction amount: -600
Depositing 500
Transaction amount: 500
Withdrawing 200
Transaction amount: -200
Final balance: 700
```
sync.RWMutex
why RWMutex
You might wonder why we need RWMutex when we already have Mutex. With a regular mutex, multiple readers must wait their turn to access data one by one, which is inefficient since concurrent reads don't cause race conditions. RWMutex addresses this by allowing either multiple simultaneous readers or a single writer at a time.
If you are interested, you may follow this link to see the benchmark comparing Mutex and RWMutex in the scenario above: Benchmark Example.
define
An RWMutex is a reader/writer mutual exclusion lock. The lock can be held by an arbitrary number of readers or a single writer.
Simply put:
- When a shared resource is locked by write-lock, anyone who tries to acquire write-lock or read-lock would be blocked until released.
- When a shared resource is locked by read-lock, anyone who tries to acquire read-lock would not be blocked, while those trying to acquire write-lock would.
In general, the write-lock has higher priority than the read-lock, so RWMutex is very useful in write-rarely, read-often scenarios.
methods
- `func (rw *RWMutex) Lock()`: Locks rw for writing.
- `func (rw *RWMutex) RLock()`: Locks rw for reading.
- `func (rw *RWMutex) Unlock()`: Unlocks rw for writing.
- `func (rw *RWMutex) RUnlock()`: Undoes a single `RLock` call; it does not affect other simultaneous readers.
- `func (rw *RWMutex) TryLock() bool`
- `func (rw *RWMutex) TryRLock() bool`
example
The following is a simple example of RWMutex.
```go
func SimpleRWMutex() {
	balance := 1000
	rwMutex := sync.RWMutex{}

	write := func(amount int, rwMutex *sync.RWMutex) {
		rwMutex.Lock()
		defer rwMutex.Unlock()
		balance += amount
	}
	read := func(rwMutex *sync.RWMutex) int {
		rwMutex.RLock()
		defer rwMutex.RUnlock()
		return balance
	}

	go write(100, &rwMutex)
	go read(&rwMutex)
	go write(200, &rwMutex)
	go read(&rwMutex)

	time.Sleep(2 * time.Second)
}
```
2. sync.WaitGroup
intro
In the previous examples, we used time.Sleep to "ensure" that all the goroutines we fired had terminated. However, this approach is slow and unreliable. Worse, in more complex scenarios, such as calling APIs, we cannot guarantee how long it will take for all goroutines to terminate, so important operations may be cut off before they settle.
We can tackle this issue using WaitGroup.
concepts
- Definition: A WaitGroup waits for a collection of goroutines to finish.
- How it works:
  - The main goroutine, i.e. your main function, calls WaitGroup.Add to set the number of goroutines to wait for.
  - Each goroutine runs and calls WaitGroup.Done when finished.
  - Meanwhile, WaitGroup.Wait can be used to block until all goroutines have finished.
methods
Let’s dive into these methods:
- `func (wg *WaitGroup) Add(delta int)`: Adds delta to the WaitGroup counter, where delta typically represents the number of goroutines.
  - If the counter becomes 0, all goroutines blocked on WaitGroup.Wait are released.
  - If the counter goes negative, Add panics.
- `func (wg *WaitGroup) Done()`: Decrements the WaitGroup counter by 1.
- `func (wg *WaitGroup) Wait()`: Blocks until the WaitGroup counter is 0.
example
The following example finds the prime numbers in the range from start to end:

- We can create a WaitGroup by either:
  - `wg := new(sync.WaitGroup)`
  - `var wg sync.WaitGroup`
```go
func FindPrimeNumbersInRange(start, end int) {
	// Create a WaitGroup
	// wg := new(sync.WaitGroup)
	var wg sync.WaitGroup

	// Add the number of goroutines to the WaitGroup
	wg.Add(end - start + 1)

	// Create a goroutine for each number
	for i := start; i <= end; i++ {
		go IsPrime(i, &wg)
	}

	wg.Wait() // <- Block here until all goroutines finish
}

func IsPrime(n int, wg *sync.WaitGroup) bool {
	// Defer Done to notify the WaitGroup when the task is done
	defer wg.Done()

	if n < 2 {
		return false
	}
	for i := 2; i*i <= n; i++ {
		if n%i == 0 {
			return false
		}
	}
	fmt.Println("Prime number:", n)
	return true
}
```
attention when using Add
- Positive vs. negative delta:
  - Positive delta: increments the counter to indicate new tasks to wait for.
  - Negative delta: decrements the counter to indicate completed tasks.

```go
// code snippet from waitgroup.go
// Done is equivalent to Add(-1)
func (wg *WaitGroup) Done() {
	wg.Add(-1)
}
```

- Calling Add before starting a goroutine: a call to Add with a positive delta must occur before the corresponding Wait and before starting the goroutine.
- Rules for reuse: ensure that all Add calls for the new set of tasks happen after the previous Wait call has returned, to avoid overlapping usage of the counter.
- A WaitGroup must not be copied after first use, so pass a pointer to the WaitGroup when handing it to functions or methods.
3. sync.Once
define
sync.Once ensures a particular operation is executed only once across multiple goroutines. It is useful for initializing shared resources, running setup code, or performing idempotent operations.
It is particularly useful in the Singleton pattern.
methods
`func (o *Once) Do(f func())`: calls the function if and only if Do is being called for the first time for this instance of Once. If many goroutines call it simultaneously, only one will execute f, and the others will wait until it completes.
example: Singleton Pattern
```go
type DB struct{}

var db *DB
var once sync.Once

func GetDB() *DB {
	once.Do(func() {
		db = &DB{}
		fmt.Println("db instance created.")
	})
	return db
}

func (d *DB) Query() {
	fmt.Println("Querying the db.")
}

func SingletonOnce() {
	var wg sync.WaitGroup
	wg.Add(10)
	for i := 0; i < 10; i++ {
		go func() {
			defer wg.Done()
			GetDB().Query()
		}()
	}
	wg.Wait()
}
```
4. sync.Map
sync.Map is like an ordinary Go map, except it is safe for concurrent use. It pays a performance cost to provide that safety, so in scenarios without concurrency I would recommend using a plain map.
5. sync.Pool
intro
The idea behind sync.Pool is not really about concurrency; it is about GC workload. When a massive number of objects is allocated repeatedly, the garbage collector comes under heavy load. With sync.Pool, we can reduce allocations and GC workload.
define
A Pool is a set of temporary objects that may be individually saved and retrieved. It is safe for concurrent use. Pool's purpose is to cache allocated but unused items for later reuse, relieving pressure on the garbage collector.
functions
- `func (p *Pool) Get() any`:
  - Selects an arbitrary item from the Pool.
  - Removes it from the Pool.
  - Returns it to the caller.

  If multiple goroutines request objects at the same time and the pool is empty, the New function will be invoked multiple times to create new objects.
- `func (p *Pool) Put(x any)`: Adds x to the pool.
- `New func() any`: optionally specifies a function to generate a value when Get would otherwise return nil. It is customized when initializing sync.Pool:

```go
var slicePool = sync.Pool{
	New: func() interface{} {
		return make([]byte, 1024)
	},
}
```
use cases
It is typically used in scenarios where we need to allocate many temporary objects, such as buffers, large arrays, or other data structures that are expensive to allocate and destroy repeatedly, causing high GC overhead.
- Summary of use cases:
- Objects are frequently allocated and deallocated
- Objects are independent of each other: don’t require explicit cleanup and can be reused directly.
- High concurrency
- Allocation costs need to be amortized
example
The following example illustrates a case of managing reusable byte buffers in a concurrent web server:
```go
func PoolExample(workNum int) {
	// Counter for new buffer creations
	created := atomic.Int32{}

	bufferPool := sync.Pool{
		New: func() interface{} {
			created.Add(1)
			return new(bytes.Buffer)
		},
	}

	worker := func(id int, wg *sync.WaitGroup) {
		defer wg.Done()

		// Get buffer from pool
		buf := bufferPool.Get().(*bytes.Buffer)
		defer bufferPool.Put(buf)

		// Actually use the buffer
		buf.Reset()
		fmt.Fprintf(buf, "Worker %d using buffer\n", id)
		fmt.Print(buf.String())
	}

	wg := new(sync.WaitGroup)
	for i := 0; i < workNum; i++ {
		wg.Add(1)
		// Simulate request pattern: between 0ms - 10ms
		time.Sleep(time.Duration(rand.Intn(10)) * time.Millisecond)
		go worker(i, wg)
	}
	wg.Wait()

	fmt.Printf("Created %d buffers\n", created.Load())
}
```
6. sync.Cond
intro
Cond implements a condition variable, a rendezvous point for goroutines waiting for or announcing the occurrence of an event. It enables one or more goroutines to wait until a specific condition is met before they continue execution.
methods
- `func NewCond(l Locker) *Cond`: used for initialization; returns a new Cond with Locker l.
- `func (c *Cond) Broadcast()`: wakes all goroutines waiting on c.
- `func (c *Cond) Signal()`: wakes one goroutine waiting on c, if there is any.
- `func (c *Cond) Wait()`: atomically unlocks c.L and suspends execution of the calling goroutine.
  - Steps:
    - Unlock the associated lock.
    - Block the calling goroutine.
    - Resume execution when the waiting goroutine is signaled via Signal() or Broadcast().
example
The following code is an example of using Signal() for sequential, one-by-one goroutine processing. At any given time, only one goroutine is allowed to proceed with its task. Once it completes its work, it signals the next goroutine to continue.
```go
func CondSignalExample() {
	mutex := sync.Mutex{}
	c := sync.NewCond(&mutex)
	wg := sync.WaitGroup{}

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Acquire lock before waiting
			c.L.Lock()
			fmt.Printf("Goroutine %d: waiting...\n", id)
			// Release lock and wait for signal
			c.Wait()

			// After receiving signal, lock is reacquired
			fmt.Printf("Goroutine %d: started at %s!\n",
				id,
				time.Now().Format("15:04:05.999"),
			)
			// Simulate work
			time.Sleep(1 * time.Second)
			c.L.Unlock()

			// Signal next goroutine to proceed
			c.Signal()
		}(i)
	}

	time.Sleep(2 * time.Second)
	// Signal first goroutine to start
	c.Signal()
	wg.Wait()
}
```
The following code is an example of Broadcast() notifying multiple goroutines simultaneously. All goroutines initially wait for a signal to proceed. After a delay, the Broadcast call wakes up all waiting goroutines, allowing them to start their tasks concurrently.
```go
func CondBroadcastExample() {
	mutex := sync.Mutex{}
	c := sync.NewCond(&mutex)
	wg := sync.WaitGroup{}

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			c.L.Lock()
			fmt.Printf("Goroutine %d: waiting to start...\n", id)
			c.Wait()
			fmt.Printf("Goroutine %d: started!\n", id)
			c.L.Unlock()
		}(i)
	}

	time.Sleep(2 * time.Second)
	c.Broadcast()
	wg.Wait()
}
```
Broadcast() vs. Signal()

| Method | How it works | Example |
|---|---|---|
| Signal() | Only one goroutine needs to act on the event. | Producer-Consumer: notify one consumer |
| Broadcast() | All goroutines need to re-evaluate their state and act on the event. | Pub/Sub: notify all subscribers |
- Follow the link to see the examples of Producer-Consumer and Pub/Sub: Link
To see the full example of code, you can visit: goSync.