Concurrency in Go (1): sync

19th December 2024
Golang
Golang Standard Package
Concurrency
Last updated: 25th January 2025
10 Minutes
1992 Words

The sync library provides some basic synchronization primitives. The types in the sync library are mostly intended for low-level library routines. Higher-level synchronization is better done via channels and communication.

1. sync.Mutex & sync.RWMutex

What is mutex

A Mutex is a mutual exclusion lock. It is used to protect critical sections and shared resources when multiple threads or goroutines run concurrently.

Why mutex

Why do we need a mutex? Imagine you and your friends are sitting at a table with just one slice of pizza left in the box. Everyone is in a hurry and wants to grab the last slice, but without coordination, it could lead to chaos as you all reach for it simultaneously.

A mutex can solve this problem. Think of the mutex as a unique lock for the pizza box. Only the person holding the mutex can open the box and take the slice, while everyone else must wait. Once the slice is taken and the mutex is released, the next person can acquire the mutex and take their turn. This ensures order and prevents conflicts.

You can check out the pizza-grabbing example code under EatPizzaMutex and EatPizzaRace methods in this repository: Pizza Example.

sync.Mutex

method

  • func (m *Mutex) Lock(): locks m. If the lock is already in use, the calling goroutine blocks until m is available.
  • func (m *Mutex) Unlock(): unlocks m. It is a run-time error if m is not locked on entry to Unlock.
  • func (m *Mutex) TryLock() bool: tries to lock m and reports whether it succeeded (rarely needed).

example

In the example below, the mutex ensures only one deposit or withdrawal operation can modify the balance at a time, preventing conflicts.

func BankMutex() {
	fmt.Println("********** BankMutex **********")
	balance := 1000
	mutex := sync.Mutex{}
	ch := make(chan int, 3)
	defer close(ch)

	fmt.Printf("Initial balance: %d\n", balance)

	deposit := func(amount int, mutex *sync.Mutex, ch chan int) {
		fmt.Printf("Depositing %d\n", amount)
		mutex.Lock()
		balance += amount
		mutex.Unlock()
		ch <- amount
	}

	withdraw := func(amount int, mutex *sync.Mutex, ch chan int) {
		fmt.Printf("Withdrawing %d\n", amount)
		mutex.Lock()
		balance -= amount
		mutex.Unlock()
		ch <- -amount
	}

	// start 3 goroutines
	go deposit(500, &mutex, ch)
	go withdraw(200, &mutex, ch)
	go withdraw(600, &mutex, ch)

	// read the transaction amount of each operation
	for i := 0; i < 3; i++ {
		fmt.Printf("Transaction amount: %d\n", <-ch)
	}
	time.Sleep(2 * time.Second) // sleep to ensure all goroutines are done.

	fmt.Printf("Final balance: %d\n", balance)
}
  • output:

    ********** BankMutex **********
    Initial balance: 1000
    Withdrawing 600
    Transaction amount: -600
    Depositing 500
    Transaction amount: 500
    Withdrawing 200
    Transaction amount: -200
    Final balance: 700
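
The example above only exercises Lock and Unlock. TryLock is rarely needed, but a minimal sketch of its behavior might look like this (tryUpdate is a hypothetical helper, not part of the example repository):

// Hypothetical sketch: skip the update instead of blocking when the lock is busy.
func tryUpdate(mu *sync.Mutex, balance *int, amount int) bool {
	if !mu.TryLock() { // returns false if another goroutine holds the lock
		return false // give up instead of waiting
	}
	defer mu.Unlock()
	*balance += amount
	return true
}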

sync.RWMutex

why RWMutex

You might wonder why we need RWMutex when we already have mutex. With a regular mutex, multiple readers must wait their turn to access data one by one, which is inefficient since concurrent reading doesn’t cause race conditions. RWMutex addresses this by allowing either multiple simultaneous reads or a single write at a time.

If you are interested, you may follow this link to see the benchmark test between Mutex and RWMutex of the scenario above: Benchmark Example.

define

An RWMutex is a reader/writer mutual exclusion lock. The lock can be held by an arbitrary number of readers or a single writer.

Simply put:

  • When a shared resource is held under the write lock, anyone who tries to acquire either the write lock or the read lock is blocked until it is released.
  • When a shared resource is held under the read lock, other readers can still acquire the read lock, while anyone trying to acquire the write lock is blocked.

In general, the write lock has a higher priority than the read lock, so RWMutex is most useful in scenarios with many reads and few writes.

methods

  • func (rw *RWMutex) Lock(): locks rw for writing.
  • func (rw *RWMutex) RLock(): locks rw for reading.
  • func (rw *RWMutex) Unlock(): unlocks rw for writing.
  • func (rw *RWMutex) RUnlock(): undoes a single RLock call → does not affect other simultaneous readers.
  • func (rw *RWMutex) TryLock() bool: tries to lock rw for writing and reports whether it succeeded.
  • func (rw *RWMutex) TryRLock() bool: tries to lock rw for reading and reports whether it succeeded.

example

The following is a simple example of RWMutex.

func SimpleRWMutex() {
	balance := 1000
	rwMutex := sync.RWMutex{}

	write := func(amount int, rwMutex *sync.RWMutex) {
		rwMutex.Lock()
		defer rwMutex.Unlock()
		balance += amount
	}
	read := func(rwMutex *sync.RWMutex) int {
		rwMutex.RLock()
		defer rwMutex.RUnlock()
		return balance
	}

	go write(100, &rwMutex)
	go read(&rwMutex)
	go write(200, &rwMutex)
	go read(&rwMutex)

	time.Sleep(2 * time.Second)
}

2. sync.WaitGroup

intro

In the previous examples, we used time.Sleep to “ensure” that all the goroutines we fire terminate. This approach is slow and unreliable. Worse, in a more complex scenario, such as calling external APIs, we cannot guarantee how long all goroutines will take to finish, so the program may move on before some important operations have completed.

We can tackle the issue above using WaitGroup.

concepts

  • Define: A WaitGroup waits for a collection of goroutines to finish.
  • How it works:
    1. The main goroutine, i.e. your main function, calls WaitGroup.Add to set the number of goroutines to wait for.
    2. Each goroutine runs and calls WaitGroup.Done when finished.
    3. At the same time, WaitGroup.Wait can be used to block until all goroutines have finished.

methods

Let’s dive into these methods:

  • func (wg *WaitGroup) Add(delta int): adds delta to the WaitGroup counter, where delta typically represents the number of goroutines.
    • If the counter becomes 0, all goroutines blocked on WaitGroup.Wait are released.
    • If the counter goes negative, Add panics.
  • func (wg *WaitGroup) Done(): decrements the WaitGroup counter by 1.
  • func (wg *WaitGroup) Wait(): blocks until the WaitGroup counter is 0.

example

The following example finds the prime numbers in the range from start to end:

  • We can create a WaitGroup by either:
    • wg := new(sync.WaitGroup)
    • var wg sync.WaitGroup
func FindPrimeNumbersInRange(start, end int) {
	// Create a WaitGroup
	// wg := new(sync.WaitGroup)
	var wg sync.WaitGroup

	// Add the number of goroutines to the WaitGroup
	wg.Add(end - start + 1)

	// Create a goroutine for each number
	for i := start; i <= end; i++ {
		go IsPrime(i, &wg)
	}

	wg.Wait() // <- Block here until all goroutines finish
}

func IsPrime(n int, wg *sync.WaitGroup) bool {
	// Defer Done to notify the WaitGroup when the task is done
	defer wg.Done()

	if n < 2 {
		return false
	}
	for i := 2; i*i <= n; i++ {
		if n%i == 0 {
			return false
		}
	}
	fmt.Println("Prime number:", n)
	return true
}

attention when using Add

  1. Positive vs. Negative Delta:
    • Positive Delta: Increment the counter to indicate new tasks to wait for.

    • Negative Delta: Decrement the counter to indicate completed tasks.

      // code snippet from waitgroup.go
      // Done is equivalent to Add(-1).
      func (wg *WaitGroup) Done() {
          wg.Add(-1)
      }
  2. Calling Add before starting a goroutine: calls to Add with a positive delta must happen before starting the goroutine and before the corresponding Wait. A sketch of this pattern follows this list.
  3. Rules for reuse: ensure that all Add calls for a new set of tasks happen after the previous Wait call has returned → avoid overlapping use of the counter.
  4. A WaitGroup must not be copied after first use → pass a pointer to the WaitGroup when handing it to functions or methods.
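
A minimal sketch of points 2 and 4 above, assuming hypothetical runTasks and runTask helpers: Add is called before each goroutine starts, and the WaitGroup is passed by pointer rather than by value.

// Hypothetical sketch: Add before starting each goroutine, pass the WaitGroup by pointer.
func runTasks(tasks []func()) {
	var wg sync.WaitGroup
	for _, task := range tasks {
		wg.Add(1) // increment before the goroutine starts, never inside it
		go runTask(task, &wg)
	}
	wg.Wait() // every Add happened before Wait, so no task can be missed
}

func runTask(task func(), wg *sync.WaitGroup) {
	defer wg.Done() // decrement when this task finishes
	task()
}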

3. sync.Once

define

sync.Once ensures a particular operation is executed only once across multiple goroutines. It is useful for initializing shared resources, running setup code, or performing one-time operations.

It is particularly useful for the Singleton pattern.

methods

  • func (o *Once) Do(f func()): calls the function if and only if Do is being called for the first time for this instance of Once. If many goroutines call simultaneously, only one will execute f, and others will wait until it completes.

example: Singleton Pattern

type DB struct{}

var db *DB
var once sync.Once

func GetDB() *DB {
	once.Do(func() {
		db = &DB{}
		fmt.Println("db instance created.")
	})
	return db
}

func (d *DB) Query() {
	fmt.Println("Querying the db.")
}

func SingletonOnce() {
	var wg sync.WaitGroup
	wg.Add(10)
	for i := 0; i < 10; i++ {
		go func() {
			defer wg.Done()
			GetDB().Query()
		}()
	}
	wg.Wait()
}

4. sync.Map

sync.Map behaves like a built-in map, except that it is safe for concurrent use. That safety comes with a performance cost, so in scenarios without concurrent access I would recommend a plain map.
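
For reference, a minimal sketch of the basic sync.Map operations; the SyncMapExample wrapper and the keys and values are just illustrative:

func SyncMapExample() {
	var m sync.Map

	m.Store("alice", 1000) // insert or update a key
	m.Store("bob", 500)

	if v, ok := m.Load("alice"); ok { // ok reports whether the key exists
		fmt.Println("alice:", v)
	}

	m.Delete("bob") // remove a key

	// Range iterates over all entries; returning false stops the iteration.
	m.Range(func(key, value any) bool {
		fmt.Println(key, value)
		return true
	})
}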

5. sync.Pool

intro

The idea behind sync.Pool is less about concurrency and more about GC workload. When a massive number of objects are allocated repeatedly, the garbage collector has to work much harder. With sync.Pool, we can reduce allocations and therefore GC workload.

define

A Pool is a set of temporary objects that may be individually saved and retrieved. It is thread-safe.

Pool’s purpose is to cache allocated but unused items for later reuse, relieving pressure on the garbage collector.

functions

  • func (p *Pool) Get() any:

    1. Select an arbitrary item from the Pool.
    2. Remove it from the Pool.
    3. Return it to the caller.

    If multiple goroutines request objects at the same time and the pool is empty, the New function will be invoked multiple times to create new objects.

  • func (p *Pool) Put(x any): Add x to the pool.

  • New func() any: optionally specifies a function to generate a value when Get would otherwise return nil. It is set when initializing the sync.Pool:

    var slicePool = sync.Pool{
        New: func() interface{} {
            return make([]byte, 1024)
        },
    }

use cases

It is typically used in scenarios where we need to allocate many temporary objects, such as buffers, large arrays, or other data structures. These objects may be expensive to allocate and destroy repeatedly, causing a high overhead to the GC.

  • Summary of use case:
    1. Objects are frequently allocated and deallocated
    2. Objects are independent of each other: don’t require explicit cleanup and can be reused directly.
    3. High concurrency
    4. Allocation costs need to be amortized

example

The following example illustrates a case of managing reusable byte buffers in a concurrent web server:

func PoolExample(workNum int) {
	// Counter for new buffer creations
	created := atomic.Int32{}

	bufferPool := sync.Pool{
		New: func() interface{} {
			created.Add(1)
			return new(bytes.Buffer)
		},
	}

	worker := func(id int, wg *sync.WaitGroup) {
		defer wg.Done()

		// Get buffer from pool
		buf := bufferPool.Get().(*bytes.Buffer)
		defer bufferPool.Put(buf)

		// Actually use the buffer
		buf.Reset()
		fmt.Fprintf(buf, "Worker %d using buffer\n", id)
		fmt.Print(buf.String())
	}

	wg := new(sync.WaitGroup)
	for i := 0; i < workNum; i++ {
		wg.Add(1)
		// Simulate request pattern: between 0ms - 10ms
		time.Sleep(time.Duration(rand.Intn(10)) * time.Millisecond)
		go worker(i, wg)
	}
	wg.Wait()

	fmt.Printf("Created %d buffers\n", created.Load())
}

6. sync.Cond

intro

Cond implements a condition variable, a rendezvous point for goroutines waiting for or announcing the occurrence of an event. It enables one or more goroutines to wait until a specific condition is met before they continue execution.

methods

  • func NewCond(l Locker) *Cond: used for initialization. Return a new Cond with Locker l.
  • func (c *Cond) Broadcast(): wake all goroutines waiting on c.
  • func (c *Cond) Signal(): wake one goroutine waiting on c, if there is any.
  • func (c *Cond) Wait(): atomically unlocks c.L and suspends execution of the calling goroutine.
    • Steps:
      1. Unlock the associated lock.
      2. Block the calling goroutine.
      3. Resume execution when the waiting goroutine is awoken via Signal() or Broadcast(); Wait re-locks c.L before returning. See the sketch below.
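
Because c.L is unlocked while waiting, the condition may no longer hold by the time Wait returns, so the typical pattern re-checks the condition in a loop. A minimal sketch, assuming a shared ready flag and hypothetical helper names:

var (
	mu    sync.Mutex
	cond  = sync.NewCond(&mu)
	ready bool // the condition being waited on (assumed shared state)
)

func waitUntilReady() {
	cond.L.Lock()
	for !ready { // re-check the condition every time Wait returns
		cond.Wait()
	}
	// the condition holds here, and the lock is held
	cond.L.Unlock()
}

func markReady() {
	cond.L.Lock()
	ready = true
	cond.L.Unlock()
	cond.Broadcast() // wake all waiters so they re-check the condition
}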

example

The following code is an example of Signal() for sequential, one-by-one goroutine processing. At any given time, only one goroutine is allowed to proceed with its task. Once it completes its work, it signals the next goroutine to continue.

func CondSignalExample() {
	mutex := sync.Mutex{}
	c := sync.NewCond(&mutex)
	wg := sync.WaitGroup{}

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Acquire lock before waiting
			c.L.Lock()
			fmt.Printf("Goroutine %d: waiting...\n", id)
			// Release lock and wait for signal
			c.Wait()

			// After receiving signal, lock is reacquired
			fmt.Printf("Goroutine %d: started at %s!\n",
				id,
				time.Now().Format("15:04:05.999"),
			)
			// Simulate work
			time.Sleep(1 * time.Second)
			c.L.Unlock()

			// Signal next goroutine to proceed
			c.Signal()
		}(i)
	}

	time.Sleep(2 * time.Second)
	// Signal first goroutine to start
	c.Signal()
	wg.Wait()
}

The following code is an example of Broadcast() to notify multiple goroutines simultaneously. All goroutines initially wait for a signal to proceed. After a delay, the Broadcast call wakes up all waiting goroutines, allowing them to start their tasks concurrently.

func CondBroadcastExample() {
	mutex := sync.Mutex{}
	c := sync.NewCond(&mutex)
	wg := sync.WaitGroup{}

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			c.L.Lock()
			fmt.Printf("Goroutine %d: waiting to start...\n", id)
			c.Wait()
			fmt.Printf("Goroutine %d: started!\n", id)
			c.L.Unlock()
		}(i)
	}

	time.Sleep(2 * time.Second)
	c.Broadcast()
	wg.Wait()
}

Broadcast() vs. Signal()

Method        How it works                                                      Example
Signal()      Only one goroutine needs to act on the event.                     Producer-Consumer: notify one consumer
Broadcast()   All goroutines need to act on the event and re-evaluate state.    Pub/Sub: notify all subscribers
  • Follow the link to see the examples of Producer-Consumer and Pub/Sub: Link

To see the full example of code, you can visit: goSync.
