Share by communicating

Go encourages a different approach in which shared values are passed around on channels and, in fact, never actively shared by separate threads of execution.
Only one goroutine has access to the value at any given time.
Data races cannot occur, by design.

Do not communicate by sharing memory; instead, share memory by communicating.

Unix pipelines fit this model perfectly.
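The idea can be sketched in a few lines: one goroutine hands ownership of a slice to another over a channel, so only one goroutine touches the data at a time (the `worker` name and the summing task here are illustrative, not from the source):

```go
package main

import "fmt"

// worker takes ownership of a slice from the in channel,
// uses it, and communicates the result back on out.
// The data itself is never accessed by two goroutines at once.
func worker(in <-chan []int, out chan<- int) {
	data := <-in // receive ownership of the slice
	sum := 0
	for _, v := range data {
		sum += v
	}
	out <- sum // hand the result back
}

func main() {
	in := make(chan []int)
	out := make(chan int)
	go worker(in, out)
	in <- []int{1, 2, 3} // after this send, main no longer uses the slice
	fmt.Println(<-out)   // 6
}
```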


Goroutines

A goroutine has a simple model: it is a function executing concurrently with other goroutines in the same address space.
It is lightweight, costing little more than the allocation of stack space.
And the stacks start small, so they are cheap, and grow by allocating (and freeing) heap storage as required.

```go
go list.Sort() // run list.Sort concurrently; don't wait for it.
```

The effect is similar to the Unix shell’s & notation for running a command in the background.

Function literals are closures: the implementation makes sure the variables referred to by the function survive as long as they are active.
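A short illustration of that guarantee: each closure below outlives the loop iteration that created it, because the captured variable survives as long as the goroutine needs it (`runAll` and the per-iteration copy are our own illustrative scaffolding):

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// runAll launches one goroutine per message; each goroutine is a
// closure that captures its own copy of m, which the runtime keeps
// alive until the goroutine has finished with it.
func runAll(msgs []string) []string {
	var wg sync.WaitGroup
	results := make(chan string, len(msgs))
	for _, m := range msgs {
		m := m // fresh copy per iteration (required before Go 1.22)
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- m
		}()
	}
	wg.Wait()
	close(results)
	var out []string
	for r := range results {
		out = append(out, r)
	}
	sort.Strings(out) // goroutines finish in no particular order
	return out
}

func main() {
	fmt.Println(runAll([]string{"a", "b", "c"})) // [a b c]
}
```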


Channels

```go
ci := make(chan int)           // unbuffered channel of integers
cj := make(chan int, 0)        // unbuffered or, equivalently, synchronous channel of integers
cs := make(chan *os.File, 100) // buffered channel of pointers to Files
```

Unbuffered channels combine communication with synchronization—guaranteeing that two calculations (goroutines) are in a known state.

```go
c := make(chan int) // Allocate a channel.
// Start the sort in a goroutine; when it completes, signal on the channel.
go func() {
	list.Sort()
	c <- 1 // Send a signal; value does not matter.
}()
doSomethingForAWhile()
<-c // Wait for sort to finish; discard sent value.
```

A buffered channel can be used like a semaphore, for instance to limit throughput.


Channels of channels

One of the most important properties of Go is that a channel is a first-class value that can be allocated and passed around like any other.

This property makes it possible to implement safe, parallel demultiplexing: a request can carry its own reply channel, so each answer goes straight back to whoever asked.
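A sketch of that shape (the `Request` field names and the `sum` function are illustrative): clients send requests on a shared channel, and the server replies on the per-request channel tucked inside each one.

```go
package main

import "fmt"

// Request carries its own reply channel, so the server can send
// the answer directly back to the requesting client.
type Request struct {
	args       []int
	f          func([]int) int
	resultChan chan int
}

func sum(a []int) (s int) {
	for _, v := range a {
		s += v
	}
	return
}

// server demultiplexes: many clients share reqs, but each answer
// travels on the channel belonging to its own request.
func server(reqs chan *Request) {
	for req := range reqs {
		req.resultChan <- req.f(req.args)
	}
}

func main() {
	reqs := make(chan *Request)
	go server(reqs)
	req := &Request{[]int{3, 4, 5}, sum, make(chan int)}
	reqs <- req
	fmt.Println(<-req.resultChan) // 12
}
```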


Parallelization

Another application of these ideas is to parallelize a calculation across multiple CPU cores.

  • an expensive operation to perform on a vector of items
  • the value of the operation on each item is independent

```go
type Vector []float64

// Apply the operation to v[i], v[i+1] … up to v[n-1].
func (v Vector) DoSome(i, n int, u Vector, c chan int) {
	for ; i < n; i++ {
		v[i] += u.Op(v[i])
	}
	c <- 1 // signal that this piece is done
}

// runtime.NumCPU() is not a constant, so numCPU must be a variable;
// runtime.GOMAXPROCS(0) would report the user-settable limit instead.
var numCPU = runtime.NumCPU()

func (v Vector) DoAll(u Vector) {
	c := make(chan int, numCPU) // Buffering optional but sensible.
	for i := 0; i < numCPU; i++ {
		go v.DoSome(i*len(v)/numCPU, (i+1)*len(v)/numCPU, u, c)
	}
	// Drain the channel.
	for i := 0; i < numCPU; i++ {
		<-c // wait for one task to complete
	}
	// All done.
}
```
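Since `Op` is left undefined in the snippet above, here is a minimal runnable version of the same pattern, with a stand-in `Op` that simply doubles its argument (so `v[i] += u.Op(v[i])` triples each element):

```go
package main

import (
	"fmt"
	"runtime"
)

type Vector []float64

// Op is a stand-in for the expensive per-item operation.
func (v Vector) Op(x float64) float64 { return 2 * x }

// DoSome applies the operation to v[i] through v[n-1].
func (v Vector) DoSome(i, n int, u Vector, c chan int) {
	for ; i < n; i++ {
		v[i] += u.Op(v[i])
	}
	c <- 1 // signal that this piece is done
}

var numCPU = runtime.NumCPU()

// DoAll splits the vector into numCPU pieces, processes them in
// parallel, and waits for every piece to signal completion.
func (v Vector) DoAll(u Vector) {
	c := make(chan int, numCPU)
	for i := 0; i < numCPU; i++ {
		go v.DoSome(i*len(v)/numCPU, (i+1)*len(v)/numCPU, u, c)
	}
	for i := 0; i < numCPU; i++ {
		<-c
	}
}

func main() {
	v := Vector{0, 1, 2, 3}
	v.DoAll(v) // each goroutine owns a disjoint slice of v: no race
	fmt.Println(v) // [0 3 6 9]
}
```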

A leaky buffer

The tools of concurrent programming can even make non-concurrent ideas easier to express. The example here is a leaky bucket free list: to avoid allocating and freeing buffers, the client keeps a free list, using a buffered channel to represent it.
```go
var freeList = make(chan *Buffer, 100)
var serverChan = make(chan *Buffer)

func client() {
	for {
		var b *Buffer
		// Grab a buffer if available; allocate if not.
		select {
		case b = <-freeList:
			// Got one; nothing more to do.
		default:
			// None free, so allocate a new one.
			b = new(Buffer)
		}
		load(b)         // Read next message from the net.
		serverChan <- b // Send to server.
	}
}
```

The server loop receives each message from the client, processes it, and returns the buffer to the free list.

```go
func server() {
	for {
		b := <-serverChan // Wait for work.
		process(b)
		// Reuse buffer if there's room.
		select {
		case freeList <- b:
			// Buffer on free list; nothing more to do.
		default:
			// Free list full, just carry on.
		}
	}
}
```
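The two `select` steps can be distilled into a pair of small helpers; this runnable sketch (the `Buffer` type and the `getBuffer`/`putBuffer` names are ours, not from the source) shows a buffer actually being recycled through the free list:

```go
package main

import "fmt"

type Buffer [64]byte // hypothetical fixed-size message buffer

var freeList = make(chan *Buffer, 100)

// getBuffer grabs a buffer from the free list if one is available,
// and allocates a fresh one otherwise.
func getBuffer() *Buffer {
	select {
	case b := <-freeList:
		return b // reused
	default:
		return new(Buffer) // none free; allocate
	}
}

// putBuffer returns a buffer to the free list if there's room;
// otherwise it is dropped on the floor for the GC to reclaim —
// this is what makes the bucket "leaky".
func putBuffer(b *Buffer) {
	select {
	case freeList <- b:
	default:
	}
}

func main() {
	b := getBuffer()
	putBuffer(b)
	b2 := getBuffer()
	fmt.Println(b == b2) // true: the same buffer was recycled
}
```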