Go is a language heavily oriented toward concurrent programming. That shows up even in why the language exists. Multi-core processors feel like they have always been here, but commercial multi-core CPUs are not actually that old. Intel shipped the first mainstream dual-core processors around 2005–2006. In 2007, Google started work on Go so those multi-core machines could be exploited at the language level, natively.

If you write concurrent code in Go, you feel the difference. Compared with many other languages, concurrency is relatively easy to use. You do not implement the Runnable interface or subclass Thread and override run, as you would in Java; you put the keyword go in front of the function call you want to run concurrently, and the runtime schedules it as a goroutine, a lightweight thread that Go multiplexes onto OS threads.

The convenience does not stop there. Go provides a built-in channel type as the way for goroutines to share data. A channel behaves much like a queue: it supports enqueue/dequeue semantics and FIFO ordering. So why did Go make channels a language feature instead of leaving them to the standard library as an ordinary data structure?

When you learn about channels in Go, you keep running into one sentence:

Do not communicate by sharing memory; instead, share memory by communicating.

At first glance it is opaque. Once it clicks, you see how seriously Go takes getting concurrency right.

Here is a small example.

package main

import "fmt"

var counter = 0

func main() {
  for i := 0; i < 100; i++ {
    go increaseCounter()
  }
  fmt.Println(counter) // may run before every goroutine has finished
}

func increaseCounter() {
  counter += 1 // unsynchronized write to shared state: a data race
}

The idea is simple. There is a function, increaseCounter, that increments counter, and main launches it as a goroutine a hundred times, then prints counter. Intuitively you expect 100, but you usually do not get it. On my MacBook the result lands in the high 80s or low 90s.

Two things go wrong. First, main does not wait for the goroutines, so it may print counter before all of them have run. Second, counter += 1 is not atomic: it is a read, an increment, and a write, and on a multi-core machine several goroutines can interleave those steps and overwrite one another's updates, so counter often does not reach 100. Running the program with go run -race reports the data race directly.

That is one of the classic mistakes in multithreaded code: failing to manage a shared resource correctly. The usual fix is a lock: acquire before touching the shared data, release when done. Go exposes that pattern through sync.Mutex.

In Go, that style is actually something you are encouraged to move away from. The motto above says Do not communicate by sharing memory—and this sample does exactly that by sharing counter in memory across threads.

Let us rewrite it with a channel.

package main

import "fmt"

func main() {
  ch := make(chan int)
  done := make(chan struct{})
  go receiver(ch, done)
  sender(ch)
  close(ch) // no more values: ends the receiver's range loop
  <-done    // wait for the receiver to finish before exiting
}

func receiver(ch chan int, done chan struct{}) {
  counter := 0
  for value := range ch {
    counter += value
  }
  fmt.Println(counter)
  close(done)
}

func sender(ch chan int) {
  for i := 0; i < 100; i++ {
    ch <- 1
  }
}

This uses two goroutines. One runs receiver, reading from the channel and adding each value to a local counter. The other is the main goroutine running sender, sending the value 1 over the channel a hundred times.

Here the goroutines share only the channel, not a separate piece of mutable shared state like before. That removes the need for explicit locks. They share data by communicating through the channel.

Multithreaded programming is trickier than it looks; a lot of the pain comes from mishandling shared resources. Throwing locks at the problem makes code harder to read, invites new bugs, and can easily hurt performance from constant lock/unlock churn.

To steer you away from those pitfalls from the start, Go bakes goroutines and channels into the language and, in doing so, nudges you toward concurrency best practices.