Golang Channel 学习笔记

July 24, 2020


Due to current business requirements and the large volume of data we process (over 10 million records per minute), we need to decouple the HTTP layer from the back-end database in our current architecture.

Since I lacked channel-related knowledge, I took a day to work through some demonstrations to learn how Golang channels work.

Shared memory

  • The most frequently cited design principle in Go is: do not communicate by sharing memory; instead, share memory by communicating. In most mainstream programming languages, multiple threads transfer data through shared memory, and to avoid data races we must limit how many threads can read and write those variables at the same time. This is the opposite of the approach Go encourages.

Multiple threads use shared memory to transfer data

  • Although we can still communicate through shared memory and mutexes in Go, the language provides a different concurrency model: Communicating Sequential Processes (CSP). Goroutines correspond to the entities in CSP, and channels are the medium through which information is transmitted; goroutines in Go pass data to one another through channels.

Goroutine uses Channel to pass data

  • Of the two goroutines in the figure above, one sends data to the channel and the other receives data from it. They run independently and have no direct relationship with each other, but they can communicate indirectly through the channel.

First in first out

The current channel send and receive operations follow a first-in-first-out (FIFO) design. The specific rules are as follows:

- The goroutine that first tries to receive from the channel will receive data first;
- The goroutine that first tries to send to the channel will get the right to send first;

This FIFO design is easy to understand, but earlier versions of the Go runtime did not strictly follow these semantics. The issue runtime: make sure blocked channels run operations in FIFO order pointed out that buffered channels performed sends and receives in a way that violated rule 2 above:

  • The sender wrote data into the buffer and then woke the receivers; multiple receivers would try to read from the buffer, and those that got nothing went back to sleep;
  • The receiver read data from the buffer and then woke the senders; the senders tried to write into the buffer, and went back to sleep if it was full;

This retry-based mechanism meant channel operations did not follow the FIFO principle. After the two commits runtime: simplify buffered channels and runtime: simplify chan ops, both buffered and unbuffered channels now send and receive data in first-in-first-out order.

Lock-free pipeline

Locking is a common concurrency-control technique. We generally divide locks into optimistic and pessimistic locks, corresponding to optimistic and pessimistic concurrency control. A more accurate description of a lock-free queue is a queue that uses optimistic concurrency control. Optimistic concurrency control is also called optimistic locking, but it is not a real lock; many people mistakenly believe it is, when in fact it is only a concurrency-control idea.