Previously, I implemented channel, the most popular concurrency primitive of recent years, in C++ in anticipation of the coming 10,000-core-CPU era ("I made a library that does goroutine/channel-like things in C++20"). While doing that I read the implementation of Go's channel, so in this post I will explain how it is implemented.
Functionally, it is roughly equivalent to what is usually called a blocking queue in thread-based programming.
It has the following features:
- Runs on goroutines (not on a thread + mutex base)
- Uses a finite (bounded) queue
- Allows a queue capacity of 0 (unbuffered)
- Has a close operation
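These features are visible directly from user code. A minimal Go example (the function and variable names are mine, not from the runtime):

```go
package main

import "fmt"

// demo returns the results of receiving twice from an unbuffered
// (capacity-0) channel that is closed after a single send.
func demo() (first int, firstOK bool, second int, secondOK bool) {
	ch := make(chan int) // capacity 0: a send blocks until a receiver is ready

	go func() {
		ch <- 42  // blocks until the receive below runs
		close(ch) // after close, receives yield the zero value and ok == false
	}()

	first, firstOK = <-ch
	second, secondOK = <-ch // blocks until close, then returns 0, false
	return
}

func main() {
	fmt.Println(demo()) // 42 true 0 false
}
```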
I couldn't find a formal definition of the word "goroutine", but it seems to be used in roughly two senses.
In the source code, goroutine is abbreviated as g.
reference
Let's dive straight into the implementation of channel: https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go
https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go#L32
Reading through it, the channel has a structure like the following (rewritten here as C++-style pseudocode):
```cpp
struct chan {
    std::deque<value_type> value_queue; // actually a ring buffer, but roughly equivalent
    std::list<RecvSudog>   recv_queue;  // goroutines waiting to receive
    std::list<SendSudog>   send_queue;  // goroutines waiting to send
    bool closed = false;
    std::mutex mutex;
};
```
As shown above, a channel is mainly made up of three queues.

value_queue is the queue that stores the buffered values.

send_queue / recv_queue are the queues that hold goroutines waiting to send or receive.
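To make that structure concrete, here is a toy bounded channel in Go built from the same ingredients: a value queue, a closed flag, and a mutex. This is my own sketch, not the runtime's code — the real runtime parks goroutines on explicit send/recv queues of sudogs, whereas this sketch approximates those wait queues with sync.Cond, and it only supports capacity >= 1 (capacity 0 requires the direct-handoff path the runtime uses).

```go
package main

import (
	"fmt"
	"sync"
)

// Chan is a toy bounded channel: value queue + closed flag + mutex.
// sync.Cond stands in for the runtime's explicit send/recv wait queues.
type Chan struct {
	mu     sync.Mutex
	cond   *sync.Cond
	values []int // value_queue (the runtime uses a ring buffer)
	cap    int   // must be >= 1 in this sketch
	closed bool
}

func NewChan(capacity int) *Chan {
	c := &Chan{cap: capacity}
	c.cond = sync.NewCond(&c.mu)
	return c
}

func (c *Chan) Send(v int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for len(c.values) == c.cap && !c.closed {
		c.cond.Wait() // stand-in for enqueueing on send_queue and sleeping
	}
	if c.closed {
		panic("send on closed channel")
	}
	c.values = append(c.values, v)
	c.cond.Broadcast() // wake any waiting receivers
}

func (c *Chan) Recv() (int, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for len(c.values) == 0 && !c.closed {
		c.cond.Wait() // stand-in for enqueueing on recv_queue and sleeping
	}
	if len(c.values) == 0 { // closed and drained
		return 0, false
	}
	v := c.values[0]
	c.values = c.values[1:]
	c.cond.Broadcast() // wake any waiting senders
	return v, true
}

func (c *Chan) Close() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.closed = true
	c.cond.Broadcast()
}

func main() {
	c := NewChan(1)
	go func() {
		c.Send(1)
		c.Send(2) // blocks until Recv drains the buffer
		c.Close()
	}()
	for {
		v, ok := c.Recv()
		if !ok {
			break
		}
		fmt.Println(v)
	}
}
```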
https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go#L71
This is where a channel is created. When the queue capacity or the element size is 0, the allocation is optimized to shrink or omit value_queue, but there is nothing else interesting here, so I'll skip it.
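As a side note, the zero-element-size case is user-visible: a chan struct{} carries no data at all, which is why it is idiomatic as a pure signal. A small example (my own, not from the runtime):

```go
package main

import "fmt"

func main() {
	// struct{} has size 0, so this channel never stores any payload;
	// it exists purely so goroutines can wait on it.
	done := make(chan struct{})

	go func() {
		fmt.Println("working")
		close(done) // closing wakes every goroutine blocked on <-done
	}()

	<-done // blocks until the goroutine closes done
	fmt.Println("done")
}
```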
The behavior of the send operation is roughly: if a receiver is already waiting, hand the value to it directly; otherwise, if there is space in value_queue, enqueue the value; otherwise, enqueue the current goroutine on send_queue and go to sleep. Keeping this outline in mind makes the source much easier to read, so let's read it right away.
Fast path: https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go#L158-L200 — an optimization that handles whatever it can using only atomic instructions, without taking the mutex (an OS-level lock).
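The user-visible form served by this fast path is the non-blocking operation generated by select with a default case, which the compiler lowers to a call with block == false. A sketch of what that looks like from user code (tryRecv is my own name):

```go
package main

import "fmt"

// tryRecv is a non-blocking receive: the compiler lowers this
// select/default form to a channel-receive call with block == false,
// which is the case the lock-free fast path serves.
func tryRecv(ch chan int) (int, bool) {
	select {
	case v := <-ch:
		return v, true
	default: // channel empty: fail immediately instead of parking
		return 0, false
	}
}

func main() {
	ch := make(chan int, 1)
	fmt.Println(tryRecv(ch)) // 0 false: nothing buffered yet
	ch <- 7
	fmt.Println(tryRecv(ch)) // 7 true
}
```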
If there is an element in recv_queue: https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go#L293 — take one goroutine out of recv_queue, pass the value to it, unlock the channel, and then make the dequeued goroutine runnable again (≒ post it to a thread pool).
What should be noted here is that the goroutine is made runnable only after unlocking. This comes from the problem that "taking the channel lock while growing a goroutine's stack can deadlock". For details, see the issues around https://github.com/golang/go/issues/12967
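Putting those two points together, the handoff path can be sketched as pseudocode (field and function names are simplified by me; see the linked source for the real ones):

```
// pseudocode: the direct-handoff branch of a send
lock(&c.mutex);
sg := c.recv_queue.pop();   // a goroutine parked in a receive
copy(sg.elem, value);       // write the value straight onto the
                            // receiver's stack, bypassing value_queue
unlock(&c.mutex);           // release the channel lock first...
goready(sg.g);              // ...then make the receiver runnable
```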
If there is space in value_queue: https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go#L214 — nothing interesting enough to explain; please read the implementation yourself.
When enqueueing onto the wait queue: https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go#L258

```go
gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanSend, traceEvGoBlockSend, 2)
```
Let me explain here the suspension and lock release performed by gopark() (https://github.com/golang/go/blob/fd6ba1c8a23d8a3fffb6c475b21f78510152ef5c/src/runtime/proc.go#L319).
When you call gopark, the flow is roughly: switch to the scheduler (g0) stack, release the channel lock there via the callback passed as the first argument, and then schedule some other goroutine; the caller stays suspended until something makes it runnable again.
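As pseudocode (my simplification of gopark/park_m in proc.go):

```
// pseudocode: how gopark suspends the caller and releases the lock
func gopark(unlockf func(*g, unsafe.Pointer) bool, lock unsafe.Pointer) {
    mcall(func(gp *g) { // switch to the scheduler (g0) stack
        // the channel lock is released here, on the scheduler stack,
        // once gp can no longer run user code
        unlockf(gp, lock)
        schedule() // run some other goroutine; gp stays parked
                   // until goready(gp) makes it runnable again
    })
}
```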
The reason for the complicated dance of "releasing the lock outside the goroutine (on the machine/scheduler side)" is that if the channel lock were released first, another operation on the channel could make the goroutine runnable again before it had actually finished suspending itself.
To prevent that, you could instead give each goroutine a mutex so that it is never resumed while still running, but that would add an extra mutex cost to every operation. The current implementation exists to avoid that cost.
The receive operation is almost the same as the send operation, so please read it on your own.
https://github.com/golang/go/blob/c06a354bccf60ea32ed74238be409a00aac292c5/src/runtime/select.go
select is (for some reason) split out into its own file.
The behavior of select is roughly: decide a random poll order and lock every channel involved; if some case can run immediately, run it; otherwise enqueue on all the channels and go to sleep; when woken, complete the winning case and dequeue from the rest. Keeping this outline in mind makes the source much easier to read, so let's read it right away.
Determining the poll order and acquiring each channel's lock: https://github.com/golang/go/blob/c06a354bccf60ea32ed74238be409a00aac292c5/src/runtime/select.go#L121-L230
pollorder is the list that decides in what order the cases are scanned when checking "if some case can run immediately, run it". As you can see from the implementation, the order is randomized; this emulates the nondeterminism of asynchronous processing.
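The randomness is easy to observe from user code: when several cases are ready at once, select picks among them at random. A small experiment (my own code, not from the runtime):

```go
package main

import "fmt"

// pickCounts runs n select rounds over two always-ready channels and
// returns how often each case won.
func pickCounts(n int) (aWins, bWins int) {
	a := make(chan int, 1)
	b := make(chan int, 1)
	for i := 0; i < n; i++ {
		a <- 1
		b <- 1
		select { // both cases are ready: one is chosen at random
		case <-a:
			aWins++
			<-b // drain the case that lost this round
		case <-b:
			bWins++
			<-a
		}
	}
	return
}

func main() {
	aWins, bWins := pickCounts(1000)
	// with a shuffled poll order, both cases win some of the time
	fmt.Println(aWins > 0 && bWins > 0)
}
```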
As for lock acquisition, the locks are taken in order of the channel structures' memory addresses. This is because, when multiple operations each need to lock multiple channels, they can deadlock unless all of them acquire the locks in the same global order.
Looking for a case that can run immediately: https://github.com/golang/go/blob/c06a354bccf60ea32ed74238be409a00aac292c5/src/runtime/select.go#L243

```go
// pass 1 - look for something already waiting
```

As the comment says, search for an operation that can be performed right away, and execute it as soon as one is found.
Enqueueing on all channels: https://github.com/golang/go/blob/c06a354bccf60ea32ed74238be409a00aac292c5/src/runtime/select.go#L288-L327

```go
// pass 2 - enqueue on all chans
```

Enqueue (a sudog) on every channel, then suspend.
Suspend/Resume: https://github.com/golang/go/blob/c06a354bccf60ea32ed74238be409a00aac292c5/src/runtime/select.go#L327
Pass 3 runs when the goroutine is woken by any one of the channels.
https://github.com/golang/go/blob/7307e86afda3c5c7f6158d2469c39606fd1dba65/src/runtime/chan.go#L819
On the runtime side, a CAS operation on g.selectDone is performed at dequeue time so that only one channel can complete the select.
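The effect of that CAS can be emulated in user code: many goroutines race to flip one flag, and exactly one wins (raceSelectDone is my own name, not a runtime function):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// raceSelectDone has n goroutines race a CAS on one flag and returns
// how many of them won.
func raceSelectDone(n int) int32 {
	var selectDone uint32
	var winners int32
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// same idea as the CAS on g.selectDone: the first goroutine
			// to flip 0 -> 1 owns the wakeup, everyone else backs off
			if atomic.CompareAndSwapUint32(&selectDone, 0, 1) {
				atomic.AddInt32(&winners, 1)
			}
		}()
	}
	wg.Wait()
	return winners
}

func main() {
	fmt.Println(raceSelectDone(8)) // 1
}
```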
Post-processing after resuming: https://github.com/golang/go/blob/c06a354bccf60ea32ed74238be409a00aac292c5/src/runtime/select.go#L336

```go
// pass 3 - dequeue from unsuccessful chans
// otherwise they stack up on quiet channels
// record the successful case, if any.
// We singly-linked up the SudoGs in lock order.
```

In pass 3, the sudogs that were enqueued in pass 2 but went unused are removed from their queues.
Taking a close look at Go's channel, it turns out to be quite an involved piece of machinery.