[LINUX] I didn't have enough words, I'm sorry

I regret that in the previous article I wrote about something I didn't really understand. So, here is what I actually wanted to do. In a Linux environment:

  1. I want to send small amounts of data one way, from process A to process B
  2. I want to do it under relatively time-critical constraints
  3. I want to watch the communication path by blocking in read()

Item 3 is about not wasting CPU. And what I first tried in order to meet this spec was the FIFO, a named pipe.

There are various methods of interprocess communication,

- Domain sockets are convenient, but the overhead worries me.
- Shared memory is fast, but it's hard to keep it from wasting CPU.
- I want to block in read() and leave the wake-up to the OS.

So I went with a FIFO. That turned out to be a big mistake.

First of all, when used with a FIFO, **open() on the fd is a real troublemaker**:

- If you write() while only the write side is open, you get SIGPIPE (it can be ignored)
- open() on the read side blocks until a writer shows up
- If you open() non-blocking, you can no longer block in read()
- When both ends of the FIFO are closed, whatever data it held at that moment is gone
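Here is a minimal sketch that bumps into each of these behaviors. The FIFO path is made up for illustration:

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/demo_fifo";   /* hypothetical path */
    mkfifo(path, 0600);                    /* ignore EEXIST for brevity */

    /* Write side, non-blocking: refused with ENXIO while no reader
       exists (SIGPIPE comes later, on write() after the reader goes
       away). */
    int wfd = open(path, O_WRONLY | O_NONBLOCK);
    if (wfd == -1 && errno == ENXIO)
        puts("write-side open() refused: no reader yet");

    /* Read side, non-blocking: open() returns at once, but read() no
       longer blocks -- it returns 0 (no writer) or -1/EAGAIN (no data). */
    int rfd = open(path, O_RDONLY | O_NONBLOCK);
    char buf[64];
    ssize_t n = read(rfd, buf, sizeof buf);
    printf("non-blocking read() returned %zd\n", n);

    /* Without O_NONBLOCK, the read-side open() would block right here
       until some process opened the write side. */
    close(rfd);
    return 0;
}
```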

Well, it was a difficult thing to handle.

So inevitably the loop becomes: block in open(), read() when the data arrives, then close() and go back to the top, as in the sketch below.
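Here is roughly the shape of the loop I tested (same made-up FIFO path):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];

    for (;;) {
        /* Blocks here until some writer opens the FIFO. */
        int fd = open("/tmp/demo_fifo", O_RDONLY);
        if (fd == -1) {
            perror("open");
            break;
        }

        /* read() blocks while a writer is attached; 0 means the
           writer closed its end. */
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* ... handle n bytes ... */
        }

        /* Back to the top -- and this open()/close() per round
           is exactly where the latency went. */
        close(fd);
    }
    return 0;
}
```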

I tested this implementation, but the open()/close() latency is huge. It can no longer meet the time-critical constraint I started with.

That's not it. What I want to do is different.

  1. open() the communication path in blocking mode, but have the call itself return immediately
  2. Block in read() on the communication path until data arrives
  3. After that, repeat step 2 in a loop to keep the overhead minimal
  4. close() the communication path when the loop ends

I went looking for an API that satisfies this, and I finally found one: the POSIX message queue. It's essentially a modernized descendant of the System V message queue.

- Once created, a queue has kernel lifetime: it persists even if the process dies
- Data is not lost even when both ends are closed
- Either the read end or the write end may be opened first
- mq_open() returns immediately even in blocking mode
- mq_receive() blocks and waits for data (in blocking mode)
- When the other process calls mq_send(), you wake up and receive the data
- On top of that, it runs faster than I expected
- Messages can carry priorities, and you can be notified by a signal (I use neither)
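To make this concrete, here is a minimal receiver sketch following the four steps above. The queue name and attribute values are made up for illustration; depending on your glibc you may need to link with -lrt.

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = {
        .mq_flags   = 0,    /* blocking mode */
        .mq_maxmsg  = 10,   /* how many messages the queue can hold */
        .mq_msgsize = 128,  /* maximum size of one message */
    };

    /* 1. Returns immediately even in blocking mode, and it doesn't
          matter which end is opened first. */
    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDONLY, 0600, &attr);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    char buf[128];                  /* must be >= mq_msgsize */
    for (;;) {
        /* 2., 3. Blocks until the other process calls mq_send(). */
        ssize_t n = mq_receive(mq, buf, sizeof buf, NULL);
        if (n == -1) {
            perror("mq_receive");
            break;
        }
        printf("got %zd bytes\n", n);
    }

    /* 4. close() when the loop ends; the queue itself keeps living
          in the kernel until someone calls mq_unlink(). */
    mq_close(mq);
    return 0;
}
```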

It sounds all good so far, but there are some quirks. Since it is a "queue" of "messages", data is handled as discrete packets, not as a byte stream like a pipe. Also, when creating a queue you must specify a few parameters:

- blocking or non-blocking
- the size of one message
- the number of messages the queue can hold
- and one more that I forget
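These parameters live in struct mq_attr: mq_flags, mq_maxmsg, and mq_msgsize; the one I forgot is presumably the fourth field, mq_curmsgs, which only reports the current message count and is ignored at creation. The receiver sketch above already passed them to mq_open(). For completeness, here is a matching minimal sender, with the same made-up queue name:

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* No O_CREAT here: the queue just has to exist already. Unlike a
       FIFO, this open does not care whether a receiver is attached. */
    mqd_t mq = mq_open("/demo_queue", O_WRONLY);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    const char *msg = "hello";
    /* Priority 0; a blocked mq_receive() on the other side wakes up. */
    if (mq_send(mq, msg, strlen(msg) + 1, 0) == -1)
        perror("mq_send");

    mq_close(mq);
    return 0;
}
```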

Which means you have to design, up front:

- a message count that won't let data pile up in the queue
- a one-size-fits-all message size, big enough for every message you'll send
- the blocking behavior, which is fixed when you open the queue (afterwards it can only be changed with mq_setattr())

On top of that, there is one pitfall:

- the buffer you pass to mq_receive() must be at least as large as the queue's message size

If you pass a smaller buffer, the call fails with an error (EMSGSIZE).
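A sketch of one way to honor the restriction: ask the queue for its own attributes with mq_getattr() and size the receive buffer from mq_msgsize (same made-up queue name):

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    mqd_t mq = mq_open("/demo_queue", O_RDONLY);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    struct mq_attr attr;
    mq_getattr(mq, &attr);

    /* Anything smaller than mq_msgsize makes mq_receive() fail with
       EMSGSIZE, so allocate exactly that much. */
    char *buf = malloc(attr.mq_msgsize);
    if (buf == NULL)
        return 1;

    ssize_t n = mq_receive(mq, buf, attr.mq_msgsize, NULL);
    if (n == -1)
        perror("mq_receive");
    else
        printf("received %zd bytes\n", n);

    free(buf);
    mq_close(mq);
    return 0;
}
```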

If you keep all that in mind, it's fairly easy to use. One caveat: because the queue's data sits in a kernel buffer, it survives even if the process dies. If you restart after a crash, you have to account for whatever is still left in the queue.
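As one hedge against this, you can drain the queue in non-blocking mode at startup before entering the normal blocking loop. A sketch, again with the made-up queue name:

```c
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    /* Open non-blocking just for the drain. */
    mqd_t mq = mq_open("/demo_queue", O_RDONLY | O_NONBLOCK);
    if (mq == (mqd_t)-1) {
        perror("mq_open");
        return 1;
    }

    char buf[128];   /* matches the mq_msgsize used in these sketches */
    for (;;) {
        ssize_t n = mq_receive(mq, buf, sizeof buf, NULL);
        if (n == -1) {
            if (errno != EAGAIN)   /* EAGAIN just means: now empty */
                perror("mq_receive");
            break;
        }
        fprintf(stderr, "discarding %zd stale bytes\n", n);
    }

    /* From here, switch back to blocking with mq_setattr(), or simply
       reopen the queue without O_NONBLOCK. */
    mq_close(mq);
    return 0;
}
```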

To be thorough, there are also signal notifications, fork(), unnamed pipes, memory-mapped files, and so on, but for exchanging data with a completely unrelated process (not a parent or child), the realistic choices come down to FIFO, message queue, or domain socket. With shared memory, the exclusive control and the measures against wasting CPU are just too much hassle.

I fell into a trap that was hard to see from the surface of the API specification.
