Are you writing memory-aware code in Rails?
Rails relies on Ruby's **garbage collection *1**, so you can normally write code without worrying about releasing memory.
*1 Ruby automatically collects objects that are no longer used and frees their memory.
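As a quick illustration of what *1 means, here is a minimal sketch (the exact numbers will vary by environment):

```ruby
# Objects with no remaining references become garbage, and the GC
# frees their memory for you.
strings = Array.new(100_000) { 'x' * 100 } # allocate ~100k string objects
puts GC.stat[:heap_live_slots]             # live object slots while referenced

strings = nil # drop the only reference; the strings are now garbage
GC.start      # force a collection (normally Ruby decides when to run)
puts GC.stat[:heap_live_slots]             # noticeably fewer live slots
```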
The flip side is that an implementation can eat up memory without you noticing, and one day the production server suddenly goes down with a memory error.
I can say this because exactly that happened at the site where I currently work.
I was the one who implemented and fixed it, and I learned a lot from the experience, so I am leaving these notes so I do not forget.
First of all, you have to find out where the memory error is happening.
I used `ObjectSpace.memsize_of_all` to investigate memory usage in Rails.
This method reports, in bytes, the memory consumed by all living objects (it is provided by the `objspace` standard library, so you need `require 'objspace'` first).
Place it as a checkpoint wherever the process seems likely to die, and steadily narrow down where memory is being consumed in large quantities.
■ Usage example to check memory usage
```ruby
require 'objspace' # ObjectSpace.memsize_of_all lives in the objspace library

class Hoge
  def self.hoge
    puts 'Object memory before map expands the data'
    puts '↓'
    puts ObjectSpace.memsize_of_all # <== Checkpoint
    array = ('a'..'z').to_a
    array.map do |item| # <== ①
      puts "Object memory at #{item}"
      puts '↓'
      puts ObjectSpace.memsize_of_all # <== Checkpoint
      item.upcase
    end
  end
end
```
■ Execution result
```
irb(main):001:0> Hoge.hoge
Object memory before map expands the data
↓
137789340561
Object memory at a
↓
137789342473
Object memory at b
↓
137789342761
Object memory at c
↓
137789343049
Object memory at d
↓
137789343337
Object memory at e
↓
137789343625
.
.
.
Object memory at x
↓
137789349097
Object memory at y
↓
137789349385
Object memory at z
↓
137789349673
=> ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
```
From this execution result, you can see that the data passed to map is first expanded in memory all at once, which is where memory consumption jumps (point ①).
You can also see that memory consumption grows a little on every iteration of the loop.
With a simple process like this sample code there is no problem.
But when the data set is large and the work done inside the loop is complex, memory gets squeezed and
**you end up with a memory error (an error raised when memory allocation cannot keep up).**
I investigated my own case with the procedure above and concluded that the memory error occurred because a large amount of data was expanded at once and heavy, query-issuing work was done inside a map.
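Since the actual company code cannot be shown, here is a hypothetical sketch of that kind of pattern (the `Order` model and the per-row update are made up for illustration):

```ruby
# Hypothetical anti-pattern: every row is loaded into memory at once,
# and a query is issued for every element inside the loop.
orders = Order.where(status: :pending).to_a # expands all rows in memory
orders.map do |order|
  order.user.increment!(:points, order.amount) # query per iteration
end
```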
The cause is now clear. Next, let's think about countermeasures.
The first countermeasures that came to mind were the following three:
1. Increase memory with the power of money
2. Parallel processing with threads
3. Batch processing
To be honest, option 1 is the fastest: all you have to do is pay for a server with more memory, so at first I thought, let's just do that.
But no other part of the app is memory-intensive, and spending money for this one process alone seemed foolish, so I dropped the idea.
Next I considered Ruby's parallel processing. If the bottleneck were processing time (a timeout), it would be the right answer: spin up multiple threads, compute in parallel, and merge the results. But this time the bottleneck is memory pressure, and the total amount of data handled does not shrink just because it is split across threads, so a memory error would still occur in the end. I dropped this idea too. (A sketch of why follows below.)
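A minimal sketch of that reasoning, where `heavy_work` is a made-up placeholder:

```ruby
# The full data set is resident in memory either way; the threads only
# change how fast the work finishes, not the peak memory usage.
users = User.all.to_a

users.each_slice((users.size / 4.0).ceil).map do |chunk|
  Thread.new { chunk.each { |user| heavy_work(user) } }
end.each(&:join)
```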
The root cause of this memory error is expanding a large amount of data in memory at once and then repeating high-load processing over it in a loop.
So I figured that if the data were processed in batches of, say, 1,000 records instead of being expanded all at once, it could be handled while conserving memory.
Rails has a method called find_in_batches, which processes records 1,000 at a time by default.
Example) 10,000 records are split into batches of 1,000, i.e., 10 batch runs.
The idea is that find_in_batches caps how much is held in memory at once, keeping consumption low.
**Batch processing using find_in_batches**
Once you know the countermeasure, all that is left is to implement it.
Let's do so. (Since the actual company code cannot be shown, only an image of the implementation follows.)
■ Implementation image
```ruby
User.find_in_batches(batch_size: 1000) do |users|
  # some processing on this batch of up to 1,000 users
end
```
Even if there are 10,000 User records, find_in_batches loads and processes them 1,000 at a time.
In other words, the work is split into 10,000 / 1,000 = 10 batches.
Only about a tenth of the data is held in memory at any one time.
** However, the biggest disadvantage of this implementation is that it takes too much processing time. ** **
If you are using heroku etc., this implementation will result in ** RequestTimeOut error * 1 **.
* 1 In heroku, processing that takes 30 seconds or more will result in a RequestTimeOut error.
Therefore, I think this high-load processing is better moved to background processing.
If you are using Rails, you can do this with Sidekiq.
I would proceed in the following order (a sketch combining both steps follows below):
STEP1. Use find_in_batches to reduce memory consumption.
STEP2. Once STEP1 is done, the process should run without a memory error, though it will take time. Since it is slow, move it to the background.
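A minimal sketch of the final shape, assuming Sidekiq is already set up (`HeavyUserJob` and `do_something_heavy` are made-up names):

```ruby
class HeavyUserJob
  include Sidekiq::Worker

  def perform
    # STEP1: batch to keep memory flat
    # STEP2: running as a Sidekiq job keeps it off the request cycle
    User.find_in_batches(batch_size: 1000) do |users|
      users.each { |user| do_something_heavy(user) }
    end
  end
end

# Enqueue from a controller or console; the request returns immediately
HeavyUserJob.perform_async
```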
At first it felt like a tedious task, but I learned a lot, and now I am glad I implemented it.