Hello, everyone. My name is Ikenier18.
This article is an introduction for those who are vaguely thinking, "I'd like to try deep learning **on my own**."
It is a memorandum summarizing what I considered when choosing a personal computer (hereafter, PC) for doing voice-related deep learning at home. The main topic is therefore computational resources (self-built PCs and the like). Environment setup itself has already been covered in many articles by others, so I will not touch on it here.
In research and industry, you sometimes hear claims like:

- At a minimum, you must use XX-class equipment
- You need a graphics board (GPU) that costs a million yen

but those are stories about people who make a living from deep learning, such as companies and research institutes. **The requirements for relaxed, personal deep learning are quite loose.**
A PC configuration consists mainly of the OS, CPU, motherboard, memory, GPU, storage, and power supply. (This article does not cover the case where no GPU is used.) This is the same for desktops and laptops; in other words, deep learning is possible even on a laptop, though I don't particularly recommend it. In my experience, if you meet the following specs, you may feel some inconvenience, but deep learning is doable. (For reference, I also list the specs of the laptop on which I played with deep learning for a while.) As described later in [Royal road route ②: Select the GTX1000 series], the GTX1000 series is still fully viable.
| Part | Minimum spec | Author's laptop |
|---|---|---|
| OS | Windows or Linux | Windows |
| Motherboard | Doesn't matter | Unknown |
| Memory | 16GB or more | 16GB |
| GPU | 4GB or more of GPU memory | GTX1650 4GB |
| Storage | 250GB or more free space | 512GB |
| Power supply | 750W (for one GPU) | --- |
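If you want a quick, rough check of how an existing machine measures up against the table above, the Python standard library can report a few of the numbers. (This is just a convenience sketch; GPU model and GPU memory cannot be read from the standard library, so they are omitted here.)

```python
import os
import platform
import shutil

# CPU core count and OS, both straight from the standard library.
print("OS:", platform.system())
print("CPU cores:", os.cpu_count())

# Free space on the drive holding the current directory.
usage = shutil.disk_usage(".")
print("Free storage:", usage.free // 2**30, "GiB")
# The table above asks for 250GB+ free and a 4GB+ GPU; checking
# GPU memory needs an external tool such as nvidia-smi.
```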
As mentioned earlier, a PC configuration consists mainly of the OS, CPU, motherboard, memory, GPU, storage, and power supply. This article assumes an NVIDIA GPU (I have no experience doing deep learning on AMD GPUs). Below, I discuss each part in turn.
There are three main operating systems for PCs: Windows, Linux, and Mac. **The difference in OS has no effect on deep learning itself.** However, each has its own character (operability differs completely, for example), so choose the one best suited to your use. Since I have no development experience on Mac, I omit it from the explanation.
Windows: There was a time when I balked at the idea of "development on Windows," but it turned out easier to use than I expected. Compatibility with Anaconda in particular is excellent, and with PyTorch, which is typically used in an Anaconda environment, I feel Windows actually makes the work easier.
Pros:
- Environment setup completes just by downloading and running the specified installer
- Troublesome environment construction finishes easily
Cons:
- Because the automated setup sacrifices fine-grained control (resolving driver dependencies, etc.), you will stumble when trying anything slightly off the beaten path (e.g., using an old GPU)
- The Windows 10 Pro edition is expensive, at around 25,000 yen
Linux: Linux is a very convenient environment for development. Because you can instruct the PC directly from the command line, once you get used to it you can develop with far more freedom than on Windows. Taking Ubuntu as an example, resolving dependencies such as drivers used to take a lot of time and know-how, but since 18.04 those tasks have become considerably easier, and once you are accustomed to it, **the difficulty of environment setup drops greatly**. Frameworks such as Keras, Chainer, and TensorFlow, where small dependency issues can become bottlenecks, tend to favor Linux.
Pros:
- Because you can manually specify the detailed operating environment, it is easy to do slightly off-the-path things (as above, using an old GPU, etc.)
Cons:
- It takes some effort and mastery, since you must handle the details of the setup yourself
Mac: Unfortunately, I have no experience developing on Mac, so I won't cover it here.
**You should spend money on the CPU.**
The CPU mainly affects the speed of data preprocessing (feature extraction, etc.) and data loading. Deep learning does most of its computation on the GPU, but the CPU plays a very important role in feeding data to that GPU. Moreover, replacing a CPU often means replacing the motherboard as well, so the financial and time costs of an upgrade tend to be high. Therefore, even if you only want to try deep learning for now, spend money on the CPU.
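The CPU's feeding role can be sketched with a toy parallel-preprocessing example; more cores mean more workers preparing data while the GPU trains. (This is a minimal illustration: `extract_feature` is a hypothetical stand-in for real feature extraction such as computing a spectrogram.)

```python
from multiprocessing import Pool

def extract_feature(sample):
    # Hypothetical stand-in for real preprocessing
    # (e.g., computing a spectrogram from an audio clip).
    return sum(sample) / len(sample)

if __name__ == "__main__":
    dataset = [[i, i + 1, i + 2] for i in range(8)]
    # More CPU cores -> more parallel workers -> the GPU waits less.
    with Pool(processes=4) as pool:
        features = pool.map(extract_feature, dataset)
    print(features)
```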
By the way, CPUs come from two suppliers: Intel and AMD. Until recently Intel dominated, but AMD's CPUs are catching up with tremendous momentum. The rivalry between the two is fun to watch, but when building a PC for deep learning, **the choice between them does not need much deliberation**. (For the record, I currently use an Intel CPU.)
Intel: The veteran of the CPU industry, though lately it seems wary of AMD's catch-up. Performance per core tends to be high for the price.
If you want a full-scale deep learning environment on a desktop, you will want a Core i7 or Core i9.
- I'm just dabbling and don't know if I'll keep doing deep learning → Core i5
- I'll try deep learning and continue if I enjoy it; I might get serious → Core i7
- I will definitely continue and do full-scale work → Core i7 or Core i9
- I will definitely continue, do full-scale work, and apply complicated preprocessing to each sample → Core i9

Choosing along these lines works well.
For laptops, the choice is almost always a Core i7 (most laptops with a discrete GPU ship with one). In any case, deep learning performance on a laptop is dominated by the installed GPU, so choose the GPU first, then pick the machine with the best CPU your budget allows.
Pros:
- High performance per core
- Stable operation
- Many people have documented their setups online, so documentation is easy to find
Cons:
- As of the end of March 2020, it has been a long time since the consumer Core i series was refreshed, and per-core performance loses to AMD's latest models
- Pricing is relatively high (future models are expected to be cheaper)
AMD: Intel's long-time rival, now catching up at tremendous speed. The number of cores per price tends to be high.
AMD's CPUs have been improving steadily, with a new model announced roughly every year. Relatively low prices are also attractive; because models go out of date quickly, the previous-generation Ryzen 7 2700 can be had for about 30,000 yen. Performance-wise, AMD's strength is a large number of cores for the price, so it shines in parallel workloads; when you write your data processing to run in parallel, this is a notable advantage. Some people report that parallelization did not shorten total training time much, or even lengthened it, but in my view this is not at a level worth worrying about.
Pros:
- Many cores per price
- Good at parallelized data processing
- The latest models match Intel's per-core performance
- Relatively cheap
Cons:
- Per-core performance is a little low
- Documentation on the internet, though growing, is still scarce
**Buy a motherboard with some room to expand.** Deep learning or not, when you try to do something new, you often think later, "I wish I had prepared for this." With deep learning in particular, requirements vary greatly by field: audio work demands heavy preprocessing, while image work demands large memory and storage per sample. Because you build your environment in anticipation of such changes, expandability matters most in the motherboard, the foundation of the PC. Keep the following points in mind when choosing one for deep learning.
- Choose the correct CPU socket: this goes without saying, so pick one that works with the CPU you buy
- Choose a board with 4 or more memory slots
- For ATX and E-ATX motherboards, choose one that supports SLI
- As mentioned in a later section, the two points above concern the parts most likely to be expanded later. It may feel a bit expensive, but paying up front beats having to repurchase a motherboard
- If you choose Micro-ATX or Mini-ITX, be prepared: you may come to regret it
**There is a limit to how many memory sticks you can install, so be sure to choose 16GB of memory (32GB if your motherboard supports it).** Beyond that, there is not much to say about memory: 16GB works as-is. 32GB is more than enough, but on Windows the OS itself consumes a lot of memory, so 32GB is the safe choice. Memory is mainly used when preprocessing data, so if you want to run memory-hungry preprocessing (such as Dynamic Time Warping in the audio field), you may need to add memory. If you need more, you buy more; as long as that logic holds, there is no problem. That is exactly why it is important to **leave free slots on the motherboard so you can add memory in an emergency**.
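As a concrete illustration of why preprocessing like this is memory-hungry, here is a minimal pure-Python sketch of Dynamic Time Warping. The full cost matrix it allocates grows with the product of the two sequence lengths, which is exactly what eats memory on long audio features. (The sequences and absolute-difference distance are illustrative assumptions, not a library API.)

```python
def dtw_distance(a, b):
    """Minimal DTW: O(len(a) * len(b)) time AND memory."""
    n, m = len(a), len(b)
    INF = float("inf")
    # The (n+1) x (m+1) cost matrix is the memory hog:
    # two 10,000-point feature sequences already mean ~10^8 cells.
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 3]))  # identical -> 0.0
print(dtw_distance([1, 2, 3], [2, 3, 4]))  # shifted -> 2.0
```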
In my view, **for personal deep learning you want a GPU with plenty of memory.** Time is effectively unlimited for personal deep learning, so compute power can be compromised to some degree. Be aware, however, that GPU memory can run out when using memory-hungry methods such as GAN or CycleGAN, or when working seriously with images. **GPU memory (NV-LINK aside) cannot be added later.**
In my experience, 4GB of GPU memory is the bare minimum: training barely fits, or memory runs out. Up to 8GB is enough for casual experiments, 8GB or more is needed for serious deep learning, and 11GB is quite comfortable. That said, the required amount is case by case: if you intend to get serious, investigate in advance how memory-hungry your intended method is. When I ran CycleGAN on audio, I sometimes needed nearly 10GB, and in image fields, where each sample is large, even 11GB can fall short.
The GPU is the most important part: it performs the actual training. GPU performance is determined by the number of CUDA cores (and Tensor cores) and the amount of installed memory. CUDA cores are the arithmetic units that carry out deep learning computations; the more there are, the more compute power you have. The RTX series additionally carries Tensor cores optimized specifically for deep learning, which raises compute power further. The memory holds the training data being read in and the model being trained. During training, deep learning commonly uses batch learning, in which many samples from a huge dataset are computed in parallel and their results aggregated, making learning efficient. Consequently, training today's huge models requires a huge amount of GPU memory. (You can cope to some extent by reducing the batch size so that things fit in GPU memory.)
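A rough back-of-envelope shows how batch size drives GPU memory. (The tensor shape below is an arbitrary example; real training also stores weights, gradients, optimizer state, and framework overhead, so treat this as a lower bound.)

```python
def activation_bytes(batch, channels, height, width, bytes_per_value=4):
    """Memory for ONE float32 activation tensor; real training
    keeps many such tensors plus weights and gradients."""
    return batch * channels * height * width * bytes_per_value

# Example: a batch of 64 three-channel 256x256 images, float32.
per_batch = activation_bytes(64, 3, 256, 256)
print(per_batch / 2**20, "MiB")  # -> 48.0 MiB
# Halving the batch size halves this term, which is why reducing
# the batch size is the usual first fix for out-of-memory errors.
```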
Also, when talking about deep learning hardware, the features **SLI and NV-LINK** are essential to know. (SLI is the older name; NV-LINK is the name of its latest incarnation.) SLI raises GPU performance by connecting multiple GPUs with a dedicated bridge. If your motherboard supports SLI, both SLI and NV-LINK are possible. There are fine-print conditions, however: both cards must be the same GPU (e.g., RTX2080 x 2 works, but RTX2080 + RTX2080Ti does not), and each scheme uses its own dedicated bridge. Also, housing multiple GPUs in one PC means having two large heat sources. **When installing multiple GPUs, mind the PC's exhaust heat when choosing cards; for example, pick models that exhaust outside the case.**
| | SLI | NV-LINK |
|---|---|---|
| Compatible GPUs | GPUs up to the GTX1000 series | RTX2080, RTX2080Ti, Titan RTX |
| Compute power | Sum of the GPUs (note: two cards do not simply double it) | Sum of the GPUs (again, not a simple doubling, but dramatically better than SLI) |
| Memory | That of a single GPU | Total memory of all connected GPUs |
| Bridge | Manufacturer-issued dedicated SLI bridge | Manufacturer-issued dedicated NV-LINK bridge |
Based on the above, I chose my GPU, but since the choices were wider than I expected, I explain them in several parts. **With the RTX3000 series coming in the second half of 2020, GPU choices fall into roughly two lines:** **go all-in on the RTX2000 series now**, or **make do with an RTX2000 card while waiting for the RTX3000 series**. The options for general users are as follows. (Titan RTX appears in the table but is not counted among the options this time.)
| GPU | Memory | CUDA cores (Tensor cores) | Price | Remarks |
|---|---|---|---|---|
| Titan RTX | 24GB | 4608 (576) | About 320,000 yen | Comes into play for serious image work. Supports NV-LINK, but only in-case-exhaust models exist, so NV-LINK brings exhaust-heat problems. |
| RTX 2080Ti | 11GB | 4352 (544) | About 150,000 to 200,000 yen | Supports NV-LINK. |
| RTX 2080 SUPER | 8GB | 3072 (384) | About 90,000 to 100,000 yen | Supports NV-LINK. |
| RTX 2080 | 8GB | 2944 (368) | About 90,000 to 100,000 yen | Supports NV-LINK. |
| RTX 2070 SUPER | 8GB | 2560 (320) | About 60,000 yen | No NV-LINK; little benefit from multiple cards. |
| RTX 2070 | 8GB | 2304 (288) | About 50,000 yen | No NV-LINK; little benefit from multiple cards. |
| RTX 2060 SUPER | 8GB | 2176 (272) | About 50,000 yen | No NV-LINK; little benefit from multiple cards. |
| RTX 2060 | 6GB | 1920 (240) | About 40,000 yen | No NV-LINK; little benefit from multiple cards. |
The RTX2060 is a little underpowered, but all of these have sufficient performance for deep learning. In terms of memory per yen the RTX2060 SUPER shines, but it lacks compute power, which becomes a problem for full-scale work. Once you factor in NV-LINK, the RTX2080, RTX2080 SUPER, and RTX2080Ti become attractive; for example, two RTX2080 SUPERs under NV-LINK give you 16GB of memory for about the price of one RTX2080Ti. Some example lines of thinking follow; select a GPU with NV-LINK and your future plans in mind.
- I don't intend to do full-scale deep learning, or won't until the RTX3000 series arrives → consult your budget, including non-NV-LINK cards such as the RTX 2070 SUPER
- I'm eyeing the RTX3000 series, but might do full-scale deep learning → choose the RTX2080 or RTX2080 SUPER; if the specs fall short, replace it or add NV-LINK
- I'm eyeing the RTX3000 series, and will do full-scale deep learning → choose the RTX2080 or RTX2080 SUPER and run NV-LINK from the start, or choose the RTX2080Ti and replace it or add NV-LINK if the specs fall short
- I'm not eyeing the RTX3000 series, but might do full-scale deep learning → choose from the RTX2080, RTX2080 SUPER, or RTX2080Ti; if the specs fall short, replace it or add NV-LINK
- I'm not eyeing the RTX3000 series, and will do full-scale deep learning → choose from the RTX2080, RTX2080 SUPER, or RTX2080Ti and run NV-LINK from the start, or replace it or add NV-LINK later if the specs fall short
This route suits those who **won't do full-scale deep learning**, who **want to try deep learning on an existing machine**, or who **want to make do with the GTX1000 series while waiting for the RTX3000 series**. The options for general users are as follows.
| GPU | Memory | CUDA cores | Price | Remarks |
|---|---|---|---|---|
| GTX1080Ti | 11GB | 3584 | About 80,000 yen | SLI compatible. New cards are hard to obtain; used ones trade around 100,000 yen. |
| GTX1080 | 8GB | 2560 | About 70,000 yen | SLI compatible. |
| GTX1070Ti | 8GB | 2432 | About 60,000 yen | SLI compatible. |
| GTX1070 | 8GB | 1920 | About 60,000 yen | SLI compatible. |
| GTX1660Ti | 6GB | 1536 | About 30,000 yen | Not SLI compatible. |
| GTX1660 | 6GB | 1408 | About 25,000 yen | Not SLI compatible. |
| GTX1060 | 6GB | 1280 | About 30,000 yen | Not SLI compatible. |
| GTX1650 | 4GB | 896 | About 20,000 yen | Not SLI compatible. |
| GTX1050Ti | 4GB | 768 | About 15,000 yen | Not SLI compatible. |
Unlike the RTX cards, multi-GPU with these cards means SLI rather than NV-LINK, and SLI's bottleneck is that usable GPU memory does not increase. With that in mind, it is undeniable that every model here lacks power and appeal versus the RTX2000 series for full-scale deep learning. This route fits better when your existing PC already carries one of these GPUs. The GTX1080Ti, GTX1080, GTX1070Ti, and GTX1070 are still at a practical level and can serve as active players, and even the cards below them are strong enough: for light tasks (handwriting recognition, sound discrimination, and the like), I have run deep learning on a desktop with a GTX 1060 and a laptop with a GTX 1650. If your existing machine has such a GPU, you can try deep learning right away just by setting up the environment.
**Please reconsider.** Even if you are waiting for the RTX3000 series, I think you should avoid the GTX 900 series or earlier at this point. You can certainly build a PC cheaply with them, but the GTX1000 series is the better stopgap.
In fact, deep learning is possible on AMD GPUs too: the Radeon Open Compute project lets you use CUDA-based development resources on AMD GPUs. However, it is still hard to count as mainstream in the research community, and I have no development experience with it. There is no need to throw a curveball when building a new environment, so I won't cover it in this article.
**There is not much to say about storage.** I think it is best to install the OS on an SSD and place datasets on an HDD. For the HDD, think purely in terms of "can it hold my datasets?" In my case, the OS lives on a 240GB SSD, and I keep 1 to 2TB of free space on HDDs. As a small tip: mounting SSDs and HDDs in your PC's open bays and swapping them as needed improves the QOL of your deep-learning life.
**For the power supply, budget roughly 500W for everything other than the GPU (margin included), plus 250W per GPU.** That means 750W for one GPU and 1000W for two. Even if you have one GPU now, if there is any chance you will add a second later, consider buying a 1000W unit up front.
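That rule of thumb can be written down in a few lines. (The 500W base and 250W-per-GPU figures are this article's rough estimates, not measured values; always check your actual card's rated draw.)

```python
def psu_watts(num_gpus, base_watts=500, gpu_watts=250):
    # Rough sizing rule from this article: base system plus margin,
    # plus a flat allowance per GPU. Not a substitute for checking
    # the GPU vendor's recommended PSU wattage.
    return base_watts + gpu_watts * num_gpus

print(psu_watts(1))  # -> 750
print(psu_watts(2))  # -> 1000
```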
The above summarizes my experience and considerations from actually building a PC environment for deep learning.
Incidentally, my newly built environment is a Core i7-9700F, 32GB of memory, one RTX2080 SUPER (a model that exhausts outside the case), and a 1000W power supply. (The policy is to keep all storage in open bays and swap drives as needed.) While waiting for news of the RTX3000 series, my plan is to add a second GPU if the need arises.
Incidentally, when I once bought an RTX2080Ti, I had the bitter experience of giving up on expansion because I had chosen a model that exhausts inside the case without considering heat. I hope this article is helpful to those interested in deep learning and those who want to try it in the future.