[PyTorch] CPU vs. GPU vs. TPU [Fine Tuning]

Background

I tried the fine tuning from section 1-5 of the book [Learn by Making! Advanced Deep Learning with PyTorch](https://www.amazon.co.jp/%E3%81%A4%E3%81%8F%E3%82%8A%E3%81%AA%E3%81%8C%E3%82%89%E5%AD%A6%E3%81%B6%EF%BC%81PyTorch%E3%81%AB%E3%82%88%E3%82%8B%E7%99%BA%E5%B1%95%E3%83%87%E3%82%A3%E3%83%BC%E3%83%97%E3%83%A9%E3%83%BC%E3%83%8B%E3%83%B3%E3%82%B0-%E5%B0%8F%E5%B7%9D-%E9%9B%84%E5%A4%AA%E9%83%8E-ebook/dp/B07VPDVNKW) on Google Colaboratory. (You can see all the code on the author's GitHub.)
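
For context, the book's section 1-5 fine-tunes a pretrained VGG-16 on a small two-class image dataset. Below is a minimal sketch of that idea, assuming the VGG-16 setup of the book's chapter 1; the learning rates and parameter grouping are illustrative, not the book's exact settings:

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of fine tuning, assuming the book's VGG-16 two-class setup;
# learning rates and parameter grouping are illustrative only.
net = models.vgg16(pretrained=True)
net.classifier[6] = nn.Linear(in_features=4096, out_features=2)  # new head

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)

criterion = nn.CrossEntropyLoss()
# Unlike transfer learning (training the head only), fine tuning also
# updates the pretrained layers, typically with a smaller learning rate
# than the freshly initialized head.
optimizer = torch.optim.SGD([
    {"params": net.features.parameters(), "lr": 1e-4},
    {"params": net.classifier.parameters(), "lr": 1e-3},
], momentum=0.9)
```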

Important points

- You need to run the contents of make_folders_and_data_downloads.ipynb inside the 1-5 notebook itself.
- You need to create a utils folder next to the notebook and copy the files from the utils folder in the GitHub repository into it. A setup sketch follows this list.
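
Here is a minimal Colab setup sketch for the points above. The repository URL and folder layout are my assumptions, not confirmed by this post; verify them against the book's actual GitHub page:

```python
# Hypothetical Colab setup cell; the repository name and folder layout
# are assumptions; verify them against the author's GitHub page.
!git clone https://github.com/YutaroOgawa/pytorch_advanced.git
%cd pytorch_advanced/1_image_classification

# Each Colab runtime starts with an empty disk, so the data-download
# steps from make_folders_and_data_downloads.ipynb must be re-run here,
# e.g. by executing that notebook in place:
!jupyter nbconvert --to notebook --execute make_folders_and_data_downloads.ipynb
```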

Execution result

CPU (cpu.png): It took about 10 minutes.

GPU (gpu.png): It took less than 20 seconds.

TPU (tpu.png): This also took about 10 minutes.

Comments

- The GPU was overwhelmingly fast. The CPU, on the other hand, didn't take as much time as I expected. The disappointing TPU result comes down to the TPU runtime being treated as a plain CPU: later research revealed that to use a TPU with PyTorch, you have to write extra code to enable it. I will try that next time. A sketch of the device check is shown after this list.
- In the [book](https://www.amazon.co.jp/%E3%81%A4%E3%81%8F%E3%82%8A%E3%81%AA%E3%81%8C%E3%82%89%E5%AD%A6%E3%81%B6%EF%BC%81PyTorch%E3%81%AB%E3%82%88%E3%82%8B%E7%99%BA%E5%B1%95%E3%83%87%E3%82%A3%E3%83%BC%E3%83%97%E3%83%A9%E3%83%BC%E3%83%8B%E3%83%B3%E3%82%B0-%E5%B0%8F%E5%B7%9D-%E9%9B%84%E5%A4%AA%E9%83%8E-ebook/dp/B07VPDVNKW), a GPU on AWS EC2 was used, but it turns out that this kind of processing can actually run even on the free tier of Google Colaboratory.
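
A minimal sketch of why a TPU runtime silently falls back to the CPU: the notebooks presumably select the device with the common torch.cuda.is_available() pattern, which returns False on a TPU runtime. The torch_xla lines are illustrative and assume the package is installed on the Colab runtime:

```python
import torch

# Common device-selection pattern: on a TPU runtime CUDA is unavailable,
# so this silently falls back to the CPU, which matches the TPU timing above.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)  # prints "cpu" on a Colab TPU runtime

# Using the TPU from PyTorch requires the torch_xla package
# (illustrative; torch_xla must be installed on the runtime first):
# import torch_xla.core.xla_model as xm
# device = xm.xla_device()  # an XLA device backed by the TPU
```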
