I mainly referred to this Docker blog (ver. 20279.1 was the latest at the time).
docker pull on WSL2:
$ docker pull nvidia/cuda
Check if Docker can recognize the GPU
$ docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
↓ If Docker Desktop (backed by WSL2) recognizes the GPU, you will get a result like this ↓
PS C:\Users\*****> docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
MapSMtoCores for SM 7.5 is undefined. Default to use 64 Cores/SM
GPU Device 0: "GeForce RTX 2070 SUPER" with compute capability 7.5
> Compute 7.5 CUDA device: [GeForce RTX 2070 SUPER]
40960 bodies, total time for 10 iterations: 59.828 ms
= 280.426 billion interactions per second
= 5608.513 single-precision GFLOP/s at 20 flops per interaction
Build your preferred environment with Jupyter Notebook and the Colab extension jupyter_http_over_ws installed.
This time, use the [TensorFlow image](https://hub.docker.com/r/tensorflow/tensorflow/tags?page=1&ordering=last_updated) on Docker Hub.
$ docker pull tensorflow/tensorflow:1.15.4-gpu-py3-jupyter
The above command gives you an environment with TensorFlow 1.15.4 (GPU version), Python 3, and Jupyter Notebook all included.
You can pull whichever image matches the environment you need. For TensorFlow, select an image from tensorflow/tensorflow on Docker Hub. If you choose an image with a tag like tensorflow/tensorflow:***-jupyter, you can skip installing Jupyter Notebook yourself.
Launch the image pulled earlier. To use this container as a local runtime in **Google Colaboratory**, I added Jupyter Notebook options at `docker run` time. Reference: Stack Overflow.
$ docker run -it --rm --gpus=all -p 8888:8888 tensorflow/tensorflow:1.15.4-gpu-py3-jupyter \
jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root \
--NotebookApp.allow_origin='https://colab.research.google.com'
↓ Execution result ↓
Use the http://127.0.0.1:8888/?token=**** URL from the output to access **Colaboratory** as the local runtime.
PS C:\Users\****> docker run -it --rm --gpus=all -p 8888:8888 tensorflow/tensorflow:1.15.4-gpu-py3-jupyter jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root --NotebookApp.allow_origin='https://colab.research.google.com'
[I 16:24:42.162 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
jupyter_http_over_ws extension initialized. Listening on /http_over_websocket
[I 16:24:42.372 NotebookApp] Serving notebooks from local directory: /tf
[I 16:24:42.372 NotebookApp] Jupyter Notebook 6.1.4 is running at:
[I 16:24:42.372 NotebookApp] http://42009a4c9bbb:8888/?token=*************************************************
[I 16:24:42.372 NotebookApp] or http://127.0.0.1:8888/?token=*************************************************
[I 16:24:42.372 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 16:24:42.376 NotebookApp]
To access the notebook, open this file in a browser:
file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
Or copy and paste one of these URLs:
http://42009a4c9bbb:8888/?token=*************************************************
or http://127.0.0.1:8888/?token=******************************************************
In Colaboratory, choose "Connect to local runtime" from the Connect menu. Take the http://127.0.0.1:8888/?token=**** URL obtained earlier, change it to http://localhost:8888/?token=**** and enter it (127.0.0.1 → localhost).
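The address change above is just a string substitution. As a small sketch (the `to_localhost` function name and the token value are hypothetical, not from the original post):

```python
# Sketch: rewrite the URL printed by Jupyter so it uses localhost instead of
# 127.0.0.1 before pasting it into Colaboratory's "Connect to local runtime".
def to_localhost(url: str) -> str:
    # Replace only the first occurrence of the loopback address
    return url.replace("127.0.0.1", "localhost", 1)

print(to_localhost("http://127.0.0.1:8888/?token=abcd1234"))
# http://localhost:8888/?token=abcd1234
```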
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
If the GPU is recognized as shown below, you're done!
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 265271654709027711,
name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 12989474002923935858
physical_device_desc: "device: XLA_CPU device",
name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 12219956629618833082
physical_device_desc: "device: XLA_GPU device",
name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 7125876736
locality {
bus_id: 1
links {
}
}
incarnation: 1423521064690955886
physical_device_desc: "device: 0, name: GeForce RTX 2070 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5"]
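If you'd rather check programmatically than eyeball the list, one option (a sketch; `has_gpu` is a hypothetical helper, not part of TensorFlow) is to filter the device names returned by `device_lib.list_local_devices()`:

```python
# Sketch: check whether any plain GPU device (not XLA_GPU) appears among
# the device names reported by device_lib.list_local_devices().
def has_gpu(device_names):
    return any(name.startswith("/device:GPU:") for name in device_names)

# Device names taken from the output above:
names = ["/device:CPU:0", "/device:XLA_CPU:0", "/device:XLA_GPU:0", "/device:GPU:0"]
print(has_gpu(names))  # True
```

In a real session you would pass `[d.name for d in device_lib.list_local_devices()]` instead of the hard-coded list.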
Recently (as of December 26, 2020), the Win10 x WSL2 x Docker x CUDA stack has started to work, so I tried applying it to Google Colaboratory's local runtime connection.
I expect these features to be released officially before long, at which point this environment-setup section will probably become unnecessary.