Google has released TensorFlow, a machine learning library that covers deep learning among other things, so I tried it out right away. http://japanese.engadget.com/2015/11/09/google-tensorflow/
Environment: Ubuntu 14.04, GeForce GTX 580, CUDA 7.0. CUDA was already installed by following the procedure here.
Just install with pip.
pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.5.0-cp27-none-linux_x86_64.whl
Check that it was installed properly:
$ python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:77] LD_LIBRARY_PATH: /usr/local/cuda-7.0/lib64:
I tensorflow/stream_executor/cuda/cuda_dnn.cc:1062] Unable to load cuDNN DSO.
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
I tensorflow/core/common_runtime/local_device.cc:25] Local device intra op parallelism threads: 8
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:888] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:88] Found device 0 with properties:
name: GeForce GTX 580
major: 2 minor: 0 memoryClockRate (GHz) 1.544
pciBusID 0000:01:00.0
Total memory: 1.50GiB
Free memory: 1023.72MiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:112] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:122] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:611] Ignoring gpu device (device: 0, name: GeForce GTX 580, pci bus id: 0000:01:00.0) with Cuda compute capability 2.0. The minimum required Cuda capability is 3.5.
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
>>> print sess.run(hello)
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print sess.run(a+b)
42
>>> exit()
It grumbles a bit, but it seems to work for the time being. Those messages appear because this build has GPU support.
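Incidentally, the launch documentation describes a session option for logging where each op is placed; a minimal sketch, assuming the log_device_placement flag behaves on 0.5.0 as the docs describe:

import tensorflow as tf

a = tf.constant(10)
b = tf.constant(32)
# log_device_placement makes TensorFlow report which device each op runs on
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
print sess.run(a + b)  # device assignments are printed to the console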
To run computations on the GPU, you need a supported GPU plus the CUDA Toolkit 7.0 and cuDNN 6.5 v2. CUDA was already in place from when I installed Chainer, so I figured cuDNN was all I was missing, but it turned out my GPU itself is unsupported. The official docs say:
TensorFlow GPU support requires having a GPU card with NVidia Compute Capability >= 3.5. Supported cards include but are not limited to:
So what is Compute Capability? NVIDIA's Questions and Answers page explains:
Q: What is Compute Capability? And is there a list of GPU compute capabilities? A: Compute Capability is the version of the GPU architecture. For example, compute capability 2.0 and 2.1 are Fermi, 3.0 and 3.5 are Kepler, and 5.0 is Maxwell.
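If you would rather query the card than look it up in a table, here is a minimal sketch using PyCUDA (an assumption on my part: PyCUDA is a separate package, installed with e.g. pip install pycuda, and is not something TensorFlow itself needs):

import pycuda.driver as cuda

cuda.init()
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    # TensorFlow's GPU build needs this to be >= 3.5
    print 'Device %d: %s, compute capability %d.%d' % (i, dev.name(), major, minor)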
My GeForce GTX 580 is compute capability 2.0, so TensorFlow cannot use it for GPU computation. So this time I will run MNIST on the CPU only.
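As the session log above shows, TensorFlow simply ignores the unsupported GPU and falls back to the CPU by itself. For reference, ops can also be pinned to the CPU explicitly with tf.device; a minimal sketch:

import tensorflow as tf

# Force these ops onto the CPU regardless of what GPUs are visible
with tf.device('/cpu:0'):
    a = tf.constant(10)
    b = tf.constant(32)
    c = a + b

sess = tf.Session()
print sess.run(c)  # 42, computed on the CPU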
For an explanation of MNIST, the official site has two tutorials, one for people who already know MNIST and [one for people who don't](http://tensorflow.org/tutorials/mnist/beginners/index.md), so refer to those. Simply put, MNIST is handwritten digit recognition.
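To give a feel for the beginners tutorial, here is a minimal softmax-regression sketch along its lines (input_data is the download helper that comes with the official tutorials; treat the exact API details as assumptions taken from the docs):

import tensorflow as tf
import input_data  # helper script from the official MNIST tutorial

# Download the MNIST data and load it as train/validation/test sets
mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

# A single-layer softmax model: 784 pixels in, 10 digit classes out
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy loss against the one-hot labels, trained with SGD
y_ = tf.placeholder("float", [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Accuracy on the held-out test set (the tutorial reports around 91%)
correct = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct, "float"))
print sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})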
Next, get the source and run the MNIST sample:
git clone --recurse-submodules https://github.com/tensorflow/tensorflow
python tensorflow/models/image/mnist/convolutional.py
Running this downloads the MNIST handwritten digit data and starts training a convolutional NN. The training progress streams by as it runs:
I tensorflow/core/common_runtime/local_session.cc:45] Local session inter op parallelism threads: 8
Initialized!
Epoch 0.00
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Epoch 0.12
Minibatch loss: 3.285, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.0%
Epoch 0.23
Minibatch loss: 3.473, learning rate: 0.010000
Minibatch error: 10.9%
Validation error: 3.7%
Epoch 0.35
Minibatch loss: 3.221, learning rate: 0.010000
Minibatch error: 4.7%
Validation error: 3.2%
That was all it took, which was almost anticlimactic. Well, CUDA is always a source of grief, but next time I'd like to get this running on the GPU.