Lua version Deep Learning from scratch Part 6 [Neural network inference processing]

Past articles

- Lua version Deep Learning from scratch Part 1 [Implementation of Perceptron]
- Lua version Deep Learning from scratch Part 2 [Activation function]
- Lua version Deep Learning from scratch Part 3 [Implementation of 3-layer neural network]
- [Lua version Deep Learning from scratch Part 4 [Implementation of softmax function]](http://qiita.com/Kazuki-Nakamae/items/20e53a02a8b759583d31)
- Lua version Deep Learning from scratch Part 5 [Display MNIST image]
- Lua version Deep Learning from scratch Part 5.5 [Making pkl files available in Lua Torch]

Neural network inference processing

This time we run NN inference on the MNIST image data. The original book reads a pkl file in which the weights and biases have been determined in advance, so here the same processing is done in Torch using that pkl file. For how to read pkl files, see Lua version Deep Learning from scratch Part 5.5 [Making pkl files available in Lua Torch]. The script looks like this:

neuralnet_mnist.lua


require './activationFunc.lua'
require './softmax.lua'
require './exTorch.lua'
require 'image'
npy4th = require 'npy4th' --https://github.com/htwaijry/npy4th (Author:Hani Altwaijry)

---Data acquisition function
--Acquire MNIST data.
-- @return Test data images, test data labels (Type:ByteTensor)
function get_data()
    --download
    local tar = 'http://torch7.s3-website-us-east-1.amazonaws.com/data/mnist.t7.tgz'
    if not paths.dirp('mnist.t7') then
        os.execute('wget ' .. tar)
        os.execute('tar xvf ' .. paths.basename(tar))
    end
    --get data
    local test_file = 'mnist.t7/test_32x32.t7'
    local testData = torch.load(test_file,'ascii')

    return testData['data'], testData['labels']
end

---Network generation function.
--Returns a 3-layer NN with predetermined weights and biases. This time we use the pkl weights converted to npz (see Part 5.5).
-- @return 3 layer NN(Type:table)
function init_network()
    local network = npy4th.loadnpz('sample_weight.npz')

    return network
end

---Classification function.
--Classification calculation is performed according to the network.
-- @param network network(Type:table)
-- @param x input(Type:Tensor)
-- @return output(Type:torch.DoubleTensor)
function predict(network, x)
    local W1, W2, W3 = network['W1'], network['W2'], network['W3']
    local b1, b2, b3 = network['b1'], network['b2'], network['b3']

    --Reshape the first input into a 1x784 row vector to match the numpy weight format
    local a1 = mulTensor(x:resize(1,W1:size()[1]), W1) + b1:double()
    local z1 = sigmoid(a1)
    local a2 = mulTensor(z1,W2) + b2:double()
    local z2 = sigmoid(a2)
    local a3 = mulTensor(z2,W3) + b3:double()
    local y = softmax(a3)

    return y

end

local x, t = get_data()
local network = init_network()

local accuracy_cnt = 0
for i = 1, x:size()[1] do
    local scaledx = image.scale(x[i], 28, 28) --Reduce the 32x32 image to 28x28
    local y = predict(network, scaledx)
    local value, indices = torch.max(y, 2) --torch.max( , 2) takes the maximum along dimension 2 (across each row)
    local p = tensor2scalar(indices) --the result is a tensor, so convert it to a scalar
    if p == t[i] then
        accuracy_cnt = accuracy_cnt + 1
    end
end

print("Accuracy:"..(accuracy_cnt/x:size()[1]))

The original book uses 28x28 MNIST data, but the images used here are 32x32, so each image is first reduced with image.scale(image data, 28, 28) before inference. Note that the result therefore differs from the original. For activationFunc.lua and softmax.lua, see Parts 2 and 4 of this series; a minimal sketch of the two functions this script relies on is given below. exTorch.lua contains auxiliary functions defined to improve readability.
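If you do not have the files from the earlier parts at hand, here is a minimal sketch of what activationFunc.lua and softmax.lua need to provide for this script. This is an assumption reconstructed from Parts 2 and 4, not the exact contents of those files:

---Sigmoid activation, applied element-wise (sketch of activationFunc.lua).
--Assumes a DoubleTensor input, as produced inside predict().
function sigmoid(x)
    -- 1 / (1 + exp(-x)), element-wise
    return torch.cdiv(torch.ones(x:size()), torch.exp(-x) + 1)
end

---Numerically stable softmax (sketch of softmax.lua).
function softmax(a)
    local c = torch.max(a)         --subtract the maximum to avoid overflow in exp
    local exp_a = torch.exp(a - c)
    return exp_a / torch.sum(exp_a)
end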

exTorch.lua


---Product of two tensors.
--Computes the product AB, dispatching on the dimensionality of the operands.
-- @param A A (Type:Tensor)
-- @param B B (Type:Tensor)
-- @return AB (Type:torch.DoubleTensor)
function mulTensor(A, B)
    A = A:double()
    B = B:double()
    local AB = nil
    if (A:dim() == 1 and B:dim() ~= 1) then
        --1-D vector x matrix: compute B^T * A, which equals A*B as a 1-D vector
        AB = torch.mv(B:t(), A)
    else
        --others
        AB = A*B
    end
    return AB
end

---Scalar conversion function for tensor
--Convert a 1-element tensor to a scalar.
-- @param tensor 1-element tensor(Type:Tensor)
-- @return scalar(Type:number)
function tensor2scalar(tensor)
    return tensor:view(-1)[1] --flatten first so both a 1x1 tensor and a 1-element 1-D tensor yield a number
end
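As a quick sanity check of exTorch.lua (a hypothetical snippet; the tensors are made up), mulTensor behaves like numpy's dot in both the vector-matrix and matrix-matrix cases:

require './exTorch.lua'

local v = torch.DoubleTensor{1, 2}                  --1-D vector
local M = torch.DoubleTensor{{1, 2, 3}, {4, 5, 6}}  --2x3 matrix

print(mulTensor(v, M))     --1-D result {9, 12, 15}, like numpy's v.dot(M)
print(mulTensor(M:t(), M)) --ordinary 3x2 * 2x3 matrix product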

Execution example


$ th neuralnet_mnist.lua
Accuracy:0.8616	

Because the images are reduced, the result differs from the Python version, but the inference itself appears to work correctly.

Much of the code this time exists only to force a match with the Python data; without that, the inference processing takes about the same effort as the original. The one point that needs more care than in Python is type handling. Python converts types implicitly, but Lua Torch is strict about tensor types, and an operation on mismatched types returns an error, as the short sketch below illustrates.
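(A hypothetical snippet for illustration; the tensor names are made up.)

local b = torch.ByteTensor(3):fill(1)      --integer tensor
local d = torch.DoubleTensor(3):fill(0.5)  --floating-point tensor
--print(b + d)        --error: the tensor types do not match
print(b:double() + d) --convert explicitly first

If you know you are doing integer arithmetic, use an integer type such as byte or long; if you are working with decimals, use the double type. Basic as it is, it is best to avoid the float type where possible, because floating-point arithmetic always carries rounding error. If you are not convinced, try the code below.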

cancellation.lua


--Add 1e-6 one million times in single precision (float)
local floatT = torch.Tensor(1):fill(0):float()
for i = 1, 1000000 do
    floatT = floatT + torch.Tensor(1):fill(0.000001):float()
end
--The same accumulation in double precision
local doubleT = torch.Tensor(1):fill(0):double()
for i = 1, 1000000 do
    doubleT = doubleT + torch.Tensor(1):fill(0.000001):double()
end
print("float : ")
print(floatT)
print("double : ")
print(doubleT)

Execution example


$ th cancellation.lua
float : 	
 1.0090
[torch.FloatTensor of size 1]

double : 	
 1.0000
[torch.DoubleTensor of size 1]

Both sums should come to exactly 1, but the float result drifts because of accumulated rounding error: a 32-bit float carries only about 7 significant decimal digits, so once the running total approaches 1, each added 0.000001 gets rounded. Double is not immune to this kind of error, but it is much safer than float.
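Incidentally, plain Lua numbers are already double precision, so the same accumulation in pure Lua behaves like the DoubleTensor case (a small illustrative snippet):

local s = 0
for i = 1, 1000000 do
    s = s + 0.000001
end
print(string.format("%.6f", s)) --prints 1.000000; the residual rounding error sits far below this display precision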

In conclusion

That's all for this time.

Next time, I will try implementing batch processing on top of this inference processing. Thank you very much.

Postscript (2017/06/24)

Doing this with reduced images still feels unsatisfying, so let us also export the MNIST data from Python. Copy the dataset directory from the original book's supplementary files into the current directory, and run the following script.

saveMMISTtest.py


# coding: utf-8
import sys, os
import numpy as np
import pickle
from dataset.mnist import load_mnist

(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=False)

np.save("x_test",x_test)
np.save("t_test",t_test)

Execution example


$ python3 saveMMISTtest.py

This outputs x_test.npy and t_test.npy, so read them in get_data().

neuralnet_mnist2.lua


require './activationFunc.lua'
require './softmax.lua'
require './exTorch.lua'
npy4th = require 'npy4th' --https://github.com/htwaijry/npy4th (Author:Hani Altwaijry)

---Data acquisition function
--Acquire MNIST data.
-- @return Test data images, test data labels (loaded from .npy files)
function get_data()
    --image data
    local x_test = npy4th.loadnpy('x_test.npy')
    --Label data
    local t_test = npy4th.loadnpy('t_test.npy')

    return x_test, t_test
end

---Network generation function.
--Returns a 3-layer NN with predetermined weights and biases. This time we use the pkl weights converted to npz (see Part 5.5).
-- @return 3 layer NN(Type:table)
function init_network()
    local network = npy4th.loadnpz('sample_weight.npz')

    return network
end

---Classification function.
--Classification calculation is performed according to the network.
-- @param network network(Type:table)
-- @param x input(Type:Tensor)
-- @return output(Type:torch.DoubleTensor)
function predict(network, x)
    local W1, W2, W3 = network['W1'], network['W2'], network['W3']
    local b1, b2, b3 = network['b1'], network['b2'], network['b3']

    --The input from the .npy file is already a flattened 1-D vector in the numpy format, so no resize is needed
    local a1 = mulTensor(x, W1) + b1:double()
    local z1 = sigmoid(a1)
    local a2 = mulTensor(z1,W2) + b2:double()
    local z2 = sigmoid(a2)
    local a3 = mulTensor(z2,W3) + b3:double()
    local y = softmax(a3)

    return y

end

local x, t = get_data()
local network = init_network()

local accuracy_cnt = 0
for i = 1, x:size()[1] do
    local y = predict(network, x[i])
    local value, indices = torch.max(y, 1) --torch.max( , 1) takes the maximum along dimension 1
    local p = tensor2scalar(indices) - 1 --Torch indices start at 1 but the Python labels start at 0, so shift by 1
    if p == t[i] then
        accuracy_cnt = accuracy_cnt + 1
    end
end

print("Accuracy:"..(accuracy_cnt/x:size()[1]))

Execution example


$ th neuralnet_mnist2.lua
Accuracy:0.9352

With this, I was able to produce the same output as the original.
