The predictions made by the multi-layer perceptron (MLPClassifier) implemented in scikit-learn did not match the predictions of my self-made neural network, even though it used the weight and bias matrices that had been learned in advance.
The cause: I didn't standardize the input data ...
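For context, the weight and bias matrices come from a trained MLPClassifier. As a minimal sketch of how they can be exported to CSV (the toy data and file names here are my own placeholders, not from the original post):

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy stand-in for the real training data (the original data is not shown)
X_train, y_train = make_classification(n_samples=100, n_features=4, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# coefs_ holds one weight matrix per layer transition; intercepts_ the bias vectors
for i, (w, b) in enumerate(zip(clf.coefs_, clf.intercepts_)):
    pd.DataFrame(w).to_csv("weight{}.csv".format(i))
    pd.DataFrame(b).to_csv("bias{}.csv".format(i))

The failing code, with the fix marked, looks like this: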
import numpy as np
import os
import sys
import pandas as pd  # library for handling DataFrames
from math import exp
from sklearn.preprocessing import StandardScaler  # this import was missing
input = [[array of input data]]
input = np.array(input)
df = pd.read_csv("data used for standardization of the input (the data used at training time)")
df_x = pd.get_dummies(columns where you want to generate dummy variables)
# Standardization! (this is the step I had forgotten)
sc = StandardScaler()
sc.fit(df_x)  # fit the scaler on the same data the model was trained on
input = sc.transform(input)  # apply the identical scaling to the new input
bias0 = pd.read_csv("bias CSV output from MLPClassifier", header=None)
bias0 = bias0.iloc[1:, 1:].to_numpy()  # as_matrix() is deprecated in newer pandas
weight0 = pd.read_csv("weight CSV output from MLPClassifier", header=None)
weight0 = weight0.iloc[1:, 1:].to_numpy()
# Weight · Input + Bias (linear combination for the first hidden layer)
layer0 = np.dot(weight0.T, input.T) + bias0
# Activate the hidden layer. Here the ramp function (ReLU) was used.
layer0 = np.clip(layer0, 0, np.finfo(layer0.dtype).max, out=layer0)
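As an aside, the clip above mirrors how scikit-learn applies ReLU internally; an equivalent and perhaps more readable form (my own rewrite, not from the original post) is:

layer0 = np.maximum(layer0, 0)  # ReLU: element-wise max(0, x)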
Only the hidden layers pass the linear combination of inputs and weights through this activation function. The final output layer is instead activated by a sigmoid function.
output_layer = np.dot(weights of the last hidden layer, data entering the last hidden layer) + bias of the last hidden layer
# Activate with the sigmoid function
1 / (1 + exp(-output_layer[0, 0]))
I think the index is [0, 0] here because this is a binary classification problem, so the output layer has a single unit.
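To check the whole pipeline end to end, here is a minimal self-contained sketch (my own, not taken from the original post) that trains an MLPClassifier on standardized toy data, replicates the forward pass from coefs_ and intercepts_, and compares the result against predict_proba:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Toy binary classification data
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Fit the scaler on the training data and standardize
sc = StandardScaler()
X_std = sc.fit_transform(X)

clf = MLPClassifier(hidden_layer_sizes=(10,), activation='relu',
                    max_iter=2000, random_state=0)
clf.fit(X_std, y)

def manual_forward(x_row):
    """Replicate the MLPClassifier forward pass for one sample."""
    a = x_row
    # Hidden layers: linear combination followed by ReLU
    for w, b in zip(clf.coefs_[:-1], clf.intercepts_[:-1]):
        a = np.maximum(a @ w + b, 0)
    # Output layer: linear combination followed by sigmoid (binary case)
    z = a @ clf.coefs_[-1] + clf.intercepts_[-1]
    return 1.0 / (1.0 + np.exp(-z))

# The manual probability should match predict_proba for class 1
p_manual = manual_forward(X_std[0])[0]
p_sklearn = clf.predict_proba(X_std[:1])[0, 1]
print(p_manual, p_sklearn)  # should agree to numerical precision

Note that this sketch uses the row-vector convention a @ w, which is the transpose of the weight0.T and input.T form above; both compute the same linear combination.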