When implementing a model yourself in TensorFlow, it is convenient to inherit from the classes Keras already provides, but it took me a while to understand how, so I am writing it down before I forget.
The following was created with reference to the [official tutorial](https://www.tensorflow.org/tutorials/customization/custom_layers?hl=ja). It is a Residual Block that passes the input through two convolution layers, adds the input back to the result, and produces an output whose spatial size is half that of the input.
```python
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Add, Activation

class ResidualBlock(tf.keras.Model):
    def __init__(self, filters, kernel_size=2, block_name=''):
        # Boilerplate: initialize the parent class
        super(ResidualBlock, self).__init__()
        # 1x1 convolution on the shortcut so its size matches the output before the add
        self.conv0 = Conv2D(filters, 1, strides=2, padding='same', name=block_name + '_conv0')
        self.bn0 = BatchNormalization(name=block_name + '_bn0')
        # Convolution layers (the first one halves the spatial size)
        self.conv1 = Conv2D(filters, kernel_size, strides=2, padding='same', activation='relu', name=block_name + '_conv1')
        self.bn1 = BatchNormalization(name=block_name + '_bn1')
        self.conv2 = Conv2D(filters, kernel_size, padding='same', activation='relu', name=block_name + '_conv2')
        self.bn2 = BatchNormalization(name=block_name + '_bn2')
        # Add the shortcut to the convolution output
        self.add = Add(name=block_name + '_add')
        self.out = Activation('relu', name=block_name + '_out')

    def call(self, x):
        shortcut = self.conv0(x)
        shortcut = self.bn0(shortcut)
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = self.add([shortcut, x])
        x = self.out(x)
        return x
```
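As a quick sanity check (a sketch; the input shape and filter count are arbitrary), calling the block on a dummy tensor confirms that the spatial size is halved:

```python
block = ResidualBlock(filters=64, block_name='res1')
x = tf.random.normal((1, 32, 32, 3))  # one dummy 32x32 RGB image
y = block(x)
print(y.shape)  # (1, 16, 16, 64): spatial size halved, channels equal to filters
```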
Create the layers you need in `__init__` and implement the computation flow in `call`.

Customized models and the models provided in `tf.keras.applications` can be combined with `tf.keras.models.Sequential` to build new models (a usage sketch follows the `IdentityLayer` code below). There are times when you want to access the layers inside the combined model, such as when creating a heatmap, but as far as I could check this is not possible. Therefore, create a single layer that returns the output of the model unchanged.
```python
class IdentityLayer(tf.keras.layers.Layer):
    def __init__(self):
        super(IdentityLayer, self).__init__()

    def call(self, x):
        return x
```
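For example, here is a minimal sketch of stacking a pretrained `tf.keras.applications` model, the pass-through layer, and a classification head with `Sequential`; the choice of MobileNetV2, the input size, and the 10-class head are illustrative assumptions, not from the original:

```python
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights='imagenet')

model = tf.keras.models.Sequential([
    base,
    IdentityLayer(),  # pass-through point exposing the base model's output
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.summary()
```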
The procedure is almost the same as customizing a model.
Created with reference to the official API documentation. The following metric counts True Positives for categorical (one-hot) inputs and outputs.
```python
import tensorflow as tf
from tensorflow.keras import backend as K

class TP_metric(tf.keras.metrics.Metric):
    def __init__(self, name='TP', **kwargs):
        # Boilerplate: initialize the parent class
        super(TP_metric, self).__init__(name=name, **kwargs)
        # Internal variable that accumulates the count
        self.value = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Convert the one-hot vectors to arrays of labels
        y_true = tf.argmax(y_true, axis=-1)
        y_true = tf.cast(y_true, K.floatx())
        y_pred = tf.argmax(y_pred, axis=-1)
        y_pred = tf.cast(y_pred, K.floatx())
        # If both the true label and the predicted label are 1, TP increases by 1
        tmp = tf.equal(tf.add(y_true, y_pred), 2)
        tmp = tf.cast(tmp, K.floatx())
        # Update the internal variable
        self.value.assign_add(tf.reduce_sum(tmp))

    def result(self):
        return self.value
```
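The metric can then be passed to `compile` like any built-in one (a minimal sketch; `model` stands for any Keras model with one-hot categorical outputs):

```python
model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',  # one-hot labels, matching the metric above
    metrics=['accuracy', TP_metric()],
)
```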
Type errors come up surprisingly often here (even though this is Python), so cast diligently. At first I tried to make the metric work with sparse categorical labels, but for some reason the shape of `y_true` came out the same as that of `y_pred`. Instead, I adjusted the label dataset to be one-hot.
```python
def one_hot(x):
    return tf.one_hot(x, num_classes)  # num_classes: the number of categories

label_ds = tf.data.Dataset.from_tensor_slices(tf.cast(label_list, tf.int32))
label_ds = label_ds.map(one_hot)
```
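The one-hot label dataset can then be zipped with the corresponding images for training (a sketch; `image_ds` is a hypothetical `tf.data.Dataset` of images aligned with `label_list`):

```python
ds = tf.data.Dataset.zip((image_ds, label_ds)).shuffle(1000).batch(32)
model.fit(ds, epochs=5)
```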
Now that I am using both TF 1.x and TF 2.x, I feel like I understand eager execution a little better.