I will show you how to fix the random number seed in TensorFlow 2.x (tf.keras).
The code used for the tests can be found here: https://github.com/tokusumi/tf-keras-random-seed.
In machine learning development, there are common demands such as "I want to make training reproducible" and "I want to fix the model's initial weights for testing". Since differences in the initial weights affect the training result, fixing the initial values helps address both problems.
Random numbers are used to generate the initial weights, and those random numbers are generated from a random seed. By default, TensorFlow uses a varying seed, so a model with different initial values is generated each time. This article therefore aims to improve reproducibility by fixing the random seed.
In addition to TensorFlow, we also fix the seeds for NumPy, Python's built-in random module, and the hash seed. In summary, the seed-fixing function can be implemented as follows.
```python
import tensorflow as tf
import numpy as np
import random
import os

def set_seed(seed=200):
    tf.random.set_seed(seed)

    # optional
    # for numpy.random
    np.random.seed(seed)
    # for built-in random
    random.seed(seed)
    # for hash seed
    os.environ["PYTHONHASHSEED"] = str(seed)
```
It is used as follows. (If fixing only TensorFlow's seed is sufficient, you can call tf.random.set_seed directly instead of set_seed.)
```python
set_seed(0)
toy_model = tf.keras.Sequential(
    [tf.keras.layers.Dense(2, input_shape=(10,))]
)

# Some processing...

# Reproduce the model
set_seed(0)
reproduced_toy_model = tf.keras.Sequential(
    [tf.keras.layers.Dense(2, input_shape=(10,))]
)
```
reproduced_toy_model has the same initial weights as the previously generated toy_model. In other words, the model has been reproduced.
If you do not use set_seed, the two models will have completely different initial values, and reproducibility is lost.
In addition to tf.keras.Sequential, this also works with the Functional API and model subclassing.
Let's examine the seed-fixing method (set_seed) in a little more detail.
The behavior of tf.random.set_seed needs a little attention. First, after calling tf.random.set_seed, try calling a function that uses random numbers (tf.random.uniform, which samples values from a uniform distribution) several times.
```python
tf.random.set_seed(0)
tf.random.uniform([1])  # => [0.29197514]
tf.random.uniform([1])  # => [0.5554141]  (different value!)
tf.random.uniform([1])  # => [0.1952138]  (different value!!)
tf.random.uniform([1])  # => [0.17513537] (different value!!!)
```
A different value was output on each call, so at first glance it looks as if reproducibility is impossible. Now call tf.random.set_seed again, as follows.
```python
tf.random.set_seed(0)
tf.random.uniform([1])  # => [0.29197514] (A)
tf.random.uniform([1])  # => [0.5554141]  (B)

tf.random.set_seed(0)
tf.random.uniform([1])  # => [0.29197514] (A reproduced)
tf.random.uniform([1])  # => [0.5554141]  (B reproduced)
```
In this way, the sequence of outputs is reproduced starting from the point where tf.random.set_seed is called (even though tf.random.uniform is a function that outputs random values).
So, for example, if you call tf.random.set_seed just before creating a model instance (whether with Sequential, the Functional API, or subclassing), the generated model will have the same initial weights every time.
TensorFlow also has layers and functions that accept a seed argument. However, explicitly passing a seed or initializer to every single layer is not very practical as the model grows.
In addition, some of these do not work reliably unless the tf.random.set_seed introduced here is used together with them.
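For reference, a minimal sketch of the per-layer approach (the seed value 42 is arbitrary): two fresh initializers constructed with the same seed produce the same values.

```python
import numpy as np
import tensorflow as tf

# Two fresh initializers constructed with the same seed
# produce the same initial values
init_a = tf.keras.initializers.GlorotUniform(seed=42)
init_b = tf.keras.initializers.GlorotUniform(seed=42)

w_a = np.asarray(init_a(shape=(10, 2)))
w_b = np.asarray(init_b(shape=(10, 2)))

identical = np.array_equal(w_a, w_b)
print(identical)  # => True
```

In a layer this would look like tf.keras.layers.Dense(2, kernel_initializer=tf.keras.initializers.GlorotUniform(seed=42)), but as noted above, wiring a seed into every layer does not scale well.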
So, even if there are not many places you need to fix, try using tf.random.set_seed as well.
In TensorFlow 2.x (tf.keras), you can use tf.random.set_seed to fix the random seed.
In particular, you can generate a model with the same initial weights every time, so improved reproducibility can be expected.