Publishing a DCGAN model for CIFAR-10 with Keras

Overview

- Since I don't have much time, I will skip the explanation of how GAN itself works for now.
- Describe the generator and discriminator settings.
- The key point is that LeakyReLU is used as the activation function in both the generator and the discriminator.
- The source code for the full training process will be uploaded to GitHub (sorry, still in preparation).

Model parameters
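
The snippets below are methods of a DCGAN class (hence `self.noise_shape`, `self.shape` and `self.verbose`). As a minimal sketch, they assume imports along the following lines; the original import block is not shown, so the exact module paths (here `tf.keras`) are an assumption on my part.

from typing import Tuple

from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.layers import (
    Activation, BatchNormalization, Conv2D, Conv2DTranspose, Dense,
    Dropout, Flatten, GaussianNoise, Input, LeakyReLU, Reshape,
)
from tensorflow.keras.models import Model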

Generator

generator


def _build_generator(self) -> Model:
    start_pix_x = 4
    start_pix_y = 4
    kernel_ini = RandomNormal(mean=0.0, stddev=0.02)

    inputs = Input(shape=self.noise_shape)
    # Project the noise vector and reshape it into a 4x4x256 feature map.
    x = Dense(
        units=256 * start_pix_x * start_pix_y,
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(inputs)
    x = LeakyReLU(alpha=0.2)(x)
    x = Reshape((start_pix_x, start_pix_y, 256))(x)
    # Upsample 4x4 -> 8x8.
    x = Conv2DTranspose(
        filters=128,
        kernel_size=4,
        strides=2,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)
    x = LeakyReLU(alpha=0.2)(x)
    # x = BatchNormalization(axis=3)(x)
    # Upsample 8x8 -> 16x16.
    x = Conv2DTranspose(
        filters=128,
        kernel_size=4,
        strides=2,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)
    x = LeakyReLU(alpha=0.2)(x)
    # x = BatchNormalization(axis=3)(x)
    # Upsample 16x16 -> 32x32.
    x = Conv2DTranspose(
        filters=128,
        kernel_size=4,
        strides=2,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)
    x = LeakyReLU(alpha=0.2)(x)
    # Map to 3 output channels (RGB); the tanh below keeps values in [-1, 1].
    x = Conv2D(
        filters=3,
        kernel_size=3,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)

    y = Activation('tanh')(x)

    model = Model(inputs, y)
    if self.verbose:
        model.summary()

    return model
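
For reference, the feature map is upsampled 4x4 -> 8x8 -> 16x16 -> 32x32 by the three strided Conv2DTranspose layers, which matches the CIFAR-10 image size. A minimal usage sketch, assuming noise_shape is (100,) (not stated above) and dcgan is an instance of the class:

import numpy as np

generator = dcgan._build_generator()
noise = np.random.normal(0.0, 1.0, (16, 100))  # batch of 16 latent vectors
fake = generator.predict(noise)                # shape (16, 32, 32, 3), values in [-1, 1] from tanh
fake = (fake + 1.0) / 2.0                      # rescale to [0, 1] for display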

Discriminator

discriminator


def _build_discriminator(self) -> Tuple[Model, Model]:
    kernel_ini = RandomNormal(mean=0.0, stddev=0.02)
    inputs = Input(shape=self.shape)
    x = GaussianNoise(stddev=0.05)(inputs)  # prevent d from overfitting.
    x = Conv2D(
        filters=64,
        kernel_size=3,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)
    x = LeakyReLU(alpha=0.2)(x)
    # x = Dropout(0.5)(x)
    x = Conv2D(
        filters=128,
        kernel_size=3,
        strides=2,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)
    x = LeakyReLU(alpha=0.2)(x)
    # x = Dropout(0.5)(x)
    # x = BatchNormalization(axis=3)(x)
    x = Conv2D(
        filters=128,
        kernel_size=3,
        strides=2,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)
    x = LeakyReLU(alpha=0.2)(x)
    # x = Dropout(0.5)(x)
    # x = BatchNormalization(axis=3)(x)
    x = Conv2D(
        filters=256,
        kernel_size=3,
        strides=2,
        padding='same',
        kernel_initializer=kernel_ini,
        bias_initializer='zeros')(x)
    x = LeakyReLU(alpha=0.2)(x)

    x = Flatten()(x)
    features = Dropout(0.4)(x)

    validity = Dense(1, activation='sigmoid')(features)

    # model4d and model4g wrap the same layers (and therefore share weights);
    # two handles are returned, presumably so one can be frozen later when
    # training the generator.
    model4d = Model(inputs, validity)
    model4g = Model(inputs, validity)
    if self.verbose:
        model4d.summary()

    return model4d, model4g
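
The training loop is not published yet, so the following is only a hypothetical sketch of how the two returned models are usually wired in this pattern: model4d is compiled and trained directly on real and generated batches, while model4g, which shares its weights, is frozen and stacked behind the generator for the generator updates. The optimizer settings and the noise shape are assumptions, not taken from the post.

from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

generator = dcgan._build_generator()
model4d, model4g = dcgan._build_discriminator()

# The discriminator is trained directly through model4d.
model4d.compile(optimizer=Adam(2e-4, beta_1=0.5), loss='binary_crossentropy')

# Freeze the shared weights via the model4g handle, then stack it behind the
# generator so that the combined model only updates the generator.
model4g.trainable = False
z = Input(shape=(100,))  # assumed noise_shape
combined = Model(z, model4g(generator(z)))
combined.compile(optimizer=Adam(2e-4, beta_1=0.5), loss='binary_crossentropy')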

Output result

Each row of the output corresponds to a class. DCGAN by itself only generates images, so the generated images are fed into a classification model trained on the original CIFAR-10 images, labeled with the predicted class, and displayed grouped by predicted class. My impression is that using LeakyReLU in the generator makes the shape of the objects come out more solidly.

(Image: original2dcgan.png)
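
As a rough sketch of that post-processing step (classifier stands in for a CNN trained on the original CIFAR-10 images and is not part of this post; fake is a batch of generated images as above):

import numpy as np

preds = classifier.predict(fake)               # (N, 10) class probabilities for generated images
labels = np.argmax(preds, axis=1)              # predicted CIFAR-10 class per image
rows = [fake[labels == c] for c in range(10)]  # one group of images per predicted class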

Conclusion

This was put together in a rush, so I'll write it up in more detail at a later date.
