Following on from last time: SinGAN has been rewritten in TensorFlow in Reference ① below, and that code is easy to understand, so before getting to SRGAN (which I haven't written up yet) I went back to SinGAN and retried the "challenge the big Mayuyu" experiment that failed last time.
【Reference】
① I tried to implement a part of SinGAN
② ppooiiuuyh/SinGAN-tensorflow2.0
This time I played around while trying out the implementation from Reference ①.
The SR output example looked like the following (probably the same as in the reference above).
(Output examples 1 and 2: images omitted.)
This article starts from the figure below, which compares the output example above with the output of the original PyTorch version. In other words, both run the same process, yet the achieved quality appears to differ, presumably because of small differences in the hyperparameters. In any case, I can't really read the PyTorch code, but this reference's code is easy to read and modify, so even though it still falls short, I played with it.
(Comparison images omitted: PyTorch version vs. TensorFlow version.)
・Environment
・Check the hyperparameters
・Try outputting the intermediate-scale images
・Challenge the big Mayuyu
Download the zip file from the GitHub site of Reference ① above and extract it. Then install the dependencies below in a normal conda environment.
python -m pip install -r requirements.txt
After that, I think the command below will run it. However, since the location of the input image and the results directory may differ, change them to match your environment.
(base) C:\Users\user\SinGAN_tf_impl-master>python main.py "SR" "Input/images/33039_LR.png "
Just deciphering which hyperparameter values the two code bases use, and how, is honestly hard work. This time, though, both sets are gathered in one place each, as shown below, and are relatively easy to read.
Of course that is not the whole story; in particular, SR.py and similar files redefine some values. For example,
parser.add_argument('--sr_factor', type=int, default=4)
is redefined there.
So the hyperparameters of the TensorFlow version are defined in main.py as follows.
[email protected]
import argparse
from train import *
#from SinGAN.train import *

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('mode')
    parser.add_argument('input_image')
    parser.add_argument('--save_dir', default='models')
    # network hyper parameters
    parser.add_argument('--hidden_channels', type=int, default=32)
    parser.add_argument('--k_size', type=int, default=3)
    parser.add_argument('--n_layers', type=int, default=5)
    # train settings
    parser.add_argument('--n_iter', type=int, default=2000)
    parser.add_argument('--lr_g', type=float, default=0.0005)
    parser.add_argument('--lr_d', type=float, default=0.0005)
    parser.add_argument('--beta1', type=float, default=0.5)
    parser.add_argument('--g_times', type=int, default=3)
    parser.add_argument('--d_times', type=int, default=3)
    parser.add_argument('--gp_weight', type=float, default=0.1)
    parser.add_argument('--alpha', type=float, default=10)
    # data manipulation
    parser.add_argument('--scale_factor', type=float, default=0.75)
    parser.add_argument('--noise_weight', type=float, default=0.1)
    parser.add_argument('--min_size', type=int, default=18)
    # SR params
    parser.add_argument('--sr_factor', type=int, default=4)
    args = parser.parse_args()

    if args.mode == 'SR':
        train_SR(args)
On the other hand, the PyTorch version defines them in config.py as follows.
With that, **I was able to confirm that all the parameters match.** However, regarding
parser.add_argument('--scale_factor', type=float, default=0.75)
I admittedly don't fully understand this yet, but in the PyTorch version, judging by the name of the results directory, the values actually used seem to be the following.
Example: scale_factor = 0.793701, alpha = 100
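A small aside of my own (not stated in either repo's docs): 0.793701 is exactly (1/2)^(1/3), the value hard-coded later in the TensorFlow inference code as `scale_factor = math.pow(1/2, 1/3)`, and six upsampling steps at the inverse scale 2^(1/3) land exactly on the SR factor 4. The alpha = 100 (vs. the default 10) likewise suggests that SR mode overrides some defaults, matching the SR.py observation above.

```python
import math

# the mysterious 0.793701 is (1/2)**(1/3): each pyramid level
# shrinks the image by a factor of 2**(-1/3)
scale_factor = math.pow(1 / 2, 1 / 3)
print(round(scale_factor, 6))   # 0.793701

# going the other way, six x2**(1/3) steps reach the SR factor 4
scale = 1.0 / scale_factor
print(math.log(4, scale))       # ~6.0
```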
[email protected]
import argparse

def get_arguments():
    parser = argparse.ArgumentParser()
    #parser.add_argument('--mode', help='task to be done', default='train')
    # workspace:
    parser.add_argument('--not_cuda', action='store_true', help='disables cuda', default=0)
    # load, input, save configurations:
    parser.add_argument('--netG', default='', help="path to netG (to continue training)")
    parser.add_argument('--netD', default='', help="path to netD (to continue training)")
    parser.add_argument('--manualSeed', type=int, help='manual seed')
    parser.add_argument('--nc_z', type=int, help='noise # channels', default=3)
    parser.add_argument('--nc_im', type=int, help='image # channels', default=3)
    parser.add_argument('--out', help='output folder', default='Output')
    # networks hyper parameters:
    parser.add_argument('--nfc', type=int, default=32)
    parser.add_argument('--min_nfc', type=int, default=32)
    parser.add_argument('--ker_size', type=int, help='kernel size', default=3)
    parser.add_argument('--num_layer', type=int, help='number of layers', default=5)
    parser.add_argument('--stride', help='stride', default=1)
    parser.add_argument('--padd_size', type=int, help='net pad size', default=0)  # math.floor(opt.ker_size/2)
    # pyramid parameters:
    parser.add_argument('--scale_factor', type=float, help='pyramid scale factor', default=0.75)  # pow(0.5,1/6)
    parser.add_argument('--noise_amp', type=float, help='addative noise cont weight', default=0.1)
    parser.add_argument('--min_size', type=int, help='image minimal size at the coarser scale', default=25)
    parser.add_argument('--max_size', type=int, help='image minimal size at the coarser scale', default=250)
    # optimization hyper parameters:
    parser.add_argument('--niter', type=int, default=2000, help='number of epochs to train per scale')
    parser.add_argument('--gamma', type=float, help='scheduler gamma', default=0.1)
    parser.add_argument('--lr_g', type=float, default=0.0005, help='learning rate, default=0.0005')
    parser.add_argument('--lr_d', type=float, default=0.0005, help='learning rate, default=0.0005')
    parser.add_argument('--beta1', type=float, default=0.5, help='beta1 for adam. default=0.5')
    parser.add_argument('--Gsteps', type=int, help='Generator inner steps', default=3)
    parser.add_argument('--Dsteps', type=int, help='Discriminator inner steps', default=3)
    parser.add_argument('--lambda_grad', type=float, help='gradient penelty weight', default=0.1)
    parser.add_argument('--alpha', type=float, help='reconstruction loss weight', default=10)
    return parser
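For reference, here is how I read the correspondence between the two argument sets (my own mapping from the two listings above, not something stated in the references). Note that the min_size defaults differ (18 vs. 25), which is presumably one of the values redefined for SR mode, as mentioned above.

TensorFlow version | PyTorch version |
---|---|
hidden_channels (32) | nfc / min_nfc (32) |
k_size (3) | ker_size (3) |
n_layers (5) | num_layer (5) |
n_iter (2000) | niter (2000) |
lr_g, lr_d (0.0005) | lr_g, lr_d (0.0005) |
beta1 (0.5) | beta1 (0.5) |
g_times, d_times (3) | Gsteps, Dsteps (3) |
gp_weight (0.1) | lambda_grad (0.1) |
alpha (10) | alpha (10) |
scale_factor (0.75) | scale_factor (0.75) |
noise_weight (0.1) | noise_amp (0.1) |
min_size (18) | min_size (25) |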
Next, I had my doubts about the model structure. I didn't fully resolve this either, but since the TensorFlow version is easy to follow, I made the changes below and printed the model summaries to standard output. The output before and after the change is shown at the end of this article.
train.py
for i in range(n_blocks):
    scale = math.pow(scale_factor, n_blocks-i-1)
    cur_h, cur_w = int(h*scale), int(w*scale)
    img = tf.image.resize(real_image, (cur_h, cur_w))
    resolutions.append((cur_h, cur_w))
    # changed: fixed input shapes instead of (None, None, 3), so that the
    # summaries below print concrete tensor sizes per scale
    #inp = tf.keras.Input(shape=(None, None, 3))
    #noise = tf.keras.Input(shape=(None, None, 3))
    inp = tf.keras.Input(shape=(cur_h, cur_w, 3))
    noise = tf.keras.Input(shape=(cur_h, cur_w, 3))
    G = tf.keras.Model(inputs=[inp, noise], outputs=model.G_block(inp, noise, name='G_block_%d'%i, hidden_maps=args.hidden_channels, num_layers=args.n_layers))
    D = tf.keras.Model(inputs=inp, outputs=model.D_block(inp, name='D_block_%d'%i, hidden_maps=args.hidden_channels, num_layers=args.n_layers))

    lr_g = tf.Variable(args.lr_g, trainable=False)
    lr_d = tf.Variable(args.lr_d, trainable=False)
    opt_G = tf.keras.optimizers.Adam(lr_g, args.beta1)
    opt_D = tf.keras.optimizers.Adam(lr_d, args.beta1)
    # added: print the per-scale network structure
    G.summary()
    D.summary()
This should let me check whether the network is identical to the PyTorch version. As far as I can tell, the PyTorch version keeps a given structure for a while and then changes it partway through the scales, so the logic seems to be slightly different.
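For reference, my reading of the official PyTorch SinGAN repo (the constants below are what I believe its train.py uses and may differ between versions) is that the convolution width is recomputed before each scale, doubling every four scales, whereas the TensorFlow version keeps hidden_channels = 32 everywhere:

```python
import math

# sketch of the channel-growth rule the PyTorch version appears to use
# (nfc = min_nfc = 32 from config.py, capped at 128)
nfc_init = 32
for scale_num in range(9):
    nfc = min(nfc_init * pow(2, math.floor(scale_num / 4)), 128)
    print(scale_num, nfc)   # scales 0-3 -> 32, 4-7 -> 64, 8 -> 128
```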
train.py
# carry the weights of the previous (coarser) block over to the new one
if i > 0:
    for (prev, cur) in zip(Gs[-1].layers, G.layers):
        cur.set_weights(prev.get_weights())
    for (prev, cur) in zip(Ds[-1].layers, D.layers):
        cur.set_weights(prev.get_weights())
    init_opt(opt_G, G)
    init_opt(opt_D, D)

with tqdm(range(args.n_iter)) as bar:
    bar.set_description('Block %d / %d'%(i+1, n_blocks))
    for iteration in bar:
        if i == 0:
            prev_img = tf.zeros_like(img)
        else:
            prev_img = proc_image(tf.zeros([1, resolutions[0][0], resolutions[0][1], 3]), Gs, args.noise_weight, resolutions)
        g_loss, d_loss = train_step(img, prev_img, args.noise_weight, G, D, opt_G, opt_D, args.g_times, args.d_times, args.alpha)
        bar.set_postfix(ordered_dict=OrderedDict(
            g_loss=g_loss.numpy(), d_loss=d_loss.numpy()
        ))
        # drop both learning rates to a tenth after 80% of the iterations
        if iteration == int(args.n_iter*0.8):
            lr_d.assign(args.lr_d*0.1)
            lr_g.assign(args.lr_g*0.1)

Gs.append(G)
Ds.append(D)
G.save(os.path.join(save_dir, 'SR_G_%d_res_%dx%d.h5'%(i+1, cur_h, cur_w)))
D.save(os.path.join(save_dir, 'SR_D_%d_res_%dx%d.h5'%(i+1, cur_h, cur_w)))
scale_factor = math.pow(1/2, 1/3)
target_res = 4
scale = 1.0 / scale_factor
n, h, w, c = real_image.shape
t_h, t_w = h*target_res, w*target_res
iter_times = int(math.log(target_res, scale))
img = real_image
os.makedirs(os.path.join(save_dir, 'result'), exist_ok=True)
for j in range(1, iter_times+1, 1):
    res = (int(h*math.pow(scale, j)), int(w*math.pow(scale, j)))
    img = tf.image.resize(img, size=res)
    img = G([img, tf.zeros_like(img)])
    # changed: moved one indent level in, so an image is saved at every
    # scale, not only after the last refinement step
    image = np.squeeze(img)
    image = (np.clip(image, -1.0, 1.0) + 1.0) * 127.5
    image = Image.fromarray(image.astype(np.uint8))
    image.save(save_dir+'/result/'+str(i)+'_%dx%d.jpg'%res)
In other words, I changed it so that the final image-output part sits one indentation level further in, and the image in its intermediate state is saved as well. This yields the figure below. Looking at it, this coarse-to-fine training works just as described in Reference ③ below: "→ hierarchical patch-GANs capture features of different scales while gradually increasing the resolution of a coarse image, and keep the receptive field small so that the whole image is not memorized."
【Reference】
③ SinGAN paper commentary
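A quick aside of my own on that "small receptive field": with the defaults above (five 3×3 convolutions, stride 1), the discriminator sees only an 11×11 patch at each scale, which is why it judges local patches rather than the whole image:

```python
# receptive field of a stack of stride-1 convolutions:
# rf = 1 + sum of (kernel_size - 1) over the layers
k_size, n_layers = 3, 5
rf = 1 + n_layers * (k_size - 1)
print(rf)   # 11 -> effectively an 11x11 patch discriminator
```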
So I took on the big Mayuyu. The results are below. ⇒ Last time, enlarging too far brought out a line pattern, but this time it looks calm, and I would call it a success.
(Mayuyu result images omitted: the 128 px input and the outputs at 161, 203, 255, 322, 406, 511, 645, 812, and 1023 px.)
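The side lengths follow directly from the inference loop above: each step scales by 2^(1/3), and the int() truncation of the floating-point powers is what produces 255/511/1023 instead of 256/512/1024 (a quick check of my own):

```python
import math

scale = 1.0 / math.pow(1/2, 1/3)   # = 2**(1/3) ~ 1.2599
h = 128
print([int(h * math.pow(scale, j)) for j in range(1, 10)])
# [161, 203, 255, 322, 406, 511, 645, 812, 1023]
```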
・I tried the SinGAN TensorFlow version
・I tried outputting the images from the intermediate stages of training
・I got a relatively good result with "challenge the big Mayuyu"

・I want to master SinGAN a bit more
The modified TensorFlow version prints the following.
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 30, 20, 3)] 0
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, 30, 20, 3)] 0
__________________________________________________________________________________________________
tf_op_layer_add (TensorFlowOpLa [(None, 30, 20, 3)] 0 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, 30, 20, 32) 896 tf_op_layer_add[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, 30, 20, 32) 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 30, 20, 32) 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, 30, 20, 32) 9248 leaky_re_lu[0][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, 30, 20, 32) 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, 30, 20, 32) 9248 leaky_re_lu[1][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, 30, 20, 32) 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, 30, 20, 32) 9248 leaky_re_lu[2][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, 30, 20, 32) 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, 30, 20, 3) 867 leaky_re_lu[3][0]
__________________________________________________________________________________________________
tf_op_layer_Tanh (TensorFlowOpL [(None, 30, 20, 3)] 0 conv_block_4_conv_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_add_1 (TensorFlowOp [(None, 30, 20, 3)] 0 tf_op_layer_Tanh[0][0]
input_1[0][0]
==================================================================================================
Total params: 30,019
Trainable params: 29,763
Non-trainable params: 256
__________________________________________________________________________________________________
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 30, 20, 3)] 0
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, 30, 20, 32) 896 input_1[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, 30, 20, 32) 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 30, 20, 32) 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, 30, 20, 32) 9248 leaky_re_lu[4][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, 30, 20, 32) 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, 30, 20, 32) 9248 leaky_re_lu[5][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, 30, 20, 32) 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, 30, 20, 32) 9248 leaky_re_lu[6][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, 30, 20, 32) 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, 30, 20, 1) 289 leaky_re_lu[7][0]
==================================================================================================
Total params: 29,441
Trainable params: 29,185
Non-trainable params: 256
__________________________________________________________________________________________________
The original TensorFlow version prints the following.
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, None, None, 0
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, None, None, 0
__________________________________________________________________________________________________
tf_op_layer_add (TensorFlowOpLa [(None, None, None, 0 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, None, None, 3 896 tf_op_layer_add[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, None, None, 3 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, None, None, 3 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, None, None, 3 9248 leaky_re_lu[0][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, None, None, 3 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, None, None, 3 9248 leaky_re_lu[1][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, None, None, 3 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, None, None, 3 9248 leaky_re_lu[2][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, None, None, 3 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, None, None, 3 867 leaky_re_lu[3][0]
__________________________________________________________________________________________________
tf_op_layer_Tanh (TensorFlowOpL [(None, None, None, 0 conv_block_4_conv_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_add_1 (TensorFlowOp [(None, None, None, 0 tf_op_layer_Tanh[0][0]
input_1[0][0]
==================================================================================================
Total params: 30,019
Trainable params: 29,763
Non-trainable params: 256
__________________________________________________________________________________________________
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, None, None, 0
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, None, None, 3 896 input_1[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, None, None, 3 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, None, None, 3 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, None, None, 3 9248 leaky_re_lu[4][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, None, None, 3 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, None, None, 3 9248 leaky_re_lu[5][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, None, None, 3 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, None, None, 3 9248 leaky_re_lu[6][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, None, None, 3 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, None, None, 1 289 leaky_re_lu[7][0]
==================================================================================================
Total params: 29,441
Trainable params: 29,185
Non-trainable params: 256
__________________________________________________________________________________________________
Block 1 / 7: 100%|████████████████████████████████████████████████████████| 2000/2000 [01:32<00:00, 21.58it/s, g_loss=[0.9006956], d_loss=[-0.02634283]]
In fact, in the TensorFlow version the tensor sizes in the model grow as the input image is enlarged, as shown below, while every model keeps the same number of parameters as the size increases.
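That is exactly what a fully convolutional network should do: a Conv2D layer's parameter count depends only on the kernel size and the channel counts, never on the spatial size. A quick check of my own against the summaries:

```python
def conv2d_params(k, c_in, c_out):
    # k*k*c_in weights per output channel, plus one bias per output channel
    return k * k * c_in * c_out + c_out

print(conv2d_params(3, 3, 32))    # 896  -> conv_block_0_conv_0
print(conv2d_params(3, 32, 32))   # 9248 -> conv_block_1 .. conv_block_3
print(conv2d_params(3, 32, 3))    # 867  -> conv_block_4 (generator, RGB out)
print(conv2d_params(3, 32, 1))    # 289  -> conv_block_4 (discriminator)
```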
(base) C:\Users\user\SinGAN_tf_impl-master>python main.py "SR" "Input/images/mayuyu128.jpg "
2019-12-30 23:23:33.694956: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 20, 20, 3)] 0
__________________________________________________________________________________________________
input_2 (InputLayer) [(None, 20, 20, 3)] 0
__________________________________________________________________________________________________
tf_op_layer_add (TensorFlowOpLa [(None, 20, 20, 3)] 0 input_1[0][0]
input_2[0][0]
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, 20, 20, 32) 896 tf_op_layer_add[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, 20, 20, 32) 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 20, 20, 32) 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, 20, 20, 32) 9248 leaky_re_lu[0][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, 20, 20, 32) 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, 20, 20, 32) 9248 leaky_re_lu[1][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, 20, 20, 32) 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, 20, 20, 32) 9248 leaky_re_lu[2][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, 20, 20, 32) 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, 20, 20, 3) 867 leaky_re_lu[3][0]
__________________________________________________________________________________________________
tf_op_layer_Tanh (TensorFlowOpL [(None, 20, 20, 3)] 0 conv_block_4_conv_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_add_1 (TensorFlowOp [(None, 20, 20, 3)] 0 tf_op_layer_Tanh[0][0]
input_1[0][0]
==================================================================================================
Total params: 30,019
Trainable params: 29,763
Non-trainable params: 256
__________________________________________________________________________________________________
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 20, 20, 3)] 0
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, 20, 20, 32) 896 input_1[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, 20, 20, 32) 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 20, 20, 32) 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, 20, 20, 32) 9248 leaky_re_lu[4][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, 20, 20, 32) 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, 20, 20, 32) 9248 leaky_re_lu[5][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, 20, 20, 32) 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, 20, 20, 32) 9248 leaky_re_lu[6][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, 20, 20, 32) 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, 20, 20, 1) 289 leaky_re_lu[7][0]
==================================================================================================
Total params: 29,441
Trainable params: 29,185
Non-trainable params: 256
__________________________________________________________________________________________________
Block 1 / 9: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [01:13<00:00, 27.10it/s, g_loss=[7.9145103], d_loss=[-0.0302126]]
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) [(None, 25, 25, 3)] 0
__________________________________________________________________________________________________
input_4 (InputLayer) [(None, 25, 25, 3)] 0
__________________________________________________________________________________________________
tf_op_layer_add_2 (TensorFlowOp [(None, 25, 25, 3)] 0 input_3[0][0]
input_4[0][0]
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, 25, 25, 32) 896 tf_op_layer_add_2[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, 25, 25, 32) 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) multiple 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, 25, 25, 32) 9248 leaky_re_lu[8][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, 25, 25, 32) 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, 25, 25, 32) 9248 leaky_re_lu[9][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, 25, 25, 32) 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, 25, 25, 32) 9248 leaky_re_lu[10][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, 25, 25, 32) 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, 25, 25, 3) 867 leaky_re_lu[11][0]
__________________________________________________________________________________________________
tf_op_layer_Tanh_1 (TensorFlowO [(None, 25, 25, 3)] 0 conv_block_4_conv_4[0][0]
__________________________________________________________________________________________________
tf_op_layer_add_3 (TensorFlowOp [(None, 25, 25, 3)] 0 tf_op_layer_Tanh_1[0][0]
input_3[0][0]
==================================================================================================
Total params: 30,019
Trainable params: 29,763
Non-trainable params: 256
__________________________________________________________________________________________________
Model: "model_3"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_3 (InputLayer) [(None, 25, 25, 3)] 0
__________________________________________________________________________________________________
conv_block_0_conv_0 (Conv2D) (None, 25, 25, 32) 896 input_3[0][0]
__________________________________________________________________________________________________
conv_block_0_BN_0 (BatchNormali (None, 25, 25, 32) 128 conv_block_0_conv_0[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) multiple 0 conv_block_0_BN_0[0][0]
conv_block_1_BN_1[0][0]
conv_block_2_BN_2[0][0]
conv_block_3_BN_3[0][0]
__________________________________________________________________________________________________
conv_block_1_conv_1 (Conv2D) (None, 25, 25, 32) 9248 leaky_re_lu[12][0]
__________________________________________________________________________________________________
conv_block_1_BN_1 (BatchNormali (None, 25, 25, 32) 128 conv_block_1_conv_1[0][0]
__________________________________________________________________________________________________
conv_block_2_conv_2 (Conv2D) (None, 25, 25, 32) 9248 leaky_re_lu[13][0]
__________________________________________________________________________________________________
conv_block_2_BN_2 (BatchNormali (None, 25, 25, 32) 128 conv_block_2_conv_2[0][0]
__________________________________________________________________________________________________
conv_block_3_conv_3 (Conv2D) (None, 25, 25, 32) 9248 leaky_re_lu[14][0]
__________________________________________________________________________________________________
conv_block_3_BN_3 (BatchNormali (None, 25, 25, 32) 128 conv_block_3_conv_3[0][0]
__________________________________________________________________________________________________
conv_block_4_conv_4 (Conv2D) (None, 25, 25, 1) 289 leaky_re_lu[15][0]
==================================================================================================
Total params: 29,441
Trainable params: 29,185
Non-trainable params: 256
__________________________________________________________________________________________________
Block 2 / 9: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [01:42<00:00, 19.48it/s, g_loss=[0.7259917], d_loss=[-0.00471149]]
Model: "model_4"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) [(None, 32, 32, 3)] 0
__________________________________________________________________________________________________
...
Model: "model_5"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_5 (InputLayer) [(None, 32, 32, 3)] 0
__________________________________________________________________________________________________
...
Block 3 / 9: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [03:49<00:00, 8.71it/s, g_loss=[0.05491346], d_loss=[-0.00068739]]
Model: "model_6"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_7 (InputLayer) [(None, 40, 40, 3)] 0
__________________________________________________________________________________________________
...
Model: "model_7"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_7 (InputLayer) [(None, 40, 40, 3)] 0
__________________________________________________________________________________________________
...
Block 4 / 9: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [05:04<00:00, 6.56it/s, g_loss=[0.13994163], d_loss=[-0.00033907]]
Model: "model_8"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_9 (InputLayer) [(None, 50, 50, 3)] 0
__________________________________________________________________________________________________
...
Model: "model_9"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_9 (InputLayer) [(None, 50, 50, 3)] 0
__________________________________________________________________________________________________
...
Block 5 / 9: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [07:46<00:00, 4.29it/s, g_loss=[0.1438144], d_loss=[-0.00011725]]
Model: "model_10"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_11 (InputLayer) [(None, 64, 64, 3)] 0
__________________________________________________________________________________________________
...
Model: "model_11"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_11 (InputLayer) [(None, 64, 64, 3)] 0
__________________________________________________________________________________________________
...
Block 6 / 9: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [12:43<00:00, 2.62it/s, g_loss=[0.09378527], d_loss=[-7.251864e-05]]
Model: "model_12"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_13 (InputLayer) [(None, 80, 80, 3)] 0
__________________________________________________________________________________________________
...
Model: "model_13"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_13 (InputLayer) [(None, 80, 80, 3)] 0
__________________________________________________________________________________________________
...
Block 7 / 9: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [19:51<00:00, 1.68it/s, g_loss=[0.1352475], d_loss=[-0.00010792]]
Model: "model_14"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_15 (InputLayer) [(None, 101, 101, 3) 0
__________________________________________________________________________________________________
...
Block 8 / 9: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [32:49<00:00, 1.02it/s, g_loss=[0.12389164], d_loss=[-2.1162363e-05]]
Model: "model_16"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_17 (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
...
Model: "model_17"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_17 (InputLayer) [(None, 128, 128, 3) 0
__________________________________________________________________________________________________
...
Block 9 / 9: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2000/2000 [50:12<00:00, 1.51s/it, g_loss=[0.13538168], d_loss=[2.7423768e-05]]