[PYTHON] Calculate on multiple GPUs using TensorFlow 2's MirroredStrategy

Overview

It is very easy: just build the network inside the scope of tf.distribute.MirroredStrategy. Only a few lines need to change, as the sketch below shows.
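A minimal sketch of the pattern (build_model here is a hypothetical helper standing in for your own model-construction code):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# Variables created inside this scope are mirrored across all visible GPUs.
with strategy.scope():
    model = build_model()  # hypothetical helper that builds and compiles a model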

Implementation example using the Keras API

Here, as a simple example, we build a small network with a single hidden layer.

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam


strategy = tf.distribute.MirroredStrategy()

with strategy.scope():

    # Build and compile the network inside this block so that its
    # variables are mirrored across all available GPUs.
    x = Input(shape=(32,), dtype=tf.float32, name='x')  # Dense expects float inputs
    hidden = Dense(20, activation='relu', name='hidden')(x)
    y = Dense(5, activation='sigmoid', name='y')(hidden)
    model = Model(inputs=x, outputs=y)
    model.compile(
        optimizer=Adam(learning_rate=0.001),  # 'lr' is deprecated in TF2
        loss='binary_crossentropy',
    )

# Dummy data so that the example runs end to end.
x_train = np.random.rand(1000, 32).astype(np.float32)
y_train = np.random.randint(0, 2, size=(1000, 5)).astype(np.float32)

# fit() itself can be called outside the scope.
model.fit(
    x_train, y_train,
    epochs=10,
    batch_size=16,
)
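Note that batch_size here is the global batch size; MirroredStrategy splits each batch evenly across the replicas. A common pattern, sketched below using the strategy object from the code above, is to scale a fixed per-GPU batch size by the number of replicas:

# Keep 16 samples per GPU regardless of how many GPUs are visible.
per_replica_batch_size = 16
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync

model.fit(
    x_train, y_train,
    epochs=10,
    batch_size=global_batch_size,
)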

References

For details, see the official TensorFlow documentation on distributed training.

If you use the Keras API as-is, as shown in this article, the changes above are sufficient. If you implement a custom training loop, however, there are additional points to consider; in that case, refer to the documentation above.
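For reference, here is a minimal sketch of such a custom loop, following the pattern in the official distributed-training tutorial rather than anything from this article. It reuses the model, strategy, x_train, and y_train from the example above; the key APIs are strategy.experimental_distribute_dataset, strategy.run (named experimental_run_v2 in TF 2.1 and earlier), and strategy.reduce:

with strategy.scope():
    # No reduction: the loss is averaged manually over the global batch.
    loss_object = tf.keras.losses.BinaryCrossentropy(
        reduction=tf.keras.losses.Reduction.NONE)
    optimizer = Adam(learning_rate=0.001)

GLOBAL_BATCH_SIZE = 16 * strategy.num_replicas_in_sync
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)) \
    .shuffle(1000).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

def train_step(inputs):
    x_batch, y_batch = inputs
    with tf.GradientTape() as tape:
        predictions = model(x_batch, training=True)
        per_example_loss = loss_object(y_batch, predictions)
        # Average over the global batch, not the per-replica batch.
        loss = tf.nn.compute_average_loss(
            per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss

@tf.function
def distributed_train_step(inputs):
    # Run the step on every replica, then sum the per-replica losses.
    per_replica_losses = strategy.run(train_step, args=(inputs,))
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

for epoch in range(10):
    for batch in dist_dataset:
        loss = distributed_train_step(batch)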

The older tf.keras.utils.multi_gpu_model() is deprecated and scheduled for removal after April 2020.
