W&B (Weights & Biases) is like a web version of TensorBoard. You can compare runs on an online dashboard and keep the results of old experiments. It is a convenient service for recording machine learning logs and viewing results.
This article is based on the following page: wandb Quickstart
Use pip to install the wandb library.
pip install wandb
Since it is a web service, you need to register for an account. The free plan has some capacity limits, but it is free for students and academic use. You can register from the following page. I could sign up with my GitHub account, so it was done right away.
Next, run the following in the terminal of the environment you will be using:
wandb login
Then the following prompt appears:
wandb: You can find your API key in your browser here: https://app.wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter:
Open https://app.wandb.ai/authorize in a browser where you are logged in to your account; you will see an API key of about 40 characters. Copy it and paste it at the enter: prompt above. This links the server (or whatever machine you are using) with your web account.
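If you need to log in non-interactively (for example inside a script on a remote server), the key can also be supplied programmatically. A minimal sketch, assuming the WANDB_API_KEY environment variable and wandb.login() behave as in the official docs:
# Minimal sketch of a non-interactive login (assumption: WANDB_API_KEY /
# wandb.login() work as documented).
import os
import wandb

os.environ["WANDB_API_KEY"] = "xxxxxxxx"  # hypothetical placeholder for your ~40-character key
wandb.login()  # reads the key from the environment instead of prompting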
First, add the following code at the beginning of your training script.
# Inside my model training code
import wandb
wandb.init(project="my-project")
Results are grouped together under the project name.
If there are hyperparameters or arguments you want to record, you can store them in wandb.config.
wandb.config.dropout = 0.2
wandb.config.hidden_layer_size = 128
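As a sketch, the same values can also be passed directly to wandb.init, or added in bulk from a dict; the keys below are the examples from above plus a hypothetical batch_size:
# Sketch: set config at init time, then add more entries later.
import wandb

wandb.init(project="my-project", config={"dropout": 0.2, "hidden_layer_size": 128})
wandb.config.update({"batch_size": 64})  # hypothetical extra hyperparameter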
Next, add the logging code inside your training loop.
def my_train_loop():
    for epoch in range(10):
        loss = 0  # change as appropriate :)
        wandb.log({'epoch': epoch, 'loss': loss})
In practice, put wandb.log({...}) wherever you would normally print(loss). A chart is created for each key in the dict you pass.
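For example, you can log several metrics in the same call and each key gets its own chart; accuracy here is a hypothetical value just for illustration:
# Sketch: each key in the dict becomes its own chart on the run page.
wandb.log({'epoch': epoch, 'loss': loss, 'accuracy': accuracy})  # accuracy is hypothetical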
You can save files such as the trained model with the run by putting the following code at the end.
wandb.save("mymodel.h5")
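For example, with Keras you would write the model file to disk first and then pass the same filename to wandb.save; a sketch, assuming a trained tf.keras model object:
# Sketch: write the model to disk, then upload the file with the run.
# Assumes `model` is a trained tf.keras model.
model.save("mymodel.h5")
wandb.save("mymodel.h5")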
test.py
import wandb

wandb.init(project="test-project")

lr = 0.1
wandb.config.lr = lr

def my_train_loop():
    loss = 10
    for epoch in range(10):
        loss = loss * lr  # change as appropriate :)
        wandb.log({'epoch': epoch, 'loss': loss})
    wandb.save("mymodel.h5")

def main():
    my_train_loop()

if __name__ == "__main__":
    main()
Run the above code and open the W&B page, and you can see the transition of the loss like this. The reason epoch is logged is that the horizontal axis is the step count by default; by recording epoch as well, you can choose it as the horizontal axis when creating a chart and get epoch-vs-loss graphs.
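If you want charts to use epoch as the x-axis by default instead of choosing it manually in the UI, recent wandb versions offer wandb.define_metric; a sketch, assuming your wandb version supports it:
# Sketch: plot `loss` against `epoch` by default (requires a wandb version
# that supports define_metric).
wandb.define_metric("epoch")
wandb.define_metric("loss", step_metric="epoch")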
The lr added to the config is also recorded, which is useful for reproducing experiments.
Honestly, it's only my second day using it, but there is very little code to add and you get charts quickly. And because everything is recorded on the web, you can collect results from multiple servers in one place. Give it a try!