In the previous article, I wrote about how to track the training history of machine learning models in a notebook using MLflow on Databricks.
Using MLflow with Databricks ① -Experiment tracking on notebooks-
Databricks-managed MLflow lets you compare the parameters and metrics of trained models, manage model staging, and more from the UI.
In this article, I cover visualizing and comparing the parameters and metrics of each run.
From the screen in the previous article where you checked the metrics for each run in the notebook, click the area outlined in red in the figure.
You will be taken to a screen that summarizes the information for each run: metrics, parameters, the linked notebook, and more.
Scroll down to see the saved model, associated data, screenshots of experimental results, and so on, stored as artifact files.
When MLflow is integrated into a notebook, an experiment ID is assigned automatically, and a run_id is assigned to and tracked for each execution. Artifact files are stored in a directory under DBFS.
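As a reference, here is a minimal sketch (not from the original article) of how a run logged from a Databricks notebook receives its experiment ID and run_id, and where its artifacts go. The parameter, metric, and file names are hypothetical.

```python
import mlflow

# In a Databricks notebook, runs are attached to the notebook's experiment
# automatically; start_run() creates a new run_id for each execution.
with mlflow.start_run() as run:
    mlflow.log_param("max_depth", 5)        # hypothetical parameter
    mlflow.log_metric("rmse", 0.42)         # hypothetical metric
    mlflow.log_artifact("result_plot.png")  # hypothetical local file saved as an artifact

    print(run.info.experiment_id)    # experiment ID assigned to the notebook
    print(run.info.run_id)           # unique ID for this execution
    print(mlflow.get_artifact_uri()) # artifact location, a DBFS path by default on Databricks
```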
From the notebook, click the area outlined in red in the figure below, to the right of "Runs".
A list of runs is displayed.
Select the runs you want to compare and click "Compare".
You can compare parameters and metrics. Clicking a run ID takes you to that run's individual page, shown above.
Scroll down to visualize and compare the parameters and metrics.
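The same comparison can also be done programmatically with the MLflow API instead of the UI. Below is a minimal sketch, assuming you know the experiment ID; the parameter and metric column names are hypothetical and depend on what was logged.

```python
import mlflow

# search_runs returns a pandas DataFrame with one row per run,
# including params.* and metrics.* columns, so runs can be compared in code.
runs = mlflow.search_runs(experiment_ids=["<experiment_id>"])

# Hypothetical columns: sort runs by a logged metric and show the
# parameter that was logged alongside it.
print(
    runs[["run_id", "params.max_depth", "metrics.rmse"]]
    .sort_values("metrics.rmse")
)
```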
This time, we visualized and compared the parameters and metrics of experiments on the Databricks UI. Next time, I would like to write about staging trained models for production.
Previous article: Using MLflow with Databricks ① -Experiment tracking on notebooks-