[PYTHON] Get an evaluation score for each task in DeepChem's multitask learning

Introduction

When you do multitask learning with DeepChem, the evaluate method of the deepchem.models.Model class returns the average score across tasks. However, it's only natural to want to know how each individual task scored. I looked into how to do this, so here is a note on it.

Environment

Method

All you have to do is pass per_task_metrics=True when calling the evaluate method. The per-task scores are then returned as the second return value.

# per_task_metrics=True makes evaluate() also return the score of each task
validation_score, validation_per_task_score = model.evaluate(validation_set, metrics, transformers, per_task_metrics=True)
print(validation_per_task_score)

For example, if the evaluation metric is roc_auc_score and there are nine tasks, the ROC AUC of each task is returned in the following format.

{'mean-roc_auc_score': array([0.77601105, 0.80917502, 0.85473596, 0.8459161 , 0.73406951,
       0.77492466, 0.65670436, 0.7812783 , 0.80639215])}
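For context, here is a minimal end-to-end sketch. The dataset behind the nine-task output above is not specified in this post, so the sketch assumes the Tox21 dataset (12 classification tasks) with ECFP featurization and a MultitaskClassifier; swap in your own dataset and model as needed.

import numpy as np
import deepchem as dc

# Load the Tox21 multitask dataset (12 binary classification tasks).
tasks, datasets, transformers = dc.molnet.load_tox21(featurizer='ECFP')
train_set, validation_set, test_set = datasets

# A simple multitask model; 1024 is the length of the ECFP fingerprints.
model = dc.models.MultitaskClassifier(n_tasks=len(tasks), n_features=1024, layer_sizes=[1000])
model.fit(train_set, nb_epoch=10)

# Mean ROC AUC across tasks as the aggregate metric.
metrics = [dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)]

# per_task_metrics=True makes evaluate() also return per-task scores.
validation_score, validation_per_task_score = model.evaluate(validation_set, metrics, transformers, per_task_metrics=True)

# The array follows the task order, so pairing it with the task names is handy.
print(dict(zip(tasks, validation_per_task_score['mean-roc_auc_score'])))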

Conclusion

DeepChem is a library that can be hard to get the hang of at first, but with enough trial and error you can usually get it to do what you want.
