As you can see from the code below, the final layer is a Linear layer, and there is no softmax layer. https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py
I thought, "Well, is this okay?"
The answer was all written in the thread below. https://discuss.pytorch.org/t/torchvision-models-dont-have-softmax-layer/18071
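A quick way to confirm this is to print the classifier and check what the forward pass returns. The sketch below (hypothetical snippet, assuming a recent torchvision where `vgg16(pretrained=False)` is available) shows that the last module is `nn.Linear`, so the output is raw logits, not probabilities.

```python
import torch
import torchvision.models as models

# The classifier of VGG16 ends with nn.Linear,
# so the forward pass returns raw logits, not probabilities.
model = models.vgg16(pretrained=False)
print(model.classifier)
# ...
# (6): Linear(in_features=4096, out_features=1000, bias=True)

x = torch.randn(1, 3, 224, 224)
logits = model(x)
print(logits.shape)       # torch.Size([1, 1000])
print(logits.sum(dim=1))  # not 1.0 -- these are not probabilities
```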
During training, nn.CrossEntropyLoss() is used, and since it internally combines nn.LogSoftmax and nn.NLLLoss, a softmax layer in the model is not necessary.
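Here is a minimal sketch of that equivalence: applying nn.CrossEntropyLoss directly to raw logits gives the same value as applying nn.LogSoftmax followed by nn.NLLLoss (the tensor names are made up for illustration).

```python
import torch
import torch.nn as nn

# nn.CrossEntropyLoss on raw logits is equivalent to
# nn.LogSoftmax followed by nn.NLLLoss.
logits = torch.randn(4, 10)          # batch of 4, 10 classes (raw scores)
target = torch.randint(0, 10, (4,))  # ground-truth class indices

ce = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)

print(torch.allclose(ce, nll))  # True
```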
At inference time, if you want the probability of each class you need nn.functional.softmax(), but if you only want the predicted class you can skip softmax and just take the index of the largest value with torch.max() or the like, as in the sketch below.
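A small sketch of the inference side (the `logits` tensor stands in for a model output): softmax is only needed when you want probabilities, because it is monotonic and does not change which class has the largest score.

```python
import torch
import torch.nn.functional as F

# Softmax only if you need class probabilities;
# for the predicted label, argmax over the raw logits is enough.
logits = torch.randn(4, 10)  # e.g. model(x) output for a batch of 4

probs = F.softmax(logits, dim=1)          # per-class probabilities
values, pred = torch.max(logits, dim=1)   # predicted class indices
print(pred)
print(probs.argmax(dim=1).equal(pred))    # True -- same prediction either way
```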
For classification, I had assumed without thinking that the final layer had to be softmax, but when I think about it carefully, it is only needed for the loss calculation, so now it makes sense.