After some trial and error: in sklearn the target variable can take values such as [-1, 0, 1], but in Chainer it should be [0, 1, 2]. The loss-function calculation presumably goes wrong otherwise.
Samples are often written like this:
class ChainerClassifier(BaseChainerEstimator, base.ClassifierMixin):
    def predict(self, x_data):
        # argmax returns the index of the largest value in each row of the
        # matrix, so the predicted classes are necessarily 0, 1, 2, ...
        return BaseChainerEstimator.predict(self, x_data).argmax(1)
Note that, as the comment says, argmax returns an index, so it would make no sense for the target variable to start at -1.
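To make the point concrete, here is a small sketch (the score matrix is made up for illustration) showing why an argmax-based predict can only ever emit labels 0, 1, 2, ...:

```python
import numpy as np

# Hypothetical network output for two samples of a 3-class problem:
# each row holds one score per class.
scores = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1]])

# argmax(1) returns the column index of the largest score in each row,
# so predictions are always drawn from {0, 1, 2} for a 3-unit output.
pred = scores.argmax(1)
print(pred)  # [1 0]

# Had the training labels been [-1, 0, 1], the index -1 could never be
# produced here, so comparisons against the true labels would be wrong.
```

This is why relabeling the targets to [0, 1, 2] before training is the safe choice.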
With max pooling, the pooling window may stick out past the edge of the input. In other words, there is no problem if the number of rows or columns is odd while the pooling size is even, or vice versa.
With average pooling, however, the number of rows and columns must be a multiple of the pooling size. Presumably the max of a partial window can still be computed, while the average of a partial window cannot, at least not without changing its meaning.
In addition, max pooling is slower.
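The difference can be sketched in one dimension with plain NumPy (this is my own illustration, not Chainer's implementation; the function names `max_pool_1d` and `avg_pool_1d` are made up):

```python
import numpy as np

def max_pool_1d(x, size):
    # Max pooling tolerates a partial final window ("sticking out"):
    # the max of fewer elements is still well defined.
    return np.array([x[i:i + size].max() for i in range(0, len(x), size)])

def avg_pool_1d(x, size):
    # Average pooling assumes every window is full: averaging a partial
    # window over fewer elements would change its meaning, so we require
    # the length to be a multiple of the pooling size.
    assert len(x) % size == 0, "length must be a multiple of the pooling size"
    return x.reshape(-1, size).mean(axis=1)

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # length 5, not divisible by 2
print(max_pool_1d(x, 2))  # [3. 4. 5.] -- the last window holds one element
# avg_pool_1d(x, 2) would raise an AssertionError
```

In Chainer this corresponds to max pooling accepting windows that extend past the input edge, while average pooling does not.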