Dealing with unbalanced pixelwise classification

Hello, in my project we are working on pixelwise binary classification. We have a small training set, which we have augmented with transformation methods. For the loss function, we are using Binary Cross Entropy (torch.nn.BCEWithLogitsLoss).
In our testing, we could not manage to reach a loss lower than 0.66. We have tried tuning the learning rate, but it did not help. The data is normalized. We also tried tuning the pos_weight argument of BCEWithLogitsLoss. We are using a standard U-Net.
What are some examples of techniques that we could use to improve our results?
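For reference, here is a minimal, self-contained sketch of how pos_weight acts on a per-pixel BCE-with-logits loss (plain Python, mirroring the mean-reduction behavior of torch.nn.BCEWithLogitsLoss; the 9:1 background/foreground ratio is a made-up illustration, not our actual data). Note that a model emitting zero logits everywhere (i.e. predicting 0.5 for every pixel) already scores log 2 ≈ 0.693 with unit weights, which is close to the 0.66 plateau we see:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def weighted_bce_with_logits(logits, targets, pos_weight):
    """Per-pixel BCE with logits, mean reduction.

    Mirrors torch.nn.BCEWithLogitsLoss(pos_weight=...): the positive
    (foreground) term of each pixel's loss is scaled by pos_weight.
    """
    total = 0.0
    for x, y in zip(logits, targets):
        p = sigmoid(x)
        total += -(pos_weight * y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(logits)

# Flattened ground-truth mask: 9 background pixels, 1 foreground pixel
# (hypothetical imbalance ratio for illustration).
targets = [0] * 9 + [1]

# A common heuristic: pos_weight = n_negative / n_positive.
pos_weight = targets.count(0) / targets.count(1)  # 9.0 here

# A degenerate model that outputs logit 0 (probability 0.5) everywhere:
uninformative = [0.0] * len(targets)
baseline = weighted_bce_with_logits(uninformative, targets, 1.0)
# baseline == log(2) ~= 0.693: a loss stuck near this value suggests the
# network is barely improving over a constant 0.5 prediction.

# With pos_weight > 1 the rare foreground pixels contribute more, so the
# same uninformative prediction is penalized harder.
weighted = weighted_bce_with_logits(uninformative, targets, pos_weight)
```

The heuristic of setting pos_weight to the negative/positive pixel ratio makes the expected gradient contribution of the two classes comparable, but it is only one option; computing it per-batch versus over the whole training set can give noticeably different behavior.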