Neural network accuracy
Hello,
I am working on Project 2 (text classification) and trying to train a neural network. During training, the training accuracy keeps increasing while the validation accuracy stays around the same value (~0.67). In the first epoch the two are similar, around 0.66, but the training accuracy then climbs to 0.9 while the validation accuracy keeps fluctuating around 0.67. I don't understand why. I tried adding a kernel constraint to avoid overfitting, but it doesn't seem to help. Do you have any idea how I could fix this?
It's normal for the training accuracy to keep increasing while the test/validation accuracy saturates at some level: the model is simply overfitting the training set, so I don't think anything strange is at play here.
To improve the validation accuracy, various things can be tried: a different architecture (I don't have details on yours), regularization such as dropout, better input features, transfer learning, etc.
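To make the dropout suggestion concrete, here is a minimal NumPy sketch of inverted dropout, the mechanism behind a `Dropout` layer (in Keras you would just insert `keras.layers.Dropout(rate)` between layers; the function and variable names below are illustrative, not from your project):

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale survivors by 1/(1-rate) so the
    expected activation is unchanged. At inference, pass x through."""
    if not training or rate == 0.0:
        return x
    keep = 1.0 - rate
    mask = rng.random(x.shape) < keep  # True for units we keep
    return x * mask / keep

rng = np.random.default_rng(0)
x = np.ones((4, 5))          # toy batch of activations
y = dropout(x, rate=0.5, rng=rng)
# surviving entries of y are 2.0 (scaled by 1/keep), dropped ones are 0.0
```

Because dropped units change every batch, the network cannot rely on any single co-adapted feature, which typically narrows the gap between training and validation accuracy.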