Grading a project that focuses on investigating the effect of a design choice
Hello,
We are working on a project that investigates the effect of different loss functions on a neural network. Hence, we need to vary the loss functions across experiments (and keep all the other hyperparameters/network architecture the same). Since the choice of hyperparameters does not relate to the project's focus, do we still need to justify our hyperparameter choices (by, say, cross validation / grid search)? Are you looking to grade specific things such as standardization / grid search / cross validation / network architecture choice, or do the grading criteria differ from one project to another?
Many thanks!
Yes, because there will likely be some interaction between the hyperparameters (such as the architecture) and the choice of loss function.
But yes, there is no hard criterion list; we value the scientific contribution of each project individually.
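A minimal sketch of the setup the question describes — swapping only the loss function while pinning every other hyperparameter (seed, init, learning rate, epochs). This is a hypothetical toy example on a linear model with plain NumPy, not part of any course material; the point is just that a fixed seed and shared training loop isolate the loss as the single varying factor:

```python
# Hypothetical sketch: compare loss functions on the same model
# with every other hyperparameter (seed, init, lr, epochs) held fixed.
import numpy as np

def train(loss_grad, lr=0.1, epochs=200, seed=0):
    rng = np.random.default_rng(seed)      # fixed seed -> identical data/init
    X = rng.normal(size=(100, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + 0.1 * rng.normal(size=100)
    w = np.zeros(3)                        # same initialisation for every loss
    for _ in range(epochs):
        r = X @ w - y                      # residuals
        w -= lr * X.T @ loss_grad(r) / len(y)  # gradient step
    return w

# Gradients of two candidate losses with respect to the residual
mse_grad = lambda r: 2 * r        # squared error
mae_grad = lambda r: np.sign(r)   # absolute error (subgradient)

w_mse = train(mse_grad)
w_mae = train(mae_grad)
```

Because everything except `loss_grad` is shared, any difference between `w_mse` and `w_mae` is attributable to the loss — the same logic the answer above appeals to, except that with a neural network the fixed hyperparameters may interact with the loss choice.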