
Should we provide Jupyter notebooks in the submission?

Hello,

  • Should we also include the code and Jupyter notebooks where we worked on multiple methods until we found the best one?
  • In run.py, how should we proceed to recreate our results if, for example, we worked with ridge regression and found the best weights?
  • Should we manually save the weights and use them to reproduce our predictions?

That ties back to the first question, as run.py alone doesn't show at all how we coded -> the Jupyter notebooks really show how much work we did, step by step.

  • So should we copy some of the code and steps from the Jupyter notebooks into Python files and reference them in the report, if you just want Python files?
  • Or in general, how will you evaluate the code? Will you check whether we have multiple Jupyter notebooks for different methods?


@theophane_boris_victor_emile_b said:
Hello,

  • Should we also include the code and Jupyter notebooks where we worked on multiple methods until we found the best one?

Not strictly necessary, but encouraged: it helps us understand your work better and supports reproducibility.

  • In run.py, how should we proceed to recreate our results if, for example, we worked with ridge regression and found the best weights?
  • Should we manually save the weights and use them to reproduce our predictions?

run.py should be able to train the model and find the weights. But if your model takes a long time to train (say, more than 10 minutes), you can provide an option to use saved weights, since that makes it easier for us to quickly check your results. However, run.py must still be able to train the model, and we should be able to verify this if we decide to. Provide instructions for both options (training from scratch and using saved weights) in the README.
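
A minimal sketch of what such a run.py could look like, assuming NumPy and ridge regression as the final model; the file names, the --use-saved-weights flag, and the load_data helper are illustrative assumptions, not an official specification:

```python
# run.py -- illustrative sketch, not the official template
import argparse
import numpy as np

from implementations import ridge_regression  # assumed project module


def load_data():
    """Hypothetical helper: load training labels y and features tx."""
    data = np.load("data/train.npz")  # illustrative path
    return data["y"], data["tx"]


def main(use_saved_weights):
    y, tx = load_data()
    if use_saved_weights:
        # Fast path: reuse weights cached from an earlier training run.
        w = np.load("weights.npy")
    else:
        # Full path: retrain the final model from scratch.
        w, _loss = ridge_regression(y, tx, lambda_=1e-5)
        np.save("weights.npy", w)  # cache for the fast path
    predictions = tx @ w
    # ... turn predictions into your AICrowd submission file here ...


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--use-saved-weights", action="store_true",
                        help="skip training and load weights.npy instead")
    args = parser.parse_args()
    main(args.use_saved_weights)
```

Documenting both invocations (python run.py and python run.py --use-saved-weights) in the README then covers the two options mentioned above.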

That ties back to the first question, as run.py alone doesn't show at all how we coded -> the Jupyter notebooks really show how much work we did, step by step.

See above: training your best model should be part of run.py.

  • So should we copy some of the code and steps from the Jupyter notebooks into Python files and reference them in the report, if you just want Python files?
  • Or in general, how will you evaluate the code? Will you check whether we have multiple Jupyter notebooks for different methods?

To check your code, we will definitely do the following two things:

  1. Automatically grade the functions in implementations.py (a hedged example is sketched after this list)

  2. Run the script run.py according to the instructions in the README to see whether it reproduces your AICrowd submission score, and check whether it can train your best model.
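
As a concrete illustration of point 1, here is a hedged sketch of one function that implementations.py might contain, assuming the common (w, loss) return convention and a 2·N·lambda scaling of the regularizer; the exact required signatures are whatever the project description specifies:

```python
import numpy as np


def ridge_regression(y, tx, lambda_):
    """Closed-form ridge regression (sketch; conventions are assumptions).

    Solves (tx.T @ tx + 2*N*lambda_*I) w = tx.T @ y and returns the
    weights together with the mean-squared-error loss.
    """
    n, d = tx.shape
    a = tx.T @ tx + 2 * n * lambda_ * np.eye(d)
    b = tx.T @ y
    w = np.linalg.solve(a, b)
    e = y - tx @ w
    loss = (e @ e) / (2 * n)
    return w, loss
```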

You can submit additional Jupyter notebooks to help us better understand your work, but we may not have time to go through them.


