
Interpretation of Project 1 grades

Hi!

We just received the feedback for Project 1, but the way the grades are presented is confusing:

  • For "PDF report score", there is a number N followed by a percentage P: does this mean that the grade is N or that the grade is P*6
  • For "Code quality score", there is a letter (B in our case) and a comment "Adequate (full score)". Does that mean that the grade for the report is something like 4 (with A=6, D=0) or is 6 ("full score") ?

Would it be possible to shed some light on this matter?
Best,
Samuel

Copying TA Aswin's replies here:

For "PDF report score", there is a number N followed by a percentage P: does this mean that the grade is N or that the grade is P*6. For "Code quality score", there is a letter (B in our case) and a comment "Adequate (full score)".

  • Report: graded on 9 levels from 20% to 100% (inclusive).
  • Code: graded on 3 levels from D to B (full score), with A reserved for bonus.

Does that mean that the grade for the code is something like 4 (with A=6, D=0), or is it 6 ("full score")?

No, a grade on the /6 scale is not given for each project individually; it is given only after everything (Project 1, Project 2, and the final exam) has been considered.

This is way clearer now, thank you!

What characteristics made some code submissions receive a bonus?
What could we do in the second project to get a bonus on the code?

What characteristics made some code submissions receive a bonus [for Project 1]?

First, all the basics need to be satisfied for a full score: correctness of the implementation, run.py quality, reproducibility, good documentation, and good coding style.
Second, for the code bonus, exceptional documentation and coding style are required (see below).

What could we do in the second project to get a bonus on the code?

Write exceptional code. Beyond the training/evaluation/testing code being provided and correct (the basic requirement), here are some recommendations (a small run.py sketch follows the list):

  • Very easily reproducible and configurable: configurable input arguments with defaults, setting seeds (as far as possible), (small) pre-trained models and/or automatic downloading of datasets (e.g. from Google Drive), generating the paper figures, ...; for example, see the code part of the Machine Learning Reproducibility Checklist (also useful for the report)
  • Very readable code: appropriate decomposition, for example adhering to the PEP 8 style guide
  • Very good documentation: provide a good README, document the input arguments and return values of most (important) functions (see also PEP 257), and add occasional in-code comments serving as clarifications (e.g. for a long training loop).
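
For concreteness, here is a minimal sketch of what such a run.py could look like; the argument names, defaults, and seed value are hypothetical illustrations, not official project requirements:

```python
"""Minimal illustrative run.py sketch (hypothetical names and defaults)."""

import argparse
import random

import numpy as np


def set_seed(seed):
    """Seed the random number generators for reproducibility."""
    random.seed(seed)
    np.random.seed(seed)


def parse_args():
    """Parse command-line arguments, all with documented defaults."""
    parser = argparse.ArgumentParser(description="Train and evaluate the model.")
    parser.add_argument("--data-dir", default="data/", help="Path to the dataset.")
    parser.add_argument("--seed", type=int, default=42, help="Random seed.")
    parser.add_argument("--epochs", type=int, default=10, help="Number of training epochs.")
    return parser.parse_args()


def main():
    """Entry point: reproduces the reported results with the default arguments."""
    args = parse_args()
    set_seed(args.seed)
    # ... load data from args.data_dir, train for args.epochs, save predictions ...


if __name__ == "__main__":
    main()
```

Together with a short README explaining how to run it, a structure like this covers the configurability, seeding, and documentation points above.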

