

8.1 Per-test-case template grading

When the 'Per-test-case template grader' is selected as the grader, the per-test-case template changes its role to that of a grader for a particular test case. The combinator template is not used; instead the per-test-case template is applied to each test case in turn, and the output of each run is not passed to a separate grader but is taken directly as the grading result for the corresponding row of the results table.

The output from the template-generated program must be a JSON-encoded object (such as a dictionary, in Python) containing at least a 'fraction' field, which is multiplied by TEST.mark to determine how many marks the test case is awarded. It should usually also contain a 'got' field, which is the value displayed in the 'Got' column of the results table. The other columns of the results table (testcode, stdin, expected) can also be defined by the custom grader, in which case they are used instead of the values from the test case. As an example, if the output of the program is the string

{"fraction":0.5, "got": "Half the answers were right!"}

that test case would be awarded half its available marks and the 'Got' column would display the text "Half the answers were right!".
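Since the other result-table columns can be overridden too, a Python grader that wished to supply its own 'Expected' column entry as well might end by printing, say (the field values here are purely illustrative):

import json
print(json.dumps({"fraction": 1.0, "expected": "A correctly sorted list", "got": "[1, 2, 3]"}))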

For even more flexibility, the result_columns field in the question editing form can be used to customise the display of the test case in the result table. That field allows the author to define an arbitrary number of arbitrarily named result-table columns and to specify, using printf-style formatting, how the attributes of the grading output object should be formatted into those columns. For more details see the section on result-table customisation.
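For example, a result_columns value along the following lines replaces the default column set with a custom one. This particular value assumes the grader's JSON output also includes a 'comment' attribute; the exact syntax, including the optional printf-style format specifier in each entry, is covered in the section on result-table customisation:

[["Test", "testcode"], ["Answer", "got", "%s"], ["Comment", "comment"]]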

Writing a grading template that executes the student's code is, however, rather difficult, as the generated program needs to be robust against errors in the submitted code. Whatever happens, the template grader should always print a JSON object to standard output and should not generate any stderr output.
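The following is a minimal sketch of the sort of program such a template might generate, assuming a Python question. The student answer, test code and expected output are hard-coded here for illustration; in a real template they would be inserted via template variables such as {{ STUDENT_ANSWER }} and {{ TEST.testcode }}. The sketch runs the submission in a subprocess so that crashes and timeouts in the student code cannot prevent the grader from printing its JSON result:

import json
import subprocess
import sys

# Hypothetical hard-coded values; a real template would inject these.
student_answer = "def sqr(n):\n    return n * n\n"
test_code = "print(sqr(7))"
expected = "49"

program = student_answer + "\n" + test_code
result = {"fraction": 0.0, "got": ""}
try:
    # Run the combined program in a separate Python process with a time limit.
    outcome = subprocess.run(
        [sys.executable, "-c", program],
        capture_output=True, text=True, timeout=5,
    )
    if outcome.returncode != 0:
        result["got"] = "Runtime error:\n" + outcome.stderr
    else:
        result["got"] = outcome.stdout
        if outcome.stdout.rstrip() == expected:
            result["fraction"] = 1.0
except subprocess.TimeoutExpired:
    result["got"] = "Time limit exceeded"

# Exactly one JSON object on stdout, nothing on stderr.
print(json.dumps(result))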

Sometimes the author of a template grader wishes to abort the testing of the program after a particular test case, usually the first, e.g. when pre-checks on the acceptability of a student submission fail. This can be achieved by defining an extra boolean attribute 'abort' in the output JSON object, giving it the value true. If such an attribute is defined, any supplied fraction value is ignored, the test case is marked wrong (equivalent to fraction = 0) and all further test cases are skipped. For example:

{"fraction":0.0, "got":"Invalid submission!", "abort":true}

Note to Python programmers: the Python literal is True rather than true, so if generating JSON with json.dumps(), you need to write

json.dumps({"fraction":0.0, "got":"Invalid submission!", "abort":True})