Creating my own result columns

by Heli Virtanen -
Number of replies: 1

((As requested by Richard, posting my question also here.))

I just recently started trying out CodeRunner for a Moodle quiz and ran into a problem that I don't know how to solve, or even whether what I want is possible with CodeRunner as it is now.

So, I have (mostly) built a custom template, as our students will not actually write code: they will submit their simulation netlist (text), which is then compared to the given netlist. The template is written in Python 3; student answers are split into lines (strings) and then checked against the given test code (i.e. the actual netlist).

Now, the problem comes when you need to give some kind of feedback. There are a number of ways of doing things in the simulator that I would like to award full marks for while still giving feedback, e.g. "Next time, use [simulation command] instead of [simulation command used]" or something similar. But if I just print this info while running the tests, the exact-match grader will (of course) give 0 points, as the output no longer exactly matches the expected result.

I was thinking of getting around this problem by adding a 'Result columns' item called feedback, but I don't know how to get the template to output this info, which is gathered into a feedback list while the student code is being tested.

I also thought about just updating one of the existing fields ('extra', for example, as it won't otherwise be used), but I can't find info on how to do that either. I tried modifying "an advanced grading-template example" from the documentation, but I didn't manage to make it work.

Any help would be greatly appreciated.

Best regards,
Heli

In reply to Heli Virtanen

Re: Creating my own result columns

by Richard Lobb -

Thanks for posting a great question, Heli.

Wow - you just recently started using CodeRunner and you're already trying template graders?! That's called "jumping in at the deep end"!

Certainly you can achieve what you want, but you do have to use a template grader. Template graders are simple enough in principle - you just have to construct a program whose output is a JSON record describing a line of the result table - but they are a bit tricky to implement when students supply the code.
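For instance (the field names here are illustrative; they must match whatever you put in the 'Result columns' field, as in the example further down), a grader that found three of five lines correct might print just:

{"expected": "...", "got": "...", "comment": "Lines 2 and 4 wrong", "fraction": 0.6}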

The template grader example in the documentation isn't a very good one, particularly not for your needs, which are actually much simpler. [The documented version is not made any easier by the fact that '<' and '>' characters in the input HTML are not being displayed in the version on coderunner.org.nz :-(]

Here's an example of a template grader closer to your needs. I'm assuming any simulation parameters are given to the student, so there's only a single test case, in which the 'expected' field is the expected output. For simplicity I'm assuming that's always exactly 5 lines, but it's trivial to generalise. I'm giving one mark for each correct line.

The 'Result columns' field of the question (in the 'customisation' part of the question authoring form) is

[["Expected", "expected"], ["Got", "got"], ["Comment", "comment"], ["Mark", "fraction"]]

and the template is:

import json

# The student's submission and the expected netlist are injected by the
# Twig template engine; e('py') escapes them as Python string literals.
got = """{{ STUDENT_ANSWER | e('py') }}"""
expected = """{{ TEST.expected | e('py') }}"""
got_lines = got.split('\n')
expected_lines = expected.split('\n')

# Award one mark per correct line, building a per-line comment as we go.
mark = 0
if len(got_lines) != 5:
    comment = "Expected 5 lines, got {}".format(len(got_lines))
else:
    comment = ''
    for i in range(5):
        if got_lines[i] == expected_lines[i]:
            mark += 1
            comment += "Line {} right\n".format(i)
        else:
            comment += "Line {} wrong\n".format(i)

# The grader's entire output must be a single JSON record whose keys
# match the 'Result columns' specification above.
print(json.dumps({'expected': expected, 'got': got,
      'comment': comment, 'fraction': mark / 5}))

The output if the student's answer is correct is then, say:

[Screenshot: simple template grader example, correct output]


and if they make some mistakes:

[Screenshot: simple template grader example, partial marks]

Obviously you can extend this to handle more elaborate marking rubrics.
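In particular, for your "full marks but give advice" case, something along the following lines should work. This is only a sketch under assumptions: the SYNONYMS table and the normalise helper are hypothetical stand-ins for whatever command equivalences your simulator actually accepts.

import json

# Hypothetical table of accepted-but-discouraged commands, mapping each
# synonym to the preferred form. Replace with your simulator's real ones.
SYNONYMS = {'tran': '.tran', 'print': '.print'}

def normalise(line):
    # Swap any discouraged command word for its preferred equivalent.
    return ' '.join(SYNONYMS.get(word, word) for word in line.split())

got = """{{ STUDENT_ANSWER | e('py') }}"""
expected = """{{ TEST.expected | e('py') }}"""
got_lines = got.split('\n')
expected_lines = expected.split('\n')

mark = 0
comment = ''
if len(got_lines) != len(expected_lines):
    comment = "Expected {} lines, got {}".format(
        len(expected_lines), len(got_lines))
else:
    for i in range(len(expected_lines)):
        g, e = got_lines[i], expected_lines[i]
        if g == e:
            mark += 1
        elif normalise(g) == normalise(e):
            # Full marks, but attach the "use this instead" advice.
            mark += 1
            comment += "Line {}: next time use '{}' instead of '{}'\n".format(
                i, normalise(g), g)
        else:
            comment += "Line {} wrong\n".format(i)

print(json.dumps({'expected': expected, 'got': got,
      'comment': comment, 'fraction': mark / len(expected_lines)}))

A functionally acceptable answer then still scores 1.0, with the advice appearing in the Comment column.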

Hope that helps.

Enjoy

Richard