Marking based on the submitted program's performance

by Michael Fairbank -
Number of replies: 10

I am wondering whether I can write test code which directly outputs the mark to give to the student. 

For example, suppose the student submits a program called "PacManAgent", which is a piece of program logic to self-play pacman.  I want the auto-marking code to run their agent through 100 games of pacman using a pacman engine, and then print a single number calculated from their average pacman score over those 100 games.

Can I make that final displayed number become their mark for this Moodle question?

The only way I can currently work out how to do this is to discretise the printed scores into various text strings, e.g. "A, B, C, ..., Z", and write multiple test cases, one to identify each possibility, with a different mark for each test case.

Another use case might be "optimise this function to make it run as fast as possible". Then the marking script runs their code, measures timings, and gives them a score based on that.
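
To make that workaround concrete, the marking script might look something like the sketch below (student_function is a stand-in for the submitted code, and the timing thresholds are invented):

import time

def student_function():      # stand-in for the submitted code
    time.sleep(0.01)

# Time 100 runs and discretise the result into a letter band,
# so that exact-match test cases can key on "A", "B", "C", ...
t0 = time.perf_counter()
for _ in range(100):
    student_function()
elapsed = time.perf_counter() - t0
print("A" if elapsed < 1.0 else "B" if elapsed < 5.0 else "C")

Each letter would then need its own exact-match test case mapping it to a different mark, which is clumsy.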

In reply to Michael Fairbank

Re: Marking based on the submitted program's performance

by Richard Lobb -

It sounds like what you want is a template grader, where the template code not only runs the student submission (if that's what's required) but also does the grading.

Check out the sections on Grading with templates and Template grader examples in the documentation.

In reply to Richard Lobb

Re: Marking based on the submitted program's performance

by Michael Fairbank -

Thank you, Richard, for a great product.

I tried that today and it worked initially, but now it's mysteriously stopped working.

For my entire test-case 1 script I have

import json
print(json.dumps({'fraction': 0.25, 'message':"blah"}))

This gives the error message:

Error in question
Exception: Unknown field name (message) in combinator grader output in /var/www/moodle/question/type/coderunner/classes/jobrunner.php:288
For the result-heading fields and customisation settings, I have what is shown in the attachment customisation_output.png.

So I'm not sure why it's complaining about the "comment" field, because it exists in the result columns.

And when I remove the message field from the JSON output, i.e. use print(json.dumps({'fraction': 0.25})), I get no message at all shown to the user in question preview, just a wide pink bar where the results table should appear (see attachment preview_output.png).

I'm not sure which plugin version I'm using, but it's fairly recent (<4 months old).

Attachment customisation_output.png
Attachment preview_output.png
In reply to Michael Fairbank

Re: Marking based on the submitted program's performance

by Michael Fairbank -

Sorry, I got the description slightly wrong above.

When I use the code

import json
print(json.dumps({'fraction': 0.25, 'comment': "Information"}))

The error message is

Exception: Unknown field name (comment) in combinator grader output in /var/www/moodle/question/type/coderunner/classes/jobrunner.php:288

This is despite the result_columns field being [["Expected", "expected"], ["Got", "got"], ["Comment", "comment"], ["Mark", "awarded"]], so the error doesn't seem right?

Thank you again.




In reply to Michael Fairbank

Re: Marking based on the submitted program's performance

by Richard Lobb -

There are two different types of template grader: per-test template graders and combinator template graders. The 'is-combinator' checkbox selects which type. You have apparently selected a combinator template grader. As per the documentation:

"In this mode, the JSON string output by the template grader should again contain a 'fraction' field, this time for the total mark, and may contain zero or more of 'prologuehtml', 'testresults', 'epiloguehtml' 'showoutputonly' and 'showdifferences' attributes. The 'prologuehtml' and 'epiloguehtml' fields are html that is displayed respectively before and after the (optional) result table. The 'testresults' field, if given, is a list of lists used to display some sort of result table. The first row is the column-header row and all other rows define the table body. Two special column header values exist: 'iscorrect' and 'ishidden'. The 'iscorrect' column(s) are used to display ticks or crosses for 1 or 0 row values respectively. The 'ishidden' column isn't actually displayed but 0 or 1 values in the column can be used to turn on and off row visibility. Students do not see hidden rows but markers and other staff do."

The error you're getting is because you have a JSON object field (attribute) that is not one of the allowed set: fraction, prologuehtml, testresults, epiloguehtml, showoutputonly and showdifferences.

The result_columns field is ignored for combinator template graders, because the result table is fully described within the output.
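
For example, the smallest useful combinator grader output would be something along these lines (a sketch only):

import json
# Only the documented field names are accepted; extra feedback
# goes in 'prologuehtml' or 'epiloguehtml', not 'comment'/'message'.
print(json.dumps({'fraction': 0.25,
                  'epiloguehtml': '<p>blah</p>'}))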

With a per-test-case template grader, however, the template runs and grades just a single test case and then:

"... the output from the program must be a JSON string that defines one row of the test results table. [Remember that per-test templates are expanded and run once for each test case.] The JSON object must contain at least a 'fraction' field, which is multiplied by TEST.mark to decide how many marks the test case is awarded. It should usually also contain a 'got' field, which is the value displayed in the 'Got' column of the results table. The other columns of the results table (testcode, stdin, expected) can, if desired, also be defined by the template grader and will then be used instead of the values from the test case. As an example, if the output of the program is the string

{"fraction":0.5, "got": "Half the answers were right!"}

half marks would be given for that particular test case and the 'Got' column would display the text "Half the answers were right!".

For even more flexibility the result_columns field in the question editing form can be used to customise the display of the test case in the result table. That field allows the author to define an arbitrary number of arbitrarily named result-table columns and to specify using printf style formatting how the attributes of the grading output object should be formatted into those columns. For more details see the section on result-table customisation."

I'm not sure which type of grader you wish to use, but your JSON output appears more consistent with the per-test-case type. Perhaps just uncheck the Is Combinator checkbox for a start?
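
Purely as an illustrative sketch (untested, and assuming the usual Twig placeholders such as {{ STUDENT_ANSWER }} and the py escaper described in the documentation), a per-test template grader might look like:

{{ STUDENT_ANSWER }}

import json

# Grade this single test case: run the test expression against the
# student's definitions above and compare with the expected output.
# (Assumes TEST.testcode is a single Python expression.)
expected = """{{ TEST.expected | e('py') }}""".strip()
got = str(eval("""{{ TEST.testcode | e('py') }}""")).strip()
print(json.dumps({'fraction': 1.0 if got == expected else 0.0,
                  'got': got}))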

In reply to Richard Lobb

Re: Marking based on the submitted program's performance

by Michael Fairbank -

Hi Richard, thank you for your help.  I'm still struggling with this.  No worries though - this is just an "it would be cool to have" piece of extra functionality, and I'm already getting good use out of the standard "exact match" marking mode for 99% of questions.

It would be great if you could make a fourth CodeRunner YouTube video on how to do template-grader questions.

The problems I'm having currently are:

1. If I use the is_combinator option and have a simple test case of just

import json
print(json.dumps({"fraction": 0.5}))

then when I preview the question I cannot see a mark of 0.5 anywhere. In fact I see nothing at all - just a pink bar (see "pink_bar_is_combinator.png").

2. If I don't use "is_combinator", then I find my test script code doesn't seem to be run by the template at all. See "non_combinator.png".

3. I've found that if I use non-combinator mode and customise the template to add my own marking code there (instead of in the test cases), then my marking code does run. E.g. I can just append

import json
print(json.dumps({"fraction": 1}))

to the end of the template and the student always scores 100%, but then test case 1's "expected" and "got" messages come up and don't make much sense. But then why are the test cases there at all in non-combinator mode?

Two suggestions for future versions: perhaps when the pink-bar error happens, an error message explaining it could appear? And when is_combinator is on, perhaps the result_columns field could be greyed out, since it has no use then?

Thank you again.

Michael


Attachment non_combinator_output.png
Attachment pink_bar_is_combinator.png
In reply to Michael Fairbank

Re: Marking based on the submitted program's performance

by Matthew Toohey -


Hi Michael

Re point 1: Moodle actually does not seem to show the marks for a question in preview mode (this is true even for regular CodeRunner questions, but you tend not to notice because you get the nice results table). If the question were in an actual quiz, you would see the mark. However, what you probably want to do here is output the mark to the student in a very clear way. Fortunately, with the combinator template grader you get to control the exact output the student will see! Say you have a way of marking student code that produces a mark out of 100. Then you might like to try something like the following.


import json

# Have some code here to compute the student's mark.
student_mark = 78

output = {
    'fraction': student_mark/100,
    'prologuehtml': f"<h2> Your code received {student_mark}/100 marks</h2>"
}

print(json.dumps(output))

This would output something like the following

[Image: banner reading "Your code received 78/100 marks"]

and with the fraction set to 0.78 they would get 78% of the marks assigned to that question in the quiz.

Unfortunately we can't do anything about the red banner. As far as I am aware, CodeRunner will always display a red background if the 'fraction' for the question is less than 1; otherwise it will be green (see below). This is something I would like to see fixed, because in cases such as this a mark of 78/100 might be quite good, and it seems a bit harsh to highlight it in red.

[Image: full-marks output with a green background]

Ultimately you can include any HTML you like in the 'prologuehtml' field to format the display however you like, from the simple output I have shown above to very complicated output with more feedback or a breakdown of the marks the student is receiving.

You can also include a customised version of the table CodeRunner would normally output using the 'testresults' field. For example, in one of my questions, which consists of multiple parts, I use the template grader to output a table that looks as follows.

[Image: example results table]

In this case the 'testresults' field is set to a 2D list representing the table, which looks as follows (note that the iscorrect column header is a special header recognised by CodeRunner to display either ticks or crosses in that column).

[
    ["Part #", "Outcome", "iscorrect"],
    ["Part 1", "Correct", True],
    ["Part 2", "Correct", True],
    ["Part 3", "Incorrect", False],
    ["Part 4", "Correct", True],
]
You could put whatever you liked in this table.
If you have set the 'prologuehtml' field (as I did in my earlier example) then it will come before the table. If you want some HTML/text after the table, you can use the 'epiloguehtml' field.
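
Putting those pieces together, the final grader output might be built like this (a sketch; the parts and marks are made up):

import json

# Sketch only: the parts and marks here are invented.
output = {
    'fraction': 0.75,
    'prologuehtml': "<h2>Your code received 75/100 marks</h2>",
    'testresults': [
        ["Part #", "Outcome", "iscorrect"],
        ["Part 1", "Correct", True],
        ["Part 2", "Correct", True],
        ["Part 3", "Incorrect", False],
        ["Part 4", "Correct", True],
    ],
    'epiloguehtml': "<p>See the breakdown above.</p>",
}
print(json.dumps(output))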


Hope this is helpful



Cheers,

Matthew



In reply to Matthew Toohey

Re: Marking based on the submitted program's performance

by Michael Fairbank -

Thank you, Matthew.  That works really well for me.

So my solution was to use a template grader, with "is combinator" checked, and with test case 1's script incorporating the code you gave me, Matthew.

Should we add this example to the CodeRunner user guide?  Let me know if you'd like me to have a stab at it.
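
For anyone finding this thread later, the heart of my test-case-1 script now looks roughly like the sketch below (run_game stands in for my pacman engine, and the scaling constant is arbitrary):

import json
import random

def run_game():               # stand-in for one game in my pacman engine
    return random.randint(0, 1000)

scores = [run_game() for _ in range(100)]
average = sum(scores) / len(scores)
student_mark = min(100, round(100 * average / 800))  # 800+ -> full marks
print(json.dumps({
    'fraction': student_mark / 100,
    'prologuehtml': f"<h2>Average score {average:.1f} over 100 games: "
                    f"{student_mark}/100 marks</h2>",
}))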

In reply to Michael Fairbank

Re: Marking based on the submitted program's performance

by Matthew Toohey -

Hi Michael

That's good to hear!

Just an FYI: generally, if I were to write a question like this, I would replace the template code completely with the code I showed you (and not use the test case boxes for anything). I've attached an export of an example question below.



Matthew

In reply to Matthew Toohey

Re: Marking based on the submitted program's performance

by Richard Lobb -

Hi Michael

Sorry this has proved so difficult. I clearly need to put some effort into improving the documentation on template graders. I've been meaning to make some videos on the subject for quite some time; your gentle nudge in that direction should help to get it done, though it probably won't happen until the end of this academic year (end of next month, more or less).

Happily, Matthew Toohey has come to your rescue (thanks Matthew) and it seems like you're making progress again. I do encourage you to keep exploring the possibilities. Virtually all our question types now use combinator template graders; they're hard to get your head around at first but they give you the ultimate in flexibility, with complete control over the execution and grading of a student's submission and the feedback that you provide. If you want to see just how insanely complex such question types can become, and also how powerful, check out our python3_stage1 question type on github.

Keep having fun 🙂

Richard

In reply to Richard Lobb

Re: Marking based on the submitted program's performance

by Michael Fairbank -
Thank you both.  I've got results tables displaying now in template-grader with isCombinator!
I believe use of CodeRunner will help our students a lot.  Thank you!
Best wishes,
Michael