I'm just getting started with CodeRunner, and I liked the idea of using some randomization in questions (as described in the docs here).
However, it feels somewhat limited in terms of what I can randomize, since I need to be able to express the "expected output" as a Twig expression.
As a simple example, consider the prompt "Write a program that prints 'hello' repeatedly, {{NUM}} times" where {{NUM}} is a randomized parameter (say between 6 and 10).
For the solution code, I can fill in:
for i in range({{NUM}}): print('hello')
However, for the expected output, I believe I would need to use a {% for ... %} loop. While that seems possible in this case (see the fragment below), the Twig templating language has its limitations, and it feels cumbersome to have to rewrite a generalized solution to the problem in both Python AND Twig.
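For this particular example, I think the expected-output field could be something like the following Twig fragment (untested, and assuming the default {% %} delimiters; Twig's .. range operator is inclusive):

{% for i in 1..NUM %}hello
{% endfor %}

But that trick only works because the output here has such a regular structure; for less regular outputs I don't see how to avoid re-solving the problem in Twig.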
What I would like to happen is for the output from each student's code to be compared with the output from the (templated) instructor solution that I wrote in Python, for each of the specified test cases... but without my having to write anything in the "expected output" fields.
I believe this should be possible in CodeRunner by using a sufficiently advanced template grader. Suggestions for how to accomplish this would be welcome!
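To make the idea concrete, here is a rough sketch of what I imagine the per-test template might look like with "Template grader" enabled (untested; my understanding from the docs is that a template grader must print a JSON record with at least a "fraction" field, and that STUDENT_ANSWER, TEST.stdin and the e('py') escaper are available in the Twig context; the file names, the embedded reference solution and the rstrip() comparison are just my assumptions):

import json
import subprocess

# Reference solution: Twig expands {{ NUM }} before this grader runs.
reference_source = """
for i in range({{ NUM }}):
    print('hello')
"""

# Student answer, escaped by Twig's e('py') so it sits safely in a string literal.
student_source = """{{ STUDENT_ANSWER | e('py') }}"""

def output_of(source, filename):
    """Write the program to disk, run it on the test's stdin, return its stdout."""
    with open(filename, 'w') as f:
        f.write(source)
    result = subprocess.run(
        ['python3', filename],
        input="""{{ TEST.stdin | e('py') }}""",
        capture_output=True, text=True, timeout=5)
    return result.stdout

expected = output_of(reference_source, 'reference.py')
got = output_of(student_source, 'student.py')

# CodeRunner reads this JSON record to grade the test case.
print(json.dumps({
    'fraction': 1.0 if got.rstrip() == expected.rstrip() else 0.0,
    'expected': expected,
    'got': got,
}))

If that works the way I hope, the "expected output" column could be left blank for every test case, and only the stdin (or test code) would need to vary between cases.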
Also, I am curious whether other question authors agree that a grading capability like this would make writing randomized questions much quicker and easier. If so, perhaps someone could create a new prototype question type that provides it out-of-the-box?