python3_w_input - checking students' answers

by Hilla Moshieff
Number of replies: 2

Hello,

When I check students' answers to python3_w_input-type questions, I find that the students must write their input prompts and print instructions exactly as I expect them; otherwise their answers are treated as mistakes. For instance, if they write their prompt as "Your number pls" and I check for "Your number please", their answer is treated as a mistake.

I know this will be much less of a problem once they start writing functions, but at the beginning of the course I need them to write small programs.

Can you please suggest a solution to this?

Thank you very much!

Hilla Moshieff  

In reply to Hilla Moshieff

Re: python3_w_input - checking students' answers

by Dr. Frank Diegmüller
Hi,
Is there a solution for this behaviour right now,
after 2 years?
Thanks
Frank
In reply to Dr. Frank Diegmüller

Re: python3_w_input - checking students' answers

by Richard Lobb
Is there a solution? That depends on exactly what you think the problem is, and on how you think it should be solved.

CodeRunner is best regarded as a framework for writing questions that can be graded by a computer program. It comes with a small set of very basic question types in various languages, but this is just the starting point. Most teachers find they need to customise the existing questions to suit their needs or, more generally, to write their own question types.

My own personal view of your perceived problem is that if you ask a student to prompt for input with some string, and they use some other string, then their answer is wrong and they should get zero marks. Of course, it needs to be made clear exactly what is wrong, so they can fix it and get the marks. The Show differences button helps with this. And of course you can adjust the penalty regime to be as generous as you like, including setting it to 0 so there are no penalties for wrong submissions.

But I accept I'm a university teacher and life at the coalface in primary or secondary schools may be different. Even some of my tertiary colleagues don't agree with my stance. And certainly you wish to avoid annoying students more than necessary, although programming is fundamentally a precise, pedantic and frequently annoying process no matter what you do.

If you don't like my stance, then you'll have to define exactly what you will accept as right answers, and customise or write your own question types to implement what you want. For example, if you tell the student to prompt with "What is your age?" and they prompt with "How old are you?", are you happy? How about if they prompt with "What is your name?" (when meaning to ask for your age), or "Get knotted, teacher"? Still good? If you're happy with any of those answers, then you presumably don't care what the prompt is, in which case why are you even asking them for a prompt? Alternatively, if you want the prompt to be "sensible and appropriate", then you have an Artificial Intelligence problem that even Alexa or Google Assistant will struggle with.

Philosophical rant aside, here are some ideas you might wish to explore:

  1. Stick with the existing python3_w_input question type but use very simple one-word prompts to minimise the scope for error. For example, rather than "What is your age in years?", require the prompt "Age? ". Emphasise to students that they must use the prompt they're given. Getting students to read the spec and implement exactly what is asked for is surely useful training?

  2. If you really don't care what prompt students use, customise the question by clicking the Customise checkbox. Change the rewritten version of the input function in the question template to something like:
    def input(prompt=''):
        # __saved_input__ is the original input function saved by the question
        # template. The student's prompt argument is discarded in favour of a
        # fixed string, and the line read is echoed so it appears in the output.
        s = __saved_input__("<prompt ignored> ")
        print(s)
        return s
    

    The check for the output exactly matching your expected output (which of course should be edited to contain the same "prompt ignored" string) will now succeed regardless of what prompt the student uses; a worked example follows this list.

  3. Use a regular-expression grader rather than an exact match grader. For example, if any prompt containing the word 'age' or 'Age' will suffice, the expected line might be something like '.*[aA]ge.* 13' (where the standard input for the test was the string '13'); a small illustration follows this list. This lets you define exactly what you will accept, but it has the huge disadvantage that the students probably won't understand the regular expression and so won't be able to fix their wrong answers. It could be useful in a classroom context, however, if you're on hand to explain what to do. Or you could hide the 'expected' column altogether (see the documentation).

  4. Write a customised template grader that defines exactly what you will accept (and perhaps even awards part marks for near misses); a rough sketch follows this list. See the documentation. This is the best solution in the long run, but it does require a high level of familiarity with CodeRunner.

  5. If you've adopted approaches 2, 3 or 4, consider saving the modified question as a new question type. Again, see the documentation.
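To make option 2 concrete, here is a hypothetical test (the prompt text and the doubling task are just illustrations). Suppose the test supplies 13 on standard input and the student's program reads a number and prints its double:

    # Any prompt text works here; it is replaced by "<prompt ignored> "
    age = int(input("How old are u? "))
    print(age * 2)

With the customised input function above, the output, and hence the expected output you would set for the test, would be something like

    <prompt ignored> 13
    26

regardless of the prompt string the student chose.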
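To illustrate option 3, here is a plain-Python sketch (not part of CodeRunner itself; the grader's exact matching semantics are described in the documentation) of which outputs the example expression '.*[aA]ge.* 13' would accept:

    import re

    pattern = r'.*[aA]ge.* 13'       # the 'expected' line from option 3
    outputs = [
        "What is your age? 13",      # accepted: contains 'age'
        "Age? 13",                    # accepted: contains 'Age'
        "Your number please 13",     # rejected: no 'age' or 'Age'
    ]
    for out in outputs:
        print(repr(out), '->', bool(re.match(pattern, out)))

Running it prints True for the first two outputs and False for the third.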
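And for a flavour of option 4, here is a rough sketch of a per-test template grader. The STUDENT_ANSWER and TEST template variables and the e('py') escape filter are standard CodeRunner template machinery, but the grading policy (compare only the numbers in the output, so the prompt wording is irrelevant) is purely illustrative and assumes the test's expected field contains just the numeric results:

    """ Illustrative per-test template grader: run the student's program with
        the test's standard input, then award full marks if the numbers in its
        output match the numbers in the expected output, ignoring all prompt
        text. A template grader must print a JSON record with at least a
        'fraction' field.
    """
    import json
    import re
    import subprocess

    student_answer = """{{ STUDENT_ANSWER | e('py') }}"""
    expected = """{{ TEST.expected | e('py') }}"""
    stdin_data = """{{ TEST.stdin | e('py') }}"""

    with open('prog.py', 'w') as outfile:
        outfile.write(student_answer)

    output = subprocess.run(['python3', 'prog.py'], input=stdin_data,
                            capture_output=True, text=True).stdout

    # Compare numeric tokens only, so whatever prompt the student used is ignored
    correct = re.findall(r'-?\d+', output) == re.findall(r'-?\d+', expected)

    print(json.dumps({'fraction': 1.0 if correct else 0.0, 'got': output}))

Anything more nuanced, such as part marks for a near-miss prompt, would replace the one-line comparison.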

To summarise: you're in the driver's seat here. Implement what you want.