CodeRunner is best regarded as a framework for writing questions that can be graded by a computer program. It comes with a small set of very basic question types in various languages, but these are just a starting point: most teachers find they need to customise the existing question types to suit their needs or, more generally, to write their own question types.
My own view of the problem you describe is that if you ask a student to prompt for input with a particular string, and they use some other string, then their answer is wrong and they should get zero marks. Of course, it needs to be made clear exactly what is wrong so they can fix it and get the marks; the Show differences button helps with this. And you can adjust the penalty regime to be as generous as you like, including setting it to 0 so there are no penalties for wrong submissions. That said, if you do want to be more flexible, here are some options:
1. Stick with the existing python3_w_input question type but use very simple one-word prompts to minimise the scope for error. For example, rather than "What is your age in years?", require the prompt "Age? ". Emphasise to students that they must use the prompt they're given. Getting students to read the spec and implement exactly what is asked for is surely useful training?
2. If you really don't care what prompt students use, customise the question by clicking the Customise checkbox and change the rewritten version of the input function in the question template to something like:

        def input(prompt=''):
            s = __saved_input__("<prompt ignored> ")
            print(s)
            return s

   The check for the output exactly matching your expected output (which of course should be edited to contain the same "<prompt ignored>" string) will now succeed regardless of what prompt the student uses. For example, if a test's standard input is 13, the echoed input line in the expected output becomes "<prompt ignored> 13".
3. Use a regular-expression grader rather than the exact-match grader. For example, if any prompt containing the word 'age' or 'Age' will suffice, the expected line might be something like '.*[aA]ge.* 13' (where the standard input for the test was the string '13'); a small illustration of what such a pattern accepts follows this list. This lets you define exactly what you will accept, but it has the huge disadvantage that students probably won't understand the regular expression and so won't be able to fix their wrong answers. It could be useful in a classroom context, however, if you're on hand to explain what to do. Or you could hide the 'expected' column altogether (see the documentation).
4. Write a customised template grader that defines exactly what you will accept (and perhaps even awards part marks for near misses); a rough sketch follows this list. See the documentation. This is the best solution in the long run, but it does require a high level of familiarity with CodeRunner.
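
To give a feel for option 3, here's a small, purely illustrative snippet (using Python's re module; it isn't anything you add to the question) showing roughly which output lines a pattern like '.*[aA]ge.* 13' would accept. CodeRunner's regular expression grader does the real matching, and its exact rules are described in the documentation.

    # Illustrative only: a rough model of what the example pattern accepts.
    # The grader's own matching rules (anchoring, multi-line handling) are
    # described in the documentation.
    import re

    pattern = r'.*[aA]ge.* 13'

    candidate_outputs = [
        "Age? 13",                         # the suggested one-word prompt
        "What is your age in years? 13",   # wordier prompt, still accepted
        "Enter a number: 13",              # no 'age'/'Age', so rejected
    ]

    for line in candidate_outputs:
        verdict = "match" if re.search(pattern, line) else "no match"
        print(f"{verdict:9s}{line!r}")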
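
For option 4, here's a rough sketch of what such a per-test template grader might look like for a Python question. Treat it as a starting point only: the Twig variables, the e('py') escaper and the JSON result format are as I understand them from the template-grader documentation, and it assumes each test's 'expected' field holds just the required final line of output, with no prompt text.

    # Hypothetical per-test template-grader sketch (not from the CodeRunner
    # distribution). Assumptions: the Twig variables STUDENT_ANSWER,
    # TEST.stdin and TEST.expected; the e('py') escaper; the documented JSON
    # result format with 'fraction' and 'got' fields; and an 'expected' field
    # containing only the required final output line, with no prompt text.
    # Check the template-grader documentation before relying on any of this.
    import json
    import subprocess

    student_answer = """{{ STUDENT_ANSWER | e('py') }}"""
    test_stdin = """{{ TEST.stdin | e('py') }}"""
    expected = """{{ TEST.expected | e('py') }}"""

    # Run the student's program with the test's standard input.
    with open('prog.py', 'w') as outfile:
        outfile.write(student_answer)
    try:
        output = subprocess.run(
            ['python3', 'prog.py'],
            input=test_stdin, capture_output=True, text=True, timeout=5,
        ).stdout
    except subprocess.TimeoutExpired:
        output = ''

    # The grading rule: ignore whatever prompt was used and require only that
    # the output ends with the expected final line. Near misses get half marks.
    expected_last = expected.strip().splitlines()[-1] if expected.strip() else ''
    if expected_last and output.rstrip().endswith(expected_last):
        mark = 1.0
    elif expected_last and expected_last in output:
        mark = 0.5  # right line present, but followed by extra output
    else:
        mark = 0.0

    print(json.dumps({'fraction': mark, 'got': output}))

The grading rule near the end is where you define exactly what you'll accept; tighten it, loosen it, or change the part-mark policy to suit your course.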
If you've adopted approaches 2, 3 or 4, consider saving the modified question as a new question type. Again, see the documentation.
To summarise: you're in the driver's seat here. Implement what you want.