## Question Authors' Forum

### Customizable (per test case) error messages.

by Mark Stern -
Number of replies: 4

I currently have an SQL test where the student supplies a set of DDL commands to create a schema, followed by a set of test cases to check that the student has done it correctly. If the student has made a mistake, he/she will see that there is a mismatch between the output of an (apparently) obscure query and the expected result, which will probably be completely unintelligible to the student. Is there a way to attach a more meaningful error message to each test case that will be displayed in the event of a mismatch?

### Re: Customizable (per test case) error messages.

by Richard Lobb -

For each testcase, the existing SQL question type (click Customise to view it) runs an SQL script consisting of:

1. A prelude that turns on column heads, sets column widths etc.
2. The contents of the extra field from the test case
3. The contents of the testcode field from the test case
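
Roughly speaking, the per-test script is just those three pieces concatenated in order. As a sketch (in Python, with illustrative names and prelude content; the real template uses Twig substitution, not a function like this):

```python
# Illustrative sketch of how each test case's SQL script is assembled.
# The prelude content and field names are made up; the actual CodeRunner
# template substitutes Twig variables such as the testcase's fields.
PRELUDE = ".headers on\n.mode column\n"  # turns on column heads etc.

def build_script(extra: str, testcode: str) -> str:
    """Concatenate prelude, the per-test 'extra' field, then the test code."""
    return PRELUDE + extra.rstrip() + "\n" + testcode.rstrip() + "\n"

script = build_script(
    extra="-- hidden per-test setup goes here",
    testcode="SELECT * FROM floodle;",
)
print(script)
```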

If you have complex test code that is likely to confuse the student then you might want to put it in the extra field where the student doesn't see it. But then you'll probably have to edit the template to put the extra field after the student answer.

With procedural languages in this scenario, I usually use the testcode column just as a comment to explain what the test is doing, e.g.

```
# Checking that floodle has a blahblah
```

I then set the Expected field of the testcase to just OK and define the code in the extra field so that it outputs OK if the test passes or some informative message if not.
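
As a sketch of that pattern (everything here is invented for illustration: `has_blahblah`, `Floodle` and the message text are placeholders, not real question code), the hidden extra-field code might look like:

```python
# Hypothetical hidden test code, placed in the 'extra' field so the student
# never sees it. The student sees only the comment in testcode, and the
# Expected field of the testcase is just "OK".
def has_blahblah(obj) -> bool:
    """Placeholder predicate for whatever property the test checks."""
    return getattr(obj, "blahblah", None) is not None

class Floodle:            # stand-in for something the student's code built
    blahblah = 42

floodle = Floodle()
if has_blahblah(floodle):
    print("OK")
else:
    print("Your floodle has no blahblah - did you set it in the constructor?")
```

If the check passes, the output matches the Expected field exactly; if not, the student sees the informative message instead of a raw mismatch.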

With SQL, constructing test code that outputs either a simple 'success' indicator or an informative error message is much more challenging. I've never taught SQL using CodeRunner and I'm not too great at SQL. You may be able to find a way to do it, but if it were me I'd probably further change the template to analyse the output from the test using Python. For example, rather than just

```
print(output)
```

in the template, I'd have something like

```
print(output_validator(output))
```

where output_validator is a function, which I'd define earlier in the template, that inspects the output from the SQL run and returns either 'OK' or an informative error message. To allow the validator to handle multiple test cases, you could also pass testcode and/or extra as parameters to the output_validator function. Or, as an alternative approach, you could go back to putting your test SQL code in the testcode field and define the extra field to be Python code that does the output validation.
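
A minimal sketch of such a validator, assuming the SQL run's stdout has been captured in `output` (the particular checks and messages are placeholders, not a real question's logic):

```python
# Sketch of an output_validator as described above. The checks here are
# illustrative: a real one would compare against the expected rows for
# the current test, perhaps keyed off the testcode or extra parameter.
def output_validator(output: str, testcode: str = "") -> str:
    """Return 'OK' if the SQL output looks right, else a friendly message."""
    lines = [line for line in output.strip().splitlines() if line.strip()]
    if not lines:
        return "Your schema produced no output - did you create the table?"
    if "Error" in output:
        return "The test query failed: check your column names and types."
    return "OK"

print(output_validator("id  name\n1   Alice"))   # -> OK
```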

Longer term, if you're doing complex things like this, you'll want to define your own question type, using template parameters to control options, rather than editing the template for each question.

### Re: Customizable (per test case) error messages.

by Mark Stern -

Hi Richard,

Thanks for the response. I do not think I really understand what this extra field is, but I am not sure it would help. Do you mean I should put stored functions in there, so that the student will see that I am calling the function but not see what is inside the function? The complex test code is different for every test case (OK, that is an exaggeration) so I do not think it would help. Am I missing something?

A similar problem exists with the output_validator suggestion. I do not think I could write an output_validator function that would work for all test cases.

My current thinking is to rewrite the test cases themselves to output 'OK' or an error message. I can use the IIF function in sqlite. However, regex matching may be tricky.
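
For example, something along these lines seems workable (a sketch using Python's sqlite3 module with an invented table; note that IIF needs SQLite 3.32+, and the REGEXP operator has no default implementation, so it has to be supplied by the host, e.g. via create_function):

```python
import re
import sqlite3

# Sketch: test cases rewritten to emit 'OK' or an error message themselves,
# using SQLite's IIF (available from SQLite 3.32). The table and messages
# are illustrative. REGEXP is wired up to Python's re module, since SQLite
# defines the operator but ships no implementation for it.
con = sqlite3.connect(":memory:")
con.create_function("regexp", 2,
                    lambda pat, s: re.search(pat, s or "") is not None)
con.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, email TEXT)")
con.execute("INSERT INTO student VALUES (1, 'alice@example.com')")

row = con.execute("""
    SELECT IIF(COUNT(*) = 1, 'OK',
               'Expected exactly one student row - check your INSERTs')
    FROM student
""").fetchone()
print(row[0])

row = con.execute("""
    SELECT IIF(email REGEXP '^[^@]+@[^@]+$', 'OK',
               'The email column does not look like an address')
    FROM student WHERE id = 1
""").fetchone()
print(row[0])
```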

### Re: Customizable (per test case) error messages.

by Matthew Toohey -

Hi Mark

I think you have the right idea. The extra field Richard is referring to is the "Extra template data" field, which is per test case. The idea would be to do what you describe with printing an error message or 'OK', but to put this code in the "Extra template data" of the test case. This allows you to hide potentially confusing complexity of the test case from the student.

However, at the moment the code entered in this field will be run before the student's answer is run. You would want it to be run after the student's answer is evaluated (which is when the usual test code is run). This would require a small modification in the question's template.

For more complex testing, it may even be worth making the code entered in the "Extra template data" field be evaluated as Python. This would allow for more complicated test logic (invisible to the student).
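
A rough sketch of that idea, with invented names (a real template would capture the SQL run's output itself rather than hard-coding it):

```python
# Sketch: the template captures the SQL run's output, then exec()s the
# per-test 'extra' field as Python with `output` in scope, so each test
# case carries its own validation logic. All names here are illustrative.
extra = """
if 'Alice' in output:
    print('OK')
else:
    print('Expected a row for Alice - is your INSERT correct?')
"""

output = "1|Alice\n2|Bob"          # stand-in for the captured SQL output
exec(extra, {"output": output})    # prints: OK
```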

If you are still unsure, provide an example of a question you want to achieve this for, and I may be able to show you how to modify it to hide any complexity in the test case.

Matthew