If you're asking questions where the students have to print numbers, then you're fundamentally stuck with having to compare text. There are ways to improve the comparison, but none are exactly trivial. The example above could be handled by using a regular expression grader rather than an exact-match grader, e.g. with a regular expression like 3(\.0+)? which accepts 3, 3.0, 3.00 and so on. But exposing regular expressions to students in the Expected column causes confusion unless you're prepared to put the time into explaining what they mean.
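If you want to see what a pattern like that accepts, a quick plain-Python experiment (outside CodeRunner; the grader's exact matching semantics may differ slightly) is enough:

    import re

    # Accepts "3", "3.0", "3.000", ... but rejects "3.1" or any extra text.
    pattern = r"3(\.0+)?"
    for output in ["3", "3.0", "3.000", "3.1"]:
        ok = re.fullmatch(pattern, output.strip()) is not None
        print(f"{output!r}: {'match' if ok else 'no match'}")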
With a lot more effort you can write your own custom template grader that collects the student's output in some way (e.g. via a Python subprocess), extracts the numbers and checks if they match the required numbers to some tolerance.
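A rough sketch of that idea in plain Python (the names here are purely illustrative, not part of any CodeRunner API, and a real template grader would still have to run the student's code and report the result in whatever form the grader expects):

    import re

    def numbers_in(text):
        """Extract all numeric literals from a block of printed output."""
        return [float(tok)
                for tok in re.findall(r"-?\d+(?:\.\d+)?(?:[eE][-+]?\d+)?", text)]

    def outputs_match(student_output, expected_output, tol=1e-4):
        """True if both outputs contain the same numbers, in order, to within tol."""
        got, want = numbers_in(student_output), numbers_in(expected_output)
        return len(got) == len(want) and all(abs(g - w) <= tol
                                             for g, w in zip(got, want))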
One of my colleagues goes even further. She has a Matlab question type that runs the sample answer, extracts all the numbers from it, then runs the student's code and extracts the numbers from that, and finally compares the two. In this way she can print out very informative feedback and the same question type can be used for a large range of problems.
Personally I try to avoid the whole issue by posing questions in a different way. For example, "Write a program that computes ... and prints the answer with exactly 2 digits of precision after the decimal point." [Even this can be problematic in certain cases; you need to be a bit careful in your choice of test data.]
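With that phrasing the intended answer is just a formatted print, so the expected output becomes exact, unambiguous text, e.g. in Python:

    answer = 3.14159
    print(f"{answer:.2f}")   # prints 3.14, so the Expected column is plain text
    # Be wary of test values that sit right on a rounding boundary (e.g. 2.675),
    # where floating-point representation can round the last digit either way.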
Even better: use write-a-function questions, where the student's function returns a numeric value. Then you can just write tests like:
    expected_ans = 3.14159
    if abs(student_func(input_data) - expected_ans) <= 0.0001:
        print("Correct")
    else:
        print("Error")
where the Expected output is just the word Correct. Other techniques are possible too, such as using the Extra field of the test data to hold hidden test code, perhaps while hiding some of the standard columns in the result table (see the sketch below). Have fun exploring the space of possibilities :)
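To sketch that Extra-field idea (hedged: the exact plumbing depends on your question type's template, and student_func / input_data are just placeholder names), the checking code can live in Extra while the visible test code stays a single bland call:

    # Hidden checking code placed in the test's Extra field:
    def close_enough(value, expected, tol=1e-4):
        print("Correct" if abs(value - expected) <= tol else "Error")

    # Visible test code shown in the result table:
    close_enough(student_func(input_data), 3.14159)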
Richard