Hi,

Is there any way to test turtle drawing questions in Python? For example, a question might ask students to draw a triangle with vertices at (0, 0), (50, 50) and (60, 60), so that the position of the triangle is fixed.

Is there any template that we can use?

Thanks.

Hi Patrick

Good question - I've wondered about this, too. But judging from the silence, it looks like no one has a solution for you.

It would be easy enough to build a mock turtle (echoes of Alice In Wonderland!) in the template to record what turtle commands the student code issued. **But** validating those against a reference solution is difficult, at least in the general case.
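To make the mock-turtle idea concrete, here is a minimal sketch of what such a recorder might look like. The class name, the handful of methods covered, and the recording format (a list of (command, args) tuples) are my own choices, not anything from an existing template; a real version would need to cover the full turtle API.

```python
import math

# A minimal mock turtle that records commands instead of drawing.
# Method names mirror the real turtle API; the recording format is
# just one possible choice.
class MockTurtle:
    def __init__(self):
        self.commands = []        # recorded (command, args) tuples
        self.position = (0.0, 0.0)
        self.heading = 0.0        # degrees, 0 = east, like real turtle

    def goto(self, x, y=None):
        if y is None:             # turtle.goto also accepts a single (x, y) pair
            x, y = x
        self.commands.append(("goto", (x, y)))
        self.position = (float(x), float(y))

    def forward(self, distance):
        rad = math.radians(self.heading)
        x = self.position[0] + distance * math.cos(rad)
        y = self.position[1] + distance * math.sin(rad)
        self.commands.append(("forward", distance))
        self.position = (x, y)

    def left(self, angle):
        self.heading = (self.heading + angle) % 360
        self.commands.append(("left", angle))

    def right(self, angle):
        self.heading = (self.heading - angle) % 360
        self.commands.append(("right", angle))

t = MockTurtle()
t.goto(50, 50)
t.left(90)
t.forward(10)
print(t.commands)   # [('goto', (50, 50)), ('left', 90), ('forward', 10)]
print(t.position)   # approximately (50.0, 60.0)
```

The grader can then inspect `t.commands` or the final state, which sidesteps any image handling - though, as noted below, deciding whether the recorded commands match a reference drawing is the hard part.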

Your triangle example, with integer coordinates, is relatively straightforward, but even that has complications. The three lines could be drawn in any order and in either direction. They could be drawn as individual lines with integer coordinates using turtle.goto(), or using floating-point distances with turtle.forward(). Or a filled polygon could be drawn with those three vertices - would that be accepted? Furthermore, it can be difficult to issue helpful error messages when an answer is deemed wrong: it's very frustrating for a student simply to be told their answer is wrong without a clear reason. Were they marked wrong because they used 1.414 instead of math.sqrt(2), or because they drew a mirror image of the required answer, or ... ?
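One way to cope with the order and direction problems, at least for straight-line drawings, is to reduce the drawing to a set of undirected segments with rounded endpoints before comparing. This is only a sketch of the idea, not a tested template; it assumes you have already recovered each drawn line as a pair of endpoints (e.g. from a mock turtle), and the rounding precision is an arbitrary choice.

```python
# Compare drawings as sets of undirected line segments, so that the order
# the lines were drawn in, and the direction of each line, don't matter.
# Endpoints are rounded to absorb floating-point error from turtle.forward().

def canonical_segments(segments, places=3):
    """segments: iterable of ((x1, y1), (x2, y2)) endpoint pairs."""
    result = set()
    for (x1, y1), (x2, y2) in segments:
        a = (round(x1, places), round(y1, places))
        b = (round(x2, places), round(y2, places))
        # sort the two endpoints so direction is irrelevant
        result.add((min(a, b), max(a, b)))
    return result

# Reference triangle, drawn vertex to vertex:
expected = canonical_segments([
    ((0, 0), (50, 50)), ((50, 50), (60, 60)), ((60, 60), (0, 0)),
])

# The same triangle drawn in a different order and direction,
# with a little floating-point error on one vertex:
student = canonical_segments([
    ((60, 60), (50.0000001, 50.0000001)),
    ((0, 0), (60, 60)),
    ((50, 50), (0, 0)),
])

print(student == expected)   # True
```

This still doesn't handle the filled-polygon variant or generate helpful feedback, but comparing canonical sets at least removes the ordering and direction ambiguities.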

If you make any progress on this, please tell us about it here :)

Richard

Hi Richard,

Thanks for your insight. I agree with you that it is quite difficult.

Let me try first. I will update you if I make any progress.

Patrick

With kind regards

Marta Rutkowska

To share my approach: I use pixel-by-pixel comparison. In the customisation section, I add code something like this:

import matplotlib.pyplot as plt
from matplotlib.pyplot import imread
from scipy.linalg import norm
#from skimage.measure import compare_ssim as ssim

def ans(y_vals):
    plt.plot(y_vals)
    return plt

def save_plt(plt, fname):
    plt.savefig(fname)
    plt.close()

def main(file1, file2):
    # read images as float arrays, normalised to the range 0..255
    img1 = normalize(imread(file1).astype(float))
    img2 = normalize(imread(file2).astype(float))
    #return ssim(img1, img2, multichannel=True)
    diff = img1 - img2
    # compare: the zero "norm" counts the non-zero (i.e. differing) pixels
    n_0 = norm(diff.ravel(), 0)
    return n_0 * 1.0 / img1.size

def normalize(arr):
    rng = arr.max() - arr.min()
    if rng == 0:
        rng = 1
    amin = arr.min()
    return (arr - amin) * 255 / rng

{{ STUDENT_ANSWER }}

The student has to return the plt object so that the image their function created can be saved. Then I run the test like this:

y_vals = [1, 4, 9, 16]
plt1 = make_graph(y_vals)
save_plt(plt1, "output.png")
plt2 = ans(y_vals)
save_plt(plt2, "ans.png")

val = main("output.png", "ans.png")
if val <= 0.0001:
    print("Pass!")
else:
    print("Your graph is different to the expected output, difference: {:.4f}".format(val))

Here the threshold value adjusts how much difference is tolerated. The example above draws just a simple line, so I expect very little difference and set a low threshold.

However, I do have the issues Richard mentioned. For example, I ask students to add xticks, but sometimes their locations are shifted by 1, which causes a lot of pixel differences. In such cases I actually retrieve the xtick values and compare them with the expected values. In this way, each question can add additional checks to ensure the produced answer is correct. It works in some simple cases, but it needs careful checking against test cases and anticipated student submissions to ensure students are marked correctly.
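For anyone wanting to try the xtick check: the tick values can be read back through matplotlib's Axes API rather than from pixels. A small sketch of the idea - the plotted data and expected tick values here are invented for illustration, not from any real question:

```python
import matplotlib
matplotlib.use("Agg")       # headless backend: no display needed in a grader
import matplotlib.pyplot as plt

# Stand-in for the student's code, which is expected to set these ticks:
plt.plot([1, 4, 9, 16])
plt.xticks([0, 1, 2, 3])

# In the grader, retrieve the ticks actually set and compare with the
# expected values, instead of trusting a pixel diff:
actual = list(plt.gca().get_xticks())
expected = [0, 1, 2, 3]
print(actual == expected)   # True
plt.close()
```

The same pattern works for other plot properties (yticks, axis labels, line data via `gca().get_lines()`), which gives much clearer feedback than a raw pixel-difference score.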