Check test cases with randomly generated data

by Matthew Payne -
Number of replies: 4

Hi, 

I am creating a quiz for students that requires them to write a Python 3 function findLOBF(x, y). The function takes two vectors, and the students must use them to compute a simple line of best fit for the data. The students are given a dataset to download while developing their code, but on submission I would like CodeRunner to test their code against a dataset with random noise added, so that everyone has a different answer. I am currently trying to achieve this with the following code in the Testcase 1 box to generate the x and y arrays:

import numpy as np

x = np.linspace(0, 3, 16, endpoint=True)
y = np.array([])
for entry in x:
    y = np.append(y, 0.67 * np.exp(entry) + 3)
random_noise = np.random.normal(0, 1, y.shape)
y = y + random_noise
print(np.around(findLOBF(x, y), 4))

This generates a random test case each time, as expected. However, I can't find a way to generate the expected output, because whatever I paste into that box is converted to a string. Is there a way to have the expected output change to match the dataset each time the code is run?

Many thanks,

Matt

In reply to Matthew Payne

Re: Check test cases with randomly generated data

by Robert Mařík -
Hello, you can compute the expected output yourself, compare it with the output from the answer, and return True if there is a match. Alternatively, you can fix the output by setting a fixed seed.
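
A minimal sketch of the first approach, assuming NumPy is available in the test environment and that findLOBF returns the slope and intercept of the least-squares line. The findLOBF body below is only a hypothetical stand-in for a student's answer; in the real test case only the generation-and-comparison part goes in the test box:

```python
import numpy as np

# Hypothetical stand-in for a student's submitted answer; assumes
# findLOBF returns (slope, intercept) of the least-squares line.
def findLOBF(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Generate the same randomly perturbed dataset as in the test case.
x = np.linspace(0, 3, 16, endpoint=True)
y = 0.67 * np.exp(x) + 3 + np.random.normal(0, 1, x.shape)

# Compute the expected answer inside the test itself, then compare
# with the student's result and print True on a match. The expected
# output box then just contains the string "True".
expected = np.polyfit(x, y, 1)
result = np.array(findLOBF(x, y))
print(np.allclose(result, expected))
```

With the second approach, calling np.random.seed with a fixed value before generating the noise makes the dataset deterministic, so a fixed expected output can be pasted into the box; every student then gets the same data.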

Robert
In reply to Robert Mařík

Re: Check test cases with randomly generated data

by Matthew Payne -
Thank you Robert, that worked perfectly.
In reply to Robert Mařík

Re: Check test cases with randomly generated data

by Srdjan Vukmirovic -
I am also interested in this topic.
In what field should I compute the answer and return True? Is there an example anywhere?