Files and random numbers.

by Alex Shpeht -

Dear developers,

First of all, thanks for your plugin.

We use it to teach programming at school in C++, Pascal, and Python.

There are still a lot of questions and we need help.

Perhaps these questions arise not only for us, but also for many other teachers. It would be great to add detailed descriptions to the documentation.

Two questions:

1. We need to check a program on several files (reading numbers or lines from a file and processing them). We have learned how to check a program on a single file.

2. How do we feed random numbers into a program and check that its output is correct?

In reply to Alex Shpeht

Re: Files and random numbers.

by Вася Пупкин -
Did you manage to use multiple files?
In reply to Вася Пупкин

Re: Files and random numbers.

by Richard Lobb -

The easiest ways to test a program with multiple different input files are:

  1. Have the program read from standard input and paste the different test data sets into the Input fields of the test cases in the author editing form. When CodeRunner runs questions that have non-empty input, it creates a data file for each of the input data sets and runs the program multiple times with standard input redirected to each of the data files in turn (equivalent to prog < data.txt at the command line)
  2. Get the students to read the name of the file to be processed from standard input, upload a set of data files to the Support Files, and in the various test cases set the Input for the test to be the name of one of the data files.
  3. Get the students to instead write a function that takes the name of the test file as a parameter.
If you've asked students to read from a filename like data.txt that is hard-coded into their program, you need to write your own template to set up the test data for each test case. For example, you might have support files test1.txt, test2.txt etc., and set the Extra field of each test case to the name of the file you wish to test with. The template code (which now needs to be a per-test-case template, not a combinator template) could then copy the required support file to data.txt, copy the student's answer to a file (or whatever) and then compile (if necessary) and run the program with that data file. But that's far more complicated.
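A rough sketch of what such a per-test-case template's logic might do, expressed as plain Python (the file names and the runner function below are illustrative only; in a real template the data file name would come from the test case's Extra field rather than a parameter):

```python
import shutil
import subprocess
import sys

def run_one_test(extra_filename: str, student_source: str = "prog.py") -> str:
    """Copy the test's data file to the hard-coded name, then run the program."""
    # data.txt is the name the student's program is told to open
    shutil.copy(extra_filename, "data.txt")
    result = subprocess.run([sys.executable, student_source],
                            capture_output=True, text=True)
    return result.stdout
```

Each test case would then just name a different support file, and the student's program never knows the difference.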
In reply to Вася Пупкин

Re: Files and random numbers.

by Alex Shpeht -

Added to the template:

import shutil

if {{TEST.testcode}} == 1:
    shutil.copy(r'input1.txt', r'input.txt')
elif {{TEST.testcode}} == 2:
    shutil.copy(r'input2.txt', r'input.txt')
elif {{TEST.testcode}} == 3:
    shutil.copy(r'input3.txt', r'input.txt')
elif {{TEST.testcode}} == 4:
    shutil.copy(r'input4.txt', r'input.txt')
elif {{TEST.testcode}} == 5:
    shutil.copy(r'input5.txt', r'input.txt')

and added the necessary files. Students open only one file in their program: "input.txt".

I do not know whether I did it right or not, but it works.
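For what it's worth, the if/elif chain can be collapsed into a single copy keyed on the test number. A minimal sketch (the function name is my own; in the actual template the number would still come from {{TEST.testcode}}):

```python
import shutil

def stage_input(test_number: int) -> None:
    """Copy inputN.txt into place as the one file students open."""
    shutil.copy(f'input{test_number}.txt', 'input.txt')
```

Adding a sixth test then needs only a new support file, not a new branch in the template.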

In reply to Alex Shpeht

Re: Files and random numbers.

by Alex Shpeht -

But I still haven't figured out how to set random values and check the correctness of the answer.

For example, the task:

Fill the array with random numbers from A to B. Find the sum of the array elements.

In tests, we can set different values for A and B. But how can we check the correctness of the answer?
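One common way to make such random tests checkable is to fix the generator's seed per test, so the "random" data is reproducible and the expected answer can be computed once. A minimal sketch (the function name and seed are illustrative, not part of CodeRunner):

```python
import random

def make_test_data(a: int, b: int, n: int, seed: int = 42):
    """Generate n reproducible 'random' integers in [a, b] and their sum."""
    rng = random.Random(seed)   # fixed seed -> identical data on every run
    values = [rng.randint(a, b) for _ in range(n)]
    return values, sum(values)
```

Because the seed is fixed, the resulting sum can be pasted into the Expected field once and will match on every subsequent run.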

In reply to Alex Shpeht

Re: Files and random numbers.

by Richard Lobb -

Well done on your input file switcheroo - very cunning.

I don't think I really understand what you're after with this random question. How does the student code get hold of the numbers A and B? If it's a pure 'write a program' question, they'd have to read them from a file or standard input.

I attach a question that seems to fulfil the above spec, but I'm not at all sure it's what you're after. Did you want each student to see different random numbers in their tests? That's possible, but it complicates things and doesn't make the question any more robust against cheating: if one student copies another's code, it will still work in their variant.

I built the question without filling in the 'Expected' fields of the tests, ran the sample answer with Validate On Save on, and just clicked the buttons to copy the expected answers in.

I doubt it's what you want, but it's the starting point for a discussion.

In reply to Richard Lobb

Re: Files and random numbers.

by Alex Shpeht -
With your example, we realized that such tasks lose their point in an automated checking system. It is easier to check the code "manually".

There is another question, although not on these topics.
If a program outputs a large quantity of numbers, some of them are skipped: the output shows "...snip..." and then continues the sequence.

How can we fix this?
In reply to Alex Shpeht

Re: Files and random numbers.

by Richard Lobb -
There has to be some form of limit on the output that's displayed in a cell of the result table. Too much output would flood the browser and waste space in the database (since all output is recorded there for every run). Simply truncating the output can lose important output at the end of the run, such as an exception being thrown. So I chose to snip the middle out of the output and mark it as '<...snip...>'. I also limit line lengths and the number of lines.

If you really want to do your own thing you can write a combinator template grader, which can generate whatever output it likes, although there is fundamentally a limit on the amount of output a run can generate, enforced by Jobe.

I suggest trying to find a way to pose a question in such a way that a correct solution doesn't generate screeds of output. What is the learning outcome of the question you're asking the student? Does it really need megabytes of output to verify that the learning outcome has been achieved?
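For instance, a question can ask the student to print a short digest of a long sequence rather than every element. A sketch of the idea (the digest format here is just an example):

```python
def digest(values):
    """Summarise a long sequence in one line instead of printing it all."""
    return f"n={len(values)} first={values[0]} last={values[-1]} sum={sum(values)}"

print(digest(list(range(1, 101))))
```

The digest still verifies that the sequence was generated correctly, but the output stays well under any display limit.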
In reply to Richard Lobb

Re: Files and random numbers.

by Albert Levi -
So what triggers the snipping of the output? I could not find information about this. If I know the limits, I can avoid having the snip appear in the expected output.
In reply to Albert Levi

Re: Files and random numbers.

by Mike McDowell -
You could capture the output into a list and then assess it in your test code (this works per test), limiting what is actually printed:

# Capture stdout
import sys, io
old_stdout = sys.stdout
new_stdout = io.StringIO()
sys.stdout = new_stdout

# Now run the student methods/code while output is off

# Restore stdout when you're ready to allow output again
sys.stdout = old_stdout
output = new_stdout.getvalue()

# Split captured output into lines for analysis
lines = output.splitlines()

# Output the results or feedback from your examination of lines/output for your test
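That last step might, for example, print only a trimmed view of the captured lines so the result never approaches the snip limit (the head/tail counts here are arbitrary):

```python
def trimmed(lines, head=3, tail=3):
    """Keep only the first and last few lines of a long output."""
    if len(lines) <= head + tail:
        return lines
    omitted = len(lines) - head - tail
    return lines[:head] + [f'... {omitted} lines omitted ...'] + lines[-tail:]
```

The grader still sees the whole captured output for its assertions; only the feedback shown to the student is shortened.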