Assuming this is a one-off question, the easiest approach would be to create a standard Python question and customise it with a template along the following lines:
    def A(data): ...
    def B(data): ...
    def C(data): ...

    {{ STUDENT_ANSWER }}

    A(data)
    B(data)
    C(data)
However, that isn't very robust against students submitting syntactically incorrect code or failing to define a variable called data. A slightly better approach would be to precheck what the student submitted before running it, e.g.
    student_answer = """{{ STUDENT_ANSWER | e('py') }}"""

    # ... code to check the string student_answer, issuing error
    # ... messages if not.

    def A(data): ...
    def B(data): ...
    def C(data): ...

    exec(student_answer)
    A(data)
    B(data)
    C(data)
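For instance, the checking step might look something like this. It's just a minimal sketch: the use of ast, the defines_data variable and the wording of the error messages are my own illustration, not anything built into CodeRunner.

    import ast
    import sys

    student_answer = """{{ STUDENT_ANSWER | e('py') }}"""

    # Refuse to run the submission at all if it doesn't parse.
    try:
        tree = ast.parse(student_answer)
    except SyntaxError as err:
        print("Your code has a syntax error: {}".format(err))
        sys.exit()

    # Check for a top-level assignment to a variable called 'data'.
    # (An annotated assignment like 'data: list = ...' would need
    # ast.AnnAssign as well; omitted here for brevity.)
    defines_data = any(
        isinstance(node, ast.Assign)
        and any(isinstance(target, ast.Name) and target.id == "data"
                for target in node.targets)
        for node in tree.body
    )
    if not defines_data:
        print("Your code must assign your list to a variable called data.")
        sys.exit()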
Even safer would be to require the student to supply the list as an expression, rather than defining a specific variable. Then you can do
    data = eval(student_answer)
instead.
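Putting that together, the whole template might look something like the following sketch. The isinstance check and the messages are again my own additions, and A, B and C just stand for your three functions:

    student_answer = """{{ STUDENT_ANSWER | e('py') }}"""

    def A(data): ...
    def B(data): ...
    def C(data): ...

    try:
        data = eval(student_answer)
    except Exception as err:
        print("Sorry, I couldn't evaluate your expression: {}".format(err))
    else:
        if not isinstance(data, list):
            print("Your answer should be a single list expression.")
        else:
            A(data)
            B(data)
            C(data)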
Although you don't want marking, I think it would be discouraging for students to have their answers always marked wrong, so you could switch from the usual exact-match grader to the regular-expression grader with the expression .*, which matches any output. Any submission would then be marked right.