Question Authors' Forum

Checking student answers vs solution output, for randomization

Picture of Jenny Harlow
Re: Checking student answers vs solution output, for randomization
by Jenny Harlow - Tuesday, 1 January 2019, 1:45 PM

Just adding to Richard's response to support his suggestions on students and copying/cheating.  One can spend a lot of time trying to make it harder to cheat or copy -- the effort never really ends, and only some cheating is brain-dead copying anyway.  I have done my share of that, and I eventually realised it was making my attitude very negative and leaving me far less time to make better questions and quizzes for the students who actually did want to learn.

I've found it much more effective all round to give a clear early message about the stupidity of copying, and to back that up early by analysing code submissions for copying and by setting an early test on the basics that they simply have to know.  I can then spend my time helping the willing students and challenging the good ones, rather than trying to stop the others from shooting themselves in the foot.

As for analysing submissions for copying: all the submissions are available for you to download, and word seems to get around pretty quickly that I mean business once I run some simple checks using cheat-checking code that Richard developed and pull up those caught out.  In practice I only focus on catching the really dumb identical-code copiers, but -- as I warn the students with a big grin -- I can do pretty much any analysis I like and take as long as I like about it, and having nicely timestamped electronic evidence tends to make disciplinary processes quite straightforward if we choose to take it to that level...
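Richard's checker isn't shown here, but for anyone curious what "simple checks" for identical-code copying might look like, here is a minimal sketch of my own (the function names and the normalisation rules are purely illustrative, not Richard's actual code): strip comments and whitespace so trivially disguised copies collapse to the same text, then hash and group.

```python
import hashlib
from collections import defaultdict

def normalize(code: str) -> str:
    """Crude normalisation: drop blank lines, end-of-line comments, and
    extra whitespace so trivially disguised copies hash identically."""
    lines = []
    for line in code.splitlines():
        line = line.split("#", 1)[0].strip()      # remove '#' comments
        if line:
            lines.append(" ".join(line.split()))  # collapse inner whitespace
    return "\n".join(lines)

def find_identical(submissions: dict[str, str]) -> list[list[str]]:
    """Group student IDs whose normalised submissions are identical."""
    groups = defaultdict(list)
    for student, code in submissions.items():
        digest = hashlib.sha256(normalize(code).encode()).hexdigest()
        groups[digest].append(student)
    return [ids for ids in groups.values() if len(ids) > 1]

subs = {
    "alice": "x = 1\nprint(x)",
    "bob":   "x = 1   \n# my answer\nprint(x)",   # copy of alice, re-spaced
    "carol": "y = 2\nprint(y)",
}
print(find_identical(subs))
```

This only catches byte-identical copies after normalisation, which is exactly the "really dumb" tier; anything more (renamed variables, reordered statements) needs token-level or tree-based comparison.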

So far I have not used randomisation, but I am keen to bring it in -- though I have to say that is more because I'm interested, and because it could help to build sets of "drill questions" for students practising for tests, than because of any cheating deterrent.