Hi all,
I teach programming modules and have been using CodeRunner for a few years now for automated assessment. It's become a core part of how I deliver and grade micro-assessments.
I recently put together an open-source tool that generates CodeRunner questions using Claude (Anthropic's LLM). It works as a "skill", a structured prompt file containing all the CodeRunner XML formatting rules, test case design patterns, and question authoring best practices. You give it a topic, language, and difficulty level, and it produces a complete Moodle XML file ready for import.
Each generated question includes:
- question text with clear instructions and example usage
- an answer preload with skeleton code and guiding comments
- a complete model solution
- visible and hidden test cases
- a penalty regime (3 free attempts, then 10% increments)
- general feedback
- live autocompletion enabled in the editor
It currently supports Java (java_class, java_program, java_method), Python (python3, python3_w_input), C/C++, and several other CodeRunner types.
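For anyone who hasn't looked inside a CodeRunner export before, here is a heavily trimmed, illustrative sketch of the kind of Moodle XML the tool produces. The element names follow CodeRunner's standard export format, but the values are placeholders and a real export contains many more fields:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<quiz>
  <question type="coderunner">
    <name><text>Sum of a list</text></name>
    <questiontext format="html">
      <text><![CDATA[<p>Write a function <code>sum_list(xs)</code> that
        returns the sum of a list of integers.</p>]]></text>
    </questiontext>
    <coderunnertype>python3</coderunnertype>
    <!-- Skeleton code preloaded into the student's editor -->
    <answerpreload>def sum_list(xs):
    # TODO: complete this function
    pass</answerpreload>
    <!-- Model solution -->
    <answer>def sum_list(xs):
    return sum(xs)</answer>
    <!-- 3 free attempts, then 10% increasing -->
    <penaltyregime>0, 0, 0, 10, 20, ...</penaltyregime>
    <testcases>
      <testcase useasexample="1" mark="1.0">
        <testcode><text>print(sum_list([1, 2, 3]))</text></testcode>
        <expected><text>6</text></expected>
      </testcase>
      <!-- hidden test cases follow -->
    </testcases>
  </question>
</quiz>
```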
The tool doesn't completely replace writing questions by hand, but it does speed up the process significantly, and I'd recommend reviewing every generated question before releasing it to students. Where it really shines is generating batches of questions across a topic: say, 10 array questions at varying difficulty. In my experience, Claude Opus 4.6 with extended thinking and the new 1 million token context window produces excellent results.
The repo is here: https://github.com/danielcregg/moodle-coderunner
It works with both Claude Code (CLI) and Claude Desktop. Installation is straightforward — the README has full instructions.
I'd welcome any feedback, suggestions, or feature requests. If anyone has ideas for supporting additional CodeRunner features like randomisation, custom prototypes, or precheck modes, I'd be happy to look at incorporating them.
Regards,
Daniel