CodeRunner is very tightly integrated within Moodle and there is no API for remote question submission. Grading is initiated by a button click within a question in a quiz, and there are lots of Moodle-related constraints. Obviously the student must be enrolled in the course, but they also need to be currently attempting that quiz and must not have exceeded the time limit set on their attempt. They also need to be logged into the server, and their session key must accompany the POST of the submission. So I think the short answer to your question is that while anything is possible in principle, remote submission in the way you're hoping for is not practicable.
Comments along the lines of "the code works when I run it, why doesn't it work in CodeRunner?" are to some extent inevitable. Beginner students often hack away at their code until it works on the example tests they're given and are then annoyed when it fails additional tests. They often refer to the additional tests as "hidden" (expressing their indignation) even though they're exposed (usually) in the result table.
"Passing when I run it, failing in CodeRunner" is even more inevitable in our courses, as we add extra tests that are not available in the IDE. We require code to be compliant with industry standard style checkers (currently pylint in the case of Python although we're shifting to ruff next year) and it's not practicable to ask students to install such style checkers on their home machine. We also impose lots of other checks, such as on function length, use of global variables, mandated closing of files etc.
We start students off using the Scratchpad UI, which allows them to program directly in the web page. We also don't impose any style checks initially. When we do turn on style checking we also turn on prechecking, so that the style checks are all done penalty-free, prior to running their code against tests. About a third of the way through the course we disable the scratchpad (we call it "removing the training wheels") and require the students to develop in an IDE instead. By now they are, hopefully, used to the idea of extra tests on style as well as test-based correctness.