I have installed the Jobe server on my local system and installed numpy on it, but I am getting a segmentation fault error.
I am attaching the error screenshot below.
These are my configuration settings:
Has anyone run into the same problem or anyone knows how to solve this?
Any help would be really appreciated.
Regards,
numpy is rather resource-hungry. You will need to raise the memory limit in the runguard sandbox on Jobe. If you're going to be using numpy a lot, you'll probably want to create a new python3 prototype (editing the existing one isn't recommended) with increased limits, but for now I suggest you experiment by editing just that particular question. In the author edit form, click the Customise checkbox, then open the Advanced Customisation panel. Increase the memory limit to 500 MB as shown in the image below. The image also shows an increased setting for the process limit (numprocs), but that shouldn't be necessary unless you have a huge number of cores on your Jobe server.
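For example, a trivial numpy test along these lines (just a sketch) is a handy way to check that the new limit is sufficient; under the default memory limit even the numpy import is typically enough to trigger the segmentation fault:

import numpy as np  # the import itself is the memory-hungry step

a = np.arange(12).reshape(3, 4)  # small test array
print(a.sum())  # should print 66

If that runs cleanly with the limit at 500 MB, the real question should be fine too.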
Richard
I have been having the same problem as described in this thread for the past few days. This is my second CodeRunner setup; the first one did not have this issue. Maybe the version of the operating system (Ubuntu 18, I think) has something to do with it as well.
Anyway, the solution proposed by Richard above did not work for me. After some googling, I found a hack:
https://stackoverflow.com/questions/52026652/openblas-blas-thread-init-pthread-create-resource-temporarily-unavailable
Basically, we need to add the following to the code:
import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'
before
import numpy as np
It probably has something to do with the operating environment where CodeRunner is installed. Hope this helps!
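Putting the pieces together, the top of the answer (or template) code would look something like this sketch; the key point is that the environment variable has to be set before numpy is imported:

import os
os.environ['OPENBLAS_NUM_THREADS'] = '1'  # limit OpenBLAS to one thread; must come before the numpy import

import numpy as np  # OpenBLAS now initialises with a single thread

print(np.dot(np.ones((3, 3)), np.ones(3)))  # quick check: prints [3. 3. 3.]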
Regards
Padmanabhan
IIT Mandi, India
Belated thanks, Padmanabhan. Could be useful to anyone for whom increasing numprocs doesn't work (though I don't really understand why not).
Richard
I just got the same error myself, in a case where increasing numprocs didn't work. It turned out that the task wasn't actually running out of processes or threads at all; rather, it was running out of memory while creating new worker threads. So the error message was rather a red herring. Just increasing the memory limit solved it.