Bug description
We are trying to launch a Singularity image container with SLURM. JupyterHub is installed on a virtual machine and launches the Singularity image containing JupyterLab in a job. The Slurm job is launched correctly, but it encounters an error before the process is created inside the job.
From what we can read in the logs, it seems that batchspawner expects a Python script to launch, but the command line it builds uses the singularity binary.
Something to note is that batchspawner worked with Singularity in 0.8.2 but not in version 1.1.0. We think this is because the batchspawner wrapper expects a Python script. Do you think it could work if we wrapped the call to the singularity binary in a Python script? Or is there some other way to make them work together?
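One possible shape for such a wrapper (a sketch only; the script name, image path, and inner command are assumptions, not tested configuration): since batchspawner-singleuser runs the first word of the spawn command as a Python script, pointing it at a small Python script that re-execs singularity might satisfy it.

```python
#!/usr/bin/env python
# singularity_wrapper.py -- hypothetical wrapper; names and paths are
# placeholders. batchspawner-singleuser runs the first word of the spawn
# command as a Python script, so give it Python source instead of an ELF.
import os
import sys

# Assumed location of the image; adjust to the real deployment.
IMAGE = "/path/to/jupyterlab.sif"

def build_cmd(argv):
    # Forward any extra arguments (port, URL, ...) to the single-user server.
    return ["singularity", "exec", IMAGE, "jupyterhub-singleuser"] + list(argv)

def main(argv=None):
    # Replace this process with singularity; never returns on success.
    os.execvp("singularity", build_cmd(sys.argv[1:] if argv is None else argv))
```

The wrapper would then be named as the single-user command in place of the bare singularity binary.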
Expected behaviour
The job launches and we get access to JupyterLab inside the Singularity image.
Actual behaviour
The job encounters an error. We get a Python error in the Slurm logs:
```
Traceback (most recent call last):
  File "/softs/rh7/conda-envs/pangeo_latest/bin/batchspawner-singleuser", line 6, in <module>
    main()
  File "/softs/rh7/conda-envs/pangeo_202202/lib/python3.9/site-packages/batchspawner/singleuser.py", line 23, in main
    run_path(cmd_path, run_name="__main__")
  File "/softs/rh7/conda-envs/pangeo_202202/lib/python3.9/runpy.py", line 269, in run_path
    code, fname = _get_code_from_file(run_name, path_name)
  File "/softs/rh7/conda-envs/pangeo_202202/lib/python3.9/runpy.py", line 244, in _get_code_from_file
    code = compile(f.read(), fname, 'exec')
ValueError: source code string cannot contain null bytes
srun: error: node539: task 0: Exited with exit code 1
```
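For context on the ValueError: `run_path` reads the file named by the first command word and compiles it as Python source, and an ELF binary such as singularity contains null bytes, which `compile()` rejects. A minimal reproduction (the fake ELF header here is only illustrative):

```python
# Reproduce the error batchspawner hits when the command is a binary:
# compiling data that contains a null byte as Python source fails.
fake_elf = b"\x7fELF\x00\x01".decode("latin-1")
try:
    compile(fake_elf, "singularity", "exec")
except ValueError as err:
    print(err)  # -> source code string cannot contain null bytes
```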
How to reproduce
Request a job running a singularity image using batchspawner.
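As a concrete illustration of the failing setup (the image path and option names are assumptions, not our actual configuration), a jupyterhub_config.py along these lines triggers the error with batchspawner 1.x:

```python
# jupyterhub_config.py -- illustrative fragment; paths are placeholders.
c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"
# The single-user command is the singularity binary, which batchspawner's
# 1.x wrapper (batchspawner-singleuser) then tries to run as Python source.
c.Spawner.cmd = ["singularity", "exec", "/path/to/jupyterlab.sif",
                 "jupyterhub-singleuser"]
```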