Bug description
Usually, when I execute `scancel jupyter_job_id`, spawning is immediately stopped and I see that message in the browser too.
Now, the job is canceled (`squeue` is empty), but in the browser I still see "Your server is starting up.
You will be redirected automatically when it's ready for you."
Expected behaviour
Spawning stops when `scancel` is executed.
Your personal set up
- OS: CentOS 7
- JupyterHub: 1.3.0
- batchspawner (SlurmSpawner): 1.1.0
- Python: 3.6
Is this related to batchspawner or JupyterHub? For information, everything was working fine before the upgrade (of both JupyterHub and batchspawner).
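The cancel-detection behaviour the report expects can be sketched as follows. This is a minimal illustration, not batchspawner's actual internals: the function name and the `query` hook are hypothetical, and it assumes that `squeue -h -j <id>` prints nothing once the job has been cancelled.

```python
# Hedged sketch: deciding whether a Slurm job is still alive by polling
# `squeue`. An empty result means the job is gone, so a spawner polling
# this way should abort the spawn instead of waiting for a timeout.
# The names here are illustrative, not batchspawner's real API.
import subprocess


def job_is_running(job_id, query=None):
    """Return True while the Slurm job still appears in the queue.

    `query` can be injected for testing; by default it shells out to
    `squeue -h -j <id>`, which prints one line per matching job and
    nothing once the job has left the queue.
    """
    if query is None:
        def query(jid):
            out = subprocess.run(
                ["squeue", "-h", "-j", str(jid)],
                capture_output=True, text=True,
            )
            return out.stdout
    return bool(query(job_id).strip())
```

If the spawner's poll loop checks this on every tick, an `scancel` during the "server is starting up" phase would be noticed on the next poll rather than only when `start_timeout` or `http_timeout` expires.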
I can confirm that I have the same problem using PBSSpawner. Sometimes (rarely) my notebook loading gets stuck on "Your server is starting up. You will be redirected automatically when it's ready for you." If I use qdel while it is saying "Pending in queue", the spawning stops fine; if I use it during "...server is starting up...", it gets stuck (probably until some HTTP timeout or start timeout runs out).
But also, after a successful notebook start-up, it would be nice if there were a way to signal JupyterHub that the notebook is ending, either via a walltime (for example, setting a lifespan for the notebook when it starts so that it shuts down properly before the walltime is reached) or simply by deleting the job. I do get an error window saying that the connection has been interrupted, but the notebook keeps working in a kind of zombie state: you cannot access source files or browse, yet the open notebooks remain with nothing to do in them.
So I wish there were a way to set a lifespan for a notebook server so that it can end properly before the walltime is reached and warn the user ("the notebook will shut down in 1 minute"), and also for JupyterHub to recover if the notebook server ends unexpectedly.
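For the lifespan wish, one approach commonly paired with JupyterHub (though not part of batchspawner itself) is the jupyterhub-idle-culler service, which shuts servers down after a period of inactivity. A minimal config sketch for a 1.x hub, assuming the package is installed (`pip install jupyterhub-idle-culler`); the timeout value is illustrative:

```python
# jupyterhub_config.py -- hedged sketch, assumes jupyterhub-idle-culler
# is installed. Runs the culler as a hub-managed service.
import sys

c.JupyterHub.services = [
    {
        "name": "idle-culler",
        "admin": True,  # pre-2.0 hubs grant the culler API access this way
        "command": [
            sys.executable, "-m", "jupyterhub_idle_culler",
            "--timeout=3600",  # cull servers idle for more than an hour
        ],
    }
]
```

Note that this culls on idleness, not on the batch system's walltime; aligning shutdown with the walltime and warning the user in advance would still require the spawner (or the job script) to signal the hub, which is the gap described above.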