404 Trying to connect to single-user notebook page #222
Comments
I find that usually the 404 on the spawn has something to do with a missing environment variable that describes one of the communication paths/routes. We've got some add-on scripts that attempt to verify that the variables are set and, if not, set them. The following are the ones I've made sure are accessible in our environment for things to work right (this can be a little tricky, as every once in a while we ran into environment sanitization for various reasons).
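For readers who want to reproduce this kind of check: the snippet below is only a sketch, not the add-on script referenced above, and the variable list is an assumption based on what JupyterHub's spawner normally injects (the exact set depends on your JupyterHub version and authenticator).

```python
#!/usr/bin/env python3
# check_jhub_env.py -- sketch of a sanity check to run at the top of the batch job,
# before the single-user server starts. The list below is assumed, not taken from
# the script mentioned in the comment above.
import os
import sys

EXPECTED = [
    "JUPYTERHUB_API_TOKEN",
    "JUPYTERHUB_API_URL",
    "JUPYTERHUB_CLIENT_ID",
    "JUPYTERHUB_USER",
    "JUPYTERHUB_SERVICE_PREFIX",   # the variable that turned out to be missing in this issue
    "JUPYTERHUB_OAUTH_CALLBACK_URL",
    "JUPYTERHUB_BASE_URL",
]

missing = [name for name in EXPECTED if not os.environ.get(name)]
if missing:
    print("Missing JupyterHub variables in the job environment: " + ", ".join(missing),
          file=sys.stderr)
    sys.exit(1)
```

Calling a script like this from the batch script's prologue makes it obvious when the scheduler's environment sanitization has stripped something before the notebook server ever starts.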
Hi @jbaksta, thanks for your response; this looks very promising!
Well, I had a test cluster set up with Docker, basically an extremely vanilla Slurm installation, when I did this. Then I just started comparing the outputs of the environment, already knowing that we didn't allow many environment variables to be passed to the jobs because of OS differences (SLES submission host, CentOS batch nodes). So I hacked on our batch_script until it eventually looked like my vanilla job instance. Most of the variables rely on some notion of your site setup and the JupyterHub API routes, I guess. Some of them are just proper combinations of the others; some are something you can probably regenerate from your
I just logged into the working production server and looked at the environment variables. Hardcoding JUPYTERHUB_SERVICE_PREFIX to "/user/[my email]" on line 805 of spawner.py (for the benefit of future readers) caused it to start working. Obviously this isn't a final fix, but now that the cause of the issue has been isolated, I can solve it. Thank you so much, I have been trying to figure this out for months!!
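Before resorting to hardcoding, it can help to confirm where the variable gets lost. The following is a generic sketch using standard JupyterHub spawner hooks (not something from this thread, and not batchspawner-specific): it logs the environment the hub intends to hand to the job, so a missing JUPYTERHUB_SERVICE_PREFIX can be blamed on either the hub side or the batch system's environment handling.

```python
# jupyterhub_config.py -- diagnostic sketch using the standard pre_spawn_hook.
def log_hub_env(spawner):
    # get_env() returns the environment the hub wants the single-user server to see.
    env = spawner.get_env()
    spawner.log.info(
        "Spawning for %s with JUPYTERHUB_SERVICE_PREFIX=%r",
        spawner.user.name,
        env.get("JUPYTERHUB_SERVICE_PREFIX"),
    )

c.Spawner.pre_spawn_hook = log_hub_env
```

If the hub logs a sensible prefix but the variable is absent inside the Slurm job, the scheduler or the job's shell initialization is dropping it rather than JupyterHub failing to set it.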
Under normal circumstances, I'd think it should work without modifications. But oftentimes this ends up being something missing in the
Alas, for software like this, meant for HPC setups where significant user configuration is expected, "normal circumstances" seem to be mostly theoretical... Anyway, thanks for the additional info. Since it's unclear whether this is a real bug or just user error, I'll leave it up to you and the other Batchspawner devs whether to close this issue.
I just installed JupyterHub 1.4.1, batchspawner 1.1.0, and wrapspawner and encountered the same HTTP error. I had to set:
In our previous environment, with Hub 1.1.0 and batchspawner/wrapspawner installed from git, I did not encounter this problem.
That's the exact same thing I did as my final fix, just in Bash instead of Python. So maybe it isn't just me. @basvandervlies, are you willing to provide more information about your setup?
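The exact lines set in the comments above aren't shown here, so purely as an illustration: a fallback of the same shape can be written as a tiny Python wrapper that runs inside the job before the real single-user command. The "/user/<name>/" pattern below assumes the hub's default base_url of "/" and a username that needs no URL escaping; the wrapper name and invocation are hypothetical.

```python
#!/usr/bin/env python3
# prefix_fallback.py -- illustrative sketch only, not the configuration used above.
# Re-derives JUPYTERHUB_SERVICE_PREFIX if the scheduler stripped it, then execs the
# real single-user command, e.g.:
#   python3 prefix_fallback.py batchspawner-singleuser jupyterhub-singleuser
import os
import sys

if len(sys.argv) < 2:
    sys.exit("usage: prefix_fallback.py <command> [args...]")

user = os.environ.get("JUPYTERHUB_USER")
if user and "JUPYTERHUB_SERVICE_PREFIX" not in os.environ:
    # Assumes base_url "/" and no special characters in the username.
    os.environ["JUPYTERHUB_SERVICE_PREFIX"] = "/user/%s/" % user

os.execvp(sys.argv[1], sys.argv[1:])
```

Treat this as a band-aid: if the variable is missing, the underlying cause is still that the hub-provided environment is not surviving into the batch job.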
@aerobinsonIV of course. We have an HPC cluster on which we run multiple JupyterHub setups for courses/training and production. For each JupyterHub setup we can specify which version/software must be used, and we generate the config files from templates and JSON data. For this we have written a services framework:
This framework generates the Apache configuration, the systemd unit files, the JupyterHub start script, and the JupyterHub configuration file. I am now testing JupyterHub 1.4.1 with batchspawner and wrapspawner and found that it would not start. That is fixed by the patch from @jbaksta. After that we got
(Note: I've removed my username, domain name, and identifiable IP addresses just to be safe.)
Bug description
After the SLURM process is started, the user is directed to a 404 page.
Expected behaviour
Actual behaviour
How to reproduce
This happens every time I try to use JupyterHub. It's probably system-specific, because for me it's unavoidable.
Setup
OS:
CentOS 7
Version(s):
Jupyterhub 1.4.1, Python 3.9.4, Conda 4.10.1, Batchspawner 1.1.0
Other relevant info:
I have access to a working equivalent of this setup (using an older version of JupyterHub). Most of the GET requests that 404 on the new version go to /user/[email protected]/[path] on the working version, but the new version lacks the /user/[email protected] part. The working version sets cookies in the way described here: https://jupyterhub.readthedocs.io/en/0.7.2/howitworks.html. The new version sets them differently.
A lot of 302s occur in response to the GET requests, prompting my browser to try the specified location, which 404s.
I have attached a HAR file showing the interaction between my browser (Firefox 89.0) and the site up to the point where the problem occurs.
Full environment
conda list output
Configuration
Logs
This is the log from the compute node that the single-user notebook app runs on (the errors at the end are from pressing Ctrl+C):
This is the output from running the jupyterhub command on the login node: