
hardcoded NFS IP's #2

Open
SvenDowideit opened this issue Mar 19, 2018 · 1 comment

Comments

@SvenDowideit

I thought I'd be able to just clone and make run as the docs say, but it looks like (I've only just begun to dig) the nfs container (and the jupyterhub-network) don't have the IP addresses you've put in .env, and so the hub container can't start.
Sadly, Docker Swarm's error reporting is about as unhelpful as possible; the only state info I can derive is from:

dow184@TOWER-SL:~/src/jupyterhub/jupyterhub-deploy-swarm$ docker service scale jupyterhub_jupyterhub=1
jupyterhub_jupyterhub scaled to 1
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
overall progress: 0 out of 1 tasks 
1/1: starting container failed: error while mounting volume '/var/lib/docker/vo… 
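
For what it's worth, the truncated mount error can usually be seen in full with:

docker service ps --no-trunc jupyterhub_jupyterhub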
@wakonp
Owner

wakonp commented Mar 21, 2018

Hi Sven,

first of all, thanks for showing interest in my repo. I am currently updating the Jupyterhub Dockerfile from the jupyterhub/jupyterhub-onbuild to the jupyterhub/jupyterhub base image. I guess you checked out the incomplete state of my work. If you want to try out a working state, please check out the tag v.0.2 or wait until I push v.1.0. I should be finished by Friday evening, or at least this weekend.

To make things clear for you:

the nfs container (and jupyterhub-network) don't have the IP addresses you've put in .env - and so the hub container can't start.

Of course this isn't the case. You first have to modify the config for your setup. In the .env file you must change NFSSERVER_IP to the IP address of the Docker node running the nfs-container. Also change NFS_CONFIG_HOSTS in the .envNFS file to the IPs of the Docker nodes in your swarm.
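
For example, assuming a main node at 192.168.1.10 and two workers at 192.168.1.11 and 192.168.1.12 (the addresses and the exact list format for NFS_CONFIG_HOSTS are only placeholders; check the comments in .envNFS for the expected syntax):

# .env (hypothetical values)
NFSSERVER_IP=192.168.1.10   # Docker node running the nfs-container
NFS_PORT=2049               # host port the container's NFS port is published on

# .envNFS (hypothetical values)
NFS_CONFIG_HOSTS=192.168.1.10,192.168.1.11,192.168.1.12   # all Docker nodes in the swarm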

To spell it out: the nfs-container is a "simple" container on a Docker host. The NFS port (2049) of the nfs-container is mapped to the Docker host machine on port 2049 (or whatever NFS_PORT is set to in the .env file). After the container has started, an NFS client can connect to the NFS server inside the nfs-container by using the Docker host machine's IP and that NFS port.
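
So, from any machine that can reach that Docker host, a manual test mount looks roughly like this (the IP, port and export path are placeholders; the export path depends on how the nfs-container is configured):

# manual NFS test mount from a client machine (example values only)
sudo mount -t nfs4 -o port=2049 192.168.1.10:/ /mnt/nfs-test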

When the jupyterhub service is launched from the Docker host, the Docker host creates Docker volumes (with type nfs4) that point at the nfs-container, so the Docker host needs to be able to reach this nfs-container. Furthermore, when the jupyterhub service spawns jupyterhub notebook services on other Docker nodes in your swarm, these Docker nodes also need to be able to connect to the container on the "Main Node". They connect to it by using the "Main Node's" IP address (aka NFSSERVER_IP) and the port (2049, or whatever NFS_PORT is set to in the .env file).
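
What the stack sets up is roughly equivalent to creating such a volume by hand on a worker node, something along these lines (the export path and volume name are made up for illustration):

docker volume create \
  --driver local \
  --opt type=nfs4 \
  --opt o=addr=192.168.1.10,port=2049,rw \
  --opt device=:/data \
  jupyterhub-user-data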

I hope this clears things up for you. I know that the documentation is not very helpful right now, but I will update it as soon as I complete v.1.0.
