docs/howto/features/storage-quota.md
+8 −9 (8 additions, 9 deletions)
@@ -26,18 +26,17 @@ ebs_volumes = {
 
 This will create a disk with a size of 100GB for the `staging` hub that we can reference when configuring the NFS server.
 
-
-## Enabling jupyter-home-nfs
+## Enabling jupyterhub-home-nfs
 
-To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyter-home-nfs`](https://github.com/sunu/jupyter-home-nfs). This can be enabled by setting `jupyter-home-nfs.enabled` to `true` in the hub's values file.
+To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyterhub-home-nfs`](https://github.com/sunu/jupyterhub-home-nfs). This can be enabled by setting `jupyterhub-home-nfs.enabled` to `true` in the hub's values file.
 
-jupyter-home-nfs expects a reference to an pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.
+jupyterhub-home-nfs expects a reference to a pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.
 
 `````{tab-set}
 ````{tab-item} AWS
 :sync: aws-key
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   enabled: true
   eks:
     enabled: true
@@ -48,7 +47,7 @@ jupyter-home-nfs:
 ````{tab-item} GCP
 :sync: gcp-key
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   enabled: true
   gke:
     enabled: true
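
For orientation, a fuller version of the hub values being edited in the hunks above might look like the sketch below. This is not part of the PR: the `volumeId` values are placeholders, and the exact key names should be verified against the `jupyterhub-home-nfs` chart's values schema.

```yaml
# Hypothetical, fuller sketch of the hub's values file (e.g. for the staging hub).
# The volume references are placeholders and the key names are assumptions,
# not values taken from this diff.
jupyterhub-home-nfs:
  enabled: true
  eks:
    enabled: true
    volumeId: vol-0123456789abcdef0 # pre-provisioned EBS volume from terraform's ebs_volumes
  # On GCP, reference the pre-provisioned persistent disk instead:
  # gke:
  #   enabled: true
  #   volumeId: projects/<project>/zones/<zone>/disks/<disk-name>
```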
@@ -63,7 +62,7 @@ These changes can be deployed by running the following command:
 deployer deploy <cluster_name> <hub_name>
 ```
 
-Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyter-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:
+Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyterhub-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:
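
The commands that follow the changed line are not included in this hunk. As a rough illustration only, the NFS server's address can usually be read from the Kubernetes Service the chart creates; the service name below is a placeholder, not something this PR specifies.

```bash
# List Services in the hub's namespace to spot the one created by
# jupyterhub-home-nfs, then read its cluster IP.
# <nfs-service-name> is a placeholder; take the real name from the first command's output.
kubectl get svc --namespace staging
kubectl get svc <nfs-service-name> --namespace staging \
  --output jsonpath='{.spec.clusterIP}'
```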
 Now we can set quotas for each user and configure the path to monitor for storage quota enforcement.
 
-This can be done by updating `basehub.jupyter-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:
+This can be done by updating `basehub.jupyterhub-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:
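
The values snippet this paragraph introduces is not part of the hunk shown. A minimal sketch of what `basehub.jupyterhub-home-nfs.quotaEnforcer` could contain is below; the `hardQuota` and `path` keys and the GB unit are assumptions to check against the chart's documentation.

```yaml
# Hypothetical quotaEnforcer sketch for the staging hub's values file.
# Field names and units are assumptions, not taken from this PR.
basehub:
  jupyterhub-home-nfs:
    quotaEnforcer:
      hardQuota: "10" # assumed to be gigabytes per user
      path: "/export/staging" # directory monitored for quota enforcement
```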