Commit ed38919

Update documentation to reference renamed jupyterhub-home-nfs chart

1 parent: 255c5ab

File tree

2 files changed (+10, -11 lines)

docs/howto/features/storage-quota.md

Lines changed: 8 additions & 9 deletions
``````diff
@@ -26,18 +26,17 @@ ebs_volumes = {
 
 This will create a disk with a size of 100GB for the `staging` hub that we can reference when configuring the NFS server.
 
+## Enabling jupyterhub-home-nfs
 
-## Enabling jupyter-home-nfs
+To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyterhub-home-nfs`](https://github.com/sunu/jupyterhub-home-nfs). This can be enabled by setting `jupyterhub-home-nfs.enabled` to `true` in the hub's values file.
 
-To be able to configure per-user storage quotas, we need to run an in-cluster NFS server using [`jupyter-home-nfs`](https://github.com/sunu/jupyter-home-nfs). This can be enabled by setting `jupyter-home-nfs.enabled` to `true` in the hub's values file.
-
-jupyter-home-nfs expects a reference to a pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.
+jupyterhub-home-nfs expects a reference to a pre-provisioned disk. Here's an example of how to configure that on AWS and GCP.
 
 `````{tab-set}
 ````{tab-item} AWS
 :sync: aws-key
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   enabled: true
   eks:
     enabled: true
@@ -48,7 +47,7 @@ jupyter-home-nfs:
 ````{tab-item} GCP
 :sync: gcp-key
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   enabled: true
   gke:
     enabled: true
@@ -63,7 +62,7 @@ These changes can be deployed by running the following command:
 deployer deploy <cluster_name> <hub_name>
 ```
 
-Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyter-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:
+Once these changes are deployed, we should have a new NFS server running in our cluster through the `jupyterhub-home-nfs` Helm chart. We can get the IP address of the NFS server by running the following commands:
 
 ```bash
 # Authenticate with the cluster
@@ -120,10 +119,10 @@ deployer deploy <cluster_name> <hub_name>
 
 Now we can set quotas for each user and configure the path to monitor for storage quota enforcement.
 
-This can be done by updating `basehub.jupyter-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:
+This can be done by updating `basehub.jupyterhub-home-nfs.quotaEnforcer` in the hub's values file. For example, to set a quota of 10GB for all users on the `staging` hub, we would add the following to the hub's values file:
 
 ```yaml
-jupyter-home-nfs:
+jupyterhub-home-nfs:
   quotaEnforcer:
     hardQuota: "10" # in GB
     path: "/export/staging"
``````
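The `quotaEnforcer` values in the hunk above set a hard per-user limit of 10 GB under `/export/staging`. As a rough illustration of what "enforcement" checks (a hypothetical sketch only; the chart itself enforces quotas at the filesystem level rather than by walking directories like this):

```python
import os
import tempfile

HARD_QUOTA_GB = 10  # mirrors hardQuota: "10" in the values above

def usage_bytes(home_dir: str) -> int:
    """Sum the sizes of all files under a user's home directory."""
    total = 0
    for root, _dirs, files in os.walk(home_dir):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file disappeared mid-walk; skip it
    return total

def over_quota(home_dir: str, hard_quota_gb: float = HARD_QUOTA_GB) -> bool:
    """True if the directory's usage exceeds the hard quota."""
    return usage_bytes(home_dir) > hard_quota_gb * 1024**3

# Demo against a throwaway directory holding one small file.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "notebook.ipynb"), "w") as f:
        f.write("x" * 1024)
    print(over_quota(d))  # a 1 KiB file is well under 10 GB, so False
```

Note the `hardQuota` value is a string in the YAML, so real consumers must parse it to a number before comparing.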

terraform/aws/variables.tf

Lines changed: 2 additions & 2 deletions
```diff
@@ -304,7 +304,7 @@ variable "ebs_volumes" {
   description = <<-EOT
   Deploy one or more AWS ElasticBlockStore volumes.
 
-  This provisions a managed EBS volume that can be used by jupyter-home-nfs server
-  to store home directories for users.
+  This provisions a managed EBS volume that can be used by jupyterhub-home-nfs
+  server to store home directories for users.
   EOT
 }
```
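Renames like this are easy to leave half-done across a repo. A small sketch of the substitution this commit applies (a hypothetical helper, not part of the commit; conveniently, the old name `jupyter-home-nfs` is not a substring of the new `jupyterhub-home-nfs`, so running the replacement over already-updated text is harmless):

```python
import re

OLD = "jupyter-home-nfs"
NEW = "jupyterhub-home-nfs"

def rename_chart_refs(text: str) -> str:
    """Replace references to the old chart name with the new one.

    The lookbehind guard only matches the old name when it is not already
    part of a longer hyphenated identifier.
    """
    return re.sub(rf"(?<![\w-]){re.escape(OLD)}", NEW, text)

line = "## Enabling jupyter-home-nfs"
print(rename_chart_refs(line))  # -> "## Enabling jupyterhub-home-nfs"
```

Running this over both files touched by the commit would produce exactly the additions shown in the hunks above.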

0 commit comments