[DATALAD RUNCMD] run codespell throughout but ignore fail
=== Do not change lines below ===
{
 "chain": [],
 "cmd": "codespell -w || :",
 "exit": 0,
 "extra_inputs": [],
 "inputs": [],
 "outputs": [],
 "pwd": "."
}
^^^ Do not change lines above ^^^
yarikoptic committed Jan 25, 2024
1 parent e766f84 commit 93fe345
Showing 53 changed files with 72 additions and 72 deletions.
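
For context, a commit like this is normally produced with `datalad run`, which executes the given command and records it (as the JSON block in the commit message above) so the change can be re-executed later. A minimal sketch of the invocation, assuming DataLad and `codespell` are installed:

```sh
# Run codespell in-place across the repository; `|| :` ignores codespell's
# non-zero exit status so DataLad still records the run and commits the fixes.
# DataLad prefixes the commit subject with [DATALAD RUNCMD] and appends the JSON record.
datalad run -m "run codespell throughout but ignore fail" "codespell -w || :"
```
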
2 changes: 1 addition & 1 deletion README.md
@@ -1,6 +1,6 @@
# Infrastructure for deployments

-This repository contains deployment infrastucture and documentation for a federation of JupyterHubs that 2i2c manages for various communities.
+This repository contains deployment infrastructure and documentation for a federation of JupyterHubs that 2i2c manages for various communities.

See [the infrastructure documentation](https://infrastructure.2i2c.org) for more information.

2 changes: 1 addition & 1 deletion config/clusters/2i2c-aws-us/go-bgc.values.yaml
@@ -59,7 +59,7 @@ jupyterhub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee
# https://cloud.google.com/kubernetes-engine/docs/concepts/plan-node-sizes.
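
As a rough illustration of the memory calculation this comment describes (hypothetical numbers, not taken from any of these hubs): a machine type advertised as 16 GB corresponds to about 14.9 GiB, and subtracting roughly 1 GiB for miscellaneous system pods leaves about 13.9 GiB to use as `mem_guarantee`. In practice the starting point is the allocatable memory reported by Kubernetes, which is somewhat lower than the advertised size.

```sh
# Hypothetical 16 GB node: convert decimal GB to GiB, then subtract ~1 GiB
# of overhead for misc system pods to get a rough mem_guarantee value.
python3 -c 'print(round(16e9 / 2**30 - 1, 1))'   # -> 13.9 (GiB)
```
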
2 changes: 1 addition & 1 deletion config/clusters/2i2c-aws-us/itcoocean.values.yaml
@@ -86,7 +86,7 @@ jupyterhub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee
# https://cloud.google.com/kubernetes-engine/docs/concepts/plan-node-sizes.
6 changes: 3 additions & 3 deletions config/clusters/2i2c-aws-us/ncar-cisl.values.yaml
@@ -41,8 +41,8 @@ basehub:
- read:org
Authenticator:
admin_users:
-- kcote-ncar # Ken Cote, Initial adminstrator
-- NicholasCote # Nicholas Cote, Initial adminstrator
+- kcote-ncar # Ken Cote, Initial administrator
+- NicholasCote # Nicholas Cote, Initial administrator
- nwehrheim # Nick Wehrheim, Community representative
singleuser:
image:
@@ -60,7 +60,7 @@ basehub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion config/clusters/catalystproject-africa/must.values.yaml
@@ -57,7 +57,7 @@ jupyterhub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion config/clusters/catalystproject-africa/nm-aist.values.yaml
@@ -56,7 +56,7 @@ jupyterhub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion config/clusters/catalystproject-africa/staging.values.yaml
@@ -53,7 +53,7 @@ jupyterhub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion config/clusters/catalystproject-latam/common.values.yaml
@@ -27,7 +27,7 @@ jupyterhub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee
# https://cloud.google.com/kubernetes-engine/docs/concepts/plan-node-sizes.
2 changes: 1 addition & 1 deletion config/clusters/leap/common.values.yaml
@@ -90,7 +90,7 @@ basehub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion config/clusters/linked-earth/common.values.yaml
@@ -55,7 +55,7 @@ basehub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion config/clusters/nasa-cryo/common.values.yaml
@@ -107,7 +107,7 @@ basehub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion config/clusters/qcl/common.values.yaml
@@ -61,7 +61,7 @@ jupyterhub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee
# https://cloud.google.com/kubernetes-engine/docs/concepts/plan-node-sizes.
2 changes: 1 addition & 1 deletion config/clusters/smithsonian/common.values.yaml
@@ -67,7 +67,7 @@ basehub:
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests are different from the description, based on:
-#     whats found to remain allocate in k8s, subtracting 1GiB
+#     what's found to remain allocate in k8s, subtracting 1GiB
# overhead for misc system pods, and transitioning from GB in
# description to GiB in mem_guarantee.
# - CPU requests are lower than the description, with a factor of
2 changes: 1 addition & 1 deletion deployer/README.md
@@ -383,7 +383,7 @@ Once you run this command, run `export DOCKER_HOST=tcp://localhost:23760` in ano
docker daemon.

#### `exec shell`
-This exec sub-command can be used to aquire a shell in various places of the infrastructure.
+This exec sub-command can be used to acquire a shell in various places of the infrastructure.

##### `exec shell hub`

2 changes: 1 addition & 1 deletion deployer/commands/generate/billing/importers.py
@@ -251,7 +251,7 @@ def get_shared_cluster_hub_costs(cluster, start_month, end_month):
# Rename project to use hub names
totals["project"] = totals["hub"]
totals.drop("hub", axis=1)
-# Calcluate cost from utilization
+# Calculate cost from utilization
# Needs to account for uptime checks and 2i2c paid for stuff
totals["cost"] = totals["utilization"].multiply(
totals["total_with_credits"].astype(float), axis=0
2 changes: 1 addition & 1 deletion deployer/commands/generate/dedicated_cluster/common.py
@@ -92,7 +92,7 @@ def generate_support_files(cluster_config_directory, vars):
- `config/<cluster_name>/support.values.yaml`
- `config/<cluster_name>/enc-support.secret.values.yaml`
"""
-# Generate the suppport values file `support.values.yaml`
+# Generate the support values file `support.values.yaml`
print_colour("Generating the support values file...", "yellow")
with open(
REPO_ROOT_PATH / "config/clusters/templates/common/support.values.yaml"
4 changes: 2 additions & 2 deletions deployer/commands/grafana/tokens.py
@@ -122,7 +122,7 @@ def get_deployer_token(sa_endpoint, sa_id, headers):
)
if not response.ok:
print(
f"An error occured when retrieving the tokens the service account with id {sa_id}.\n"
f"An error occurred when retrieving the tokens the service account with id {sa_id}.\n"
f"Error was {response.text}."
)
response.raise_for_status()
@@ -144,7 +144,7 @@ def create_deployer_token(sa_endpoint, sa_id, headers):

if not response.ok:
print(
"An error occured when creating the token for the deployer service account.\n"
"An error occurred when creating the token for the deployer service account.\n"
f"Error was {response.text}."
)
response.raise_for_status()
2 changes: 1 addition & 1 deletion deployer/commands/validate/cluster.schema.yaml
@@ -235,7 +235,7 @@ properties:
type: string
description: |
Status code expected from hitting the health checkpoint for
-this hub. Defaults to 200, can be overriden in case we have
+this hub. Defaults to 200, can be overridden in case we have
basic auth setup for the entire hub
domain:
type: string
4 changes: 2 additions & 2 deletions docs/contributing/code-review.md
@@ -44,7 +44,7 @@ or can wait for review. That said, sometimes the only way to
understand the impact of a change is to merge and see how things go,
so use your best judgment!

-Here is a list of things you can clearly, unambigously self merge without
+Here is a list of things you can clearly, unambiguously self merge without
any approval.

1. Updating admin users for a hub
@@ -119,7 +119,7 @@ To deploy changes to the authentication workflow, follow these steps:
- cluster: `utoronto`, hub: `staging` (Azure AD)
- cluster: `2i2c`, hub: `staging` (CILogon)
1. **Login into the staging hubs**. Try logging in into the hubs where you deployed your changes.
-1. **Start a server**. Afer you've logged into the hub, make sure everything works as expected by spinning up a server.
+1. **Start a server**. After you've logged into the hub, make sure everything works as expected by spinning up a server.
1. **Post the status of the manual steps above**. In your PR's top comment, post the hubs where you've deployed the changes and whether or not they are functioning properly.
1. **Wait for review and approval**. Leave the PR open for other team members to review and approve.

@@ -2,7 +2,7 @@
This is used in two places:
-- docs/_static/hub-table.json is published with the docs and meant for re-use in other parts of 2i2c
+- docs/_static/hub-table.json is published with the docs and meant for reuse in other parts of 2i2c
- docs/tmp/hub-table.csv is read by reference/hubs.md to create a list of hubs
"""
import pandas as pd
@@ -81,7 +81,7 @@ def build_hub_list_entry(
def build_hub_statistics_df(df):
# Write some quick statistics for display
# Calculate total number of community hubs by removing staging and demo hubs
-# Remove `staging` hubs to count the total number of communites we serve
+# Remove `staging` hubs to count the total number of communities we serve
filter_out = ["staging", "demo"]
community_hubs = df.loc[
df["name"].map(lambda a: all(ii not in a.lower() for ii in filter_out))
@@ -167,7 +167,7 @@ def main():
write_to_json_and_csv_files(df, "hub-table")
write_to_json_and_csv_files(community_hubs_by_cluster, "hub-stats")

print("Finished updating list of hubs and statics tables...")
print("Finished updating list of hubs and statistics tables...")


if __name__ == "__main__":
4 changes: 2 additions & 2 deletions docs/helper-programs/generate-hub-features-table.py
@@ -2,7 +2,7 @@
This is used in two places:
-- docs/_static/hub-options-table.json is published with the docs and meant for re-use in other parts of 2i2c
+- docs/_static/hub-options-table.json is published with the docs and meant for reuse in other parts of 2i2c
- docs/tmp/hub-options-table.csv is read by reference/options.md to create a list of hubs
"""
import hcl2
@@ -171,7 +171,7 @@ def build_options_list_entry(hub, hub_count, values_files_features, terraform_fe
"user buckets (scratch/persistent)": terraform_features.get(
hub["name"], {}
).get("user_buckets", False),
"requestor pays for buckets storage": terraform_features.get(
"requester pays for buckets storage": terraform_features.get(
hub["name"], {}
).get("requestor_pays", False),
"authenticator": values_files_features["authenticator"],
2 changes: 1 addition & 1 deletion docs/howto/features/anonymized-usernames.md
@@ -40,7 +40,7 @@ useful privacy guarantees to be worth it. Those are:
2. We live in a world where user data leaks are a fact of life, and you can buy
tons of user identifiers for pretty cheap. This may also happen to *us*, and
we may unintentionally leak data too! So users should still be hard to
-de-anonymize when the attacker has in their posession the following:
+de-anonymize when the attacker has in their possession the following:

1. List of user identifiers (emails, usernames, numeric user ids,
etc) from *other data breaches*.
2 changes: 1 addition & 1 deletion docs/howto/features/cloud-access.md
@@ -136,6 +136,6 @@ This AWS IAM Role is managed via terraform.
If the hub is a `daskhub`, nest the config under a `basehub` key
```
-7. Get this change deployed, and users should now be able to use the requestor pays feature!
+7. Get this change deployed, and users should now be able to use the requester pays feature!
Currently running users might have to restart their pods for the change to take effect.
2 changes: 1 addition & 1 deletion docs/howto/manage-domains/redirects.md
@@ -16,5 +16,5 @@ redirects:
```
You can add any number of such redirects. They will all be `302 Temporary`
-redirects, in case we want to re-use the old domain for something else in
+redirects, in case we want to reuse the old domain for something else in
the future.
8 changes: 4 additions & 4 deletions docs/howto/troubleshoot/cilogon-user-accounts.md
@@ -1,15 +1,15 @@
# CILogon: switch Identity Providers or user accounts

By default, logging in with a particular user account will persist your credentials in future sessions.
-This means that you'll automatically re-use the same institutional and user account when you access the hub's home page.
+This means that you'll automatically reuse the same institutional and user account when you access the hub's home page.

## Switch Identity Providers

1. **Logout of the Hub** using the logout button or by going to `https://{hub-name}/hub/logout`.
-2. **Clear browser cookies** (optional). If the user asked CILogon to re-use the same Identity Provider connection when they logged in, they'll need to [clear browser cookies](https://www.lifewire.com/how-to-delete-cookies-2617981) for <https://cilogon.org>.
+2. **Clear browser cookies** (optional). If the user asked CILogon to reuse the same Identity Provider connection when they logged in, they'll need to [clear browser cookies](https://www.lifewire.com/how-to-delete-cookies-2617981) for <https://cilogon.org>.

```{figure} ../../images/cilogon-remember-this-selection.png
-The dialog box that allows you to re-use the same Identity Provider.
+The dialog box that allows you to reuse the same Identity Provider.
```

Firefox example:
Expand Down Expand Up @@ -40,6 +40,6 @@ If you see a 403 error page, this means that the account you were using to login
```{figure} ../../images/403-forbidden.png
```

-If you think this is an error, and the account should have been allowed, then contact the hub adminstrator/s.
+If you think this is an error, and the account should have been allowed, then contact the hub administrator/s.

If you used the wrong user account, you can log in using another account by following the steps in [](troubleshoot:cilogon:switch-user-accounts).
2 changes: 1 addition & 1 deletion docs/howto/troubleshoot/logs/kubectl-logs.md
@@ -139,7 +139,7 @@ The following commands require passing the namespace where a specific pod is run
```

### Kubernetes pod logs
-You can access any pod's logs by using the `kubectl logs` commands. Bellow are some of the most common debugging commands.
+You can access any pod's logs by using the `kubectl logs` commands. Below are some of the most common debugging commands.
```{tip}
1. The `--follow` flag
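
For reference, a typical shape of the command that section documents, with placeholder names for the namespace and pod:

```sh
# Stream (follow) the logs of a user's server pod in a hub's namespace.
# Both "staging" and "jupyter-example-user" are placeholders here.
kubectl logs --follow --namespace staging jupyter-example-user
```
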
@@ -49,7 +49,7 @@ Finally, we should check what quotas are enforced on the project and increase th
```{warning}
This must be only done if it is a **new** billing account handled by 2i2c for a specific project,
-rather than just for a new project under the same billing account. This is a somewhat rare occurance!
+rather than just for a new project under the same billing account. This is a somewhat rare occurrence!
If there is already billing export set up for this **billing account** as you try
to complete these steps, do not change it and raise an issue for engineering to
@@ -2,7 +2,7 @@
# Configure and deploy the `support` chart

The `support` chart is a helm chart maintained by the 2i2c Engineers that consists of common tools used to support JupyterHub deployments in the cloud.
-These tools are [`ingress-nginx`](https://kubernetes.github.io/ingress-nginx/), for controlling ingresses and load balancing; [`cert-manager`](https://cert-manager.io/docs/), for automatically provisioning TLS certificates from [Let's Encrypt](https://letsencrypt.org/); [Prometheus](https://prometheus.io/), for scraping and storing metrics from the cluster and hub; and [Grafana](https://grafana.com/), for visualising the metrics retreived by Prometheus.
+These tools are [`ingress-nginx`](https://kubernetes.github.io/ingress-nginx/), for controlling ingresses and load balancing; [`cert-manager`](https://cert-manager.io/docs/), for automatically provisioning TLS certificates from [Let's Encrypt](https://letsencrypt.org/); [Prometheus](https://prometheus.io/), for scraping and storing metrics from the cluster and hub; and [Grafana](https://grafana.com/), for visualising the metrics retrieved by Prometheus.

This section will walk you through how to deploy the support chart on a cluster.

6 changes: 3 additions & 3 deletions docs/hub-deployment-guide/hubs/other-hub-ops/delete-hub.md
@@ -5,7 +5,7 @@ If you'd like to delete a hub, there are a few steps that we need to take:

## 1. Manage existing data

-The existing data should either be migrated to another place or should be deleted, depending on what has been aggreed to with the Community Representative.
+The existing data should either be migrated to another place or should be deleted, depending on what has been agreed to with the Community Representative.

If the data should be migrated from the hub before decommissioning, then make sure that a 2i2c Engineer has access to the destination in order to complete the data migration.

@@ -76,12 +76,12 @@ This will clean up some of the hub values related to auth and must be done prior
If the hub remains listed in its cluster's `cluster.yaml` file, the hub could be
redeployed by any merged PR triggering our CI/CD pipeline.

-Open a decomissioning PR that removes the appropriate hub entry from the
+Open a decommissioning PR that removes the appropriate hub entry from the
`config/clusters/$CLUSTER_NAME/cluster.yaml` file and associated
`*.values.yaml` files no longer referenced in the `cluster.yaml` file.

You can continue with the steps below before the PR is merged, but be ready to
-re-do them if the CI/CD pipeline was triggered before the decomissioning PR was
+re-do them if the CI/CD pipeline was triggered before the decommissioning PR was
merged.

## 4. Delete the Helm release and namespace
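
The body of that step is not shown in this view; as a rough sketch, deleting a hub's Helm release and namespace generally looks something like the following (the hub name is a placeholder, and the repository's documented procedure should take precedence):

```sh
# Remove the hub's Helm release and then its (now empty) namespace.
# "demo-hub" is a placeholder for the actual hub/release/namespace name.
helm --namespace demo-hub delete demo-hub
kubectl delete namespace demo-hub
```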