Nebari CLI¶
+Nebari CLI¶
nebari¶
+nebari¶
Nebari CLI 🪴
nebari [OPTIONS] COMMAND [ARGS]...
nebari
Options
-
---import-plugin <plugins>¶
+--import-plugin <plugins>¶
Import nebari plugin
- Default:
@@ -58,7 +59,7 @@ nebari
-
---exclude-stage <excluded_stages>¶
+--exclude-stage <excluded_stages>¶
Exclude nebari stage(s) by name or regex
- Default:
@@ -68,7 +69,7 @@ nebari
-
---exclude-default-stages¶
+--exclude-default-stages¶
Exclude default nebari included stages
-deploy¶
+deploy¶
Deploy the Nebari cluster from your [purple]nebari-config.yaml[/purple] file.
nebari deploy [OPTIONS]
@@ -86,35 +87,30 @@ deploy
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration yaml file path
-
--o, --output <output_directory>¶
+-o, --output <output_directory>¶
output directory
- Default:
-./
+'./'
-
---dns-provider <dns_provider>¶
+--dns-provider <dns_provider>¶
dns provider to use for registering domain name mapping ⚠️ moved to dns.provider in nebari-config.yaml
-
-- Default:
-False
-
-
-
---dns-auto-provision¶
+--dns-auto-provision¶
Attempt to automatically provision DNS, currently only available for cloudflare ⚠️ moved to dns.auto_provision in nebari-config.yaml
- Default:
@@ -125,7 +121,7 @@ deploy
-
---disable-prompt¶
+--disable-prompt¶
Disable human intervention
- Default:
@@ -136,7 +132,7 @@ deploy
-
---disable-render¶
+--disable-render¶
Disable auto-rendering in deploy stage
- Default:
@@ -147,7 +143,7 @@ deploy
-
---disable-checks¶
+--disable-checks¶
Disable the checks performed after each stage
- Default:
@@ -158,7 +154,7 @@ deploy
-
---skip-remote-state-provision¶
+--skip-remote-state-provision¶
Skip terraform state deployment which is often required in CI once the terraform remote state bootstrapping phase is complete
- Default:
@@ -169,7 +165,7 @@ deploy
-destroy¶
+destroy¶
Destroy the Nebari cluster from your [purple]nebari-config.yaml[/purple] file.
nebari destroy [OPTIONS]
@@ -177,24 +173,24 @@ destroy
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
--o, --output <output_directory>¶
+-o, --output <output_directory>¶
output directory
- Default:
-./
+'./'
-
---disable-render¶
+--disable-render¶
Disable auto-rendering before destroy
- Default:
@@ -205,7 +201,7 @@ destroy
-
---disable-prompt¶
+--disable-prompt¶
Destroy entire Nebari cluster without confirmation request. Suggested for CI use.
- Default:
@@ -216,13 +212,13 @@ destroy
-dev¶
+dev¶
Development tools and advanced features.
nebari dev [OPTIONS] COMMAND [ARGS]...
-keycloak-api¶
+keycloak-api¶
Interact with the Keycloak REST API directly.
This is an advanced tool which can have potentially destructive consequences.
Please use this at your own risk.
@@ -232,26 +228,27 @@ keycloak-api
Options
-info¶
+info¶
+Display information about installed Nebari plugins and their configurations.
nebari info [OPTIONS]
-init¶
+init¶
Create and initialize your [purple]nebari-config.yaml[/purple] file.
This command will create and initialize your [purple]nebari-config.yaml[/purple] :sparkles:
This file contains all your Nebari cluster configuration details and,
@@ -261,13 +258,13 @@ init
-nebari init [OPTIONS] [CLOUD_PROVIDER]:[local|existing|do|aws|gcp|azure]
+nebari init [OPTIONS] [CLOUD_PROVIDER]:[local|existing|aws|gcp|azure]
Options
-
---guided-init, --no-guided-init¶
+--guided-init, --no-guided-init¶
[bold green]START HERE[/bold green] - this will guide you step-by-step to generate your [purple]nebari-config.yaml[/purple]. It is an [i]alternative[/i] to passing the options listed below.
- Default:
@@ -278,42 +275,48 @@ init
-
--p, --project-name, --project <project_name>¶
+-p, --project-name, --project <project_name>¶
Required
+
+-
+--region <region>¶
+The region you want to deploy your Nebari cluster to (if deploying to the cloud)
+
+
-
---auth-provider <auth_provider>¶
-options: [‘password’, ‘GitHub’, ‘Auth0’, ‘custom’]
+--auth-provider <auth_provider>¶
+options: [‘password’, ‘GitHub’, ‘Auth0’]
- Default:
-AuthenticationEnum.password
+<AuthenticationEnum.password: 'password'>
- Options:
-password | GitHub | Auth0 | custom
+password | GitHub | Auth0
-
---auth-auto-provision, --no-auth-auto-provision¶
+--auth-auto-provision, --no-auth-auto-provision¶
- Default:
False
@@ -323,19 +326,15 @@ init
-
---repository <repository>¶
-options: [‘github.com’, ‘gitlab.com’]
-
-- Options:
-github.com | gitlab.com
-
-
+--repository <repository>¶
+GitHub repository URL to be initialized with --repository-auto-provision
-
---repository-auto-provision, --no-repository-auto-provision¶
-
+--repository-auto-provision, --no-repository-auto-provision¶
+Initialize the GitHub repository provided by --repository (GitHub credentials required)
+
- Default:
False
@@ -344,11 +343,11 @@ init
-
---ci-provider <ci_provider>¶
+--ci-provider <ci_provider>¶
options: [‘github-actions’, ‘gitlab-ci’, ‘none’]
- Default:
-CiEnum.none
+<CiEnum.none: 'none'>
- Options:
github-actions | gitlab-ci | none
@@ -358,11 +357,11 @@ init
-
---terraform-state <terraform_state>¶
+--terraform-state <terraform_state>¶
options: [‘remote’, ‘local’, ‘existing’]
- Default:
-TerraformStateEnum.remote
+<TerraformStateEnum.remote: 'remote'>
- Options:
remote | local | existing
@@ -372,22 +371,23 @@ init
-
---kubernetes-version <kubernetes_version>¶
-
+--kubernetes-version <kubernetes_version>¶
+The Kubernetes version to deploy your Nebari cluster with; leave blank for the latest version
+
- Default:
-latest
+'latest'
-
---disable-prompt, --no-disable-prompt¶
+--disable-prompt, --no-disable-prompt¶
- Default:
False
@@ -395,13 +395,30 @@ init
+-
+-s, --config-set <config_set>¶
+Apply a pre-defined set of nebari configuration options.
+
+
-
--o, --output <output>¶
+-o, --output <output>¶
Output file path for the rendered config file.
- Default:
-nebari-config.yaml
+PosixPath('nebari-config.yaml')
+
+
+
+
+
+-
+-e, --explicit¶
+Write explicit nebari config file (advanced users only).
+
+- Default:
+0
@@ -409,19 +426,20 @@ init
Arguments
-
-CLOUD_PROVIDER¶
+CLOUD_PROVIDER¶
Optional argument
+options: [‘local’, ‘existing’, ‘aws’, ‘gcp’, ‘azure’]
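The file produced by `nebari init` is what every other command then consumes via `-c/--config`. As a rough, hypothetical sketch of its shape (real output contains many more provider-specific keys, and exact values depend on the options chosen above):

```yaml
# Illustrative fragment only -- generate the real file with `nebari init`
provider: aws            # one of: local, existing, aws, gcp, azure
project_name: my-nebari  # hypothetical name, set via -p/--project-name
```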
-keycloak¶
+keycloak¶
Interact with the Nebari Keycloak identity and access management tool.
nebari keycloak [OPTIONS] COMMAND [ARGS]...
-adduser¶
+adduser¶
Add a user to Keycloak. User will be automatically added to the [italic]analyst[/italic] group.
nebari keycloak adduser [OPTIONS]
@@ -429,19 +447,19 @@ adduser
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-export-users¶
+export-users¶
Export the users in Keycloak.
nebari keycloak export-users [OPTIONS]
@@ -449,24 +467,24 @@ export-users
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
---realm <realm>¶
+--realm <realm>¶
realm from which users are to be exported
- Default:
-nebari
+'nebari'
-listusers¶
+listusers¶
List the users in Keycloak.
nebari keycloak listusers [OPTIONS]
@@ -474,14 +492,28 @@ listusers
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
+
+
+
+plugin¶
+Interact with nebari plugins
+nebari plugin [OPTIONS] COMMAND [ARGS]...
+
+
+
+list¶
+List installed plugins
+nebari plugin list [OPTIONS]
+
+
-render¶
+render¶
Dynamically render the Terraform scripts and other files from your [purple]nebari-config.yaml[/purple] file.
nebari render [OPTIONS]
@@ -489,24 +521,24 @@ render
Options
-
--o, --output <output_directory>¶
+-o, --output <output_directory>¶
output directory
- Default:
-./
+'./'
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration yaml file path
-
---dry-run¶
+--dry-run¶
simulate rendering files without actually writing or updating any files
- Default:
@@ -517,7 +549,7 @@ render
-support¶
+support¶
Support tool to write all Kubernetes logs locally and compress them into a zip file.
The Nebari team recommends k9s to manage and inspect the state of the cluster.
However, this command is occasionally helpful for debugging purposes should the logs need to be shared.
@@ -527,24 +559,24 @@ support
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
--o, --output <output>¶
+-o, --output <output>¶
output filename
- Default:
-./nebari-support-logs.zip
+'./nebari-support-logs.zip'
-upgrade¶
+upgrade¶
Upgrade your [purple]nebari-config.yaml[/purple].
Upgrade your [purple]nebari-config.yaml[/purple] after a nebari upgrade. If necessary, it prompts users to perform the manual upgrade steps required for the deploy process.
See the project [green]RELEASE.md[/green] for details.
@@ -554,13 +586,13 @@ upgrade
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
---attempt-fixes¶
+--attempt-fixes¶
Attempt to fix the config for any incompatibilities between your old and new Nebari versions.
- Default:
@@ -571,7 +603,7 @@ upgrade
-validate¶
+validate¶
Validate the values in the [purple]nebari-config.yaml[/purple] file are acceptable.
nebari validate [OPTIONS]
@@ -579,13 +611,13 @@ validate
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration yaml file path; please pass it in with the -c/--config flag
-
---enable-commenting¶
+--enable-commenting¶
Toggle PR commenting on GitHub Actions
- Default:
@@ -603,7 +635,7 @@ validate
+
Nebari CLI documentation
@@ -614,7 +646,16 @@ Nebari CLI documentation
@@ -673,11 +708,11 @@ Quick search
- ©2023, Nebari.
+ ©2023, Nebari.
|
- Powered by Sphinx 6.1.3
- & Alabaster 0.7.13
+ Powered by Sphinx 8.1.3
+ & Alabaster 1.0.0
|
=3.9.0,<4.0.0",
"questionary==2.0.0",
@@ -79,7 +81,7 @@ dependencies = [
"ruamel.yaml==0.18.6",
"typer==0.9.0",
"packaging==23.2",
- "typing-extensions==4.11.0",
+ "typing-extensions>=4.11.0",
]
[project.optional-dependencies]
@@ -87,7 +89,6 @@ dev = [
"black==22.3.0",
"coverage[toml]",
"dask-gateway",
- "diagrams",
"escapism",
"importlib-metadata<5.0",
"mypy==1.6.1",
diff --git a/src/_nebari/cli.py b/src/_nebari/cli.py
index de91cc1853..6bf030ae26 100644
--- a/src/_nebari/cli.py
+++ b/src/_nebari/cli.py
@@ -10,7 +10,7 @@
class OrderCommands(TyperGroup):
def list_commands(self, ctx: typer.Context):
"""Return list of commands in the order appear."""
- return list(self.commands)
+ return list(self.commands)[::-1]
def version_callback(value: bool):
@@ -65,6 +65,7 @@ def common(
[],
"--import-plugin",
help="Import nebari plugin",
+ callback=import_plugin,
),
excluded_stages: typing.List[str] = typer.Option(
[],
diff --git a/src/_nebari/config_set.py b/src/_nebari/config_set.py
new file mode 100644
index 0000000000..95413ea1a7
--- /dev/null
+++ b/src/_nebari/config_set.py
@@ -0,0 +1,54 @@
+import logging
+import pathlib
+from typing import Optional
+
+from packaging.specifiers import SpecifierSet
+from pydantic import BaseModel, ConfigDict, field_validator
+
+from _nebari._version import __version__
+from _nebari.utils import yaml
+
+logger = logging.getLogger(__name__)
+
+
+class ConfigSetMetadata(BaseModel):
+ model_config: ConfigDict = ConfigDict(extra="allow", arbitrary_types_allowed=True)
+ name: str # for use with guided init
+ description: Optional[str] = None
+ nebari_version: str | SpecifierSet
+
+ @field_validator("nebari_version")
+ @classmethod
+ def validate_version_requirement(cls, version_req):
+ if isinstance(version_req, str):
+ version_req = SpecifierSet(version_req, prereleases=True)
+
+ return version_req
+
+ def check_version(self, version):
+ if not self.nebari_version.contains(version, prereleases=True):
+ raise ValueError(
+ f'Nebari version "{version}" is not compatible with '
+ f'version requirement {self.nebari_version} for "{self.name}" config set.'
+ )
+
+
+class ConfigSet(BaseModel):
+ metadata: ConfigSetMetadata
+ config: dict
+
+
+def read_config_set(config_set_filepath: str):
+ """Read a config set from a config file."""
+
+ filename = pathlib.Path(config_set_filepath)
+
+ with filename.open() as f:
+ config_set_yaml = yaml.load(f)
+
+ config_set = ConfigSet(**config_set_yaml)
+
+ # validation
+ config_set.metadata.check_version(__version__)
+
+ return config_set
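The `check_version` guard above leans on `packaging`'s `SpecifierSet`. A stripped-down, stdlib-only sketch of the same idea (a hypothetical simplification — the real code supports full specifier syntax and prereleases, and the version strings here are just examples):

```python
# Simplified stand-in for ConfigSetMetadata.check_version: a config set
# declares a minimum compatible Nebari version, and applying it with an
# older running version raises. The real code uses packaging.SpecifierSet.

def parse_version(version: str) -> tuple:
    """Parse a CalVer-style version like '2025.2.1' into an int tuple."""
    return tuple(int(part) for part in version.split("."))

def check_version(current: str, minimum: str) -> None:
    """Raise ValueError if `current` is older than `minimum`."""
    if parse_version(current) < parse_version(minimum):
        raise ValueError(
            f'Nebari version "{current}" is not compatible with '
            f"version requirement >={minimum} for this config set."
        )

check_version("2025.2.1", "2024.9.1")  # compatible: no exception
try:
    check_version("2024.9.1", "2025.2.1")
    incompatible_raised = False
except ValueError:
    incompatible_raised = True
```

Tuple comparison makes the ordering component-wise and numeric, which is why the version string is split and converted before comparing.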
diff --git a/src/_nebari/constants.py b/src/_nebari/constants.py
index 6e57519fee..a4f81c354c 100644
--- a/src/_nebari/constants.py
+++ b/src/_nebari/constants.py
@@ -1,31 +1,27 @@
-CURRENT_RELEASE = "2024.9.1"
+CURRENT_RELEASE = "2025.2.1"
HELM_VERSION = "v3.15.3"
KUSTOMIZE_VERSION = "5.4.3"
-# NOTE: Terraform cannot be upgraded further due to Hashicorp licensing changes
-# implemented in August 2023.
-# https://www.hashicorp.com/license-faq
-TERRAFORM_VERSION = "1.5.7"
+OPENTOFU_VERSION = "1.8.3"
KUBERHEALTHY_HELM_VERSION = "100"
# 04-kubernetes-ingress
DEFAULT_TRAEFIK_IMAGE_TAG = "2.9.1"
-HIGHEST_SUPPORTED_K8S_VERSION = ("1", "29", "2")
+HIGHEST_SUPPORTED_K8S_VERSION = ("1", "31") # specify Major and Minor version
DEFAULT_GKE_RELEASE_CHANNEL = "UNSPECIFIED"
DEFAULT_NEBARI_DASK_VERSION = CURRENT_RELEASE
DEFAULT_NEBARI_IMAGE_TAG = CURRENT_RELEASE
DEFAULT_NEBARI_WORKFLOW_CONTROLLER_IMAGE_TAG = CURRENT_RELEASE
-DEFAULT_CONDA_STORE_IMAGE_TAG = "2024.3.1"
+DEFAULT_CONDA_STORE_IMAGE_TAG = "2025.2.1"
LATEST_SUPPORTED_PYTHON_VERSION = "3.10"
# DOCS
-DO_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-do"
AZURE_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-azure"
AWS_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-aws"
GCP_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-gcp"
@@ -34,4 +30,3 @@
AWS_DEFAULT_REGION = "us-east-1"
AZURE_DEFAULT_REGION = "Central US"
GCP_DEFAULT_REGION = "us-central1"
-DO_DEFAULT_REGION = "nyc3"
diff --git a/src/_nebari/initialize.py b/src/_nebari/initialize.py
index 7745df2a98..7566fe7b44 100644
--- a/src/_nebari/initialize.py
+++ b/src/_nebari/initialize.py
@@ -8,21 +8,16 @@
import pydantic
import requests
-from _nebari import constants
+from _nebari import constants, utils
+from _nebari.config_set import read_config_set
from _nebari.provider import git
from _nebari.provider.cicd import github
-from _nebari.provider.cloud import (
- amazon_web_services,
- azure_cloud,
- digital_ocean,
- google_cloud,
-)
+from _nebari.provider.cloud import amazon_web_services, azure_cloud, google_cloud
from _nebari.provider.oauth.auth0 import create_client
from _nebari.stages.bootstrap import CiEnum
from _nebari.stages.infrastructure import (
DEFAULT_AWS_NODE_GROUPS,
DEFAULT_AZURE_NODE_GROUPS,
- DEFAULT_DO_NODE_GROUPS,
DEFAULT_GCP_NODE_GROUPS,
node_groups_to_dict,
)
@@ -53,6 +48,7 @@ def render_config(
region: str = None,
disable_prompt: bool = False,
ssl_cert_email: str = None,
+ config_set: str = None,
) -> Dict[str, Any]:
config = {
"provider": cloud_provider,
@@ -117,22 +113,7 @@ def render_config(
),
}
- if cloud_provider == ProviderEnum.do:
- do_region = region or constants.DO_DEFAULT_REGION
- do_kubernetes_versions = kubernetes_version or get_latest_kubernetes_version(
- digital_ocean.kubernetes_versions()
- )
- config["digital_ocean"] = {
- "kubernetes_version": do_kubernetes_versions,
- "region": do_region,
- "node_groups": node_groups_to_dict(DEFAULT_DO_NODE_GROUPS),
- }
-
- config["theme"]["jupyterhub"][
- "hub_subtitle"
- ] = f"{WELCOME_HEADER_TEXT} on Digital Ocean"
-
- elif cloud_provider == ProviderEnum.gcp:
+ if cloud_provider == ProviderEnum.gcp:
gcp_region = region or constants.GCP_DEFAULT_REGION
gcp_kubernetes_version = kubernetes_version or get_latest_kubernetes_version(
google_cloud.kubernetes_versions(gcp_region)
@@ -197,13 +178,17 @@ def render_config(
config["certificate"] = {"type": CertificateEnum.letsencrypt.value}
config["certificate"]["acme_email"] = ssl_cert_email
+ if config_set:
+ config_set = read_config_set(config_set)
+ config = utils.deep_merge(config, config_set.config)
+
# validate configuration and convert to model
from nebari.plugins import nebari_plugin_manager
try:
config_model = nebari_plugin_manager.config_schema.model_validate(config)
except pydantic.ValidationError as e:
- print(str(e))
+ raise e
if repository_auto_provision:
match = re.search(github_url_regex, repository)
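The config-set hunk above merges overrides into the generated config with `utils.deep_merge`. A minimal sketch of what such a recursive merge might look like (assumed behavior — Nebari's actual `deep_merge` may resolve conflicts differently):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`, with `override` winning
    on conflicts. Nested dicts are merged key by key; any other value is
    replaced wholesale. Returns a new dict; inputs are not mutated."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical data shaped like render_config's output and a config set.
config = {"provider": "aws", "theme": {"jupyterhub": {"hub_title": "Nebari"}}}
overrides = {"theme": {"jupyterhub": {"hub_subtitle": "custom"}}}
merged = deep_merge(config, overrides)
```

The point of the recursion is that a config set can set one nested key (here `hub_subtitle`) without clobbering sibling keys the generator already filled in.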
@@ -245,16 +230,7 @@ def github_auto_provision(config: pydantic.BaseModel, owner: str, repo: str):
try:
# Secrets
- if config.provider == ProviderEnum.do:
- for name in {
- "AWS_ACCESS_KEY_ID",
- "AWS_SECRET_ACCESS_KEY",
- "SPACES_ACCESS_KEY_ID",
- "SPACES_SECRET_ACCESS_KEY",
- "DIGITALOCEAN_TOKEN",
- }:
- github.update_secret(owner, repo, name, os.environ[name])
- elif config.provider == ProviderEnum.aws:
+ if config.provider == ProviderEnum.aws:
for name in {
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
diff --git a/src/_nebari/provider/cicd/github.py b/src/_nebari/provider/cicd/github.py
index d091d1d027..92d3b853e9 100644
--- a/src/_nebari/provider/cicd/github.py
+++ b/src/_nebari/provider/cicd/github.py
@@ -117,12 +117,6 @@ def gha_env_vars(config: schema.Main):
env_vars["ARM_CLIENT_SECRET"] = "${{ secrets.ARM_CLIENT_SECRET }}"
env_vars["ARM_SUBSCRIPTION_ID"] = "${{ secrets.ARM_SUBSCRIPTION_ID }}"
env_vars["ARM_TENANT_ID"] = "${{ secrets.ARM_TENANT_ID }}"
- elif config.provider == schema.ProviderEnum.do:
- env_vars["AWS_ACCESS_KEY_ID"] = "${{ secrets.AWS_ACCESS_KEY_ID }}"
- env_vars["AWS_SECRET_ACCESS_KEY"] = "${{ secrets.AWS_SECRET_ACCESS_KEY }}"
- env_vars["SPACES_ACCESS_KEY_ID"] = "${{ secrets.SPACES_ACCESS_KEY_ID }}"
- env_vars["SPACES_SECRET_ACCESS_KEY"] = "${{ secrets.SPACES_SECRET_ACCESS_KEY }}"
- env_vars["DIGITALOCEAN_TOKEN"] = "${{ secrets.DIGITALOCEAN_TOKEN }}"
elif config.provider == schema.ProviderEnum.gcp:
env_vars["GOOGLE_CREDENTIALS"] = "${{ secrets.GOOGLE_CREDENTIALS }}"
env_vars["PROJECT_ID"] = "${{ secrets.PROJECT_ID }}"
diff --git a/src/_nebari/provider/cloud/amazon_web_services.py b/src/_nebari/provider/cloud/amazon_web_services.py
index 1123c07fe0..dee4df891c 100644
--- a/src/_nebari/provider/cloud/amazon_web_services.py
+++ b/src/_nebari/provider/cloud/amazon_web_services.py
@@ -2,6 +2,7 @@
import os
import re
import time
+from dataclasses import dataclass
from typing import Dict, List, Optional
import boto3
@@ -23,25 +24,19 @@ def check_credentials() -> None:
@functools.lru_cache()
def aws_session(
- region: Optional[str] = None, digitalocean_region: Optional[str] = None
+ region: Optional[str] = None,
) -> boto3.Session:
"""Create a boto3 session."""
- if digitalocean_region:
- aws_access_key_id = os.environ["SPACES_ACCESS_KEY_ID"]
- aws_secret_access_key = os.environ["SPACES_SECRET_ACCESS_KEY"]
- region = digitalocean_region
- aws_session_token = None
- else:
- check_credentials()
- aws_access_key_id = os.environ["AWS_ACCESS_KEY_ID"]
- aws_secret_access_key = os.environ["AWS_SECRET_ACCESS_KEY"]
- aws_session_token = os.environ.get("AWS_SESSION_TOKEN")
-
- if not region:
- raise ValueError(
- "Please specify `region` in the nebari-config.yaml or if initializing the nebari-config, set the region via the "
- "`--region` flag or via the AWS_DEFAULT_REGION environment variable.\n"
- )
+ check_credentials()
+ aws_access_key_id = os.environ["AWS_ACCESS_KEY_ID"]
+ aws_secret_access_key = os.environ["AWS_SECRET_ACCESS_KEY"]
+ aws_session_token = os.environ.get("AWS_SESSION_TOKEN")
+
+ if not region:
+ raise ValueError(
+ "Please specify `region` in the nebari-config.yaml or if initializing the nebari-config, set the region via the "
+ "`--region` flag or via the AWS_DEFAULT_REGION environment variable.\n"
+ )
return boto3.Session(
region_name=region,
@@ -121,6 +116,35 @@ def instances(region: str) -> Dict[str, str]:
return {t: t for t in instance_types}
+@dataclass
+class Kms_Key_Info:
+ Arn: str
+ KeyUsage: str
+ KeySpec: str
+ KeyManager: str
+
+
+@functools.lru_cache()
+def kms_key_arns(region: str) -> Dict[str, Kms_Key_Info]:
+ """Return dict of available/enabled KMS key IDs and associated KeyMetadata for the AWS region."""
+ session = aws_session(region=region)
+ client = session.client("kms")
+ kms_keys = {}
+ # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kms/client/list_keys.html
+ for key in client.list_keys().get("Keys"):
+ key_id = key["KeyId"]
+ # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kms/client/describe_key.html#:~:text=Response%20Structure
+ key_data = client.describe_key(KeyId=key_id).get("KeyMetadata")
+ if key_data.get("Enabled"):
+ kms_keys[key_id] = Kms_Key_Info(
+ Arn=key_data.get("Arn"),
+ KeyUsage=key_data.get("KeyUsage"),
+ KeySpec=key_data.get("KeySpec"),
+ KeyManager=key_data.get("KeyManager"),
+ )
+ return kms_keys
+
+
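`kms_key_arns` above filters the account's keys down to enabled ones and records a handful of metadata fields in a dataclass. The same select-and-project pattern, with the boto3 calls replaced by stubbed data (the metadata dicts below are fabricated for illustration; the real function calls `client.list_keys` and `client.describe_key`):

```python
from dataclasses import dataclass

@dataclass
class KmsKeyInfo:
    Arn: str
    KeyUsage: str
    KeySpec: str
    KeyManager: str

# Stand-ins for the KeyMetadata dicts describe_key would return.
fake_key_metadata = [
    {"KeyId": "k1", "Enabled": True, "Arn": "arn:aws:kms:example:key/k1",
     "KeyUsage": "ENCRYPT_DECRYPT", "KeySpec": "SYMMETRIC_DEFAULT",
     "KeyManager": "CUSTOMER"},
    {"KeyId": "k2", "Enabled": False, "Arn": "arn:aws:kms:example:key/k2",
     "KeyUsage": "ENCRYPT_DECRYPT", "KeySpec": "SYMMETRIC_DEFAULT",
     "KeyManager": "AWS"},
]

def enabled_kms_keys(metadata_list):
    """Keep only enabled keys, projecting the fields Nebari records."""
    return {
        m["KeyId"]: KmsKeyInfo(
            Arn=m["Arn"], KeyUsage=m["KeyUsage"],
            KeySpec=m["KeySpec"], KeyManager=m["KeyManager"],
        )
        for m in metadata_list
        if m["Enabled"]
    }

keys = enabled_kms_keys(fake_key_metadata)
```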
def aws_get_vpc_id(name: str, namespace: str, region: str) -> Optional[str]:
"""Return VPC ID for the EKS cluster namedd `{name}-{namespace}`."""
cluster_name = f"{name}-{namespace}"
@@ -682,21 +706,17 @@ def aws_delete_s3_objects(
bucket_name: str,
endpoint: Optional[str] = None,
region: Optional[str] = None,
- digitalocean_region: Optional[str] = None,
):
"""
Delete all objects in the S3 bucket.
- NOTE: This method is shared with Digital Ocean as their "Spaces" is S3 compatible and uses the same API.
-
Parameters:
bucket_name (str): S3 bucket name
- endpoint (str): S3 endpoint URL (required for Digital Ocean spaces)
+ endpoint (str): S3 endpoint URL
region (str): AWS region
- digitalocean_region (str): Digital Ocean region
"""
- session = aws_session(region=region, digitalocean_region=digitalocean_region)
+ session = aws_session(region=region)
s3 = session.client("s3", endpoint_url=endpoint)
try:
@@ -749,22 +769,18 @@ def aws_delete_s3_bucket(
bucket_name: str,
endpoint: Optional[str] = None,
region: Optional[str] = None,
- digitalocean_region: Optional[str] = None,
):
"""
Delete S3 bucket.
- NOTE: This method is shared with Digital Ocean as their "Spaces" is S3 compatible and uses the same API.
-
Parameters:
bucket_name (str): S3 bucket name
- endpoint (str): S3 endpoint URL (required for Digital Ocean spaces)
+ endpoint (str): S3 endpoint URL
region (str): AWS region
- digitalocean_region (str): Digital Ocean region
"""
- aws_delete_s3_objects(bucket_name, endpoint, region, digitalocean_region)
+ aws_delete_s3_objects(bucket_name, endpoint, region)
- session = aws_session(region=region, digitalocean_region=digitalocean_region)
+ session = aws_session(region=region)
s3 = session.client("s3", endpoint_url=endpoint)
try:
diff --git a/src/_nebari/provider/cloud/commons.py b/src/_nebari/provider/cloud/commons.py
index 566b2029a4..d2bed87c48 100644
--- a/src/_nebari/provider/cloud/commons.py
+++ b/src/_nebari/provider/cloud/commons.py
@@ -6,9 +6,7 @@
def filter_by_highest_supported_k8s_version(k8s_versions_list):
filtered_k8s_versions_list = []
for k8s_version in k8s_versions_list:
- version = tuple(
- filter(None, re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", k8s_version).groups())
- )
+ version = tuple(filter(None, re.search(r"(\d+)\.(\d+)", k8s_version).groups()))
if version <= HIGHEST_SUPPORTED_K8S_VERSION:
filtered_k8s_versions_list.append(k8s_version)
return filtered_k8s_versions_list
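The change above shortens the comparison to (major, minor) tuples. Note the tuples hold strings, so each component compares lexicographically — which works while Kubernetes minor versions stay two digits. A self-contained sketch of the revised filter (reimplemented here for illustration):

```python
import re

HIGHEST_SUPPORTED_K8S_VERSION = ("1", "31")  # (major, minor), as strings

def filter_by_highest_supported_k8s_version(k8s_versions_list):
    """Keep versions whose (major, minor) pair does not exceed the cap."""
    filtered = []
    for k8s_version in k8s_versions_list:
        # Only major and minor are extracted; the patch level is ignored.
        version = tuple(
            filter(None, re.search(r"(\d+)\.(\d+)", k8s_version).groups())
        )
        if version <= HIGHEST_SUPPORTED_K8S_VERSION:
            filtered.append(k8s_version)
    return filtered

supported = filter_by_highest_supported_k8s_version(["1.29.2", "1.31.0", "1.32.1"])
```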
diff --git a/src/_nebari/provider/cloud/digital_ocean.py b/src/_nebari/provider/cloud/digital_ocean.py
deleted file mode 100644
index 3e4a507be6..0000000000
--- a/src/_nebari/provider/cloud/digital_ocean.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import functools
-import os
-import tempfile
-import typing
-
-import kubernetes.client
-import kubernetes.config
-import requests
-
-from _nebari.constants import DO_ENV_DOCS
-from _nebari.provider.cloud.amazon_web_services import aws_delete_s3_bucket
-from _nebari.provider.cloud.commons import filter_by_highest_supported_k8s_version
-from _nebari.utils import check_environment_variables, set_do_environment
-from nebari import schema
-
-
-def check_credentials() -> None:
- required_variables = {
- "DIGITALOCEAN_TOKEN",
- "SPACES_ACCESS_KEY_ID",
- "SPACES_SECRET_ACCESS_KEY",
- }
- check_environment_variables(required_variables, DO_ENV_DOCS)
-
-
-def digital_ocean_request(url, method="GET", json=None):
- BASE_DIGITALOCEAN_URL = "https://api.digitalocean.com/v2/"
-
- for name in {"DIGITALOCEAN_TOKEN"}:
- if name not in os.environ:
- raise ValueError(
- f"Digital Ocean api requests require environment variable={name} defined"
- )
-
- headers = {"Authorization": f'Bearer {os.environ["DIGITALOCEAN_TOKEN"]}'}
-
- method_map = {
- "GET": requests.get,
- "DELETE": requests.delete,
- }
-
- response = method_map[method](
- f"{BASE_DIGITALOCEAN_URL}{url}", headers=headers, json=json
- )
- response.raise_for_status()
- return response
-
-
-@functools.lru_cache()
-def _kubernetes_options():
- return digital_ocean_request("kubernetes/options").json()
-
-
-def instances():
- return _kubernetes_options()["options"]["sizes"]
-
-
-def regions():
- return _kubernetes_options()["options"]["regions"]
-
-
-def kubernetes_versions() -> typing.List[str]:
- """Return list of available kubernetes supported by cloud provider. Sorted from oldest to latest."""
- supported_kubernetes_versions = sorted(
- [_["slug"].split("-")[0] for _ in _kubernetes_options()["options"]["versions"]]
- )
- filtered_versions = filter_by_highest_supported_k8s_version(
- supported_kubernetes_versions
- )
- return [f"{v}-do.0" for v in filtered_versions]
-
-
-def digital_ocean_get_cluster_id(cluster_name):
- clusters = digital_ocean_request("kubernetes/clusters").json()[
- "kubernetes_clusters"
- ]
-
- cluster_id = None
- for cluster in clusters:
- if cluster["name"] == cluster_name:
- cluster_id = cluster["id"]
- break
-
- return cluster_id
-
-
-def digital_ocean_get_kubeconfig(cluster_id: str):
- kubeconfig_content = digital_ocean_request(
- f"kubernetes/clusters/{cluster_id}/kubeconfig"
- ).content
-
- with tempfile.NamedTemporaryFile(delete=False) as temp_kubeconfig:
- temp_kubeconfig.write(kubeconfig_content)
-
- return temp_kubeconfig.name
-
-
-def digital_ocean_delete_kubernetes_cluster(cluster_name: str):
- cluster_id = digital_ocean_get_cluster_id(cluster_name)
- digital_ocean_request(f"kubernetes/clusters/{cluster_id}", method="DELETE")
-
-
-def digital_ocean_cleanup(config: schema.Main):
- """Delete all Digital Ocean resources created by Nebari."""
-
- name = config.project_name
- namespace = config.namespace
-
- cluster_name = f"{name}-{namespace}"
- tf_state_bucket = f"{cluster_name}-terraform-state"
- do_spaces_endpoint = "https://nyc3.digitaloceanspaces.com"
-
- cluster_id = digital_ocean_get_cluster_id(cluster_name)
- if cluster_id is None:
- return
-
- kubernetes.config.load_kube_config(digital_ocean_get_kubeconfig(cluster_id))
- api = kubernetes.client.CoreV1Api()
-
- labels = {"component": "singleuser-server", "app": "jupyterhub"}
-
- api.delete_collection_namespaced_pod(
- namespace=namespace,
- label_selector=",".join([f"{k}={v}" for k, v in labels.items()]),
- )
-
- set_do_environment()
- aws_delete_s3_bucket(
- tf_state_bucket, digitalocean=True, endpoint=do_spaces_endpoint
- )
- digital_ocean_delete_kubernetes_cluster(cluster_name)
diff --git a/src/_nebari/provider/cloud/google_cloud.py b/src/_nebari/provider/cloud/google_cloud.py
index 6b54e40e9d..5317cb1528 100644
--- a/src/_nebari/provider/cloud/google_cloud.py
+++ b/src/_nebari/provider/cloud/google_cloud.py
@@ -51,19 +51,47 @@ def regions() -> Set[str]:
return {region.name for region in response}
+@functools.lru_cache()
+def instances(region: str) -> set[str]:
+ """Return a set of available compute instances in a region."""
+ credentials, project_id = load_credentials()
+ zones_client = compute_v1.services.region_zones.RegionZonesClient(
+ credentials=credentials
+ )
+ instances_client = compute_v1.MachineTypesClient(credentials=credentials)
+ zone_list = zones_client.list(project=project_id, region=region)
+ zones = [zone for zone in zone_list]
+ instance_set: set[str] = set()
+ for zone in zones:
+ instance_list = instances_client.list(project=project_id, zone=zone.name)
+ for instance in instance_list:
+ instance_set.add(instance.name)
+ return instance_set
+
+
@functools.lru_cache()
def kubernetes_versions(region: str) -> List[str]:
"""Return list of available kubernetes supported by cloud provider. Sorted from oldest to latest."""
credentials, project_id = load_credentials()
client = container_v1.ClusterManagerClient(credentials=credentials)
response = client.get_server_config(
- name=f"projects/{project_id}/locations/{region}"
+ name=f"projects/{project_id}/locations/{region}", timeout=300
)
supported_kubernetes_versions = response.valid_master_versions
return filter_by_highest_supported_k8s_version(supported_kubernetes_versions)
+def get_patch_version(full_version: str) -> str:
+ return full_version.split("-")[0]
+
+
+def get_minor_version(full_version: str) -> str:
+ patch_version = get_patch_version(full_version)
+ parts = patch_version.split(".")
+ return f"{parts[0]}.{parts[1]}"
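The two helpers added above normalize GKE-style version strings down to their patch and minor components by stripping any `-` build suffix. Reproduced standalone with a usage example (the `-gke.100` suffix is an assumed sample format):

```python
def get_patch_version(full_version: str) -> str:
    """Strip any build suffix after the first '-': '1.29.7-gke.100' -> '1.29.7'."""
    return full_version.split("-")[0]

def get_minor_version(full_version: str) -> str:
    """Reduce to major.minor: '1.29.7-gke.100' -> '1.29'."""
    patch_version = get_patch_version(full_version)
    parts = patch_version.split(".")
    return f"{parts[0]}.{parts[1]}"

patch = get_patch_version("1.29.7-gke.100")
minor = get_minor_version("1.29.7-gke.100")
```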
+
+
def cluster_exists(cluster_name: str, region: str) -> bool:
"""Check if a GKE cluster exists."""
credentials, project_id = load_credentials()
diff --git a/src/_nebari/provider/terraform.py b/src/_nebari/provider/opentofu.py
similarity index 62%
rename from src/_nebari/provider/terraform.py
rename to src/_nebari/provider/opentofu.py
index 59d88e76dd..78936d1808 100644
--- a/src/_nebari/provider/terraform.py
+++ b/src/_nebari/provider/opentofu.py
@@ -18,39 +18,39 @@
logger = logging.getLogger(__name__)
-class TerraformException(Exception):
+class OpenTofuException(Exception):
pass
def deploy(
directory,
- terraform_init: bool = True,
- terraform_import: bool = False,
- terraform_apply: bool = True,
- terraform_destroy: bool = False,
+ tofu_init: bool = True,
+ tofu_import: bool = False,
+ tofu_apply: bool = True,
+ tofu_destroy: bool = False,
input_vars: Dict[str, Any] = {},
state_imports: List[Any] = [],
):
- """Execute a given terraform directory.
+ """Execute a given directory with OpenTofu infrastructure configuration.
Parameters:
- directory: directory in which to run terraform operations on
+ directory: directory in which to run tofu operations on
- terraform_init: whether to run `terraform init` default True
+ tofu_init: whether to run `tofu init` default True
- terraform_import: whether to run `terraform import` default
+ tofu_import: whether to run `tofu import` default
False for each `state_imports` supplied to function
- terraform_apply: whether to run `terraform apply` default True
+ tofu_apply: whether to run `tofu apply` default True
- terraform_destroy: whether to run `terraform destroy` default
+ tofu_destroy: whether to run `tofu destroy` default
False
input_vars: supply values for "variable" resources within
terraform module
state_imports: (addr, id) pairs for iterate through and attempt
- to terraform import
+ to tofu import
"""
with tempfile.NamedTemporaryFile(
mode="w", encoding="utf-8", suffix=".tfvars.json"
@@ -58,25 +58,25 @@ def deploy(
json.dump(input_vars, f.file)
f.file.flush()
- if terraform_init:
+ if tofu_init:
init(directory)
- if terraform_import:
+ if tofu_import:
for addr, id in state_imports:
tfimport(
addr, id, directory=directory, var_files=[f.name], exist_ok=True
)
- if terraform_apply:
+ if tofu_apply:
apply(directory, var_files=[f.name])
- if terraform_destroy:
+ if tofu_destroy:
destroy(directory, var_files=[f.name])
return output(directory)
-def download_terraform_binary(version=constants.TERRAFORM_VERSION):
+def download_opentofu_binary(version=constants.OPENTOFU_VERSION):
os_mapping = {
"linux": "linux",
"win32": "windows",
@@ -94,73 +94,72 @@ def download_terraform_binary(version=constants.TERRAFORM_VERSION):
"arm64": "arm64",
}
- download_url = f"https://releases.hashicorp.com/terraform/{version}/terraform_{version}_{os_mapping[sys.platform]}_{architecture_mapping[platform.machine()]}.zip"
- filename_directory = Path(tempfile.gettempdir()) / "terraform" / version
- filename_path = filename_directory / "terraform"
+ download_url = f"https://github.com/opentofu/opentofu/releases/download/v{version}/tofu_{version}_{os_mapping[sys.platform]}_{architecture_mapping[platform.machine()]}.zip"
+
+ filename_directory = Path(tempfile.gettempdir()) / "opentofu" / version
+ filename_path = filename_directory / "tofu"
if not filename_path.is_file():
logger.info(
- f"downloading and extracting terraform binary from url={download_url} to path={filename_path}"
+ f"downloading and extracting opentofu binary from url={download_url} to path={filename_path}"
)
with urllib.request.urlopen(download_url) as f:
bytes_io = io.BytesIO(f.read())
download_file = zipfile.ZipFile(bytes_io)
- download_file.extract("terraform", filename_directory)
+ download_file.extract("tofu", filename_directory)
filename_path.chmod(0o555)
return filename_path
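The URL template in the hunk above can be exercised on its own. This is a minimal sketch of how the OpenTofu release URL is assembled from the OS and architecture mappings; the version `1.7.1` is illustrative, and the `darwin` entry is an assumption since the middle of `os_mapping` is elided by the hunk.

```python
# Mirrors the mappings used by download_opentofu_binary in the patch above.
# "darwin" is an assumed entry; the hunk only shows "linux", "win32", and "arm64".
OS_MAPPING = {"linux": "linux", "win32": "windows", "darwin": "darwin"}
ARCH_MAPPING = {"x86_64": "amd64", "arm64": "arm64"}


def opentofu_download_url(version: str, platform_name: str, machine: str) -> str:
    # Build the GitHub release URL using the same f-string shape as the patch.
    os_name = OS_MAPPING[platform_name]
    arch = ARCH_MAPPING[machine]
    return (
        "https://github.com/opentofu/opentofu/releases/download/"
        f"v{version}/tofu_{version}_{os_name}_{arch}.zip"
    )


print(opentofu_download_url("1.7.1", "linux", "x86_64"))
```

Note the archive member changes along with the URL: the patch extracts `tofu` rather than `terraform` from the zip.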
-def run_terraform_subprocess(processargs, **kwargs):
- terraform_path = download_terraform_binary()
- logger.info(f" terraform at {terraform_path}")
- exit_code, output = run_subprocess_cmd([terraform_path] + processargs, **kwargs)
+def run_tofu_subprocess(processargs, **kwargs):
+ tofu_path = download_opentofu_binary()
+ logger.info(f" tofu at {tofu_path}")
+ exit_code, output = run_subprocess_cmd([tofu_path] + processargs, **kwargs)
if exit_code != 0:
- raise TerraformException("Terraform returned an error")
+ raise OpenTofuException("OpenTofu returned an error")
return output
def version():
- terraform_path = download_terraform_binary()
- logger.info(f"checking terraform={terraform_path} version")
+ tofu_path = download_opentofu_binary()
+ logger.info(f"checking opentofu={tofu_path} version")
- version_output = subprocess.check_output([terraform_path, "--version"]).decode(
- "utf-8"
- )
+ version_output = subprocess.check_output([tofu_path, "--version"]).decode("utf-8")
return re.search(r"(\d+)\.(\d+).(\d+)", version_output).group(0)
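The `version()` function above parses the semantic version out of the binary's `--version` banner. A standalone sketch of the same regex, with an illustrative banner string; note that the second separator in the source pattern is an unescaped `.`, so it matches any single character between the minor and patch components.

```python
import re


def parse_semver(version_output: str) -> str:
    # Same pattern as version() in the patch above, unescaped second dot included.
    return re.search(r"(\d+)\.(\d+).(\d+)", version_output).group(0)


print(parse_semver("OpenTofu v1.7.1\non linux_amd64"))  # -> 1.7.1
```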
def init(directory=None, upgrade=True):
- logger.info(f"terraform init directory={directory}")
- with timer(logger, "terraform init"):
+ logger.info(f"tofu init directory={directory}")
+ with timer(logger, "tofu init"):
command = ["init"]
if upgrade:
command.append("-upgrade")
- run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+ run_tofu_subprocess(command, cwd=directory, prefix="tofu")
def apply(directory=None, targets=None, var_files=None):
targets = targets or []
var_files = var_files or []
- logger.info(f"terraform apply directory={directory} targets={targets}")
+ logger.info(f"tofu apply directory={directory} targets={targets}")
command = (
["apply", "-auto-approve"]
+ ["-target=" + _ for _ in targets]
+ ["-var-file=" + _ for _ in var_files]
)
- with timer(logger, "terraform apply"):
- run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+ with timer(logger, "tofu apply"):
+ run_tofu_subprocess(command, cwd=directory, prefix="tofu")
def output(directory=None):
- terraform_path = download_terraform_binary()
+ tofu_path = download_opentofu_binary()
- logger.info(f"terraform={terraform_path} output directory={directory}")
- with timer(logger, "terraform output"):
+ logger.info(f"tofu={tofu_path} output directory={directory}")
+ with timer(logger, "tofu output"):
return json.loads(
subprocess.check_output(
- [terraform_path, "output", "-json"], cwd=directory
+ [tofu_path, "output", "-json"], cwd=directory
).decode("utf8")[:-1]
)
@@ -168,61 +167,61 @@ def output(directory=None):
def tfimport(addr, id, directory=None, var_files=None, exist_ok=False):
var_files = var_files or []
- logger.info(f"terraform import directory={directory} addr={addr} id={id}")
+ logger.info(f"tofu import directory={directory} addr={addr} id={id}")
command = ["import"] + ["-var-file=" + _ for _ in var_files] + [addr, id]
logger.error(str(command))
- with timer(logger, "terraform import"):
+ with timer(logger, "tofu import"):
try:
- run_terraform_subprocess(
+ run_tofu_subprocess(
command,
cwd=directory,
- prefix="terraform",
+ prefix="tofu",
strip_errors=True,
timeout=30,
)
- except TerraformException as e:
+ except OpenTofuException as e:
if not exist_ok:
raise e
-def show(directory=None, terraform_init: bool = True) -> dict:
+def show(directory=None, tofu_init: bool = True) -> dict:
- if terraform_init:
+ if tofu_init:
init(directory)
- logger.info(f"terraform show directory={directory}")
+ logger.info(f"tofu show directory={directory}")
command = ["show", "-json"]
- with timer(logger, "terraform show"):
+ with timer(logger, "tofu show"):
try:
output = json.loads(
- run_terraform_subprocess(
+ run_tofu_subprocess(
command,
cwd=directory,
- prefix="terraform",
+ prefix="tofu",
strip_errors=True,
capture_output=True,
)
)
return output
- except TerraformException as e:
+ except OpenTofuException as e:
raise e
def refresh(directory=None, var_files=None):
var_files = var_files or []
- logger.info(f"terraform refresh directory={directory}")
+ logger.info(f"tofu refresh directory={directory}")
command = ["refresh"] + ["-var-file=" + _ for _ in var_files]
- with timer(logger, "terraform refresh"):
- run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+ with timer(logger, "tofu refresh"):
+ run_tofu_subprocess(command, cwd=directory, prefix="tofu")
def destroy(directory=None, targets=None, var_files=None):
targets = targets or []
var_files = var_files or []
- logger.info(f"terraform destroy directory={directory} targets={targets}")
+ logger.info(f"tofu destroy directory={directory} targets={targets}")
command = (
[
"destroy",
@@ -232,8 +231,8 @@ def destroy(directory=None, targets=None, var_files=None):
+ ["-var-file=" + _ for _ in var_files]
)
- with timer(logger, "terraform destroy"):
- run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+ with timer(logger, "tofu destroy"):
+ run_tofu_subprocess(command, cwd=directory, prefix="tofu")
def rm_local_state(directory=None):
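Taken together, the renamed `tofu_*` flags in `deploy()` select which subcommands run. This is a pure-Python sketch of that control flow, decoupled from the real subprocess calls, showing the same defaults as the patched signature (`tofu_init`/`tofu_apply` on, `tofu_import`/`tofu_destroy` off):

```python
def plan_steps(
    tofu_init: bool = True,
    tofu_import: bool = False,
    tofu_apply: bool = True,
    tofu_destroy: bool = False,
) -> list:
    """Return the OpenTofu subcommands deploy() would invoke, in order."""
    steps = []
    if tofu_init:
        steps.append("init")
    if tofu_import:
        steps.append("import")
    if tofu_apply:
        steps.append("apply")
    if tofu_destroy:
        steps.append("destroy")
    return steps


print(plan_steps())  # default deploy: ['init', 'apply']
```

The destroy path in `stages/base.py` below maps onto the same flags: `tofu_init=True, tofu_import=True, tofu_apply=False, tofu_destroy=True`.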
diff --git a/src/_nebari/stages/base.py b/src/_nebari/stages/base.py
index cef1322e95..bcc6bb82bf 100644
--- a/src/_nebari/stages/base.py
+++ b/src/_nebari/stages/base.py
@@ -11,7 +11,7 @@
from kubernetes import client, config
from kubernetes.client.rest import ApiException
-from _nebari.provider import helm, kubernetes, kustomize, terraform
+from _nebari.provider import helm, kubernetes, kustomize, opentofu
from _nebari.stages.tf_objects import NebariTerraformState
from nebari.hookspecs import NebariStage
@@ -248,7 +248,7 @@ def tf_objects(self) -> List[Dict]:
def render(self) -> Dict[pathlib.Path, str]:
contents = {
- (self.stage_prefix / "_nebari.tf.json"): terraform.tf_render_objects(
+ (self.stage_prefix / "_nebari.tf.json"): opentofu.tf_render_objects(
self.tf_objects()
)
}
@@ -283,19 +283,19 @@ def deploy(
self,
stage_outputs: Dict[str, Dict[str, Any]],
disable_prompt: bool = False,
- terraform_init: bool = True,
+ tofu_init: bool = True,
):
deploy_config = dict(
directory=str(self.output_directory / self.stage_prefix),
input_vars=self.input_vars(stage_outputs),
- terraform_init=terraform_init,
+ tofu_init=tofu_init,
)
state_imports = self.state_imports()
if state_imports:
- deploy_config["terraform_import"] = True
+ deploy_config["tofu_import"] = True
deploy_config["state_imports"] = state_imports
- self.set_outputs(stage_outputs, terraform.deploy(**deploy_config))
+ self.set_outputs(stage_outputs, opentofu.deploy(**deploy_config))
self.post_deploy(stage_outputs, disable_prompt)
yield
@@ -318,27 +318,27 @@ def destroy(
):
self.set_outputs(
stage_outputs,
- terraform.deploy(
+ opentofu.deploy(
directory=str(self.output_directory / self.stage_prefix),
input_vars=self.input_vars(stage_outputs),
- terraform_init=True,
- terraform_import=True,
- terraform_apply=False,
- terraform_destroy=False,
+ tofu_init=True,
+ tofu_import=True,
+ tofu_apply=False,
+ tofu_destroy=False,
),
)
yield
try:
- terraform.deploy(
+ opentofu.deploy(
directory=str(self.output_directory / self.stage_prefix),
input_vars=self.input_vars(stage_outputs),
- terraform_init=True,
- terraform_import=True,
- terraform_apply=False,
- terraform_destroy=True,
+ tofu_init=True,
+ tofu_import=True,
+ tofu_apply=False,
+ tofu_destroy=True,
)
status["stages/" + self.name] = True
- except terraform.TerraformException as e:
+ except opentofu.OpenTofuException as e:
if not ignore_errors:
raise e
status["stages/" + self.name] = False
diff --git a/src/_nebari/stages/infrastructure/__init__.py b/src/_nebari/stages/infrastructure/__init__.py
index 559f17bd53..553e520e3a 100644
--- a/src/_nebari/stages/infrastructure/__init__.py
+++ b/src/_nebari/stages/infrastructure/__init__.py
@@ -6,18 +6,14 @@
import re
import sys
import tempfile
+import warnings
from typing import Annotated, Any, Dict, List, Literal, Optional, Tuple, Type, Union
-from pydantic import Field, field_validator, model_validator
+from pydantic import ConfigDict, Field, field_validator, model_validator
from _nebari import constants
-from _nebari.provider import terraform
-from _nebari.provider.cloud import (
- amazon_web_services,
- azure_cloud,
- digital_ocean,
- google_cloud,
-)
+from _nebari.provider import opentofu
+from _nebari.provider.cloud import amazon_web_services, azure_cloud, google_cloud
from _nebari.stages.base import NebariTerraformStage
from _nebari.stages.kubernetes_services import SharedFsEnum
from _nebari.stages.tf_objects import NebariTerraformState
@@ -43,22 +39,6 @@ class ExistingInputVars(schema.Base):
kube_context: str
-class DigitalOceanNodeGroup(schema.Base):
- instance: str
- min_nodes: int
- max_nodes: int
-
-
-class DigitalOceanInputVars(schema.Base):
- name: str
- environment: str
- region: str
- tags: List[str]
- kubernetes_version: str
- node_groups: Dict[str, DigitalOceanNodeGroup]
- kubeconfig_filename: str = get_kubeconfig_filename()
-
-
class GCPNodeGroupInputVars(schema.Base):
name: str
instance_type: str
@@ -115,6 +95,7 @@ class AzureInputVars(schema.Base):
name: str
environment: str
region: str
+ authorized_ip_ranges: List[str] = ["0.0.0.0/0"]
kubeconfig_filename: str = get_kubeconfig_filename()
kubernetes_version: str
node_groups: Dict[str, AzureNodeGroupInputVars]
@@ -125,6 +106,7 @@ class AzureInputVars(schema.Base):
tags: Dict[str, str] = {}
max_pods: Optional[int] = None
network_profile: Optional[Dict[str, str]] = None
+ azure_policy_enabled: Optional[bool] = None
workload_identity_enabled: bool = False
@@ -152,10 +134,23 @@ class AWSNodeGroupInputVars(schema.Base):
launch_template: Optional[AWSNodeLaunchTemplate] = None
-def construct_aws_ami_type(gpu_enabled: bool, launch_template: AWSNodeLaunchTemplate):
- """Construct the AWS AMI type based on the provided parameters."""
+def construct_aws_ami_type(
+ gpu_enabled: bool, launch_template: AWSNodeLaunchTemplate
+) -> str:
+ """
+ This function selects the Amazon Machine Image (AMI) type for AWS nodes by evaluating
+ the provided parameters. The selection logic prioritizes the launch template over the
+ GPU flag.
+
+ Returns the AMI type (str) determined by the following rules:
+ - Returns "CUSTOM" if a `launch_template` is provided and it includes a valid `ami_id`.
+ - Returns "AL2_x86_64_GPU" if `gpu_enabled` is True and no valid
+ `launch_template` is provided (None).
+ - Returns "AL2_x86_64" as the default AMI type if `gpu_enabled` is False and no
+ valid `launch_template` is provided (None).
+ """
- if launch_template and launch_template.ami_id:
+ if launch_template and getattr(launch_template, "ami_id", None):
return "CUSTOM"
if gpu_enabled:
@@ -174,6 +169,7 @@ class AWSInputVars(schema.Base):
eks_endpoint_access: Optional[
Literal["private", "public", "public_and_private"]
] = "public"
+ eks_kms_arn: Optional[str] = None
node_groups: List[AWSNodeGroupInputVars]
availability_zones: List[str]
vpc_cidr_block: str
@@ -210,11 +206,6 @@ def _calculate_node_groups(config: schema.Main):
group: {"key": "azure-node-pool", "value": group}
for group in ["general", "user", "worker"]
}
- elif config.provider == schema.ProviderEnum.do:
- return {
- group: {"key": "doks.digitalocean.com/node-pool", "value": group}
- for group in ["general", "user", "worker"]
- }
elif config.provider == schema.ProviderEnum.existing:
return config.existing.model_dump()["node_selectors"]
else:
@@ -253,67 +244,6 @@ class KeyValueDict(schema.Base):
value: str
-class DigitalOceanNodeGroup(schema.Base):
- """Representation of a node group with Digital Ocean
-
- - Kubernetes limits: https://docs.digitalocean.com/products/kubernetes/details/limits/
- - Available instance types: https://slugs.do-api.dev/
- """
-
- instance: str
- min_nodes: Annotated[int, Field(ge=1)] = 1
- max_nodes: Annotated[int, Field(ge=1)] = 1
-
-
-DEFAULT_DO_NODE_GROUPS = {
- "general": DigitalOceanNodeGroup(instance="g-8vcpu-32gb", min_nodes=1, max_nodes=1),
- "user": DigitalOceanNodeGroup(instance="g-4vcpu-16gb", min_nodes=1, max_nodes=5),
- "worker": DigitalOceanNodeGroup(instance="g-4vcpu-16gb", min_nodes=1, max_nodes=5),
-}
-
-
-class DigitalOceanProvider(schema.Base):
- region: str
- kubernetes_version: Optional[str] = None
- # Digital Ocean image slugs are listed here https://slugs.do-api.dev/
- node_groups: Dict[str, DigitalOceanNodeGroup] = DEFAULT_DO_NODE_GROUPS
- tags: Optional[List[str]] = []
-
- @model_validator(mode="before")
- @classmethod
- def _check_input(cls, data: Any) -> Any:
- digital_ocean.check_credentials()
-
- # check if region is valid
- available_regions = set(_["slug"] for _ in digital_ocean.regions())
- if data["region"] not in available_regions:
- raise ValueError(
- f"Digital Ocean region={data['region']} is not one of {available_regions}"
- )
-
- # check if kubernetes version is valid
- available_kubernetes_versions = digital_ocean.kubernetes_versions()
- if len(available_kubernetes_versions) == 0:
- raise ValueError(
- "Request to Digital Ocean for available Kubernetes versions failed."
- )
- if data["kubernetes_version"] is None:
- data["kubernetes_version"] = available_kubernetes_versions[-1]
- elif data["kubernetes_version"] not in available_kubernetes_versions:
- raise ValueError(
- f"\nInvalid `kubernetes-version` provided: {data['kubernetes_version']}.\nPlease select from one of the following supported Kubernetes versions: {available_kubernetes_versions} or omit flag to use latest Kubernetes version available."
- )
-
- available_instances = {_["slug"] for _ in digital_ocean.instances()}
- if "node_groups" in data:
- for _, node_group in data["node_groups"].items():
- if node_group["instance"] not in available_instances:
- raise ValueError(
- f"Digital Ocean instance {node_group.instance} not one of available instance types={available_instances}"
- )
- return data
-
-
class GCPIPAllocationPolicy(schema.Base):
cluster_secondary_range_name: str
services_secondary_range_name: str
@@ -358,6 +288,9 @@ class GCPNodeGroup(schema.Base):
class GoogleCloudPlatformProvider(schema.Base):
+ # If you pass a major and minor version without a patch version
+ # yaml will pass it as a float, so we need to coerce it to a string
+ model_config = ConfigDict(coerce_numbers_to_str=True)
region: str
project: str
kubernetes_version: str
@@ -372,6 +305,12 @@ class GoogleCloudPlatformProvider(schema.Base):
master_authorized_networks_config: Optional[Union[GCPCIDRBlock, None]] = None
private_cluster_config: Optional[Union[GCPPrivateClusterConfig, None]] = None
+ @field_validator("kubernetes_version", mode="before")
+ @classmethod
+ def transform_version_to_str(cls, value) -> str:
+ """Transforms the version to a string if it is not already."""
+ return str(value)
+
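The validator added above exists because YAML parses `kubernetes_version: 1.29` as a float. A minimal demonstration of the coercion and of the prefix matching the patch uses later (the available version strings are illustrative GKE-style values):

```python
def transform_version_to_str(value) -> str:
    # Same body as the field_validator above: normalize floats like 1.29 to "1.29".
    return str(value)


version = transform_version_to_str(1.29)
available = ["1.29.1-gke.1589000", "1.30.0-gke.1000"]
# Prefix match, as in the patched _check_input validator.
matched = any(v.startswith(version) for v in available)
print(version, matched)
```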
@model_validator(mode="before")
@classmethod
def _check_input(cls, data: Any) -> Any:
@@ -382,11 +321,28 @@ def _check_input(cls, data: Any) -> Any:
)
available_kubernetes_versions = google_cloud.kubernetes_versions(data["region"])
- print(available_kubernetes_versions)
- if data["kubernetes_version"] not in available_kubernetes_versions:
+ if not any(
+ v.startswith(str(data["kubernetes_version"]))
+ for v in available_kubernetes_versions
+ ):
raise ValueError(
f"\nInvalid `kubernetes-version` provided: {data['kubernetes_version']}.\nPlease select from one of the following supported Kubernetes versions: {available_kubernetes_versions} or omit flag to use latest Kubernetes version available."
)
+
+ # check if instances are valid
+ available_instances = google_cloud.instances(data["region"])
+ if "node_groups" in data:
+ for _, node_group in data["node_groups"].items():
+ instance = (
+ node_group["instance"]
+ if hasattr(node_group, "__getitem__")
+ else node_group.instance
+ )
+ if instance not in available_instances:
+ raise ValueError(
+ f"Google Cloud Platform instance {instance} not one of available instance types={available_instances}"
+ )
+
return data
@@ -407,6 +363,7 @@ class AzureProvider(schema.Base):
region: str
kubernetes_version: Optional[str] = None
storage_account_postfix: str
+ authorized_ip_ranges: Optional[List[str]] = ["0.0.0.0/0"]
resource_group_name: Optional[str] = None
node_groups: Dict[str, AzureNodeGroup] = DEFAULT_AZURE_NODE_GROUPS
storage_account_postfix: str
@@ -417,6 +374,7 @@ class AzureProvider(schema.Base):
network_profile: Optional[Dict[str, str]] = None
max_pods: Optional[int] = None
workload_identity_enabled: bool = False
+ azure_policy_enabled: Optional[bool] = None
@model_validator(mode="before")
@classmethod
@@ -468,7 +426,16 @@ class AWSNodeGroup(schema.Base):
gpu: bool = False
single_subnet: bool = False
permissions_boundary: Optional[str] = None
- launch_template: Optional[AWSNodeLaunchTemplate] = None
+ # Disabled as part of 2024.11.1 until #2832 is resolved
+ # launch_template: Optional[AWSNodeLaunchTemplate] = None
+
+ @model_validator(mode="before")
+ def check_launch_template(cls, values):
+ if "launch_template" in values:
+ raise ValueError(
+ "The 'launch_template' field is currently unavailable and has been removed from the configuration schema.\nPlease omit this field until it is reintroduced in a future update.",
+ )
+ return values
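The `check_launch_template` guard above rejects configs that still carry the retired field. A standalone sketch of the same behavior on plain dicts (the sample values are made up):

```python
def check_launch_template(values: dict) -> dict:
    # Reject the retired field outright, as the model_validator above does.
    if "launch_template" in values:
        raise ValueError(
            "The 'launch_template' field is currently unavailable and has been "
            "removed from the configuration schema."
        )
    return values


ok = check_launch_template({"instance": "m5.xlarge"})
try:
    check_launch_template({"launch_template": {"ami_id": "ami-0123"}})
    rejected = False
except ValueError:
    rejected = True
print(ok, rejected)
```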
DEFAULT_AWS_NODE_GROUPS = {
@@ -490,6 +457,7 @@ class AmazonWebServicesProvider(schema.Base):
eks_endpoint_access: Optional[
Literal["private", "public", "public_and_private"]
] = "public"
+ eks_kms_arn: Optional[str] = None
existing_subnet_ids: Optional[List[str]] = None
existing_security_group_id: Optional[str] = None
vpc_cidr_block: str = "10.10.0.0/16"
@@ -546,6 +514,42 @@ def _check_input(cls, data: Any) -> Any:
f"Amazon Web Services instance {node_group.instance} not one of available instance types={available_instances}"
)
+ # check if kms key is valid
+ available_kms_keys = amazon_web_services.kms_key_arns(data["region"])
+ if "eks_kms_arn" in data and data["eks_kms_arn"] is not None:
+ key_id = [
+ id for id in available_kms_keys.keys() if id in data["eks_kms_arn"]
+ ]
+ # Raise error if key_id is not found in available_kms_keys
+ if (
+ len(key_id) != 1
+ or available_kms_keys[key_id[0]].Arn != data["eks_kms_arn"]
+ ):
+ raise ValueError(
+ f"Amazon Web Services KMS Key with ARN {data['eks_kms_arn']} not one of available/enabled keys={[v.Arn for v in available_kms_keys.values() if v.KeyManager=='CUSTOMER' and v.KeySpec=='SYMMETRIC_DEFAULT']}"
+ )
+ key_id = key_id[0]
+ # Raise error if key is not a customer managed key
+ if available_kms_keys[key_id].KeyManager != "CUSTOMER":
+ raise ValueError(
+ f"Amazon Web Services KMS Key with ID {key_id} is not a customer managed key"
+ )
+ # Symmetric KMS keys with Encrypt and decrypt key-usage have the SYMMETRIC_DEFAULT key-spec
+ # EKS cluster encryption requires a Symmetric key that is set to encrypt and decrypt data
+ if available_kms_keys[key_id].KeySpec != "SYMMETRIC_DEFAULT":
+ if available_kms_keys[key_id].KeyUsage == "GENERATE_VERIFY_MAC":
+ raise ValueError(
+ f"Amazon Web Services KMS Key with ID {key_id} does not have KeyUsage set to 'Encrypt and decrypt' data"
+ )
+ elif available_kms_keys[key_id].KeyUsage != "ENCRYPT_DECRYPT":
+ raise ValueError(
+ f"Amazon Web Services KMS Key with ID {key_id} is not of type Symmetric, and KeyUsage not set to 'Encrypt and decrypt' data"
+ )
+ else:
+ raise ValueError(
+ f"Amazon Web Services KMS Key with ID {key_id} is not of type Symmetric"
+ )
+
return data
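The `eks_kms_arn` checks above can be distilled into a pure function. Key metadata is modeled here as plain dicts in place of the AWS API objects, and the sample key ID and ARN are invented for illustration; the three rules match the patch: the ARN must resolve to exactly one known key, the key must be customer managed, and it must be a symmetric encrypt/decrypt key.

```python
def validate_eks_kms_arn(arn: str, available: dict) -> str:
    # Find the key whose ID appears in the supplied ARN, as the patch does.
    matches = [key_id for key_id in available if key_id in arn]
    if len(matches) != 1 or available[matches[0]]["Arn"] != arn:
        raise ValueError("KMS key ARN not among available/enabled keys")
    key = available[matches[0]]
    if key["KeyManager"] != "CUSTOMER":
        raise ValueError("not a customer managed key")
    if key["KeySpec"] != "SYMMETRIC_DEFAULT":
        # EKS secret encryption requires a symmetric encrypt/decrypt key.
        raise ValueError("key is not symmetric with encrypt/decrypt usage")
    return matches[0]


keys = {
    "1234abcd": {
        "Arn": "arn:aws:kms:us-east-1:111111111111:key/1234abcd",
        "KeyManager": "CUSTOMER",
        "KeySpec": "SYMMETRIC_DEFAULT",
    }
}
key_id = validate_eks_kms_arn("arn:aws:kms:us-east-1:111111111111:key/1234abcd", keys)
print(key_id)
```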
@@ -573,7 +577,6 @@ class ExistingProvider(schema.Base):
schema.ProviderEnum.gcp: GoogleCloudPlatformProvider,
schema.ProviderEnum.aws: AmazonWebServicesProvider,
schema.ProviderEnum.azure: AzureProvider,
- schema.ProviderEnum.do: DigitalOceanProvider,
}
provider_enum_name_map: Dict[schema.ProviderEnum, str] = {
@@ -582,7 +585,6 @@ class ExistingProvider(schema.Base):
schema.ProviderEnum.gcp: "google_cloud_platform",
schema.ProviderEnum.aws: "amazon_web_services",
schema.ProviderEnum.azure: "azure",
- schema.ProviderEnum.do: "digital_ocean",
}
provider_name_abbreviation_map: Dict[str, str] = {
@@ -593,7 +595,6 @@ class ExistingProvider(schema.Base):
schema.ProviderEnum.gcp: node_groups_to_dict(DEFAULT_GCP_NODE_GROUPS),
schema.ProviderEnum.aws: node_groups_to_dict(DEFAULT_AWS_NODE_GROUPS),
schema.ProviderEnum.azure: node_groups_to_dict(DEFAULT_AZURE_NODE_GROUPS),
- schema.ProviderEnum.do: node_groups_to_dict(DEFAULT_DO_NODE_GROUPS),
}
@@ -603,7 +604,6 @@ class InputSchema(schema.Base):
google_cloud_platform: Optional[GoogleCloudPlatformProvider] = None
amazon_web_services: Optional[AmazonWebServicesProvider] = None
azure: Optional[AzureProvider] = None
- digital_ocean: Optional[DigitalOceanProvider] = None
@model_validator(mode="before")
@classmethod
@@ -618,11 +618,23 @@ def check_provider(cls, data: Any) -> Any:
data[provider] = provider_enum_model_map[provider]()
else:
# if the provider field is invalid, it won't be set when this validator is called
- # so we need to check for it explicitly here, and set the `pre` to True
+ # so we need to check for it explicitly here, and set mode to "before"
# TODO: this is a workaround, check if there is a better way to do this in Pydantic v2
raise ValueError(
- f"'{provider}' is not a valid enumeration member; permitted: local, existing, do, aws, gcp, azure"
+ f"'{provider}' is not a valid enumeration member; permitted: local, existing, aws, gcp, azure"
+ )
+ set_providers = {
+ provider
+ for provider in provider_name_abbreviation_map.keys()
+ if provider in data and data[provider]
+ }
+ expected_provider_config = provider_enum_name_map[provider]
+ extra_provider_config = set_providers - {expected_provider_config}
+ if extra_provider_config:
+ warnings.warn(
+ f"Provider is set to {getattr(provider, 'value', provider)}, but configuration is also defined for other providers: {extra_provider_config}"
)
+
else:
set_providers = [
provider
@@ -636,6 +648,7 @@ def check_provider(cls, data: Any) -> Any:
data["provider"] = provider_name_abbreviation_map[set_providers[0]]
elif num_providers == 0:
data["provider"] = schema.ProviderEnum.local.value
+
return data
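The new warning branch above flags configuration left behind for providers other than the selected one. A simplified sketch of that set arithmetic, with a trimmed-down name map standing in for `provider_enum_name_map` / `provider_name_abbreviation_map`:

```python
import warnings

PROVIDER_NAMES = {
    "gcp": "google_cloud_platform",
    "aws": "amazon_web_services",
    "azure": "azure",
}


def warn_extra_providers(provider: str, data: dict) -> set:
    # Providers with non-empty config blocks present in the user's data.
    set_providers = {name for name in PROVIDER_NAMES.values() if data.get(name)}
    extra = set_providers - {PROVIDER_NAMES[provider]}
    if extra:
        warnings.warn(
            f"Provider is set to {provider}, but configuration defined "
            f"for other providers: {extra}"
        )
    return extra


extra = warn_extra_providers(
    "aws",
    {"amazon_web_services": {"region": "us-east-1"}, "azure": {"region": "eastus"}},
)
print(sorted(extra))  # ['azure']
```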
@@ -721,26 +734,20 @@ def state_imports(self) -> List[Tuple[str, str]]:
def tf_objects(self) -> List[Dict]:
if self.config.provider == schema.ProviderEnum.gcp:
return [
- terraform.Provider(
+ opentofu.Provider(
"google",
project=self.config.google_cloud_platform.project,
region=self.config.google_cloud_platform.region,
),
NebariTerraformState(self.name, self.config),
]
- elif self.config.provider == schema.ProviderEnum.do:
- return [
- NebariTerraformState(self.name, self.config),
- ]
elif self.config.provider == schema.ProviderEnum.azure:
return [
NebariTerraformState(self.name, self.config),
]
elif self.config.provider == schema.ProviderEnum.aws:
return [
- terraform.Provider(
- "aws", region=self.config.amazon_web_services.region
- ),
+ opentofu.Provider("aws", region=self.config.amazon_web_services.region),
NebariTerraformState(self.name, self.config),
]
else:
@@ -755,15 +762,6 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]):
return ExistingInputVars(
kube_context=self.config.existing.kube_context
).model_dump()
- elif self.config.provider == schema.ProviderEnum.do:
- return DigitalOceanInputVars(
- name=self.config.escaped_project_name,
- environment=self.config.namespace,
- region=self.config.digital_ocean.region,
- tags=self.config.digital_ocean.tags,
- kubernetes_version=self.config.digital_ocean.kubernetes_version,
- node_groups=self.config.digital_ocean.node_groups,
- ).model_dump()
elif self.config.provider == schema.ProviderEnum.gcp:
return GCPInputVars(
name=self.config.escaped_project_name,
@@ -804,6 +802,7 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]):
environment=self.config.namespace,
region=self.config.azure.region,
kubernetes_version=self.config.azure.kubernetes_version,
+ authorized_ip_ranges=self.config.azure.authorized_ip_ranges,
node_groups={
name: AzureNodeGroupInputVars(
instance=node_group.instance,
@@ -829,12 +828,14 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]):
network_profile=self.config.azure.network_profile,
max_pods=self.config.azure.max_pods,
workload_identity_enabled=self.config.azure.workload_identity_enabled,
+ azure_policy_enabled=self.config.azure.azure_policy_enabled,
).model_dump()
elif self.config.provider == schema.ProviderEnum.aws:
return AWSInputVars(
name=self.config.escaped_project_name,
environment=self.config.namespace,
eks_endpoint_access=self.config.amazon_web_services.eks_endpoint_access,
+ eks_kms_arn=self.config.amazon_web_services.eks_kms_arn,
existing_subnet_ids=self.config.amazon_web_services.existing_subnet_ids,
existing_security_group_id=self.config.amazon_web_services.existing_security_group_id,
region=self.config.amazon_web_services.region,
@@ -849,10 +850,10 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]):
max_size=node_group.max_nodes,
single_subnet=node_group.single_subnet,
permissions_boundary=node_group.permissions_boundary,
- launch_template=node_group.launch_template,
+ launch_template=None,
ami_type=construct_aws_ami_type(
gpu_enabled=node_group.gpu,
- launch_template=node_group.launch_template,
+ launch_template=None,
),
)
for name, node_group in self.config.amazon_web_services.node_groups.items()
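The AMI selection rules documented in `construct_aws_ami_type` earlier in this file's diff are simple enough to restate runnably. This sketch uses `SimpleNamespace` in place of the `AWSNodeLaunchTemplate` model; the behavior follows the docstring in the patch (launch template with an `ami_id` wins, then the GPU flag, then the default).

```python
from types import SimpleNamespace


def construct_aws_ami_type(gpu_enabled: bool, launch_template) -> str:
    # Launch template with a valid ami_id takes priority over the GPU flag.
    if launch_template and getattr(launch_template, "ami_id", None):
        return "CUSTOM"
    if gpu_enabled:
        return "AL2_x86_64_GPU"
    return "AL2_x86_64"


print(construct_aws_ami_type(False, None))                                # AL2_x86_64
print(construct_aws_ami_type(True, None))                                 # AL2_x86_64_GPU
print(construct_aws_ami_type(True, SimpleNamespace(ami_id="ami-0abc")))   # CUSTOM
```

Note that with `launch_template` disabled in 2024.11.1, the call sites in the patch pass `launch_template=None`, so only the first two outcomes are currently reachable.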
diff --git a/src/_nebari/stages/infrastructure/template/aws/main.tf b/src/_nebari/stages/infrastructure/template/aws/main.tf
index feffd35291..ec0cbb6606 100644
--- a/src/_nebari/stages/infrastructure/template/aws/main.tf
+++ b/src/_nebari/stages/infrastructure/template/aws/main.tf
@@ -99,6 +99,7 @@ module "kubernetes" {
endpoint_public_access = var.eks_endpoint_access == "private" ? false : true
endpoint_private_access = var.eks_endpoint_access == "public" ? false : true
+ eks_kms_arn = var.eks_kms_arn
public_access_cidrs = var.eks_public_access_cidrs
permissions_boundary = var.permissions_boundary
}
diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf
index 5b66201f83..2537b12dad 100644
--- a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf
+++ b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf
@@ -14,8 +14,20 @@ resource "aws_eks_cluster" "main" {
public_access_cidrs = var.public_access_cidrs
}
+ # Only set encryption_config if eks_kms_arn is not null
+ dynamic "encryption_config" {
+ for_each = var.eks_kms_arn != null ? [1] : []
+ content {
+ provider {
+ key_arn = var.eks_kms_arn
+ }
+ resources = ["secrets"]
+ }
+ }
+
depends_on = [
aws_iam_role_policy_attachment.cluster-policy,
+ aws_iam_role_policy_attachment.cluster_encryption,
]
tags = merge({ Name = var.name }, var.tags)
@@ -135,6 +147,9 @@ resource "aws_eks_addon" "aws-ebs-csi-driver" {
"eks.amazonaws.com/nodegroup" = "general"
}
}
+ defaultStorageClass = {
+ enabled = true
+ }
})
# Ensure cluster and node groups are created
diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf
index 6916bc6532..d72b64edaa 100644
--- a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf
+++ b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf
@@ -32,6 +32,33 @@ resource "aws_iam_role_policy_attachment" "cluster-policy" {
role = aws_iam_role.cluster.name
}
+data "aws_iam_policy_document" "cluster_encryption" {
+ count = var.eks_kms_arn != null ? 1 : 0
+ statement {
+ actions = [
+ "kms:Encrypt",
+ "kms:Decrypt",
+ "kms:ListGrants",
+ "kms:DescribeKey"
+ ]
+ resources = [var.eks_kms_arn]
+ }
+}
+
+resource "aws_iam_policy" "cluster_encryption" {
+ count = var.eks_kms_arn != null ? 1 : 0
+ name = "${var.name}-eks-encryption-policy"
+ description = "IAM policy for EKS cluster encryption"
+ policy = data.aws_iam_policy_document.cluster_encryption[count.index].json
+}
+
+# Grant the EKS Cluster role KMS permissions if a key-arn is specified
+resource "aws_iam_role_policy_attachment" "cluster_encryption" {
+ count = var.eks_kms_arn != null ? 1 : 0
+ policy_arn = aws_iam_policy.cluster_encryption[count.index].arn
+ role = aws_iam_role.cluster.name
+}
+
# =======================================================
# Kubernetes Node Group Policies
# =======================================================
diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf
index 4d38d10a19..63558e550f 100644
--- a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf
+++ b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf
@@ -72,6 +72,12 @@ variable "endpoint_private_access" {
default = false
}
+variable "eks_kms_arn" {
+ description = "KMS key ARN for the EKS cluster encryption_config"
+ type = string
+ default = null
+}
+
variable "public_access_cidrs" {
type = list(string)
default = ["0.0.0.0/0"]
diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf b/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf
index da42767976..326da1e4bb 100644
--- a/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf
+++ b/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf
@@ -55,6 +55,7 @@ resource "aws_security_group" "main" {
vpc_id = aws_vpc.main.id
ingress {
+ description = "Allow all ports and protocols to enter the security group"
from_port = 0
to_port = 0
protocol = "-1"
@@ -62,6 +63,7 @@ resource "aws_security_group" "main" {
}
egress {
+ description = "Allow all ports and protocols to exit the security group"
from_port = 0
to_port = 0
protocol = "-1"
diff --git a/src/_nebari/stages/infrastructure/template/aws/variables.tf b/src/_nebari/stages/infrastructure/template/aws/variables.tf
index a3f37b9eb9..a71df81d0f 100644
--- a/src/_nebari/stages/infrastructure/template/aws/variables.tf
+++ b/src/_nebari/stages/infrastructure/template/aws/variables.tf
@@ -69,6 +69,12 @@ variable "eks_endpoint_private_access" {
default = false
}
+variable "eks_kms_arn" {
+ description = "KMS key ARN for the EKS cluster encryption_config"
+ type = string
+ default = null
+}
+
variable "eks_public_access_cidrs" {
type = list(string)
default = ["0.0.0.0/0"]
diff --git a/src/_nebari/stages/infrastructure/template/azure/main.tf b/src/_nebari/stages/infrastructure/template/azure/main.tf
index 2d6e2e2afa..960b755f8c 100644
--- a/src/_nebari/stages/infrastructure/template/azure/main.tf
+++ b/src/_nebari/stages/infrastructure/template/azure/main.tf
@@ -28,6 +28,7 @@ module "kubernetes" {
kubernetes_version = var.kubernetes_version
tags = var.tags
max_pods = var.max_pods
+ authorized_ip_ranges = var.authorized_ip_ranges
network_profile = var.network_profile
@@ -43,4 +44,5 @@ module "kubernetes" {
vnet_subnet_id = var.vnet_subnet_id
private_cluster_enabled = var.private_cluster_enabled
workload_identity_enabled = var.workload_identity_enabled
+ azure_policy_enabled = var.azure_policy_enabled
}
diff --git a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf
index f093f048c6..f97f1f6383 100644
--- a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf
+++ b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf
@@ -4,6 +4,9 @@ resource "azurerm_kubernetes_cluster" "main" {
location = var.location
resource_group_name = var.resource_group_name
tags = var.tags
+ api_server_access_profile {
+ authorized_ip_ranges = var.authorized_ip_ranges
+ }
# To enable Azure AD Workload Identity oidc_issuer_enabled must be set to true.
oidc_issuer_enabled = var.workload_identity_enabled
@@ -15,6 +18,9 @@ resource "azurerm_kubernetes_cluster" "main" {
# Azure requires that a new, non-existent Resource Group is used, as otherwise the provisioning of the Kubernetes Service will fail.
node_resource_group = var.node_resource_group_name
private_cluster_enabled = var.private_cluster_enabled
+ # https://learn.microsoft.com/en-ie/azure/governance/policy/concepts/policy-for-kubernetes
+ azure_policy_enabled = var.azure_policy_enabled
+
dynamic "network_profile" {
for_each = var.network_profile != null ? [var.network_profile] : []
diff --git a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf
index b93a9fae2d..95d2045420 100644
--- a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf
+++ b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf
@@ -76,3 +76,15 @@ variable "workload_identity_enabled" {
type = bool
default = false
}
+
+variable "authorized_ip_ranges" {
+ description = "The IP ranges allowed to access the Kubernetes API server; defaults to 0.0.0.0/0"
+ type = list(string)
+ default = ["0.0.0.0/0"]
+}
+
+variable "azure_policy_enabled" {
+ description = "Enable Azure Policy"
+ type = bool
+ default = false
+}
diff --git a/src/_nebari/stages/infrastructure/template/azure/variables.tf b/src/_nebari/stages/infrastructure/template/azure/variables.tf
index dcef2c97cb..44ef90463f 100644
--- a/src/_nebari/stages/infrastructure/template/azure/variables.tf
+++ b/src/_nebari/stages/infrastructure/template/azure/variables.tf
@@ -82,3 +82,15 @@ variable "workload_identity_enabled" {
type = bool
default = false
}
+
+variable "authorized_ip_ranges" {
+ description = "The IP ranges allowed to access the Kubernetes API server; defaults to 0.0.0.0/0"
+ type = list(string)
+ default = ["0.0.0.0/0"]
+}
+
+variable "azure_policy_enabled" {
+ description = "Enable Azure Policy"
+ type = bool
+ default = false
+}
diff --git a/src/_nebari/stages/infrastructure/template/do/main.tf b/src/_nebari/stages/infrastructure/template/do/main.tf
deleted file mode 100644
index 30a7aa2966..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/main.tf
+++ /dev/null
@@ -1,25 +0,0 @@
-module "kubernetes" {
- source = "./modules/kubernetes"
-
- name = "${var.name}-${var.environment}"
-
- region = var.region
- kubernetes_version = var.kubernetes_version
-
- node_groups = [
- for name, config in var.node_groups : {
- name = name
- auto_scale = true
- size = config.instance
- min_nodes = config.min_nodes
- max_nodes = config.max_nodes
- }
- ]
-
- tags = concat([
- "provision::terraform",
- "project::${var.name}",
- "namespace::${var.environment}",
- "owner::nebari",
- ], var.tags)
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/locals.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/locals.tf
deleted file mode 100644
index d88a874c5c..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/locals.tf
+++ /dev/null
@@ -1,5 +0,0 @@
-locals {
- master_node_group = var.node_groups[0]
-
- additional_node_groups = slice(var.node_groups, 1, length(var.node_groups))
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/main.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/main.tf
deleted file mode 100644
index 0d1ce76a35..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/main.tf
+++ /dev/null
@@ -1,35 +0,0 @@
-resource "digitalocean_kubernetes_cluster" "main" {
- name = var.name
- region = var.region
-
- # Grab the latest from `doctl kubernetes options versions`
- version = var.kubernetes_version
-
- node_pool {
- name = local.master_node_group.name
- # List available regions `doctl kubernetes options sizes`
- size = lookup(local.master_node_group, "size", "s-1vcpu-2gb")
- node_count = lookup(local.master_node_group, "node_count", 1)
- }
-
- tags = var.tags
-}
-
-resource "digitalocean_kubernetes_node_pool" "main" {
- count = length(local.additional_node_groups)
-
- cluster_id = digitalocean_kubernetes_cluster.main.id
-
- name = local.additional_node_groups[count.index].name
- size = lookup(local.additional_node_groups[count.index], "size", "s-1vcpu-2gb")
-
- auto_scale = lookup(local.additional_node_groups[count.index], "auto_scale", true)
- min_nodes = lookup(local.additional_node_groups[count.index], "min_nodes", 1)
- max_nodes = lookup(local.additional_node_groups[count.index], "max_nodes", 1)
-
- labels = {
- "nebari.dev/node_group" : local.additional_node_groups[count.index].name
- }
-
- tags = var.tags
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/outputs.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/outputs.tf
deleted file mode 100644
index e2e1c2c6be..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/outputs.tf
+++ /dev/null
@@ -1,16 +0,0 @@
-output "credentials" {
- description = "Credentials needs to connect to kubernetes instance"
- value = {
- endpoint = digitalocean_kubernetes_cluster.main.endpoint
- token = digitalocean_kubernetes_cluster.main.kube_config[0].token
- cluster_ca_certificate = base64decode(
- digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate
- )
- }
-}
-
-
-output "kubeconfig" {
- description = "Kubeconfig for connecting to kubernetes cluster"
- value = digitalocean_kubernetes_cluster.main.kube_config.0.raw_config
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/variables.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/variables.tf
deleted file mode 100644
index 67843a7820..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/variables.tf
+++ /dev/null
@@ -1,29 +0,0 @@
-variable "name" {
- description = "Prefix name to assign to digital ocean kubernetes cluster"
- type = string
-}
-
-variable "tags" {
- description = "Additional tags to apply to each kubernetes resource"
- type = set(string)
- default = []
-}
-
-# `doctl kubernetes options regions`
-variable "region" {
- description = "Region to deploy digital ocean kubernetes resource"
- type = string
- default = "nyc1"
-}
-
-# `doctl kubernetes options versions`
-variable "kubernetes_version" {
- description = "Version of digital ocean kubernetes resource"
- type = string
- default = "1.18.8-do.0"
-}
-
-variable "node_groups" {
- description = "List of node groups to include in digital ocean kubernetes cluster"
- type = list(map(any))
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/versions.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/versions.tf
deleted file mode 100644
index b320a102dd..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/versions.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-terraform {
- required_providers {
- digitalocean = {
- source = "digitalocean/digitalocean"
- version = "2.29.0"
- }
- }
- required_version = ">= 1.0"
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/registry/main.tf b/src/_nebari/stages/infrastructure/template/do/modules/registry/main.tf
deleted file mode 100644
index 14e6896030..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/registry/main.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-resource "digitalocean_container_registry" "registry" {
- name = var.name
- subscription_tier_slug = "starter"
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/registry/variable.tf b/src/_nebari/stages/infrastructure/template/do/modules/registry/variable.tf
deleted file mode 100644
index fce96bef08..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/registry/variable.tf
+++ /dev/null
@@ -1,4 +0,0 @@
-variable "name" {
- description = "Prefix name to git container registry"
- type = string
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/modules/registry/versions.tf b/src/_nebari/stages/infrastructure/template/do/modules/registry/versions.tf
deleted file mode 100644
index b320a102dd..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/modules/registry/versions.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-terraform {
- required_providers {
- digitalocean = {
- source = "digitalocean/digitalocean"
- version = "2.29.0"
- }
- }
- required_version = ">= 1.0"
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/outputs.tf b/src/_nebari/stages/infrastructure/template/do/outputs.tf
deleted file mode 100644
index 53aae17634..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/outputs.tf
+++ /dev/null
@@ -1,21 +0,0 @@
-output "kubernetes_credentials" {
- description = "Parameters needed to connect to kubernetes cluster"
- sensitive = true
- value = {
- host = module.kubernetes.credentials.endpoint
- cluster_ca_certificate = module.kubernetes.credentials.cluster_ca_certificate
- token = module.kubernetes.credentials.token
- }
-}
-
-resource "local_file" "kubeconfig" {
- count = var.kubeconfig_filename != null ? 1 : 0
-
- content = module.kubernetes.kubeconfig
- filename = var.kubeconfig_filename
-}
-
-output "kubeconfig_filename" {
- description = "filename for nebari kubeconfig"
- value = var.kubeconfig_filename
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/providers.tf b/src/_nebari/stages/infrastructure/template/do/providers.tf
deleted file mode 100644
index a877aca363..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/providers.tf
+++ /dev/null
@@ -1,3 +0,0 @@
-provider "digitalocean" {
-
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/variables.tf b/src/_nebari/stages/infrastructure/template/do/variables.tf
deleted file mode 100644
index b31a1ab039..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/variables.tf
+++ /dev/null
@@ -1,40 +0,0 @@
-variable "name" {
- description = "Prefix name to assign to nebari resources"
- type = string
-}
-
-variable "environment" {
- description = "Environment to create Kubernetes resources"
- type = string
-}
-
-variable "region" {
- description = "DigitalOcean region"
- type = string
-}
-
-variable "tags" {
- description = "DigitalOcean tags to assign to resources"
- type = list(string)
- default = []
-}
-
-variable "kubernetes_version" {
- description = "DigitalOcean kubernetes version"
- type = string
-}
-
-variable "node_groups" {
- description = "DigitalOcean node groups"
- type = map(object({
- instance = string
- min_nodes = number
- max_nodes = number
- }))
-}
-
-variable "kubeconfig_filename" {
- description = "Kubernetes kubeconfig written to filesystem"
- type = string
- default = null
-}
diff --git a/src/_nebari/stages/infrastructure/template/do/versions.tf b/src/_nebari/stages/infrastructure/template/do/versions.tf
deleted file mode 100644
index b320a102dd..0000000000
--- a/src/_nebari/stages/infrastructure/template/do/versions.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-terraform {
- required_providers {
- digitalocean = {
- source = "digitalocean/digitalocean"
- version = "2.29.0"
- }
- }
- required_version = ">= 1.0"
-}
diff --git a/src/_nebari/stages/infrastructure/template/gcp/main.tf b/src/_nebari/stages/infrastructure/template/gcp/main.tf
index 3d23af5571..ec80cefe16 100644
--- a/src/_nebari/stages/infrastructure/template/gcp/main.tf
+++ b/src/_nebari/stages/infrastructure/template/gcp/main.tf
@@ -5,6 +5,9 @@ data "google_compute_zones" "gcpzones" {
module "registry-jupyterhub" {
source = "./modules/registry"
+
+ repository_id = "${var.name}-${var.environment}"
+ location = var.region
}
diff --git a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf
index a4e35bf1a3..9403872737 100644
--- a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf
+++ b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf
@@ -1,3 +1,6 @@
-resource "google_container_registry" "registry" {
- location = var.location
+resource "google_artifact_registry_repository" "registry" {
+ # https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/artifact_registry_repository#argument-reference
+ repository_id = var.repository_id
+ location = var.location
+ format = var.format
}
diff --git a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf
index 39f6d5ed28..9162425fa1 100644
--- a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf
+++ b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf
@@ -1,6 +1,17 @@
variable "location" {
- # https://cloud.google.com/container-registry/docs/pushing-and-pulling#pushing_an_image_to_a_registry
+ # https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling
description = "Location of registry"
type = string
- default = "US"
+}
+
+variable "format" {
+ # https://cloud.google.com/artifact-registry/docs/reference/rest/v1/projects.locations.repositories#Format
+ description = "The format of packages that are stored in the repository"
+ type = string
+ default = "DOCKER"
+}
+
+variable "repository_id" {
+ description = "Name of repository"
+ type = string
}
diff --git a/src/_nebari/stages/infrastructure/template/gcp/versions.tf b/src/_nebari/stages/infrastructure/template/gcp/versions.tf
index ddea3c185c..92bd117367 100644
--- a/src/_nebari/stages/infrastructure/template/gcp/versions.tf
+++ b/src/_nebari/stages/infrastructure/template/gcp/versions.tf
@@ -2,7 +2,7 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
- version = "4.8.0"
+ version = "6.14.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/infrastructure/template/local/main.tf b/src/_nebari/stages/infrastructure/template/local/main.tf
index fb0d0997e1..77aa799cbd 100644
--- a/src/_nebari/stages/infrastructure/template/local/main.tf
+++ b/src/_nebari/stages/infrastructure/template/local/main.tf
@@ -1,7 +1,7 @@
terraform {
required_providers {
kind = {
- source = "tehcyx/kind"
+ source = "registry.terraform.io/tehcyx/kind"
version = "0.4.0"
}
docker = {
diff --git a/src/_nebari/stages/kubernetes_ingress/__init__.py b/src/_nebari/stages/kubernetes_ingress/__init__.py
index ea5f8fa335..df70e12b1e 100644
--- a/src/_nebari/stages/kubernetes_ingress/__init__.py
+++ b/src/_nebari/stages/kubernetes_ingress/__init__.py
@@ -43,7 +43,6 @@ def provision_ingress_dns(
record_name = ".".join(record_name)
zone_name = ".".join(zone_name)
if config.provider in {
- schema.ProviderEnum.do,
schema.ProviderEnum.gcp,
schema.ProviderEnum.azure,
}:
diff --git a/src/_nebari/stages/kubernetes_ingress/template/versions.tf b/src/_nebari/stages/kubernetes_ingress/template/versions.tf
index 341def1365..d1e5f8acfb 100644
--- a/src/_nebari/stages/kubernetes_ingress/template/versions.tf
+++ b/src/_nebari/stages/kubernetes_ingress/template/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/kubernetes_initialize/template/versions.tf b/src/_nebari/stages/kubernetes_initialize/template/versions.tf
index 341def1365..d1e5f8acfb 100644
--- a/src/_nebari/stages/kubernetes_initialize/template/versions.tf
+++ b/src/_nebari/stages/kubernetes_initialize/template/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/kubernetes_keycloak/template/versions.tf b/src/_nebari/stages/kubernetes_keycloak/template/versions.tf
index 341def1365..d1e5f8acfb 100644
--- a/src/_nebari/stages/kubernetes_keycloak/template/versions.tf
+++ b/src/_nebari/stages/kubernetes_keycloak/template/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf b/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf
index 00353a6d2f..d3f87478e2 100644
--- a/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf
+++ b/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
keycloak = {
source = "mrparkers/keycloak"
diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf
index 341def1365..d1e5f8acfb 100644
--- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf
+++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py
index ad9b79843a..3136d891bd 100644
--- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py
+++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py
@@ -10,9 +10,10 @@
from pathlib import Path
import requests
-from conda_store_server import api, orm, schema
+from conda_store_server import api
+from conda_store_server._internal.server.dependencies import get_conda_store
+from conda_store_server.server import schema as auth_schema
from conda_store_server.server.auth import GenericOAuthAuthentication
-from conda_store_server.server.dependencies import get_conda_store
from conda_store_server.storage import S3Storage
@@ -356,7 +357,7 @@ def _get_conda_store_client_roles_for_user(
return client_roles_rich
def _get_current_entity_bindings(self, username):
- entity = schema.AuthenticationToken(
+ entity = auth_schema.AuthenticationToken(
primary_namespace=username, role_bindings={}
)
self.log.info(f"entity: {entity}")
@@ -386,7 +387,7 @@ async def authenticate(self, request):
# superadmin gets access to everything
if "conda_store_superadmin" in user_data.get("roles", []):
- return schema.AuthenticationToken(
+ return auth_schema.AuthenticationToken(
primary_namespace=username,
role_bindings={"*/*": {"admin"}},
)
@@ -422,10 +423,9 @@ async def authenticate(self, request):
for namespace in namespaces:
_namespace = api.get_namespace(db, name=namespace)
if _namespace is None:
- db.add(orm.Namespace(name=namespace))
- db.commit()
+ api.ensure_namespace(db, name=namespace)
- return schema.AuthenticationToken(
+ return auth_schema.AuthenticationToken(
primary_namespace=username,
role_bindings=role_bindings,
)
diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf
index bfee219e9e..23f2ac9334 100644
--- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf
+++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf
@@ -60,6 +60,17 @@ resource "local_file" "overrides_json" {
filename = "${path.module}/files/jupyterlab/overrides.json"
}
+resource "local_file" "page_config_json" {
+ content = jsonencode({
+ "disabledExtensions" : {
+ "jupyterlab-jhub-apps" : !var.jhub-apps-enabled
+ },
+ # `lockedExtensions` is an empty dict to signify that `jupyterlab-jhub-apps` is only disabled, not locked,
+ # so users with write access to page_config may still override the extension's disabled state.
+ "lockedExtensions" : {}
+ })
+ filename = "${path.module}/files/jupyterlab/page_config.json"
+}
resource "kubernetes_config_map" "etc-ipython" {
metadata {
@@ -92,6 +103,9 @@ locals {
etc-jupyterlab-settings = {
"overrides.json" = local_file.overrides_json.content
}
+ etc-jupyterlab-page-config = {
+ "page_config.json" = local_file.page_config_json.content
+ }
}
resource "kubernetes_config_map" "etc-jupyter" {
@@ -136,6 +150,20 @@ resource "kubernetes_config_map" "jupyterlab-settings" {
data = local.etc-jupyterlab-settings
}
+
+resource "kubernetes_config_map" "jupyterlab-page-config" {
+ depends_on = [
+ local_file.page_config_json
+ ]
+
+ metadata {
+ name = "jupyterlab-page-config"
+ namespace = var.namespace
+ }
+
+ data = local.etc-jupyterlab-page-config
+}
+
resource "kubernetes_config_map" "git_clone_update" {
metadata {
name = "git-clone-update"
diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py
index 09bb649c01..2557a497a7 100644
--- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py
+++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py
@@ -10,6 +10,9 @@
from kubespawner import KubeSpawner # noqa: E402
+# conda-store default page size
+DEFAULT_PAGE_SIZE_LIMIT = 100
+
@gen.coroutine
def get_username_hook(spawner):
@@ -23,25 +26,66 @@ def get_username_hook(spawner):
)
+def get_total_records(url: str, token: str) -> int:
+ import urllib3
+
+ http = urllib3.PoolManager()
+ response = http.request("GET", url, headers={"Authorization": f"Bearer {token}"})
+ decoded_response = json.loads(response.data.decode("UTF-8"))
+ return decoded_response.get("count", 0)
+
+
+def generate_paged_urls(base_url: str, total_records: int, page_size: int) -> list[str]:
+ import math
+
+ urls = []
+ # page numbering starts at 1
+ for page in range(1, math.ceil(total_records / page_size) + 1):
+ urls.append(f"{base_url}?size={page_size}&page={page}")
+
+ return urls
+
+
+# TODO: this should get unit tests. Currently, since this is not a python module,
+# adding tests in a traditional sense is not possible. See https://github.com/soapy1/nebari/tree/try-unit-test-spawner
+# for a demo of one approach to adding tests.
def get_conda_store_environments(user_info: dict):
+ import os
+
import urllib3
- import yarl
+
+ # Check for the environment variable `CONDA_STORE_API_PAGE_SIZE_LIMIT`. Fall
+ # back to using the default page size limit if not set.
+ # Environment variable values are strings, so cast to int before use.
+ page_size = int(
+ os.environ.get("CONDA_STORE_API_PAGE_SIZE_LIMIT", DEFAULT_PAGE_SIZE_LIMIT)
+ )
external_url = z2jh.get_config("custom.conda-store-service-name")
token = z2jh.get_config("custom.conda-store-jhub-apps-token")
endpoint = "conda-store/api/v1/environment"
- url = yarl.URL(f"http://{external_url}/{endpoint}/")
-
+ base_url = f"http://{external_url}/{endpoint}/"
http = urllib3.PoolManager()
- response = http.request(
- "GET", str(url), headers={"Authorization": f"Bearer {token}"}
- )
- # parse response
- j = json.loads(response.data.decode("UTF-8"))
+ # get total number of records from the endpoint
+ total_records = get_total_records(base_url, token)
+
+ # will contain all the environment info returned from the api
+ env_data = []
+
+ # generate a list of urls to hit to build the response
+ urls = generate_paged_urls(base_url, total_records, page_size)
+
+ # get content from urls
+ for url in urls:
+ response = http.request(
+ "GET", url, headers={"Authorization": f"Bearer {token}"}
+ )
+ decoded_response = json.loads(response.data.decode("UTF-8"))
+ env_data += decoded_response.get("data", [])
+
# Filter and return conda environments for the user
- return [f"{env['namespace']['name']}-{env['name']}" for env in j.get("data", [])]
+ return [f"{env['namespace']['name']}-{env['name']}" for env in env_data]
c.Spawner.pre_spawn_hook = get_username_hook
diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf
index a36090f41c..9a0675fc85 100644
--- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf
+++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf
@@ -104,6 +104,11 @@ resource "helm_release" "jupyterhub" {
kind = "configmap"
}
+ "/etc/jupyter/labconfig" = {
+ name = kubernetes_config_map.jupyterlab-page-config.metadata.0.name
+ namespace = kubernetes_config_map.jupyterlab-page-config.metadata.0.namespace
+ kind = "configmap"
+ }
}
)
environments = var.conda-store-environments
diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf
index 341def1365..d1e5f8acfb 100644
--- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf
+++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf
index 341def1365..d1e5f8acfb 100644
--- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf
+++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/kubernetes_services/template/versions.tf b/src/_nebari/stages/kubernetes_services/template/versions.tf
index 00353a6d2f..d3f87478e2 100644
--- a/src/_nebari/stages/kubernetes_services/template/versions.tf
+++ b/src/_nebari/stages/kubernetes_services/template/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
keycloak = {
source = "mrparkers/keycloak"
diff --git a/src/_nebari/stages/nebari_tf_extensions/template/versions.tf b/src/_nebari/stages/nebari_tf_extensions/template/versions.tf
index 00353a6d2f..d3f87478e2 100644
--- a/src/_nebari/stages/nebari_tf_extensions/template/versions.tf
+++ b/src/_nebari/stages/nebari_tf_extensions/template/versions.tf
@@ -6,7 +6,7 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
- version = "2.20.0"
+ version = "2.35.1"
}
keycloak = {
source = "mrparkers/keycloak"
diff --git a/src/_nebari/stages/terraform_state/__init__.py b/src/_nebari/stages/terraform_state/__init__.py
index e0f643ed3d..e9a18ba7c5 100644
--- a/src/_nebari/stages/terraform_state/__init__.py
+++ b/src/_nebari/stages/terraform_state/__init__.py
@@ -9,7 +9,7 @@
from pydantic import BaseModel, field_validator
from _nebari import utils
-from _nebari.provider import terraform
+from _nebari.provider import opentofu
from _nebari.provider.cloud import azure_cloud
from _nebari.stages.base import NebariTerraformStage
from _nebari.stages.tf_objects import NebariConfig
@@ -22,12 +22,6 @@
from nebari.hookspecs import NebariStage, hookimpl
-class DigitalOceanInputVars(schema.Base):
- name: str
- namespace: str
- region: str
-
-
class GCPInputVars(schema.Base):
name: str
namespace: str
@@ -117,14 +111,7 @@ def stage_prefix(self):
return pathlib.Path("stages") / self.name / self.config.provider.value
def state_imports(self) -> List[Tuple[str, str]]:
- if self.config.provider == schema.ProviderEnum.do:
- return [
- (
- "module.terraform-state.module.spaces.digitalocean_spaces_bucket.main",
- f"{self.config.digital_ocean.region},{self.config.project_name}-{self.config.namespace}-terraform-state",
- )
- ]
- elif self.config.provider == schema.ProviderEnum.gcp:
+ if self.config.provider == schema.ProviderEnum.gcp:
return [
(
"module.terraform-state.module.gcs.google_storage_bucket.static-site",
@@ -175,7 +162,7 @@ def tf_objects(self) -> List[Dict]:
resources = [NebariConfig(self.config)]
if self.config.provider == schema.ProviderEnum.gcp:
return resources + [
- terraform.Provider(
+ opentofu.Provider(
"google",
project=self.config.google_cloud_platform.project,
region=self.config.google_cloud_platform.region,
@@ -183,21 +170,13 @@ def tf_objects(self) -> List[Dict]:
]
elif self.config.provider == schema.ProviderEnum.aws:
return resources + [
- terraform.Provider(
- "aws", region=self.config.amazon_web_services.region
- ),
+ opentofu.Provider("aws", region=self.config.amazon_web_services.region),
]
else:
return resources
def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]):
- if self.config.provider == schema.ProviderEnum.do:
- return DigitalOceanInputVars(
- name=self.config.project_name,
- namespace=self.config.namespace,
- region=self.config.digital_ocean.region,
- ).model_dump()
- elif self.config.provider == schema.ProviderEnum.gcp:
+ if self.config.provider == schema.ProviderEnum.gcp:
return GCPInputVars(
name=self.config.project_name,
namespace=self.config.namespace,
@@ -236,19 +215,10 @@ def deploy(
):
self.check_immutable_fields()
- # No need to run terraform init here as it's being called when running the
+ # No need to run tofu init here as it's being called when running the
# terraform show command, inside check_immutable_fields
- with super().deploy(stage_outputs, disable_prompt, terraform_init=False):
+ with super().deploy(stage_outputs, disable_prompt, tofu_init=False):
env_mapping = {}
- # DigitalOcean terraform remote state using Spaces Bucket
- # assumes aws credentials thus we set them to match spaces credentials
- if self.config.provider == schema.ProviderEnum.do:
- env_mapping.update(
- {
- "AWS_ACCESS_KEY_ID": os.environ["SPACES_ACCESS_KEY_ID"],
- "AWS_SECRET_ACCESS_KEY": os.environ["SPACES_SECRET_ACCESS_KEY"],
- }
- )
with modified_environ(**env_mapping):
yield
@@ -292,7 +262,7 @@ def check_immutable_fields(self):
def get_nebari_config_state(self) -> dict:
directory = str(self.output_directory / self.stage_prefix)
- tf_state = terraform.show(directory)
+ tf_state = opentofu.show(directory)
nebari_config_state = None
# get nebari config from state
@@ -310,15 +280,6 @@ def destroy(
):
with super().destroy(stage_outputs, status):
env_mapping = {}
- # DigitalOcean terraform remote state using Spaces Bucket
- # assumes aws credentials thus we set them to match spaces credentials
- if self.config.provider == schema.ProviderEnum.do:
- env_mapping.update(
- {
- "AWS_ACCESS_KEY_ID": os.environ["SPACES_ACCESS_KEY_ID"],
- "AWS_SECRET_ACCESS_KEY": os.environ["SPACES_SECRET_ACCESS_KEY"],
- }
- )
with modified_environ(**env_mapping):
yield
diff --git a/src/_nebari/stages/terraform_state/template/do/main.tf b/src/_nebari/stages/terraform_state/template/do/main.tf
deleted file mode 100644
index a6db74f74d..0000000000
--- a/src/_nebari/stages/terraform_state/template/do/main.tf
+++ /dev/null
@@ -1,35 +0,0 @@
-variable "name" {
- description = "Prefix name to assign to Nebari resources"
- type = string
-}
-
-variable "namespace" {
- description = "Namespace to create Kubernetes resources"
- type = string
-}
-
-variable "region" {
- description = "Region for Digital Ocean deployment"
- type = string
-}
-
-provider "digitalocean" {
-
-}
-
-module "terraform-state" {
- source = "./modules/terraform-state"
-
- name = "${var.name}-${var.namespace}"
- region = var.region
-}
-
-terraform {
- required_providers {
- digitalocean = {
- source = "digitalocean/digitalocean"
- version = "2.29.0"
- }
- }
- required_version = ">= 1.0"
-}
diff --git a/src/_nebari/stages/terraform_state/template/do/modules/spaces/main.tf b/src/_nebari/stages/terraform_state/template/do/modules/spaces/main.tf
deleted file mode 100644
index fc2d34c604..0000000000
--- a/src/_nebari/stages/terraform_state/template/do/modules/spaces/main.tf
+++ /dev/null
@@ -1,12 +0,0 @@
-resource "digitalocean_spaces_bucket" "main" {
- name = var.name
- region = var.region
-
- force_destroy = var.force_destroy
-
- acl = (var.public ? "public-read" : "private")
-
- versioning {
- enabled = false
- }
-}
diff --git a/src/_nebari/stages/terraform_state/template/do/modules/spaces/variables.tf b/src/_nebari/stages/terraform_state/template/do/modules/spaces/variables.tf
deleted file mode 100644
index db24a3dce5..0000000000
--- a/src/_nebari/stages/terraform_state/template/do/modules/spaces/variables.tf
+++ /dev/null
@@ -1,21 +0,0 @@
-variable "name" {
- description = "Prefix name for bucket resource"
- type = string
-}
-
-variable "region" {
- description = "Region for Digital Ocean bucket"
- type = string
-}
-
-variable "force_destroy" {
- description = "force_destroy all bucket contents when bucket is deleted"
- type = bool
- default = false
-}
-
-variable "public" {
- description = "Digital Ocean s3 bucket is exposed publicly"
- type = bool
- default = false
-}
diff --git a/src/_nebari/stages/terraform_state/template/do/modules/spaces/versions.tf b/src/_nebari/stages/terraform_state/template/do/modules/spaces/versions.tf
deleted file mode 100644
index b320a102dd..0000000000
--- a/src/_nebari/stages/terraform_state/template/do/modules/spaces/versions.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-terraform {
- required_providers {
- digitalocean = {
- source = "digitalocean/digitalocean"
- version = "2.29.0"
- }
- }
- required_version = ">= 1.0"
-}
diff --git a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/main.tf b/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/main.tf
deleted file mode 100644
index e3445f362d..0000000000
--- a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/main.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-module "spaces" {
- source = "../spaces"
-
- name = "${var.name}-terraform-state"
- region = var.region
- public = false
-
- force_destroy = true
-}
diff --git a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/variables.tf b/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/variables.tf
deleted file mode 100644
index 8010647d39..0000000000
--- a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/variables.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-variable "name" {
- description = "Prefix name for terraform state"
- type = string
-}
-
-variable "region" {
- description = "Region for terraform state"
- type = string
-}
diff --git a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/versions.tf b/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/versions.tf
deleted file mode 100644
index b320a102dd..0000000000
--- a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/versions.tf
+++ /dev/null
@@ -1,9 +0,0 @@
-terraform {
- required_providers {
- digitalocean = {
- source = "digitalocean/digitalocean"
- version = "2.29.0"
- }
- }
- required_version = ">= 1.0"
-}
diff --git a/src/_nebari/stages/terraform_state/template/gcp/main.tf b/src/_nebari/stages/terraform_state/template/gcp/main.tf
index dea6c03ac0..34a45d354a 100644
--- a/src/_nebari/stages/terraform_state/template/gcp/main.tf
+++ b/src/_nebari/stages/terraform_state/template/gcp/main.tf
@@ -24,7 +24,7 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
- version = "4.83.0"
+ version = "6.14.1"
}
}
required_version = ">= 1.0"
diff --git a/src/_nebari/stages/tf_objects.py b/src/_nebari/stages/tf_objects.py
index 04c6d434aa..28884d4789 100644
--- a/src/_nebari/stages/tf_objects.py
+++ b/src/_nebari/stages/tf_objects.py
@@ -1,4 +1,4 @@
-from _nebari.provider.terraform import Data, Provider, Resource, TerraformBackend
+from _nebari.provider.opentofu import Data, Provider, Resource, TerraformBackend
from _nebari.utils import (
AZURE_TF_STATE_RESOURCE_GROUP_SUFFIX,
construct_azure_resource_group_name,
@@ -69,16 +69,6 @@ def NebariTerraformState(directory: str, nebari_config: schema.Main):
bucket=f"{nebari_config.escaped_project_name}-{nebari_config.namespace}-terraform-state",
prefix=f"terraform/{nebari_config.escaped_project_name}/{directory}",
)
- elif nebari_config.provider == "do":
- return TerraformBackend(
- "s3",
- endpoint=f"{nebari_config.digital_ocean.region}.digitaloceanspaces.com",
- region="us-west-1", # fake aws region required by terraform
- bucket=f"{nebari_config.escaped_project_name}-{nebari_config.namespace}-terraform-state",
- key=f"terraform/{nebari_config.escaped_project_name}-{nebari_config.namespace}/{directory}.tfstate",
- skip_credentials_validation=True,
- skip_metadata_api_check=True,
- )
elif nebari_config.provider == "azure":
return TerraformBackend(
"azurerm",
diff --git a/src/_nebari/subcommands/info.py b/src/_nebari/subcommands/info.py
index 1a36afceb1..3f5999e300 100644
--- a/src/_nebari/subcommands/info.py
+++ b/src/_nebari/subcommands/info.py
@@ -10,12 +10,19 @@
@hookimpl
def nebari_subcommand(cli: typer.Typer):
+ EXTERNAL_PLUGIN_STYLE = "cyan"
+
@cli.command()
def info(ctx: typer.Context):
+ """
+ Display information about installed Nebari plugins and their configurations.
+ """
from nebari.plugins import nebari_plugin_manager
rich.print(f"Nebari version: {__version__}")
+ external_plugins = nebari_plugin_manager.get_external_plugins()
+
hooks = collections.defaultdict(list)
for plugin in nebari_plugin_manager.plugin_manager.get_plugins():
for hook in nebari_plugin_manager.plugin_manager.get_hookcallers(plugin):
@@ -27,7 +34,8 @@ def info(ctx: typer.Context):
for hook_name, modules in hooks.items():
for module in modules:
- table.add_row(hook_name, module)
+ style = EXTERNAL_PLUGIN_STYLE if module in external_plugins else None
+ table.add_row(hook_name, module, style=style)
rich.print(table)
@@ -36,8 +44,14 @@ def info(ctx: typer.Context):
table.add_column("priority")
table.add_column("module")
for stage in nebari_plugin_manager.ordered_stages:
+ style = (
+ EXTERNAL_PLUGIN_STYLE if stage.__module__ in external_plugins else None
+ )
table.add_row(
- stage.name, str(stage.priority), f"{stage.__module__}.{stage.__name__}"
+ stage.name,
+ str(stage.priority),
+ f"{stage.__module__}.{stage.__name__}",
+ style=style,
)
rich.print(table)
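The `info` hunks above group hook implementations by hook name with `collections.defaultdict(list)` before rendering the table. The grouping step in isolation looks roughly like this (hook names here are illustrative):

```python
import collections


def group_hooks(pairs):
    """Group (hook_name, module) pairs into hook_name -> [modules]."""
    hooks = collections.defaultdict(list)
    for hook_name, module in pairs:
        hooks[hook_name].append(module)
    return dict(hooks)
```

Using `defaultdict(list)` avoids the explicit "create the list on first sight" branch that a plain `dict` would require.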
diff --git a/src/_nebari/subcommands/init.py b/src/_nebari/subcommands/init.py
index 743d30cb40..c2f8d416e9 100644
--- a/src/_nebari/subcommands/init.py
+++ b/src/_nebari/subcommands/init.py
@@ -13,16 +13,10 @@
from _nebari.constants import (
AWS_DEFAULT_REGION,
AZURE_DEFAULT_REGION,
- DO_DEFAULT_REGION,
GCP_DEFAULT_REGION,
)
from _nebari.initialize import render_config
-from _nebari.provider.cloud import (
- amazon_web_services,
- azure_cloud,
- digital_ocean,
- google_cloud,
-)
+from _nebari.provider.cloud import amazon_web_services, azure_cloud, google_cloud
from _nebari.stages.bootstrap import CiEnum
from _nebari.stages.kubernetes_keycloak import AuthenticationEnum
from _nebari.stages.terraform_state import TerraformStateEnum
@@ -44,18 +38,13 @@
CREATE_GCP_CREDS = (
"https://cloud.google.com/iam/docs/creating-managing-service-accounts"
)
-CREATE_DO_CREDS = (
- "https://docs.digitalocean.com/reference/api/create-personal-access-token"
-)
CREATE_AZURE_CREDS = "https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#creating-a-service-principal-in-the-azure-portal"
CREATE_AUTH0_CREDS = "https://auth0.com/docs/get-started/auth0-overview/create-applications/machine-to-machine-apps"
CREATE_GITHUB_OAUTH_CREDS = "https://docs.github.com/en/developers/apps/building-oauth-apps/creating-an-oauth-app"
AWS_REGIONS = "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions"
GCP_REGIONS = "https://cloud.google.com/compute/docs/regions-zones"
AZURE_REGIONS = "https://azure.microsoft.com/en-us/explore/global-infrastructure/geographies/#overview"
-DO_REGIONS = (
- "https://docs.digitalocean.com/products/platform/availability-matrix/#regions"
-)
+
# links to Nebari docs
DOCS_HOME = "https://nebari.dev/docs/"
@@ -78,7 +67,6 @@
CLOUD_PROVIDER_FULL_NAME = {
"Local": ProviderEnum.local.name,
"Existing": ProviderEnum.existing.name,
- "Digital Ocean": ProviderEnum.do.name,
"Amazon Web Services": ProviderEnum.aws.name,
"Google Cloud Platform": ProviderEnum.gcp.name,
"Microsoft Azure": ProviderEnum.azure.name,
@@ -105,6 +93,7 @@ class InitInputs(schema.Base):
region: Optional[str] = None
ssl_cert_email: Optional[schema.email_pydantic] = None
disable_prompt: bool = False
+ config_set: Optional[str] = None
output: pathlib.Path = pathlib.Path("nebari-config.yaml")
explicit: int = 0
@@ -120,8 +109,6 @@ def get_region_docs(cloud_provider: str):
return GCP_REGIONS
elif cloud_provider == ProviderEnum.azure.value.lower():
return AZURE_REGIONS
- elif cloud_provider == ProviderEnum.do.value.lower():
- return DO_REGIONS
def handle_init(inputs: InitInputs, config_schema: BaseModel):
@@ -148,6 +135,7 @@ def handle_init(inputs: InitInputs, config_schema: BaseModel):
terraform_state=inputs.terraform_state,
ssl_cert_email=inputs.ssl_cert_email,
disable_prompt=inputs.disable_prompt,
+ config_set=inputs.config_set,
)
try:
@@ -312,36 +300,6 @@ def check_cloud_provider_creds(cloud_provider: ProviderEnum, disable_prompt: boo
hide_input=True,
)
- # DO
- elif cloud_provider == ProviderEnum.do.value.lower() and (
- not os.environ.get("DIGITALOCEAN_TOKEN")
- or not os.environ.get("SPACES_ACCESS_KEY_ID")
- or not os.environ.get("SPACES_SECRET_ACCESS_KEY")
- ):
- rich.print(
- MISSING_CREDS_TEMPLATE.format(
- provider="Digital Ocean", link_to_docs=CREATE_DO_CREDS
- )
- )
-
- os.environ["DIGITALOCEAN_TOKEN"] = typer.prompt(
- "Paste your DIGITALOCEAN_TOKEN",
- hide_input=True,
- )
- os.environ["SPACES_ACCESS_KEY_ID"] = typer.prompt(
- "Paste your SPACES_ACCESS_KEY_ID",
- hide_input=True,
- )
- os.environ["SPACES_SECRET_ACCESS_KEY"] = typer.prompt(
- "Paste your SPACES_SECRET_ACCESS_KEY",
- hide_input=True,
- )
- # Set spaces credentials. Spaces are API compatible with s3
- # Setting spaces credentials to AWS credentials allows us to
- # reuse s3 code
- os.environ["AWS_ACCESS_KEY_ID"] = os.getenv("SPACES_ACCESS_KEY_ID")
- os.environ["AWS_SECRET_ACCESS_KEY"] = os.getenv("SPACES_SECRET_ACCESS_KEY")
-
# AZURE
elif cloud_provider == ProviderEnum.azure.value.lower() and (
not os.environ.get("ARM_CLIENT_ID")
@@ -409,29 +367,17 @@ def check_cloud_provider_kubernetes_version(
versions = google_cloud.kubernetes_versions(region)
if not kubernetes_version or kubernetes_version == LATEST:
- kubernetes_version = get_latest_kubernetes_version(versions)
- rich.print(
- DEFAULT_KUBERNETES_VERSION_MSG.format(
- kubernetes_version=kubernetes_version
- )
+ kubernetes_version = google_cloud.get_patch_version(
+ get_latest_kubernetes_version(versions)
)
- if kubernetes_version not in versions:
- raise ValueError(
- f"Invalid Kubernetes version `{kubernetes_version}`. Please refer to the GCP docs for a list of valid versions: {versions}"
- )
- elif cloud_provider == ProviderEnum.do.value.lower():
- versions = digital_ocean.kubernetes_versions()
-
- if not kubernetes_version or kubernetes_version == LATEST:
- kubernetes_version = get_latest_kubernetes_version(versions)
rich.print(
DEFAULT_KUBERNETES_VERSION_MSG.format(
kubernetes_version=kubernetes_version
)
)
- if kubernetes_version not in versions:
+ if not any(v.startswith(kubernetes_version) for v in versions):
raise ValueError(
- f"Invalid Kubernetes version `{kubernetes_version}`. Please refer to the DO docs for a list of valid versions: {versions}"
+ f"Invalid Kubernetes version `{kubernetes_version}`. Please refer to the GCP docs for a list of valid versions: {versions}"
)
return kubernetes_version
@@ -462,15 +408,7 @@ def check_cloud_provider_region(region: str, cloud_provider: str) -> str:
raise ValueError(
f"Invalid region `{region}`. Please refer to the GCP docs for a list of valid regions: {GCP_REGIONS}"
)
- elif cloud_provider == ProviderEnum.do.value.lower():
- if not region:
- region = DO_DEFAULT_REGION
- rich.print(DEFAULT_REGION_MSG.format(region=region))
- if region not in set(_["slug"] for _ in digital_ocean.regions()):
- raise ValueError(
- f"Invalid region `{region}`. Please refer to the DO docs for a list of valid regions: {DO_REGIONS}"
- )
return region
@@ -560,6 +498,12 @@ def init(
False,
is_eager=True,
),
+ config_set: str = typer.Option(
+ None,
+ "--config-set",
+ "-s",
+ help="Apply a pre-defined set of nebari configuration options.",
+ ),
output: str = typer.Option(
pathlib.Path("nebari-config.yaml"),
"--output",
@@ -596,10 +540,10 @@ def init(
cloud_provider, disable_prompt
)
- # Digital Ocean deprecation warning -- Nebari 2024.7.1
- if inputs.cloud_provider == ProviderEnum.do.value.lower():
+ # DigitalOcean is no longer supported
+ if inputs.cloud_provider == "do":
rich.print(
- ":warning: Digital Ocean support is being deprecated and support will be removed in the future. :warning:\n"
+ ":warning: DigitalOcean is no longer supported. You'll need to deploy to an existing k8s cluster if you plan to use Nebari on DigitalOcean :warning:\n"
)
inputs.region = check_cloud_provider_region(region, inputs.cloud_provider)
@@ -618,6 +562,7 @@ def init(
inputs.terraform_state = terraform_state
inputs.ssl_cert_email = ssl_cert_email
inputs.disable_prompt = disable_prompt
+ inputs.config_set = config_set
inputs.output = output
inputs.explicit = explicit
diff --git a/src/_nebari/subcommands/plugin.py b/src/_nebari/subcommands/plugin.py
new file mode 100644
index 0000000000..28305848cd
--- /dev/null
+++ b/src/_nebari/subcommands/plugin.py
@@ -0,0 +1,42 @@
+from importlib.metadata import version
+
+import rich
+import typer
+from rich.table import Table
+
+from nebari.hookspecs import hookimpl
+
+
+@hookimpl
+def nebari_subcommand(cli: typer.Typer):
+ plugin_cmd = typer.Typer(
+ add_completion=False,
+ no_args_is_help=True,
+ rich_markup_mode="rich",
+ context_settings={"help_option_names": ["-h", "--help"]},
+ )
+
+ cli.add_typer(
+ plugin_cmd,
+ name="plugin",
+ help="Interact with nebari plugins",
+ rich_help_panel="Additional Commands",
+ )
+
+ @plugin_cmd.command()
+ def list(ctx: typer.Context):
+ """
+ List installed plugins
+ """
+ from nebari.plugins import nebari_plugin_manager
+
+ external_plugins = nebari_plugin_manager.get_external_plugins()
+
+ table = Table(title="Plugins")
+ table.add_column("name", justify="left", no_wrap=True)
+ table.add_column("version", justify="left", no_wrap=True)
+
+ for plugin in external_plugins:
+ table.add_row(plugin, version(plugin))
+
+ rich.print(table)
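The new `plugin list` subcommand reads each external plugin's version with `importlib.metadata.version`, which raises `PackageNotFoundError` when a package is not installed. A small defensive sketch of that lookup (the fallback label is an illustration, not what the command prints):

```python
from importlib.metadata import PackageNotFoundError, version


def plugin_rows(plugin_names):
    """Build (name, version) table rows, tolerating missing packages."""
    rows = []
    for name in plugin_names:
        try:
            rows.append((name, version(name)))
        except PackageNotFoundError:
            rows.append((name, "not installed"))
    return rows
```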
diff --git a/src/_nebari/upgrade.py b/src/_nebari/upgrade.py
index 6536612f2d..18e75c1827 100644
--- a/src/_nebari/upgrade.py
+++ b/src/_nebari/upgrade.py
@@ -6,6 +6,7 @@
import json
import logging
+import os
import re
import secrets
import string
@@ -20,7 +21,7 @@
import rich
from packaging.version import Version
from pydantic import ValidationError
-from rich.prompt import Prompt
+from rich.prompt import Confirm, Prompt
from typing_extensions import override
from _nebari.config import backup_configuration
@@ -47,6 +48,20 @@
UPGRADE_KUBERNETES_MESSAGE = "Please see the [green][link=https://www.nebari.dev/docs/how-tos/kubernetes-version-upgrade]Kubernetes upgrade docs[/link][/green] for more information."
DESTRUCTIVE_UPGRADE_WARNING = "-> This version upgrade will result in your cluster being completely torn down and redeployed. Please ensure you have backed up any data you wish to keep before proceeding!!!"
+TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION = (
+ "Nebari needs to generate an updated set of Terraform scripts for your deployment and delete the old scripts.\n"
+ "Do you want Nebari to remove your [green]stages[/green] directory automatically for you? It will be recreated the next time Nebari is run.\n"
+ "[red]Warning:[/red] This will remove everything in the [green]stages[/green] directory.\n"
+    "If you do not have Nebari remove it automatically here, you will need to remove the [green]stages[/green] directory manually with a command "
+    "like [green]rm -rf stages[/green]."
+)
+DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE = (
+ "⚠️ CAUTION ⚠️\n"
+    "Nebari would like to remove your old Terraform/OpenTofu [green]stages[/green] files. Your [blue]terraform_state[/blue] configuration is not set to [blue]remote[/blue], so destroying your [green]stages[/green] files could be very destructive.\n"
+    "If you do not have active Terraform/OpenTofu state files within your [green]stages[/green] directory, you may proceed by entering [red]y[/red] at the prompt.\n"
+    "If you do have an active Terraform/OpenTofu deployment with state files in your [green]stages[/green] folder, you will need to either bring Nebari down temporarily and redeploy later, or pursue some other means to upgrade. Enter [red]n[/red] at the prompt.\n\n"
+ "Do you want to proceed by deleting your [green]stages[/green] directory and everything in it? ([red]POTENTIALLY VERY DESTRUCTIVE[/red])"
+)
def do_upgrade(config_filename, attempt_fixes=False):
@@ -213,6 +228,54 @@ def upgrade(
return config
+ @classmethod
+ def _rm_rf_stages(cls, config_filename, dry_run: bool = False, verbose=False):
+ """
+        Remove stage files during an upgrade step.
+
+        Usually used when files in your `stages` directory need to be
+        removed in order to avoid resource conflicts.
+
+        Args:
+            config_filename (str): The path to the configuration file.
+            dry_run (bool): If True, only report what would be removed.
+            verbose (bool): If True, print each removed path.
+
+        Returns:
+            None
+ """
+ config_dir = Path(config_filename).resolve().parent
+
+        if config_dir.is_dir():
+ stage_dir = config_dir / "stages"
+
+ stage_filenames = [d for d in stage_dir.rglob("*") if d.is_file()]
+
+ for stage_filename in stage_filenames:
+ if dry_run and verbose:
+ rich.print(f"Dry run: Would remove {stage_filename}")
+ else:
+ stage_filename.unlink(missing_ok=True)
+ if verbose:
+ rich.print(f"Removed {stage_filename}")
+
+ stage_filedirs = sorted(
+ (d for d in stage_dir.rglob("*") if d.is_dir()),
+ reverse=True,
+ )
+
+ for stage_filedir in stage_filedirs:
+ if dry_run and verbose:
+ rich.print(f"Dry run: Would remove {stage_filedir}")
+ else:
+ stage_filedir.rmdir()
+ if verbose:
+ rich.print(f"Removed {stage_filedir}")
+
+ if dry_run and verbose:
+ rich.print(f"Dry run: Would remove {stage_dir}")
+ elif stage_dir.is_dir():
+ stage_dir.rmdir()
+ if verbose:
+ rich.print(f"Removed {stage_dir}")
+
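`_rm_rf_stages` above deletes the tree using only `unlink()` and `rmdir()`: files first, then directories deepest-first (sorting the `rglob` results in reverse guarantees children precede their parents). The same pattern as a standalone sketch:

```python
from pathlib import Path


def rm_tree_bottom_up(root: Path, dry_run: bool = False) -> list:
    """Remove a tree with unlink()/rmdir() only; returns the planned actions."""
    actions = []
    # 1. remove files anywhere in the tree
    for f in (p for p in root.rglob("*") if p.is_file()):
        actions.append(f"rm {f}")
        if not dry_run:
            f.unlink(missing_ok=True)
    # 2. remove directories deepest-first; reverse sort puts children first
    for d in sorted((p for p in root.rglob("*") if p.is_dir()), reverse=True):
        actions.append(f"rmdir {d}")
        if not dry_run:
            d.rmdir()
    # 3. finally remove the (now empty) root itself
    if not dry_run and root.is_dir():
        root.rmdir()
    return actions
```

Unlike `shutil.rmtree`, this approach makes the dry-run mode trivial: the same traversal can report what would be removed without deleting anything.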
def get_version(self):
"""
Returns:
@@ -306,7 +369,9 @@ def replace_image_tag_legacy(
return ":".join([m.groups()[0], f"v{new_version}"])
return None
- def replace_image_tag(s: str, new_version: str, config_path: str) -> str:
+ def replace_image_tag(
+ s: str, new_version: str, config_path: str, attempt_fixes: bool
+ ) -> str:
"""
Replace the image tag with the new version.
@@ -328,11 +393,11 @@ def replace_image_tag(s: str, new_version: str, config_path: str) -> str:
if current_tag == new_version:
return s
loc = f"{config_path}: {image_name}"
- response = Prompt.ask(
- f"\nDo you want to replace current tag [green]{current_tag}[/green] with [green]{new_version}[/green] for:\n[purple]{loc}[/purple]? [Y/n] ",
- default="Y",
+ response = attempt_fixes or Confirm.ask(
+ f"\nDo you want to replace current tag [green]{current_tag}[/green] with [green]{new_version}[/green] for:\n[purple]{loc}[/purple]?",
+ default=True,
)
- if response.lower() in ["y", "yes", ""]:
+ if response:
return s.replace(current_tag, new_version)
else:
return s
@@ -363,7 +428,11 @@ def set_nested_item(config: dict, config_path: list, value: str):
config[config_path[-1]] = value
def update_image_tag(
- config: dict, config_path: str, current_image: str, new_version: str
+ config: dict,
+ config_path: str,
+ current_image: str,
+ new_version: str,
+ attempt_fixes: bool,
) -> dict:
"""
Update the image tag in the configuration.
@@ -377,7 +446,12 @@ def update_image_tag(
Returns:
dict: The updated configuration dictionary.
"""
- new_image = replace_image_tag(current_image, new_version, config_path)
+ new_image = replace_image_tag(
+ current_image,
+ new_version,
+ config_path,
+ attempt_fixes,
+ )
if new_image != current_image:
set_nested_item(config, config_path, new_image)
@@ -387,7 +461,11 @@ def update_image_tag(
for k, v in config.get("default_images", {}).items():
config_path = f"default_images.{k}"
config = update_image_tag(
- config, config_path, v, __rounded_finish_version__
+ config,
+ config_path,
+ v,
+ __rounded_finish_version__,
+ kwargs.get("attempt_fixes", False),
)
# update profiles.jupyterlab images
@@ -399,6 +477,7 @@ def update_image_tag(
f"profiles.jupyterlab.{i}.kubespawner_override.image",
current_image,
__rounded_finish_version__,
+ kwargs.get("attempt_fixes", False),
)
# update profiles.dask_worker images
@@ -410,11 +489,16 @@ def update_image_tag(
f"profiles.dask_worker.{k}.image",
current_image,
__rounded_finish_version__,
+ kwargs.get("attempt_fixes", False),
)
# Run any version-specific tasks
return self._version_specific_upgrade(
- config, start_version, config_filename, *args, **kwargs
+ config,
+ start_version,
+ config_filename,
+ *args,
+ **kwargs,
)
def _version_specific_upgrade(
@@ -628,27 +712,93 @@ def _version_specific_upgrade(
"""
Prompt users to delete Argo CRDs
"""
+ argo_crds = [
+ "clusterworkflowtemplates.argoproj.io",
+ "cronworkflows.argoproj.io",
+ "workfloweventbindings.argoproj.io",
+ "workflows.argoproj.io",
+ "workflowtasksets.argoproj.io",
+ "workflowtemplates.argoproj.io",
+ ]
- kubectl_delete_argo_crds_cmd = "kubectl delete crds clusterworkflowtemplates.argoproj.io cronworkflows.argoproj.io workfloweventbindings.argoproj.io workflows.argoproj.io workflowtasksets.argoproj.io workflowtemplates.argoproj.io"
+ argo_sa = ["argo-admin", "argo-dev", "argo-view"]
- kubectl_delete_argo_sa_cmd = (
- f"kubectl delete sa -n {config['namespace']} argo-admin argo-dev argo-view"
- )
+ namespace = config.get("namespace", "default")
- rich.print(
- f"\n\n[bold cyan]Note:[/] Upgrading requires a one-time manual deletion of the Argo Workflows Custom Resource Definitions (CRDs) and service accounts. \n\n[red bold]Warning: [link=https://{config['domain']}/argo/workflows]Workflows[/link] and [link=https://{config['domain']}/argo/workflows]CronWorkflows[/link] created before deleting the CRDs will be erased when the CRDs are deleted and will not be restored.[/red bold] \n\nThe updated CRDs will be installed during the next [cyan bold]nebari deploy[/cyan bold] step. Argo Workflows will not function after deleting the CRDs until the updated CRDs and service accounts are installed in the next nebari deploy. You must delete the Argo Workflows CRDs and service accounts before upgrading to {self.version} (or later) or the deploy step will fail. Please delete them before proceeding by generating a kubeconfig (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari/#generating-the-kubeconfig]docs[/link]), installing kubectl (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari#installing-kubectl]docs[/link]), and running the following two commands:\n\n\t[cyan bold]{kubectl_delete_argo_crds_cmd} [/cyan bold]\n\n\t[cyan bold]{kubectl_delete_argo_sa_cmd} [/cyan bold]"
- ""
- )
+ if kwargs.get("attempt_fixes", False):
+ try:
+ kubernetes.config.load_kube_config()
+ except kubernetes.config.config_exception.ConfigException:
+ rich.print(
+ "[red bold]No default kube configuration file was found. Make sure to [link=https://www.nebari.dev/docs/how-tos/debug-nebari#generating-the-kubeconfig]have one pointing to your Nebari cluster[/link] before upgrading.[/red bold]"
+ )
+ exit()
- continue_ = Prompt.ask(
- "Have you deleted the Argo Workflows CRDs and service accounts? [y/N] ",
- default="N",
- )
- if not continue_ == "y":
+ for crd in argo_crds:
+ api_instance = kubernetes.client.ApiextensionsV1Api()
+ try:
+ api_instance.delete_custom_resource_definition(
+ name=crd,
+ )
+ except kubernetes.client.exceptions.ApiException as e:
+ if e.status == 404:
+ rich.print(f"CRD [yellow]{crd}[/yellow] not found. Ignoring.")
+ else:
+ raise e
+ else:
+ rich.print(f"Successfully removed CRD [green]{crd}[/green]")
+
+ for sa in argo_sa:
+ api_instance = kubernetes.client.CoreV1Api()
+ try:
+ api_instance.delete_namespaced_service_account(
+ sa,
+ namespace,
+ )
+ except kubernetes.client.exceptions.ApiException as e:
+ if e.status == 404:
+ rich.print(
+ f"Service account [yellow]{sa}[/yellow] not found. Ignoring."
+ )
+ else:
+ raise e
+ else:
+ rich.print(
+ f"Successfully removed service account [green]{sa}[/green]"
+ )
+ else:
+            kubectl_delete_argo_crds_cmd = " ".join(
+                ["kubectl delete crds", *argo_crds]
+            )
+            kubectl_delete_argo_sa_cmd = " ".join(
+                ["kubectl delete sa", f"-n {namespace}", *argo_sa]
+            )
rich.print(
- f"You must delete the Argo Workflows CRDs and service accounts before upgrading to [green]{self.version}[/green] (or later)."
+ f"\n\n[bold cyan]Note:[/] Upgrading requires a one-time manual deletion of the Argo Workflows Custom Resource Definitions (CRDs) and service accounts. \n\n[red bold]"
+ f"Warning: [link=https://{config['domain']}/argo/workflows]Workflows[/link] and [link=https://{config['domain']}/argo/workflows]CronWorkflows[/link] created before deleting the CRDs will be erased when the CRDs are deleted and will not be restored.[/red bold] \n\n"
+ f"The updated CRDs will be installed during the next [cyan bold]nebari deploy[/cyan bold] step. Argo Workflows will not function after deleting the CRDs until the updated CRDs and service accounts are installed in the next nebari deploy. "
+ f"You must delete the Argo Workflows CRDs and service accounts before upgrading to {self.version} (or later) or the deploy step will fail. "
+ f"Please delete them before proceeding by generating a kubeconfig (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari/#generating-the-kubeconfig]docs[/link]), installing kubectl (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari#installing-kubectl]docs[/link]), and running the following two commands:\n\n\t[cyan bold]{kubectl_delete_argo_crds_cmd} [/cyan bold]\n\n\t[cyan bold]{kubectl_delete_argo_sa_cmd} [/cyan bold]"
)
- exit()
+
+ continue_ = Confirm.ask(
+ "Have you deleted the Argo Workflows CRDs and service accounts?",
+ default=False,
+ )
+ if not continue_:
+ rich.print(
+ f"You must delete the Argo Workflows CRDs and service accounts before upgrading to [green]{self.version}[/green] (or later)."
+ )
+ exit()
return config
@@ -681,11 +831,11 @@ def _version_specific_upgrade(
):
argo = config.get("argo_workflows", {})
if argo.get("enabled"):
- response = Prompt.ask(
- f"\nDo you want to enable the [green][link={NEBARI_WORKFLOW_CONTROLLER_DOCS}]Nebari Workflow Controller[/link][/green], required for [green][link={ARGO_JUPYTER_SCHEDULER_REPO}]Argo-Jupyter-Scheduler[/link][green]? [Y/n] ",
- default="Y",
+ response = kwargs.get("attempt_fixes", False) or Confirm.ask(
+ f"\nDo you want to enable the [green][link={NEBARI_WORKFLOW_CONTROLLER_DOCS}]Nebari Workflow Controller[/link][/green], required for [green][link={ARGO_JUPYTER_SCHEDULER_REPO}]Argo-Jupyter-Scheduler[/link][green]?",
+ default=True,
)
- if response.lower() in ["y", "yes", ""]:
+ if response:
argo["nebari_workflow_controller"] = {"enabled": True}
rich.print("\n ⚠️ Deprecation Warnings ⚠️")
@@ -725,9 +875,6 @@ def _version_specific_upgrade(
rich.print(
"-> Data should be backed up before performing this upgrade ([green][link=https://www.nebari.dev/docs/how-tos/manual-backup]see docs[/link][/green]) The 'prevent_deploy' flag has been set in your config file and must be manually removed to deploy."
)
- rich.print(
- "-> Please also run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment."
- )
# Setting the following flag will prevent deployment and display guidance to the user
# which they can override if they are happy they understand the situation.
@@ -811,6 +958,26 @@ def _version_specific_upgrade(
rich.print("\n ⚠️ DANGER ⚠️")
rich.print(DESTRUCTIVE_UPGRADE_WARNING)
+ if kwargs.get("attempt_fixes", False) or Confirm.ask(
+ TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION,
+ default=False,
+ ):
+ if (
+ (_terraform_state_config := config.get("terraform_state"))
+ and (_terraform_state_config.get("type") != "remote")
+ and not Confirm.ask(
+ DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE,
+ default=False,
+ )
+ ):
+ exit()
+
+ self._rm_rf_stages(
+ config_filename,
+ dry_run=kwargs.get("dry_run", False),
+ verbose=True,
+ )
+
return config
@@ -828,15 +995,31 @@ class Upgrade_2023_11_1(UpgradeStep):
def _version_specific_upgrade(
self, config, start_version, config_filename: Path, *args, **kwargs
):
- rich.print("\n ⚠️ Warning ⚠️")
- rich.print(
- "-> Please run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment."
- )
rich.print("\n ⚠️ Deprecation Warning ⚠️")
rich.print(
f"-> ClearML, Prefect and kbatch are no longer supported in Nebari version [green]{self.version}[/green] and will be uninstalled."
)
+ if kwargs.get("attempt_fixes", False) or Confirm.ask(
+ TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION,
+ default=False,
+ ):
+ if (
+ (_terraform_state_config := config.get("terraform_state"))
+ and (_terraform_state_config.get("type") != "remote")
+ and not Confirm.ask(
+ DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE,
+ default=False,
+ )
+ ):
+ exit()
+
+ self._rm_rf_stages(
+ config_filename,
+ dry_run=kwargs.get("dry_run", False),
+ verbose=True,
+ )
+
return config
@@ -854,16 +1037,32 @@ class Upgrade_2023_12_1(UpgradeStep):
def _version_specific_upgrade(
self, config, start_version, config_filename: Path, *args, **kwargs
):
- rich.print("\n ⚠️ Warning ⚠️")
- rich.print(
- "-> Please run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment."
- )
rich.print("\n ⚠️ Deprecation Warning ⚠️")
rich.print(
f"-> [green]{self.version}[/green] is the last Nebari version that supports the jupyterlab-videochat extension."
)
rich.print()
+ if kwargs.get("attempt_fixes", False) or Confirm.ask(
+ TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION,
+ default=False,
+ ):
+ if (
+ (_terraform_state_config := config.get("terraform_state"))
+ and (_terraform_state_config.get("type") != "remote")
+ and not Confirm.ask(
+ DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE,
+ default=False,
+ )
+ ):
+ exit()
+
+ self._rm_rf_stages(
+ config_filename,
+ dry_run=kwargs.get("dry_run", False),
+ verbose=True,
+ )
+
return config
@@ -881,10 +1080,6 @@ class Upgrade_2024_1_1(UpgradeStep):
def _version_specific_upgrade(
self, config, start_version, config_filename: Path, *args, **kwargs
):
- rich.print("\n ⚠️ Warning ⚠️")
- rich.print(
- "-> Please run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment."
- )
rich.print("\n ⚠️ Deprecation Warning ⚠️")
rich.print(
"-> jupyterlab-videochat, retrolab, jupyter-tensorboard, jupyterlab-conda-store and jupyter-nvdashboard",
@@ -892,6 +1087,26 @@ def _version_specific_upgrade(
)
rich.print()
+ if kwargs.get("attempt_fixes", False) or Confirm.ask(
+ TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION,
+ default=False,
+ ):
+ if (
+ (_terraform_state_config := config.get("terraform_state"))
+ and (_terraform_state_config.get("type") != "remote")
+ and not Confirm.ask(
+ DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE,
+ default=False,
+ )
+ ):
+ exit()
+
+ self._rm_rf_stages(
+ config_filename,
+ dry_run=kwargs.get("dry_run", False),
+ verbose=True,
+ )
+
return config
@@ -957,12 +1172,11 @@ def _version_specific_upgrade(
default_node_groups = provider_enum_default_node_groups_map[
provider
]
- continue_ = Prompt.ask(
+ continue_ = kwargs.get("attempt_fixes", False) or Confirm.ask(
f"Would you like to include the default configuration for the node groups in [purple]{config_filename}[/purple]?",
- choices=["y", "N"],
- default="N",
+ default=False,
)
- if continue_ == "y":
+ if continue_:
config[provider_full_name]["node_groups"] = default_node_groups
except KeyError:
pass
@@ -999,7 +1213,6 @@ def _version_specific_upgrade(
):
# Prompt users to manually update kube-prometheus-stack CRDs if monitoring is enabled
if config.get("monitoring", {}).get("enabled", True):
-
crd_urls = [
"https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml",
"https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml",
@@ -1029,10 +1242,9 @@ def _version_specific_upgrade(
"\n-> [red bold]Nebari version 2024.6.1 comes with a new version of Grafana. Any custom dashboards that you created will be deleted after upgrading Nebari. Make sure to [link=https://grafana.com/docs/grafana/latest/dashboards/share-dashboards-panels/#export-a-dashboard-as-json]export them as JSON[/link] so you can [link=https://grafana.com/docs/grafana/latest/dashboards/build-dashboards/import-dashboards/#import-a-dashboard]import them[/link] again afterwards.[/red bold]"
f"\n-> [red bold]Before upgrading, kube-prometheus-stack CRDs need to be updated and the {daemonset_name} daemonset needs to be deleted.[/red bold]"
)
- run_commands = Prompt.ask(
+ run_commands = kwargs.get("attempt_fixes", False) or Confirm.ask(
"\nDo you want Nebari to update the kube-prometheus-stack CRDs and delete the prometheus-node-exporter for you? If not, you'll have to do it manually.",
- choices=["y", "N"],
- default="N",
+ default=False,
)
# By default, rich wraps lines by splitting them into multiple lines. This is
@@ -1040,7 +1252,7 @@ def _version_specific_upgrade(
# To avoid this, we use a rich console with a larger width to print the entire commands
# and let the terminal wrap them if needed.
console = rich.console.Console(width=220)
- if run_commands == "y":
+ if run_commands:
try:
kubernetes.config.load_kube_config()
except kubernetes.config.config_exception.ConfigException:
@@ -1053,10 +1265,14 @@ def _version_specific_upgrade(
rich.print(
f"The following commands will be run for the [cyan bold]{cluster_name}[/cyan bold] cluster"
)
- Prompt.ask("Hit enter to show the commands")
+ _ = kwargs.get("attempt_fixes", False) or Prompt.ask(
+ "Hit enter to show the commands"
+ )
console.print(commands)
- Prompt.ask("Hit enter to continue")
+ _ = kwargs.get("attempt_fixes", False) or Prompt.ask(
+ "Hit enter to continue"
+ )
# We need to add a special constructor to the yaml loader to handle a specific
# tag as otherwise the kubernetes API will fail when updating the CRD.
yaml.constructor.add_constructor(
@@ -1098,16 +1314,15 @@ def _version_specific_upgrade(
rich.print(
"[red bold]Before upgrading, you need to manually delete the prometheus-node-exporter daemonset and update the kube-prometheus-stack CRDs. To do that, please run the following commands.[/red bold]"
)
- Prompt.ask("Hit enter to show the commands")
+ _ = Prompt.ask("Hit enter to show the commands")
console.print(commands)
- Prompt.ask("Hit enter to continue")
- continue_ = Prompt.ask(
+ _ = Prompt.ask("Hit enter to continue")
+ continue_ = Confirm.ask(
f"Have you backed up your custom dashboards (if necessary), deleted the {daemonset_name} daemonset and updated the kube-prometheus-stack CRDs?",
- choices=["y", "N"],
- default="N",
+ default=False,
)
- if not continue_ == "y":
+ if not continue_:
rich.print(
f"[red bold]You must back up your custom dashboards (if necessary), delete the {daemonset_name} daemonset and update the kube-prometheus-stack CRDs before upgrading to [green]{self.version}[/green] (or later).[/bold red]"
)
@@ -1132,12 +1347,11 @@ def _version_specific_upgrade(
If not, select "N" and the old default node groups will be added to the nebari config file.
"""
)
- continue_ = Prompt.ask(
+ continue_ = kwargs.get("attempt_fixes", False) or Confirm.ask(
text,
- choices=["y", "N"],
- default="y",
+ default=True,
)
- if continue_ == "N":
+ if not continue_:
config[provider_full_name]["node_groups"] = {
"general": {
"instance": "n1-standard-8",
@@ -1178,8 +1392,9 @@ def _version_specific_upgrade(
},
indent=4,
)
- text += "\n\nHit enter to continue"
- Prompt.ask(text)
+ rich.print(text)
+ if not kwargs.get("attempt_fixes", False):
+ _ = Prompt.ask("\n\nHit enter to continue")
return config
@@ -1197,7 +1412,7 @@ class Upgrade_2024_7_1(UpgradeStep):
def _version_specific_upgrade(
self, config, start_version, config_filename: Path, *args, **kwargs
):
- if config.get("provider", "") == ProviderEnum.do.value:
+ if config.get("provider", "") == "do":
rich.print("\n ⚠️ Deprecation Warning ⚠️")
rich.print(
"-> Digital Ocean support is currently being deprecated and will be removed in a future release.",
@@ -1214,6 +1429,22 @@ class Upgrade_2024_9_1(UpgradeStep):
version = "2024.9.1"
+ # Nebari version 2024.9.1 has been marked as broken, and will be skipped:
+ # https://github.com/nebari-dev/nebari/issues/2798
+ @override
+ def _version_specific_upgrade(
+ self, config, start_version, config_filename: Path, *args, **kwargs
+ ):
+ return config
+
+
+class Upgrade_2024_11_1(UpgradeStep):
+ """
+ Upgrade step for Nebari version 2024.11.1
+ """
+
+ version = "2024.11.1"
+
@override
def _version_specific_upgrade(
self, config, start_version, config_filename: Path, *args, **kwargs
@@ -1229,7 +1460,7 @@ def _version_specific_upgrade(
),
)
rich.print("")
- elif config.get("provider", "") == ProviderEnum.do.value:
+ elif config.get("provider", "") == "do":
rich.print("\n ⚠️ Deprecation Warning ⚠️")
rich.print(
"-> Digital Ocean support is currently being deprecated and will be removed in a future release.",
@@ -1243,16 +1474,16 @@ def _version_specific_upgrade(
Please ensure no users are currently logged in prior to deploying this
update.
- Nebari [green]2024.9.1[/green] introduces changes to how group
- directories are mounted in JupyterLab pods.
+ This release introduces changes to how group directories are mounted in
+ JupyterLab pods.
Previously, every Keycloak group in the Nebari realm automatically created a
shared directory at ~/shared/, accessible to all group members
in their JupyterLab pods.
- Starting with Nebari [green]2024.9.1[/green], only groups assigned the
- JupyterHub client role [magenta]allow-group-directory-creation[/magenta] will have their
- directories mounted.
+ Moving forward, only groups assigned the JupyterHub client role
+ [magenta]allow-group-directory-creation[/magenta] or its affiliated scope
+ [magenta]write:shared-mount[/magenta] will have their directories mounted.
By default, the admin, analyst, and developer groups will have this
role assigned during the upgrade. For other groups, you'll now need to
@@ -1266,13 +1497,10 @@ def _version_specific_upgrade(
keycloak_admin = None
# Prompt the user for role assignment (if yes, transforms the response into bool)
- assign_roles = (
- Prompt.ask(
- "[bold]Would you like Nebari to assign the corresponding role to all of your current groups automatically?[/bold]",
- choices=["y", "N"],
- default="N",
- ).lower()
- == "y"
+ # This needs to be monkeypatched and will be addressed in a future PR. Until then, this causes test failures.
+ assign_roles = kwargs.get("attempt_fixes", False) or Confirm.ask(
+ "[bold]Would you like Nebari to assign the corresponding role/scopes to all of your current groups automatically?[/bold]",
+ default=False,
)
if assign_roles:
@@ -1281,18 +1509,63 @@ def _version_specific_upgrade(
urllib3.disable_warnings()
- keycloak_admin = get_keycloak_admin(
- server_url=f"https://{config['domain']}/auth/",
- username="root",
- password=config["security"]["keycloak"]["initial_root_password"],
+ keycloak_username = os.environ.get("KEYCLOAK_ADMIN_USERNAME", "root")
+ keycloak_password = os.environ.get(
+ "KEYCLOAK_ADMIN_PASSWORD",
+ config["security"]["keycloak"]["initial_root_password"],
)
- # Proceed with updating group permissions
+ try:
+ # Quick test to connect to Keycloak
+ keycloak_admin = get_keycloak_admin(
+ server_url=f"https://{config['domain']}/auth/",
+ username=keycloak_username,
+ password=keycloak_password,
+ )
+ except ValueError as e:
+ if "invalid_grant" in str(e):
+ rich.print(
+ textwrap.dedent(
+ """
+ [red bold]Failed to connect to the Keycloak server.[/red bold]\n
+ [yellow]Please set the [bold]KEYCLOAK_ADMIN_USERNAME[/bold] and [bold]KEYCLOAK_ADMIN_PASSWORD[/bold]
+ environment variables with the Keycloak root credentials and try again.[/yellow]
+ """
+ )
+ )
+ exit()
+ else:
+ # Handle other exceptions
+ rich.print(
+ f"[red bold]An unexpected error occurred: {repr(e)}[/red bold]"
+ )
+ exit()
+
+ # Get client ID as role is bound to the JupyterHub client
client_id = keycloak_admin.get_client_id("jupyterhub")
- role_name = "allow-group-directory-creation-role"
+ role_name = "legacy-group-directory-creation-role"
+
+ # Create role with shared scopes
+ keycloak_admin.create_client_role(
+ client_role_id=client_id,
+ skip_exists=True,
+ payload={
+ "name": role_name,
+ "attributes": {
+ "scopes": ["write:shared-mount"],
+ "component": ["shared-directory"],
+ },
+ "description": (
+ "Role to allow group directory creation, created as part of the "
+ "Nebari 2024.11.1 upgrade workflow."
+ ),
+ },
+ )
+
role_id = keycloak_admin.get_client_role_id(
client_id=client_id, role_name=role_name
)
+
role_representation = keycloak_admin.get_role_by_id(role_id=role_id)
# Fetch all groups and groups with the role
@@ -1328,6 +1601,61 @@ def _version_specific_upgrade(
return config
+class Upgrade_2024_12_1(UpgradeStep):
+ """
+ Upgrade step for Nebari version 2024.12.1
+ """
+
+ version = "2024.12.1"
+
+ @override
+ def _version_specific_upgrade(
+ self, config, start_version, config_filename: Path, *args, **kwargs
+ ):
+ if config.get("provider", "") == "do":
+ rich.print(
+ "\n[red bold]Error: DigitalOcean is no longer supported as a provider[/red bold].",
+ )
+ rich.print(
+ "You can still deploy Nebari to a Kubernetes cluster on DigitalOcean by using 'existing' as the provider in the config file."
+ )
+ exit()
+
+ rich.print("Ready to upgrade to Nebari version [green]2024.12.1[/green].")
+
+ return config
+
+
+class Upgrade_2025_2_1(UpgradeStep):
+ version = "2025.2.1"
+
+ @override
+ def _version_specific_upgrade(
+ self, config, start_version, config_filename: Path, *args, **kwargs
+ ):
+ rich.print("\n ⚠️ Upgrade Warning ⚠️")
+
+ text = textwrap.dedent(
+ """
+ In this release, we have updated our maximum supported Kubernetes version from 1.29 to 1.31.
+ Please note that Nebari will NOT automatically upgrade your running Kubernetes version as part of
+ the redeployment process.
+
+ After completing this upgrade step, we strongly recommend updating the Kubernetes version
+ specified in your nebari-config YAML file and redeploying to apply the changes. Remember that
+ Kubernetes minor versions must be upgraded incrementally (1.29 → 1.30 → 1.31).
+
+ For more information on upgrading Kubernetes for your specific cloud provider, please visit:
+ https://www.nebari.dev/docs/how-tos/kubernetes-version-upgrade
+ """
+ )
+
+ rich.print(text)
+ rich.print("Ready to upgrade to Nebari version [green]2025.2.1[/green].")
+
+ return config
+
+
__rounded_version__ = str(rounded_ver_parse(__version__))
# Manually-added upgrade steps must go above this line
diff --git a/src/_nebari/utils.py b/src/_nebari/utils.py
index 5f0877666a..48b8a91e9b 100644
--- a/src/_nebari/utils.py
+++ b/src/_nebari/utils.py
@@ -160,7 +160,7 @@ def modified_environ(*remove: List[str], **update: Dict[str, str]):
def deep_merge(*args):
- """Deep merge multiple dictionaries.
+ """Deep merge multiple dictionaries. Preserves order in dicts and lists.
>>> value_1 = {
'a': [1, 2],
@@ -190,7 +190,7 @@ def deep_merge(*args):
if isinstance(d1, dict) and isinstance(d2, dict):
d3 = {}
- for key in d1.keys() | d2.keys():
+ for key in tuple(d1.keys()) + tuple(d2.keys()):
if key in d1 and key in d2:
d3[key] = deep_merge(d1[key], d2[key])
elif key in d1:
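The key-iteration change above swaps an unordered set union for concatenated key tuples, which preserves first-seen insertion order. A simplified sketch (the scalar-conflict and list-handling behavior here are assumptions, not taken from the diff):

```python
# Order-preserving deep merge: iterating over concatenated key tuples
# (instead of d1.keys() | d2.keys(), which is an unordered set) keeps
# d1's keys first, then d2's new keys, in insertion order.
def deep_merge(d1, d2):
    if isinstance(d1, dict) and isinstance(d2, dict):
        d3 = {}
        for key in tuple(d1.keys()) + tuple(d2.keys()):
            if key in d1 and key in d2:
                d3[key] = deep_merge(d1[key], d2[key])
            elif key in d1:
                d3[key] = d1[key]
            else:
                d3[key] = d2[key]
        return d3
    if isinstance(d1, list) and isinstance(d2, list):
        return [*d1, *d2]
    return d2  # assumed: later value wins for scalars

merged = deep_merge({"a": [1], "z": 1}, {"b": 2, "a": [2]})
print(list(merged))  # ['a', 'z', 'b'] -- first-seen order, not set order
print(merged["a"])   # [1, 2]
```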
@@ -286,11 +286,6 @@ def random_secure_string(
return "".join(secrets.choice(chars) for i in range(length))
-def set_do_environment():
- os.environ["AWS_ACCESS_KEY_ID"] = os.environ["SPACES_ACCESS_KEY_ID"]
- os.environ["AWS_SECRET_ACCESS_KEY"] = os.environ["SPACES_SECRET_ACCESS_KEY"]
-
-
def set_docker_image_tag() -> str:
"""Set docker image tag for `jupyterlab`, `jupyterhub`, and `dask-worker`."""
return os.environ.get("NEBARI_IMAGE_TAG", constants.DEFAULT_NEBARI_IMAGE_TAG)
@@ -348,7 +343,6 @@ def get_provider_config_block_name(provider):
PROVIDER_CONFIG_NAMES = {
"aws": "amazon_web_services",
"azure": "azure",
- "do": "digital_ocean",
"gcp": "google_cloud_platform",
}
diff --git a/src/nebari/plugins.py b/src/nebari/plugins.py
index 71db0ade96..a6cb1aa688 100644
--- a/src/nebari/plugins.py
+++ b/src/nebari/plugins.py
@@ -19,6 +19,7 @@
"_nebari.subcommands.deploy",
"_nebari.subcommands.destroy",
"_nebari.subcommands.keycloak",
+ "_nebari.subcommands.plugin",
"_nebari.subcommands.render",
"_nebari.subcommands.support",
"_nebari.subcommands.upgrade",
@@ -121,6 +122,14 @@ def read_config(self, config_path: typing.Union[str, Path], **kwargs):
return read_configuration(config_path, self.config_schema, **kwargs)
+ def get_external_plugins(self):
+ external_plugins = []
+ all_plugins = DEFAULT_SUBCOMMAND_PLUGINS + DEFAULT_STAGES_PLUGINS
+ for plugin in self.plugin_manager.get_plugins():
+ if plugin.__name__ not in all_plugins:
+ external_plugins.append(plugin.__name__)
+ return external_plugins
+
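The `get_external_plugins` filter added above compares registered plugin module names against the built-in lists. A sketch under stated assumptions (the list contents and `FakePlugin` are stand-ins for the real plugin modules):

```python
# Anything registered with the plugin manager whose module name is not in
# the built-in subcommand/stage lists is reported as an external plugin.
DEFAULT_SUBCOMMAND_PLUGINS = ["_nebari.subcommands.deploy"]  # stand-in values
DEFAULT_STAGES_PLUGINS = ["_nebari.stages.infrastructure"]   # stand-in values

class FakePlugin:
    """Stand-in for a registered plugin module (modules expose __name__)."""
    def __init__(self, name):
        self.__name__ = name

registered = [
    FakePlugin("_nebari.subcommands.deploy"),
    FakePlugin("my_org.custom_stage"),
]

all_plugins = DEFAULT_SUBCOMMAND_PLUGINS + DEFAULT_STAGES_PLUGINS
external = [p.__name__ for p in registered if p.__name__ not in all_plugins]
print(external)  # ['my_org.custom_stage']
```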
@property
def ordered_stages(self):
return self.get_available_stages()
diff --git a/src/nebari/schema.py b/src/nebari/schema.py
index 6a809842d7..b45af521be 100644
--- a/src/nebari/schema.py
+++ b/src/nebari/schema.py
@@ -35,7 +35,6 @@ class Base(pydantic.BaseModel):
class ProviderEnum(str, enum.Enum):
local = "local"
existing = "existing"
- do = "do"
aws = "aws"
gcp = "gcp"
azure = "azure"
diff --git a/tests/common/handlers.py b/tests/common/handlers.py
index 51964d3ac5..5485059141 100644
--- a/tests/common/handlers.py
+++ b/tests/common/handlers.py
@@ -86,20 +86,31 @@ def _dismiss_kernel_popup(self):
def _shutdown_all_kernels(self):
"""Shutdown all running kernels."""
logger.debug(">>> Shutting down all kernels")
- kernel_menu = self.page.get_by_role("menuitem", name="Kernel")
- kernel_menu.click()
+
+ # Open the "Kernel" menu
+ self.page.get_by_role("menuitem", name="Kernel").click()
+
+ # Locate the "Shut Down All Kernels…" menu item
shut_down_all = self.page.get_by_role("menuitem", name="Shut Down All Kernels…")
- logger.debug(
- f">>> Shut down all kernels visible: {shut_down_all.is_visible()} enabled: {shut_down_all.is_enabled()}"
- )
- if shut_down_all.is_visible() and shut_down_all.is_enabled():
- shut_down_all.click()
- self.page.get_by_role("button", name="Shut Down All").click()
- else:
+
+ # If it's not visible or is disabled, there's nothing to shut down
+ if not shut_down_all.is_visible() or shut_down_all.is_disabled():
logger.debug(">>> No kernels to shut down")
+ return
+
+ # Otherwise, click to shut down all kernels and confirm
+ shut_down_all.click()
+ self.page.get_by_role("button", name="Shut Down All").click()
def _navigate_to_root_folder(self):
"""Navigate back to the root folder in JupyterLab."""
+ # Make sure the home directory is selected in the sidebar
+ if not self.page.get_by_role(
+ "region", name="File Browser Section"
+ ).is_visible():
+ file_browser_tab = self.page.get_by_role("tab", name="File Browser")
+ file_browser_tab.click()
+
logger.debug(">>> Navigating to root folder")
self.page.get_by_title(f"/home/{self.nav.username}", exact=True).locator(
"path"
@@ -298,13 +309,24 @@ def _open_conda_store_service(self):
def _open_new_environment_tab(self):
self.page.get_by_label("Create a new environment in").click()
- expect(self.page.get_by_text("Create Environment")).to_be_visible()
-
- def _assert_user_namespace(self):
expect(
- self.page.get_by_role("button", name=f"{self.nav.username} Create a new")
+ self.page.get_by_role("button", name="Create", exact=True)
).to_be_visible()
+ def _assert_user_namespace(self):
+ user_namespace_dropdown = self.page.get_by_role(
+ "button", name=f"{self.nav.username} Create a new"
+ )
+
+ # Assert the user namespace dropdown is visible in the UI
+ expect(user_namespace_dropdown).to_be_visible()
+ # Verify the namespace corresponds to the logged-in user
+ if self.nav.username not in (user_namespace_dropdown.text_content() or ""):
+ raise ValueError(f"User namespace {self.nav.username} not found")
+
def _get_shown_namespaces(self):
_envs = self.page.locator("#environmentsScroll").get_by_role("button")
_env_contents = [env.text_content() for env in _envs.all()]
diff --git a/tests/common/navigator.py b/tests/common/navigator.py
index 04e019a7a6..e0b404fd26 100644
--- a/tests/common/navigator.py
+++ b/tests/common/navigator.py
@@ -5,6 +5,7 @@
from pathlib import Path
from playwright.sync_api import expect, sync_playwright
+from yarl import URL
logger = logging.getLogger()
@@ -50,7 +51,7 @@ def setup(self):
self.page = self.context.new_page()
self.initialized = True
- def _rename_test_video_path(self, video_path):
+ def _rename_test_video_path(self, video_path: Path):
"""Rename the test video file to the test unique identifier."""
video_file_name = (
f"{self.video_name_prefix}.mp4" if self.video_name_prefix else None
@@ -62,7 +63,7 @@ def teardown(self) -> None:
"""Teardown Playwright browser and context."""
if self.initialized:
# Rename the video file to the test unique identifier
- current_video_path = self.page.video.path()
+ current_video_path = Path(self.page.video.path())
self._rename_test_video_path(current_video_path)
self.context.close()
@@ -87,10 +88,17 @@ class LoginNavigator(NavigatorMixin):
def __init__(self, nebari_url, username, password, auth="password", **kwargs):
super().__init__(**kwargs)
- self.nebari_url = nebari_url
+ self._nebari_url = URL(nebari_url)
self.username = username
self.password = password
self.auth = auth
+ logger.debug(
+ f"LoginNavigator initialized with {self.auth} auth method. :: {self.nebari_url}"
+ )
+
+ @property
+ def nebari_url(self):
+ return self._nebari_url.human_repr()
def login(self):
"""Login to Nebari deployment using the provided authentication method."""
@@ -110,7 +118,7 @@ def logout(self):
def _login_google(self):
logger.debug(">>> Sign in via Google and start the server")
- self.page.goto(self.nebari_url)
+ self.page.goto(url=self.nebari_url)
expect(self.page).to_have_url(re.compile(f"{self.nebari_url}*"))
self.page.get_by_role("button", name="Sign in with Keycloak").click()
@@ -123,7 +131,7 @@ def _login_google(self):
def _login_password(self):
logger.debug(">>> Sign in via Username/Password")
- self.page.goto(self.nebari_url)
+ self.page.goto(url=self.nebari_url)
expect(self.page).to_have_url(re.compile(f"{self.nebari_url}*"))
self.page.get_by_role("button", name="Sign in with Keycloak").click()
diff --git a/tests/common/playwright_fixtures.py b/tests/common/playwright_fixtures.py
index 35ea36baad..581d9347f8 100644
--- a/tests/common/playwright_fixtures.py
+++ b/tests/common/playwright_fixtures.py
@@ -23,17 +23,43 @@ def load_env_vars():
def build_params(request, pytestconfig, extra_params=None):
"""Construct and return parameters for navigator instances."""
env_vars = load_env_vars()
+
+ # Retrieve values from request or environment
+ nebari_url = request.param.get("nebari_url") or env_vars.get("nebari_url")
+ username = request.param.get("keycloak_username") or env_vars.get("username")
+ password = request.param.get("keycloak_password") or env_vars.get("password")
+
+ # Validate that required fields are present
+ if not nebari_url:
+ raise ValueError(
+ "Error: 'nebari_url' is required but was not provided in "
+ "'request.param' or environment variables."
+ )
+ if not username:
+ raise ValueError(
+ "Error: 'username' is required but was not provided in "
+ "'request.param' or environment variables."
+ )
+ if not password:
+ raise ValueError(
+ "Error: 'password' is required but was not provided in "
+ "'request.param' or environment variables."
+ )
+
+ # Build the params dictionary once all required fields are validated
params = {
- "nebari_url": request.param.get("nebari_url") or env_vars["nebari_url"],
- "username": request.param.get("keycloak_username") or env_vars["username"],
- "password": request.param.get("keycloak_password") or env_vars["password"],
+ "nebari_url": nebari_url,
+ "username": username,
+ "password": password,
"auth": "password",
"video_dir": "videos/",
"headless": pytestconfig.getoption("--headed"),
"slow_mo": pytestconfig.getoption("--slowmo"),
}
+
if extra_params:
params.update(extra_params)
+
return params
diff --git a/tests/tests_deployment/test_jupyterhub_ssh.py b/tests/tests_deployment/test_jupyterhub_ssh.py
index d65bd4800f..f21247162b 100644
--- a/tests/tests_deployment/test_jupyterhub_ssh.py
+++ b/tests/tests_deployment/test_jupyterhub_ssh.py
@@ -1,5 +1,6 @@
import re
import string
+import time
import uuid
import paramiko
@@ -14,9 +15,14 @@
TIMEOUT_SECS = 300
-@pytest.fixture(scope="function")
+@pytest.fixture(scope="session")
def paramiko_object(jupyterhub_access_token):
- """Connects to JupyterHub ssh cluster from outside the cluster."""
+ """Connects to JupyterHub SSH cluster from outside the cluster.
+
+ Yields an SSH client together with its connection parameters so each
+ test can open (and close) its own shell channel, reusing the
+ session-scoped client across tests.
+ """
params = {
"hostname": constants.NEBARI_HOSTNAME,
"port": 8022,
@@ -24,54 +30,65 @@ def paramiko_object(jupyterhub_access_token):
"password": jupyterhub_access_token,
"allow_agent": constants.PARAMIKO_SSH_ALLOW_AGENT,
"look_for_keys": constants.PARAMIKO_SSH_LOOK_FOR_KEYS,
- "auth_timeout": 5 * 60,
}
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
- try:
- ssh_client.connect(**params)
- yield ssh_client
- finally:
- ssh_client.close()
-
-
-def run_command(command, stdin, stdout, stderr):
- delimiter = uuid.uuid4().hex
- stdin.write(f"echo {delimiter}start; {command}; echo {delimiter}end\n")
-
- output = []
-
- line = stdout.readline()
- while not re.match(f"^{delimiter}start$", line.strip()):
- line = stdout.readline()
- line = stdout.readline()
- if delimiter not in line:
- output.append(line)
-
- while not re.match(f"^{delimiter}end$", line.strip()):
- line = stdout.readline()
- if delimiter not in line:
- output.append(line)
-
- return "".join(output).strip()
-
-
-@pytest.mark.timeout(TIMEOUT_SECS)
-@pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning")
-@pytest.mark.filterwarnings("ignore::ResourceWarning")
-def test_simple_jupyterhub_ssh(paramiko_object):
- stdin, stdout, stderr = paramiko_object.exec_command("")
+ yield ssh_client, params
+
+ ssh_client.close()
+
+
+def invoke_shell(
+ client: paramiko.SSHClient, params: dict
+) -> paramiko.Channel:
+ client.connect(**params)
+ return client.invoke_shell()
+
+
+def extract_output(delimiter: str, output: str) -> str:
+ # Extract the command output between the start and end delimiters
+ match = re.search(rf"{delimiter}start\n(.*)\n{delimiter}end", output, re.DOTALL)
+ if match:
+ return match.group(1).strip()
+ else:
+ return output.strip()
+
+
+def run_command_list(
+ commands: list[str], channel: paramiko.Channel, wait_time: int = 0
+) -> dict[str, str]:
+ command_delimiters = {}
+ for command in commands:
+ delimiter = uuid.uuid4().hex
+ command_delimiters[command] = delimiter
+ b = channel.send(f"echo {delimiter}start; {command}; echo {delimiter}end\n")
+ if b == 0:
+ print(f"Command '{command}' failed to send")
+ # Wait for the output to be ready before reading
+ time.sleep(wait_time)
+ while not channel.recv_ready():
+ time.sleep(1)
+ print("Waiting for output")
+ output = ""
+ while channel.recv_ready():
+ output += channel.recv(65535).decode("utf-8")
+ outputs = {}
+ for command, delimiter in command_delimiters.items():
+ command_output = extract_output(delimiter, output)
+ outputs[command] = command_output
+ return outputs
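The delimiter scheme used by `run_command_list`/`extract_output` can be demonstrated in isolation; the stream content below is made up for illustration:

```python
# Each command's output is bracketed by a unique marker so it can be
# recovered from one interleaved shell stream.
import re
import uuid

delimiter = uuid.uuid4().hex
stream = f"$ echo hi\n{delimiter}start\nhi\n{delimiter}end\n$ "

match = re.search(rf"{delimiter}start\n(.*)\n{delimiter}end", stream, re.DOTALL)
print(match.group(1).strip())  # hi
```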
@pytest.mark.timeout(TIMEOUT_SECS)
@pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning")
@pytest.mark.filterwarnings("ignore::ResourceWarning")
def test_print_jupyterhub_ssh(paramiko_object):
- stdin, stdout, stderr = paramiko_object.exec_command("")
-
- # commands to run and just print the output
+ client, params = paramiko_object
+ channel = invoke_shell(client, params)
+ # Commands to run and just print the output
commands_print = [
"id",
"env",
@@ -80,52 +97,60 @@ def test_print_jupyterhub_ssh(paramiko_object):
"ls -la",
"umask",
]
-
- for command in commands_print:
- print(f'COMMAND: "{command}"')
- print(run_command(command, stdin, stdout, stderr))
+ outputs = run_command_list(commands_print, channel)
+ for command, output in outputs.items():
+ print(f"COMMAND: {command}")
+ print(f"OUTPUT: {output}")
+ channel.close()
@pytest.mark.timeout(TIMEOUT_SECS)
@pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning")
@pytest.mark.filterwarnings("ignore::ResourceWarning")
def test_exact_jupyterhub_ssh(paramiko_object):
- stdin, stdout, stderr = paramiko_object.exec_command("")
-
- # commands to run and exactly match output
- commands_exact = [
- ("id -u", "1000"),
- ("id -g", "100"),
- ("whoami", constants.KEYCLOAK_USERNAME),
- ("pwd", f"/home/{constants.KEYCLOAK_USERNAME}"),
- ("echo $HOME", f"/home/{constants.KEYCLOAK_USERNAME}"),
- ("conda activate default && echo $CONDA_PREFIX", "/opt/conda/envs/default"),
- (
- "hostname",
- f"jupyter-{escape_string(constants.KEYCLOAK_USERNAME, safe=set(string.ascii_lowercase + string.digits), escape_char='-').lower()}",
- ),
- ]
+ client, params = paramiko_object
+ channel = invoke_shell(client, params)
+ # Commands to run and exactly match output
+ commands_exact = {
+ "id -u": "1000",
+ "id -g": "100",
+ "whoami": constants.KEYCLOAK_USERNAME,
+ "pwd": f"/home/{constants.KEYCLOAK_USERNAME}",
+ "echo $HOME": f"/home/{constants.KEYCLOAK_USERNAME}",
+ "conda activate default && echo $CONDA_PREFIX": "/opt/conda/envs/default",
+ "hostname": f"jupyter-{escape_string(constants.KEYCLOAK_USERNAME, safe=set(string.ascii_lowercase + string.digits), escape_char='-').lower()}",
+ }
+ outputs = run_command_list(list(commands_exact.keys()), channel)
+ for command, output in outputs.items():
+ assert (
+ output == commands_exact[command]
+ ), f"Command '{command}' output '{output}' does not match expected '{commands_exact[command]}'"
- for command, output in commands_exact:
- assert output == run_command(command, stdin, stdout, stderr)
+ channel.close()
@pytest.mark.timeout(TIMEOUT_SECS)
@pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning")
@pytest.mark.filterwarnings("ignore::ResourceWarning")
def test_contains_jupyterhub_ssh(paramiko_object):
- stdin, stdout, stderr = paramiko_object.exec_command("")
-
- # commands to run and string need to be contained in output
- commands_contain = [
- ("ls -la", ".bashrc"),
- ("cat ~/.bashrc", "Managed by Nebari"),
- ("cat ~/.profile", "Managed by Nebari"),
- ("cat ~/.bash_logout", "Managed by Nebari"),
- # ensure we don't copy over extra files from /etc/skel in init container
- ("ls -la ~/..202*", "No such file or directory"),
- ("ls -la ~/..data", "No such file or directory"),
- ]
+ client, params = paramiko_object
+ channel = invoke_shell(client, params)
+
+ # Commands to run and check if the output contains specific strings
+ commands_contain = {
+ "ls -la": ".bashrc",
+ "cat ~/.bashrc": "Managed by Nebari",
+ "cat ~/.profile": "Managed by Nebari",
+ "cat ~/.bash_logout": "Managed by Nebari",
+ # Ensure we don't copy over extra files from /etc/skel in init container
+ "ls -la ~/..202*": "No such file or directory",
+ "ls -la ~/..data": "No such file or directory",
+ }
+
+ outputs = run_command_list(list(commands_contain.keys()), channel, 30)
+ for command, expected_output in commands_contain.items():
+ assert (
+ expected_output in outputs[command]
+ ), f"Command '{command}' output does not contain expected substring '{expected_output}'. Instead got '{outputs[command]}'"
- for command, output in commands_contain:
- assert output in run_command(command, stdin, stdout, stderr)
+ channel.close()
diff --git a/tests/tests_e2e/playwright/.env.tpl b/tests/tests_e2e/playwright/.env.tpl
index 399eff80c7..d1fad0a084 100644
--- a/tests/tests_e2e/playwright/.env.tpl
+++ b/tests/tests_e2e/playwright/.env.tpl
@@ -1,3 +1,3 @@
KEYCLOAK_USERNAME="USERNAME_OR_GOOGLE_EMAIL"
KEYCLOAK_PASSWORD="PASSWORD"
-NEBARI_FULL_URL="https://nebari.quansight.dev/"
+NEBARI_FULL_URL="https://localhost/"
diff --git a/tests/tests_e2e/playwright/Makefile b/tests/tests_e2e/playwright/Makefile
new file mode 100644
index 0000000000..429a8a4ac5
--- /dev/null
+++ b/tests/tests_e2e/playwright/Makefile
@@ -0,0 +1,10 @@
+.PHONY: setup
+
+setup:
+ @echo "Setting up correct pins for playwright user-journey tests"
+ pip install -r requirements.txt
+ @echo "Setting up playwright browser dependencies"
+ playwright install
+ @echo "Setting up .env file"
+ cp .env.tpl .env
+ @echo "Please fill in the .env file with the correct values"
diff --git a/tests/tests_e2e/playwright/README.md b/tests/tests_e2e/playwright/README.md
index c328681273..bb3592c9b2 100644
--- a/tests/tests_e2e/playwright/README.md
+++ b/tests/tests_e2e/playwright/README.md
@@ -33,48 +33,57 @@ tests
- `handlers.py`: Contains classes for handling the different levels of access to
services a User might encounter, such as Notebook, Conda-store and others.
-## Setup
-
-1. **Install Nebari with Development Requirements**
- Install Nebari including development requirements (which include Playwright):
- ```bash
- pip install -e ".[dev]"
- ```
+## Setup
-2. **Install Playwright**
+1. **Use the provided Makefile to install dependencies**
- Install Playwright:
+ Navigate to the Playwright tests directory and run the `setup` target:
```bash
- playwright install
+ cd tests_e2e/playwright
+ make setup
```
- *Note:* If you see the warning `BEWARE: your OS is not officially supported by Playwright; downloading fallback build`, it is not critical. Playwright should still work (see microsoft/playwright#15124).
+ This command will:
-3. **Create Environment Vars**
+ - Install the pinned dependencies from `requirements.txt`.
+ - Install Playwright and its required browser dependencies.
+ - Create a new `.env` file from `.env.tpl`.
- Fill in your execution space environment with the following values:
+2. **Fill in the `.env` file**
- - `KEYCLOAK_USERNAME`: Nebari username for username/password login or Google email address/Google sign-in.
- - `KEYCLOAK_PASSWORD`: Password associated with `KEYCLOAK_USERNAME`.
- - `NEBARI_FULL_URL`: Full URL path including scheme to the Nebari instance (e.g., "https://nebari.quansight.dev/").
+ Open the newly created `.env` file and fill in the following values:
- This user can be created with the following command (or use an existing non-root user):
+ - `KEYCLOAK_USERNAME`: Nebari username for username/password login (or Google email for Google sign-in).
+ - `KEYCLOAK_PASSWORD`: Password associated with the above username.
+ - `NEBARI_FULL_URL`: Full URL (including `https://`) to the Nebari instance (e.g., `https://nebari.quansight.dev/`).
+
+ If you need to create a user for testing, you can do so with:
```bash
nebari keycloak adduser --user --config
```
-## Running the Playwright Tests
+*Note:* If you see the warning:
+```
+BEWARE: your OS is not officially supported by Playwright; downloading fallback build
+```
+it is not critical. Playwright should still work despite the warning.
-Playwright tests are run inside of pytest using:
+## Running the Playwright Tests
+You can run the Playwright tests with `pytest`.
```bash
-pytest tests_e2e/playwright/test_playwright.py
+pytest tests_e2e/playwright/test_playwright.py --numprocesses auto
```
+> **Important**: Due to how pytest manages async code, Playwright's sync calls can conflict with pytest's default concurrency settings; using `--numprocesses auto` helps mitigate potential thread-blocking issues.
+
+
Videos of the test playback will be available in `$PWD/videos/`. To watch a live browser
preview of what is happening while the test runs, pass the `--headed` option to `pytest`. You
can also add the `--slowmo=$MILLI_SECONDS` option to introduce a delay before each
@@ -188,3 +197,17 @@ If your test suite presents a need for a more complex sequence of actions or specific
parsing around the contents present in each page, you can create
your own handler to execute the auxiliary actions while the test is running. Check
`handlers.py` for some examples of how that's being done.
+
+
+## Debugging Playwright tests
+
+Playwright supports a debug mode called
+[Inspector](https://playwright.dev/python/docs/debug#playwright-inspector) that can be
+used to inspect the browser and the page while the test is running. To enable this
+debug mode, set the `PWDEBUG=1` environment variable when running your tests.
+
+For example, to run a single test with debug mode enabled:
+```bash
+PWDEBUG=1 pytest -s test_playwright.py::test_notebook --numprocesses 1
+```
diff --git a/tests/tests_e2e/playwright/requirements.txt b/tests/tests_e2e/playwright/requirements.txt
new file mode 100644
index 0000000000..0e5093a62d
--- /dev/null
+++ b/tests/tests_e2e/playwright/requirements.txt
@@ -0,0 +1,4 @@
+playwright==1.50.0
+pytest==8.0.0
+pytest-playwright==0.7.0
+pytest-xdist==3.6.1
diff --git a/tests/tests_e2e/playwright/test_playwright.py b/tests/tests_e2e/playwright/test_playwright.py
index 9d04a4e027..0a835c8413 100644
--- a/tests/tests_e2e/playwright/test_playwright.py
+++ b/tests/tests_e2e/playwright/test_playwright.py
@@ -30,7 +30,8 @@ def test_login_logout(navigator):
)
@login_parameterized()
def test_navbar_services(navigator, services):
- navigator.page.goto(navigator.nebari_url + "hub/home")
+ home_url = navigator._nebari_url / "hub/home"
+ navigator.page.goto(home_url.human_repr())
navigator.page.wait_for_load_state("networkidle")
navbar_items = navigator.page.locator("#thenavbar").get_by_role("link")
navbar_items_names = [item.text_content() for item in navbar_items.all()]
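The change above builds the home URL from a URL object, where `/` joins path segments and `human_repr()` renders a display string (this matches the `yarl.URL` API; that `_nebari_url` is a `yarl.URL` is an assumption here). A stdlib sketch of the equivalent join, using the example base URL from the README:

```python
from urllib.parse import urljoin

# Hypothetical base URL, used only for illustration
base = "https://nebari.quansight.dev/"

# urljoin resolves the relative path against the base,
# much like url_object / "hub/home"
home_url = urljoin(base, "hub/home")
print(home_url)  # https://nebari.quansight.dev/hub/home
```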
diff --git a/tests/tests_integration/README.md b/tests/tests_integration/README.md
index 759a70a594..79c037a390 100644
--- a/tests/tests_integration/README.md
+++ b/tests/tests_integration/README.md
@@ -3,26 +3,6 @@
These tests are designed to test things on Nebari deployed
on cloud.
-
-## Digital Ocean
-
-```bash
-DIGITALOCEAN_TOKEN
-NEBARI_K8S_VERSION
-SPACES_ACCESS_KEY_ID
-SPACES_SECRET_ACCESS_KEY
-CLOUDFLARE_TOKEN
-```
-
-Assuming you're in the `tests_integration` directory, run:
-
-```bash
-pytest -vvv -s --cloud do
-```
-
-This will deploy on Nebari on Digital Ocean, run tests on the deployment
-and then teardown the cluster.
-
## Amazon Web Services
```bash
diff --git a/tests/tests_integration/conftest.py b/tests/tests_integration/conftest.py
index 4a64fd4274..b4b7a9af79 100644
--- a/tests/tests_integration/conftest.py
+++ b/tests/tests_integration/conftest.py
@@ -7,5 +7,5 @@
# argparse under-the-hood
def pytest_addoption(parser):
parser.addoption(
- "--cloud", action="store", help="Cloud to deploy on: aws/do/gcp/azure"
+ "--cloud", action="store", help="Cloud to deploy on: aws/gcp/azure"
)
diff --git a/tests/tests_integration/deployment_fixtures.py b/tests/tests_integration/deployment_fixtures.py
index f5752d4c24..4ece916667 100644
--- a/tests/tests_integration/deployment_fixtures.py
+++ b/tests/tests_integration/deployment_fixtures.py
@@ -16,10 +16,8 @@
from _nebari.destroy import destroy_configuration
from _nebari.provider.cloud.amazon_web_services import aws_cleanup
from _nebari.provider.cloud.azure_cloud import azure_cleanup
-from _nebari.provider.cloud.digital_ocean import digital_ocean_cleanup
from _nebari.provider.cloud.google_cloud import gcp_cleanup
from _nebari.render import render_template
-from _nebari.utils import set_do_environment
from nebari import schema
from tests.common.config_mod_utils import add_gpu_config, add_preemptible_node_group
from tests.tests_unit.utils import render_config_partial
@@ -98,10 +96,7 @@ def _cleanup_nebari(config: schema.Main):
cloud_provider = config.provider
- if cloud_provider == schema.ProviderEnum.do.value.lower():
- logger.info("Forcefully clean up Digital Ocean resources")
- digital_ocean_cleanup(config)
- elif cloud_provider == schema.ProviderEnum.aws.lower():
+ if cloud_provider == schema.ProviderEnum.aws.lower():
logger.info("Forcefully clean up AWS resources")
aws_cleanup(config)
elif cloud_provider == schema.ProviderEnum.gcp.lower():
@@ -119,9 +114,6 @@ def deploy(request):
cloud = request.config.getoption("--cloud")
# initialize
- if cloud == "do":
- set_do_environment()
-
deployment_dir = _get_or_create_deployment_directory(cloud)
config = render_config_partial(
project_name=deployment_dir.name,
diff --git a/tests/tests_integration/test_all_clouds.py b/tests/tests_integration/test_all_clouds.py
index 8a163fb7b6..6a9bf87dd4 100644
--- a/tests/tests_integration/test_all_clouds.py
+++ b/tests/tests_integration/test_all_clouds.py
@@ -2,7 +2,6 @@
def test_service_status(deploy):
- """Tests if deployment on DigitalOcean succeeds"""
service_urls = deploy["stages/07-kubernetes-services"]["service_urls"]["value"]
assert (
requests.get(service_urls["jupyterhub"]["health_url"], verify=False).status_code
diff --git a/tests/tests_unit/cli_validate/do.happy.yaml b/tests/tests_unit/cli_validate/do.happy.yaml
deleted file mode 100644
index 4ca0b2e62f..0000000000
--- a/tests/tests_unit/cli_validate/do.happy.yaml
+++ /dev/null
@@ -1,28 +0,0 @@
-provider: do
-namespace: dev
-nebari_version: 2023.7.2.dev23+g53d17964.d20230824
-project_name: test
-domain: test.example.com
-ci_cd:
- type: none
-terraform_state:
- type: local
-security:
- keycloak:
- initial_root_password: m1s25vc4k43dxbk5jaxubxcq39n4vmjq
- authentication:
- type: password
-theme:
- jupyterhub:
- hub_title: Nebari - test
- welcome: Welcome! Learn about Nebari's features and configurations in the
- documentation. If you have any questions or feedback, reach the team on
- Nebari's support
- forums.
- hub_subtitle: Your open source data science platform, hosted on Azure
-certificate:
- type: lets-encrypt
- acme_email: test@example.com
-digital_ocean:
- kubernetes_version: '1.20.2-do.0'
- region: nyc3
diff --git a/tests/tests_unit/conftest.py b/tests/tests_unit/conftest.py
index ce60e44799..54528cbd23 100644
--- a/tests/tests_unit/conftest.py
+++ b/tests/tests_unit/conftest.py
@@ -7,7 +7,6 @@
from _nebari.constants import (
AWS_DEFAULT_REGION,
AZURE_DEFAULT_REGION,
- DO_DEFAULT_REGION,
GCP_DEFAULT_REGION,
)
from _nebari.initialize import render_config
@@ -56,6 +55,18 @@ def _mock_return_value(return_value):
"m5.xlarge": "m5.xlarge",
"m5.2xlarge": "m5.2xlarge",
},
+ "_nebari.provider.cloud.amazon_web_services.kms_key_arns": {
+ "xxxxxxxx-east-zzzz": {
+ "Arn": "arn:aws:kms:us-east-1:100000:key/xxxxxxxx-east-zzzz",
+ "KeyUsage": "ENCRYPT_DECRYPT",
+ "KeySpec": "SYMMETRIC_DEFAULT",
+ },
+ "xxxxxxxx-west-zzzz": {
+ "Arn": "arn:aws:kms:us-west-2:100000:key/xxxxxxxx-west-zzzz",
+ "KeyUsage": "ENCRYPT_DECRYPT",
+ "KeySpec": "SYMMETRIC_DEFAULT",
+ },
+ },
# Azure
"_nebari.provider.cloud.azure_cloud.kubernetes_versions": [
"1.18",
@@ -63,22 +74,6 @@ def _mock_return_value(return_value):
"1.20",
],
"_nebari.provider.cloud.azure_cloud.check_credentials": None,
- # Digital Ocean
- "_nebari.provider.cloud.digital_ocean.kubernetes_versions": [
- "1.19.2-do.3",
- "1.20.2-do.0",
- "1.21.5-do.0",
- ],
- "_nebari.provider.cloud.digital_ocean.check_credentials": None,
- "_nebari.provider.cloud.digital_ocean.regions": [
- {"name": "New York 3", "slug": "nyc3"},
- ],
- "_nebari.provider.cloud.digital_ocean.instances": [
- {"name": "s-2vcpu-4gb", "slug": "s-2vcpu-4gb"},
- {"name": "g-2vcpu-8gb", "slug": "g-2vcpu-8gb"},
- {"name": "g-8vcpu-32gb", "slug": "g-8vcpu-32gb"},
- {"name": "g-4vcpu-16gb", "slug": "g-4vcpu-16gb"},
- ],
# Google Cloud
"_nebari.provider.cloud.google_cloud.kubernetes_versions": [
"1.18",
@@ -90,6 +85,11 @@ def _mock_return_value(return_value):
"us-central1",
"us-east1",
],
+ "_nebari.provider.cloud.google_cloud.instances": [
+ "e2-standard-4",
+ "e2-standard-8",
+ "e2-highmem-4",
+ ],
}
for attribute_path, return_value in MOCK_VALUES.items():
@@ -101,15 +101,6 @@ def _mock_return_value(return_value):
@pytest.fixture(
params=[
# project, namespace, domain, cloud_provider, region, ci_provider, auth_provider
- (
- "pytestdo",
- "dev",
- "do.nebari.dev",
- schema.ProviderEnum.do,
- DO_DEFAULT_REGION,
- CiEnum.github_actions,
- AuthenticationEnum.password,
- ),
(
"pytestaws",
"dev",
diff --git a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml
similarity index 85%
rename from tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml
rename to tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml
index 50a2b89af4..28877bf1bc 100644
--- a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml
+++ b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml
@@ -1,6 +1,6 @@
-project_name: do-pytest
-provider: do
-domain: do.nebari.dev
+project_name: aws-pytest
+provider: aws
+domain: aws.nebari.dev
certificate:
type: self-signed
security:
@@ -32,7 +32,7 @@ storage:
theme:
jupyterhub:
hub_title: Nebari - do-pytest
- hub_subtitle: Autoscaling Compute Environment on Digital Ocean
+ hub_subtitle: Autoscaling Compute Environment on AWS
welcome: Welcome to do.nebari.dev. It is maintained by Quansight
staff. The hub's configuration is stored in a github repository based on
https://github.com/Quansight/nebari/.
@@ -48,22 +48,31 @@ theme:
terraform_state:
type: remote
namespace: dev
-digital_ocean:
- region: nyc3
- kubernetes_version: 1.21.5-do.0
+amazon_web_services:
+ kubernetes_version: '1.20'
+ region: us-east-1
node_groups:
general:
- instance: s-2vcpu-4gb
+ instance: m5.2xlarge
min_nodes: 1
max_nodes: 1
+ gpu: false
+ single_subnet: false
+ permissions_boundary:
user:
- instance: g-2vcpu-8gb
- min_nodes: 1
+ instance: m5.xlarge
+ min_nodes: 0
max_nodes: 5
+ gpu: false
+ single_subnet: false
+ permissions_boundary:
worker:
- instance: g-2vcpu-8gb
- min_nodes: 1
+ instance: m5.xlarge
+ min_nodes: 0
max_nodes: 5
+ gpu: false
+ single_subnet: false
+ permissions_boundary:
profiles:
jupyterlab:
- display_name: Small Instance
diff --git a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml
similarity index 85%
rename from tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml
rename to tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml
index a3a06da6a2..874de58b61 100644
--- a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml
+++ b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml
@@ -1,6 +1,6 @@
-project_name: do-pytest
-provider: do
-domain: do.nebari.dev
+project_name: aws-pytest
+provider: aws
+domain: aws.nebari.dev
certificate:
type: self-signed
security:
@@ -29,7 +29,7 @@ storage:
theme:
jupyterhub:
hub_title: Nebari - do-pytest
- hub_subtitle: Autoscaling Compute Environment on Digital Ocean
+ hub_subtitle: Autoscaling Compute Environment on AWS
welcome: Welcome to do.nebari.dev. It is maintained by Quansight
staff. The hub's configuration is stored in a github repository based on
https://github.com/Quansight/nebari/.
@@ -45,22 +45,31 @@ theme:
terraform_state:
type: remote
namespace: dev
-digital_ocean:
- region: nyc3
- kubernetes_version: 1.21.5-do.0
+amazon_web_services:
+ kubernetes_version: '1.20'
+ region: us-east-1
node_groups:
general:
- instance: s-2vcpu-4gb
+ instance: m5.2xlarge
min_nodes: 1
max_nodes: 1
+ gpu: false
+ single_subnet: false
+ permissions_boundary:
user:
- instance: g-2vcpu-8gb
- min_nodes: 1
+ instance: m5.xlarge
+ min_nodes: 0
max_nodes: 5
+ gpu: false
+ single_subnet: false
+ permissions_boundary:
worker:
- instance: g-2vcpu-8gb
- min_nodes: 1
+ instance: m5.xlarge
+ min_nodes: 0
max_nodes: 5
+ gpu: false
+ single_subnet: false
+ permissions_boundary:
profiles:
jupyterlab:
- display_name: Small Instance
diff --git a/tests/tests_unit/test_cli_init.py b/tests/tests_unit/test_cli_init.py
index 9afab5ddc5..03b22557ae 100644
--- a/tests/tests_unit/test_cli_init.py
+++ b/tests/tests_unit/test_cli_init.py
@@ -17,13 +17,11 @@
"aws": ["1.20"],
"azure": ["1.20"],
"gcp": ["1.20"],
- "do": ["1.21.5-do.0"],
}
MOCK_CLOUD_REGIONS = {
"aws": ["us-east-1"],
"azure": [AZURE_DEFAULT_REGION],
"gcp": ["us-central1"],
- "do": ["nyc3"],
}
@@ -70,7 +68,7 @@ def generate_test_data_test_cli_init_happy_path():
"""
test_data = []
- for provider in ["local", "aws", "azure", "gcp", "do", "existing"]:
+ for provider in ["local", "aws", "azure", "gcp", "existing"]:
for region in get_cloud_regions(provider):
for project_name in ["testproject"]:
for domain_name in [f"{project_name}.example.com"]:
@@ -265,9 +263,6 @@ def get_provider_section_header(provider: str):
return "google_cloud_platform"
if provider == "azure":
return "azure"
- if provider == "do":
- return "digital_ocean"
-
return ""
@@ -278,8 +273,6 @@ def get_cloud_regions(provider: str):
return MOCK_CLOUD_REGIONS["gcp"]
if provider == "azure":
return MOCK_CLOUD_REGIONS["azure"]
- if provider == "do":
- return MOCK_CLOUD_REGIONS["do"]
return ""
@@ -291,7 +284,4 @@ def get_kubernetes_versions(provider: str):
return MOCK_KUBERNETES_VERSIONS["gcp"]
if provider == "azure":
return MOCK_KUBERNETES_VERSIONS["azure"]
- if provider == "do":
- return MOCK_KUBERNETES_VERSIONS["do"]
-
return ""
diff --git a/tests/tests_unit/test_cli_plugin.py b/tests/tests_unit/test_cli_plugin.py
new file mode 100644
index 0000000000..2f6257050e
--- /dev/null
+++ b/tests/tests_unit/test_cli_plugin.py
@@ -0,0 +1,64 @@
+from typing import List
+from unittest.mock import Mock, patch
+
+import pytest
+from typer.testing import CliRunner
+
+from _nebari.cli import create_cli
+
+runner = CliRunner()
+
+
+@pytest.mark.parametrize(
+ "args, exit_code, content",
+ [
+ # --help
+ ([], 0, ["Usage:"]),
+ (["--help"], 0, ["Usage:"]),
+ (["-h"], 0, ["Usage:"]),
+ (["list", "--help"], 0, ["Usage:"]),
+ (["list", "-h"], 0, ["Usage:"]),
+ (["list"], 0, ["Plugins"]),
+ ],
+)
+def test_cli_plugin_stdout(args: List[str], exit_code: int, content: List[str]):
+ app = create_cli()
+ result = runner.invoke(app, ["plugin"] + args)
+ assert result.exit_code == exit_code
+ for c in content:
+ assert c in result.stdout
+
+
+def mock_get_plugins():
+ mytestexternalplugin = Mock()
+ mytestexternalplugin.__name__ = "mytestexternalplugin"
+
+ otherplugin = Mock()
+ otherplugin.__name__ = "otherplugin"
+
+ return [mytestexternalplugin, otherplugin]
+
+
+def mock_version(pkg):
+ pkg_version_map = {
+ "mytestexternalplugin": "0.4.4",
+ "otherplugin": "1.1.1",
+ }
+ return pkg_version_map.get(pkg)
+
+
+@patch(
+ "nebari.plugins.NebariPluginManager.plugin_manager.get_plugins", mock_get_plugins
+)
+@patch("_nebari.subcommands.plugin.version", mock_version)
+def test_cli_plugin_list_external_plugins():
+ app = create_cli()
+ result = runner.invoke(app, ["plugin", "list"])
+ assert result.exit_code == 0
+ expected_output = [
+ "Plugins",
+ "mytestexternalplugin │ 0.4.4",
+ "otherplugin │ 1.1.1",
+ ]
+ for c in expected_output:
+ assert c in result.stdout
diff --git a/tests/tests_unit/test_cli_upgrade.py b/tests/tests_unit/test_cli_upgrade.py
index aa79838bee..364b51b23b 100644
--- a/tests/tests_unit/test_cli_upgrade.py
+++ b/tests/tests_unit/test_cli_upgrade.py
@@ -5,6 +5,7 @@
import pytest
import yaml
+from rich.prompt import Confirm, Prompt
from typer.testing import CliRunner
import _nebari.upgrade
@@ -18,13 +19,11 @@
"aws": ["1.20"],
"azure": ["1.20"],
"gcp": ["1.20"],
- "do": ["1.21.5-do.0"],
}
MOCK_CLOUD_REGIONS = {
"aws": ["us-east-1"],
"azure": [AZURE_DEFAULT_REGION],
"gcp": ["us-central1"],
- "do": ["nyc3"],
}
@@ -106,7 +105,7 @@ def test_cli_upgrade_2023_4_1_to_2023_5_1(monkeypatch: pytest.MonkeyPatch):
@pytest.mark.parametrize(
"provider",
- ["aws", "azure", "do", "gcp"],
+ ["aws", "azure", "gcp"],
)
def test_cli_upgrade_2023_5_1_to_2023_7_1(
monkeypatch: pytest.MonkeyPatch, provider: str
@@ -434,9 +433,6 @@ def test_cli_upgrade_to_2023_10_1_cdsdashboard_removed(monkeypatch: pytest.Monke
("azure", "compatible"),
("azure", "incompatible"),
("azure", "invalid"),
- ("do", "compatible"),
- ("do", "incompatible"),
- ("do", "invalid"),
("gcp", "compatible"),
("gcp", "incompatible"),
("gcp", "invalid"),
@@ -452,14 +448,27 @@ def test_cli_upgrade_to_2023_10_1_kubernetes_validations(
kubernetes_configs = {
"aws": {"incompatible": "1.19", "compatible": "1.26", "invalid": "badname"},
"azure": {"incompatible": "1.23", "compatible": "1.26", "invalid": "badname"},
- "do": {
- "incompatible": "1.19.2-do.3",
- "compatible": "1.26.0-do.custom",
- "invalid": "badname",
- },
"gcp": {"incompatible": "1.23", "compatible": "1.26", "invalid": "badname"},
}
+ def mock_input_ask(prompt, *args, **kwargs):
+ from _nebari.upgrade import TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION
+
+ # For more about structural pattern matching, see:
+ # https://peps.python.org/pep-0636/
+ match prompt:
+ case str(s) if s == TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION:
+ return kwargs.get("attempt_fixes", False)
+ case _:
+ return kwargs.get("default", False)
+
+ monkeypatch.setattr(Confirm, "ask", mock_input_ask)
+ monkeypatch.setattr(
+ Prompt,
+ "ask",
+ lambda x, *args, **kwargs: "",
+ )
+
with tempfile.TemporaryDirectory() as tmp:
tmp_file = Path(tmp).resolve() / "nebari-config.yaml"
assert tmp_file.exists() is False
diff --git a/tests/tests_unit/test_cli_validate.py b/tests/tests_unit/test_cli_validate.py
index faf2efa8a1..b12d3cfea0 100644
--- a/tests/tests_unit/test_cli_validate.py
+++ b/tests/tests_unit/test_cli_validate.py
@@ -221,7 +221,6 @@ def test_cli_validate_error_from_env(
}
},
),
- ("do", {"digital_ocean": {"kubernetes_version": "1.20", "region": "nyc3"}}),
pytest.param(
"local",
{"security": {"authentication": {"type": "Auth0"}}},
@@ -248,7 +247,6 @@ def test_cli_validate_error_missing_cloud_env(
"ARM_TENANT_ID",
"ARM_CLIENT_ID",
"ARM_CLIENT_SECRET",
- "DIGITALOCEAN_TOKEN",
"SPACES_ACCESS_KEY_ID",
"SPACES_SECRET_ACCESS_KEY",
"AUTH0_CLIENT_ID",
diff --git a/tests/tests_unit/test_config_set.py b/tests/tests_unit/test_config_set.py
new file mode 100644
index 0000000000..81f5a8a11c
--- /dev/null
+++ b/tests/tests_unit/test_config_set.py
@@ -0,0 +1,73 @@
+from unittest.mock import patch
+
+import pytest
+from packaging.specifiers import SpecifierSet
+
+from _nebari.config_set import ConfigSetMetadata, read_config_set
+
+test_version = "2024.12.2"
+
+
+@pytest.mark.parametrize(
+ "version_input,test_version,should_pass",
+ [
+ # Standard version tests
+ (">=2024.12.0,<2025.0.0", "2024.12.2", True),
+ (SpecifierSet(">=2024.12.0,<2025.0.0"), "2024.12.2", True),
+ # Pre-release version requirement tests
+ (">=2024.12.0rc1,<2025.0.0", "2024.12.0rc1", True),
+ (SpecifierSet(">=2024.12.0rc1"), "2024.12.0rc2", True),
+ # Pre-release test version against standard requirement
+ (">=2024.12.0,<2025.0.0", "2024.12.1rc1", True),
+ (SpecifierSet(">=2024.12.0,<2025.0.0"), "2024.12.1rc1", True),
+ # Failing cases
+ (">=2025.0.0", "2024.12.2rc1", False),
+ (SpecifierSet(">=2025.0.0rc1"), "2024.12.2", False),
+ ],
+)
+def test_version_requirement(version_input, test_version, should_pass):
+ metadata = ConfigSetMetadata(name="test-config", nebari_version=version_input)
+
+ if should_pass:
+ metadata.check_version(test_version)
+ else:
+ with pytest.raises(ValueError) as exc_info:
+ metadata.check_version(test_version)
+ assert "Nebari version" in str(exc_info.value)
+
+
+def test_read_config_set_valid(tmp_path):
+ config_set_yaml = """
+ metadata:
+ name: test-config
+ nebari_version: ">=2024.12.0"
+ config:
+ key: value
+ """
+ config_set_filepath = tmp_path / "config_set.yaml"
+ config_set_filepath.write_text(config_set_yaml)
+ with patch("_nebari.config_set.__version__", "2024.12.2"):
+ config_set = read_config_set(str(config_set_filepath))
+ assert config_set.metadata.name == "test-config"
+ assert config_set.config["key"] == "value"
+
+
+def test_read_config_set_invalid_version(tmp_path):
+ config_set_yaml = """
+ metadata:
+ name: test-config
+ nebari_version: ">=2025.0.0"
+ config:
+ key: value
+ """
+ config_set_filepath = tmp_path / "config_set.yaml"
+ config_set_filepath.write_text(config_set_yaml)
+
+ with patch("_nebari.config_set.__version__", "2024.12.2"):
+ with pytest.raises(ValueError) as exc_info:
+ read_config_set(str(config_set_filepath))
+ assert "Nebari version" in str(exc_info.value)
+
+
+if __name__ == "__main__":
+ pytest.main()
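The version gate these tests exercise boils down to `SpecifierSet` containment from `packaging`; a minimal sketch of the check (independent of `ConfigSetMetadata`, whose internals are assumed to wrap something similar):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=2024.12.0,<2025.0.0")

# A normal release inside the range passes
assert Version("2024.12.2") in spec

# Pre-releases are excluded by default; opt in explicitly
assert not spec.contains(Version("2024.12.1rc1"))
assert spec.contains(Version("2024.12.1rc1"), prereleases=True)

# Outside the range fails either way
assert not spec.contains(Version("2025.1.0"), prereleases=True)
```

The parametrized cases above where a pre-release test version passes a standard requirement suggest the real check opts into pre-releases in a similar way.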
diff --git a/tests/tests_unit/test_dependencies.py b/tests/tests_unit/test_dependencies.py
deleted file mode 100644
index bcde584e08..0000000000
--- a/tests/tests_unit/test_dependencies.py
+++ /dev/null
@@ -1,18 +0,0 @@
-import urllib
-
-from _nebari.provider import terraform
-
-
-def test_terraform_open_source_license():
- tf_version = terraform.version()
- license_url = (
- f"https://raw.githubusercontent.com/hashicorp/terraform/v{tf_version}/LICENSE"
- )
-
- request = urllib.request.Request(license_url)
- with urllib.request.urlopen(request) as response:
- assert 200 == response.getcode()
-
- license = str(response.read())
- assert "Mozilla Public License" in license
- assert "Business Source License" not in license
diff --git a/tests/tests_unit/test_links.py b/tests/tests_unit/test_links.py
index a393391ce9..6e8529149e 100644
--- a/tests/tests_unit/test_links.py
+++ b/tests/tests_unit/test_links.py
@@ -1,10 +1,9 @@
import pytest
import requests
-from _nebari.constants import AWS_ENV_DOCS, AZURE_ENV_DOCS, DO_ENV_DOCS, GCP_ENV_DOCS
+from _nebari.constants import AWS_ENV_DOCS, AZURE_ENV_DOCS, GCP_ENV_DOCS
LINKS_TO_TEST = [
- DO_ENV_DOCS,
AWS_ENV_DOCS,
GCP_ENV_DOCS,
AZURE_ENV_DOCS,
diff --git a/tests/tests_unit/test_schema.py b/tests/tests_unit/test_schema.py
index fa6a0c747c..e445ba37da 100644
--- a/tests/tests_unit/test_schema.py
+++ b/tests/tests_unit/test_schema.py
@@ -62,12 +62,11 @@ def test_render_schema(nebari_config):
"fake",
pytest.raises(
ValueError,
- match="'fake' is not a valid enumeration member; permitted: local, existing, do, aws, gcp, azure",
+ match="'fake' is not a valid enumeration member; permitted: local, existing, aws, gcp, azure",
),
),
("aws", nullcontext()),
("gcp", nullcontext()),
- ("do", nullcontext()),
("azure", nullcontext()),
("existing", nullcontext()),
("local", nullcontext()),
@@ -102,11 +101,6 @@ def test_provider_validation(config_schema, provider, exception):
"kubernetes_version": "1.18",
},
),
- (
- "do",
- "digital_ocean",
- {"region": "nyc3", "kubernetes_version": "1.19.2-do.3"},
- ),
(
"azure",
"azure",
@@ -167,3 +161,13 @@ def test_set_provider(config_schema, provider):
result_config_dict = config.model_dump()
assert provider in result_config_dict
assert result_config_dict[provider]["kube_context"] == "some_context"
+
+
+def test_provider_config_mismatch_warning(config_schema):
+ config_dict = {
+ "project_name": "test",
+ "provider": "local",
+ "existing": {"kube_context": "some_context"}, # <-- Doesn't match the provider
+ }
+ with pytest.warns(UserWarning, match="configuration defined for other providers"):
+ config_schema(**config_dict)
diff --git a/tests/tests_unit/test_stages.py b/tests/tests_unit/test_stages.py
index c716d93030..c15aa6d9fc 100644
--- a/tests/tests_unit/test_stages.py
+++ b/tests/tests_unit/test_stages.py
@@ -53,6 +53,7 @@ def test_check_immutable_fields_immutable_change(
mock_model_fields, mock_get_state, terraform_state_stage, mock_config
):
old_config = mock_config.model_copy(deep=True)
+ old_config.local = None
old_config.provider = schema.ProviderEnum.gcp
mock_get_state.return_value = old_config.model_dump()
diff --git a/tests/tests_unit/test_upgrade.py b/tests/tests_unit/test_upgrade.py
index f6e3f80348..8f4a62630b 100644
--- a/tests/tests_unit/test_upgrade.py
+++ b/tests/tests_unit/test_upgrade.py
@@ -2,7 +2,7 @@
from pathlib import Path
import pytest
-from rich.prompt import Prompt
+from rich.prompt import Confirm, Prompt
from _nebari.upgrade import do_upgrade
from _nebari.version import __version__, rounded_ver_parse
@@ -21,21 +21,51 @@ def qhub_users_import_json():
)
+class MockKeycloakAdmin:
+ @staticmethod
+ def get_client_id(*args, **kwargs):
+ return "test-client"
+
+ @staticmethod
+ def create_client_role(*args, **kwargs):
+ return "test-client-role"
+
+ @staticmethod
+ def get_client_role_id(*args, **kwargs):
+ return "test-client-role-id"
+
+ @staticmethod
+ def get_role_by_id(*args, **kwargs):
+ return bytearray("test-role-id", "utf-8")
+
+ @staticmethod
+ def get_groups(*args, **kwargs):
+ return []
+
+ @staticmethod
+ def get_client_role_groups(*args, **kwargs):
+ return []
+
+ @staticmethod
+ def assign_group_client_roles(*args, **kwargs):
+ pass
+
+
@pytest.mark.parametrize(
"old_qhub_config_path_str,attempt_fixes,expect_upgrade_error",
[
(
- "./qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml",
+ "./qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml",
False,
False,
),
(
- "./qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml",
+ "./qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml",
False,
True,
),
(
- "./qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml",
+ "./qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml",
True,
False,
),
@@ -49,34 +79,100 @@ def test_upgrade_4_0(
qhub_users_import_json,
monkeypatch,
):
-
def mock_input(prompt, **kwargs):
+ from _nebari.upgrade import TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION
+
# Mock different upgrade steps prompt answers
- if (
- prompt
- == "Have you deleted the Argo Workflows CRDs and service accounts? [y/N] "
- ):
- return "y"
+ if prompt == "Have you deleted the Argo Workflows CRDs and service accounts?":
+ return True
elif (
prompt
== "\nDo you want Nebari to update the kube-prometheus-stack CRDs and delete the prometheus-node-exporter for you? If not, you'll have to do it manually."
):
- return "N"
+ return False
elif (
prompt
== "Have you backed up your custom dashboards (if necessary), deleted the prometheus-node-exporter daemonset and updated the kube-prometheus-stack CRDs?"
):
- return "y"
+ return True
elif (
prompt
- == "[bold]Would you like Nebari to assign the corresponding role to all of your current groups automatically?[/bold]"
+ == "[bold]Would you like Nebari to assign the corresponding role/scopes to all of your current groups automatically?[/bold]"
):
- return "N"
+ return False
+ elif prompt == TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION:
+ return attempt_fixes
# All other prompts will be answered with "y"
else:
- return "y"
+ return True
+
+ monkeypatch.setattr(Confirm, "ask", mock_input)
+ monkeypatch.setattr(Prompt, "ask", lambda x: "")
+
+ from kubernetes import config as _kube_config
+ from kubernetes.client import ApiextensionsV1Api as _ApiextensionsV1Api
+ from kubernetes.client import AppsV1Api as _AppsV1Api
+ from kubernetes.client import CoreV1Api as _CoreV1Api
+ from kubernetes.client import V1Status as _V1Status
+
+ def monkey_patch_delete_crd(*args, **kwargs):
+ return _V1Status(code=200)
- monkeypatch.setattr(Prompt, "ask", mock_input)
+ def monkey_patch_delete_namespaced_sa(*args, **kwargs):
+ return _V1Status(code=200)
+
+ def monkey_patch_list_namespaced_daemon_set(*args, **kwargs):
+ class MonkeypatchApiResponse:
+ items = False
+
+ return MonkeypatchApiResponse
+
+ monkeypatch.setattr(
+ _kube_config,
+ "load_kube_config",
+ lambda *args, **kwargs: None,
+ )
+ monkeypatch.setattr(
+ _kube_config,
+ "list_kube_config_contexts",
+ lambda *args, **kwargs: [None, {"context": {"cluster": "test"}}],
+ )
+ monkeypatch.setattr(
+ _ApiextensionsV1Api,
+ "delete_custom_resource_definition",
+ monkey_patch_delete_crd,
+ )
+ monkeypatch.setattr(
+ _CoreV1Api,
+ "delete_namespaced_service_account",
+ monkey_patch_delete_namespaced_sa,
+ )
+ monkeypatch.setattr(
+ _ApiextensionsV1Api,
+ "read_custom_resource_definition",
+ lambda *args, **kwargs: True,
+ )
+ monkeypatch.setattr(
+ _ApiextensionsV1Api,
+ "patch_custom_resource_definition",
+ lambda *args, **kwargs: True,
+ )
+ monkeypatch.setattr(
+ _AppsV1Api,
+ "list_namespaced_daemon_set",
+ monkey_patch_list_namespaced_daemon_set,
+ )
+
+ from _nebari import upgrade as _upgrade
+
+ def monkey_patch_get_keycloak_admin(*args, **kwargs):
+ return MockKeycloakAdmin()
+
+ monkeypatch.setattr(
+ _upgrade,
+ "get_keycloak_admin",
+ monkey_patch_get_keycloak_admin,
+ )
old_qhub_config_path = Path(__file__).parent / old_qhub_config_path_str
diff --git a/tests/tests_unit/test_utils.py b/tests/tests_unit/test_utils.py
index 678cd1f230..88b911ff60 100644
--- a/tests/tests_unit/test_utils.py
+++ b/tests/tests_unit/test_utils.py
@@ -1,6 +1,6 @@
import pytest
-from _nebari.utils import JsonDiff, JsonDiffEnum, byte_unit_conversion
+from _nebari.utils import JsonDiff, JsonDiffEnum, byte_unit_conversion, deep_merge
@pytest.mark.parametrize(
@@ -64,3 +64,75 @@ def test_JsonDiff_modified():
diff = JsonDiff(obj1, obj2)
modifieds = diff.modified()
assert sorted(modifieds) == sorted([(["b", "!"], 2, 3), (["+"], 4, 5)])
+
+
+def test_deep_merge_order_preservation_dict():
+ value_1 = {
+ "a": [1, 2],
+ "b": {"c": 1, "z": [5, 6]},
+ "e": {"f": {"g": {}}},
+ "m": 1,
+ }
+
+ value_2 = {
+ "a": [3, 4],
+ "b": {"d": 2, "z": [7]},
+ "e": {"f": {"h": 1}},
+ "m": [1],
+ }
+
+ expected_result = {
+ "a": [1, 2, 3, 4],
+ "b": {"c": 1, "z": [5, 6, 7], "d": 2},
+ "e": {"f": {"g": {}, "h": 1}},
+ "m": 1,
+ }
+
+ result = deep_merge(value_1, value_2)
+ assert result == expected_result
+ assert list(result.keys()) == list(expected_result.keys())
+ assert list(result["b"].keys()) == list(expected_result["b"].keys())
+ assert list(result["e"]["f"].keys()) == list(expected_result["e"]["f"].keys())
+
+
+def test_deep_merge_order_preservation_list():
+ value_1 = {
+ "a": [1, 2],
+ "b": {"c": 1, "z": [5, 6]},
+ }
+
+ value_2 = {
+ "a": [3, 4],
+ "b": {"d": 2, "z": [7]},
+ }
+
+ expected_result = {
+ "a": [1, 2, 3, 4],
+ "b": {"c": 1, "z": [5, 6, 7], "d": 2},
+ }
+
+ result = deep_merge(value_1, value_2)
+ assert result == expected_result
+ assert result["a"] == expected_result["a"]
+ assert result["b"]["z"] == expected_result["b"]["z"]
+
+
+def test_deep_merge_single_dict():
+ value_1 = {
+ "a": [1, 2],
+ "b": {"c": 1, "z": [5, 6]},
+ }
+
+ expected_result = value_1
+
+ result = deep_merge(value_1)
+ assert result == expected_result
+ assert list(result.keys()) == list(expected_result.keys())
+ assert list(result["b"].keys()) == list(expected_result["b"].keys())
+
+
+def test_deep_merge_empty():
+ expected_result = {}
+
+ result = deep_merge()
+ assert result == expected_result
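Taken together, the tests above pin down the contract of `deep_merge`: nested dicts merge recursively with the left argument's key order preserved, lists concatenate, a single argument is returned unchanged, no arguments yield `{}`, and on a type conflict (such as `1` versus `[1]`) the left value wins. A minimal sketch that satisfies these assertions (not the actual `_nebari.utils` implementation, which may differ in details) could look like:

```python
from functools import reduce


def deep_merge(*args):
    """Deep-merge any number of values, left to right."""
    if len(args) == 0:
        return {}
    if len(args) == 1:
        return args[0]
    if len(args) > 2:
        return reduce(deep_merge, args)

    a, b = args
    if isinstance(a, dict) and isinstance(b, dict):
        merged = {}
        for key in a:  # left-side keys first, so insertion order is preserved
            merged[key] = deep_merge(a[key], b[key]) if key in b else a[key]
        for key in b:  # then right-only keys, in their own order
            if key not in a:
                merged[key] = b[key]
        return merged
    if isinstance(a, list) and isinstance(b, list):
        return a + b
    return a  # conflicting leaf values: the first argument wins
```

The two-pass key loop is what preserves insertion order: left-hand keys are emitted first, right-only keys afterwards.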
diff --git a/tests/tests_unit/utils.py b/tests/tests_unit/utils.py
index 82dffdcd3c..eddc66f52f 100644
--- a/tests/tests_unit/utils.py
+++ b/tests/tests_unit/utils.py
@@ -15,7 +15,6 @@
)
INIT_INPUTS = [
# project, namespace, domain, cloud_provider, ci_provider, auth_provider
- ("pytestdo", "dev", "do.nebari.dev", "do", "github-actions", "github"),
("pytestaws", "dev", "aws.nebari.dev", "aws", "github-actions", "github"),
("pytestgcp", "dev", "gcp.nebari.dev", "gcp", "github-actions", "github"),
("pytestazure", "dev", "azure.nebari.dev", "azure", "github-actions", "github"),
diff --git a/tests/utils.py b/tests/utils.py
index 82dffdcd3c..eddc66f52f 100644
--- a/tests/utils.py
+++ b/tests/utils.py
@@ -15,7 +15,6 @@
)
INIT_INPUTS = [
# project, namespace, domain, cloud_provider, ci_provider, auth_provider
- ("pytestdo", "dev", "do.nebari.dev", "do", "github-actions", "github"),
("pytestaws", "dev", "aws.nebari.dev", "aws", "github-actions", "github"),
("pytestgcp", "dev", "gcp.nebari.dev", "gcp", "github-actions", "github"),
("pytestazure", "dev", "azure.nebari.dev", "azure", "github-actions", "github"),
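These `INIT_INPUTS` rows feed a parametrised init test; with the Digital Ocean row gone, three cloud targets remain. A stdlib-only sketch of how such tuples can expand into `nebari init` invocations (the exact flag names here are illustrative assumptions, not a guaranteed match for the real CLI):

```python
# project, namespace, domain, cloud_provider, ci_provider, auth_provider
INIT_INPUTS = [
    ("pytestaws", "dev", "aws.nebari.dev", "aws", "github-actions", "github"),
    ("pytestgcp", "dev", "gcp.nebari.dev", "gcp", "github-actions", "github"),
    ("pytestazure", "dev", "azure.nebari.dev", "azure", "github-actions", "github"),
]


def build_init_command(project, namespace, domain, cloud, ci, auth):
    """Expand one parameter row into a hypothetical CLI invocation."""
    return [
        "nebari", "init", cloud,
        "--project", project,
        "--namespace", namespace,
        "--domain", domain,
        "--ci-provider", ci,
        "--auth-provider", auth,
    ]


commands = [build_init_command(*row) for row in INIT_INPUTS]
```

In the real suite, `pytest.mark.parametrize` iterates over the same rows, producing one test case per remaining cloud provider.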
Import nebari plugin
- Default:
@@ -58,7 +59,7 @@ nebari
-
---exclude-stage <excluded_stages>¶
+--exclude-stage <excluded_stages>¶
Exclude nebari stage(s) by name or regex
- Default:
@@ -68,7 +69,7 @@ nebari
-
---exclude-default-stages¶
+--exclude-default-stages¶
Exclude default nebari included stages
-deploy¶
+deploy¶
Deploy the Nebari cluster from your [purple]nebari-config.yaml[/purple] file.
nebari deploy [OPTIONS]
@@ -86,35 +87,30 @@ deploy
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration yaml file path
-
--o, --output <output_directory>¶
+-o, --output <output_directory>¶
output directory
- Default:
-./
+'./'
-
---dns-provider <dns_provider>¶
+--dns-provider <dns_provider>¶
dns provider to use for registering domain name mapping ⚠️ moved to dns.provider in nebari-config.yaml
-
-
- Default:
-False
-
-
-
---dns-auto-provision¶
+--dns-auto-provision¶
Attempt to automatically provision DNS, currently only available for cloudflare ⚠️ moved to dns.auto_provision in nebari-config.yaml
- Default:
@@ -125,7 +121,7 @@ deploy
-
---disable-prompt¶
+--disable-prompt¶
Disable human intervention
- Default:
@@ -136,7 +132,7 @@ deploy
-
---disable-render¶
+--disable-render¶
Disable auto-rendering in deploy stage
- Default:
@@ -147,7 +143,7 @@ deploy
-
---disable-checks¶
+--disable-checks¶
Disable the checks performed after each stage
- Default:
@@ -158,7 +154,7 @@ deploy
-
---skip-remote-state-provision¶
+--skip-remote-state-provision¶
Skip terraform state deployment which is often required in CI once the terraform remote state bootstrapping phase is complete
- Default:
@@ -169,7 +165,7 @@ deploy
-destroy¶
+destroy¶
Destroy the Nebari cluster from your [purple]nebari-config.yaml[/purple] file.
nebari destroy [OPTIONS]
@@ -177,24 +173,24 @@ destroy
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
--o, --output <output_directory>¶
+-o, --output <output_directory>¶
output directory
- Default:
-./
+'./'
-
---disable-render¶
+--disable-render¶
Disable auto-rendering before destroy
- Default:
@@ -205,7 +201,7 @@ destroy
-
---disable-prompt¶
+--disable-prompt¶
Destroy entire Nebari cluster without confirmation request. Suggested for CI use.
- Default:
@@ -216,13 +212,13 @@ init
-
---guided-init, --no-guided-init¶
+--guided-init, --no-guided-init¶
[bold green]START HERE[/bold green] - this will guide you step-by-step to generate your [purple]nebari-config.yaml[/purple]. It is an [i]alternative[/i] to passing the options listed below.
- Default:
@@ -278,42 +275,48 @@ init
-
--p, --project-name, --project <project_name>¶
+-p, --project-name, --project <project_name>¶
Required
-
+
+--region <region>¶
+
The region you want to deploy your Nebari cluster to (if deploying to the cloud)
+
---auth-provider <auth_provider>¶
-options: [‘password’, ‘GitHub’, ‘Auth0’, ‘custom’]
+--auth-provider <auth_provider>¶
+options: [‘password’, ‘GitHub’, ‘Auth0’]
- Default:
-<AuthenticationEnum.password: 'password'>
+AuthenticationEnum.password
- Options:
-password | GitHub | Auth0 | custom
+password | GitHub | Auth0
-
---auth-auto-provision, --no-auth-auto-provision¶
+--auth-auto-provision, --no-auth-auto-provision¶
- Default:
False
@@ -323,19 +326,15 @@ init
-
---repository <repository>¶
-options: [‘github.com’, ‘gitlab.com’]
-
- Options:
-github.com | gitlab.com
-
Github repository URL to be initialized with –repository-auto-provision
-
---repository-auto-provision, --no-repository-auto-provision¶
-
+--repository-auto-provision, --no-repository-auto-provision¶
+
Initialize the GitHub repository provided by –repository (GitHub credentials required)
- Default:
False
@@ -344,11 +343,11 @@ init
-
---ci-provider <ci_provider>¶
+--ci-provider <ci_provider>¶
options: [‘github-actions’, ‘gitlab-ci’, ‘none’]
- Default:
-<CiEnum.none: 'none'>
+CiEnum.none
- Options:
github-actions | gitlab-ci | none
@@ -358,11 +357,11 @@ init
-
---terraform-state <terraform_state>¶
+--terraform-state <terraform_state>¶
options: [‘remote’, ‘local’, ‘existing’]
- Default:
-<TerraformStateEnum.remote: 'remote'>
+TerraformStateEnum.remote
- Options:
remote | local | existing
@@ -372,22 +371,23 @@ init
-
---kubernetes-version <kubernetes_version>¶
-
+--kubernetes-version <kubernetes_version>¶
+
The Kubernetes version you want to deploy your Nebari cluster to, leave blank for latest version
+- Default:
+'latest'
-
---disable-prompt, --no-disable-prompt¶
+--disable-prompt, --no-disable-prompt¶
- Default:
False
@@ -395,13 +395,30 @@ init
+
+
+-s, --config-set <config_set>¶
+Apply a pre-defined set of nebari configuration options.
+
-
--o, --output <output>¶
+-o, --output <output>¶
Output file path for the rendered config file.
- Default:
-nebari-config.yaml
+PosixPath('nebari-config.yaml')
-
+
+-e, --explicit¶
+
Write explicit nebari config file (advanced users only).
- Default:
0
@@ init
Arguments
-CLOUD_PROVIDER¶
+CLOUD_PROVIDER¶
Optional argument
+options: [‘local’, ‘existing’, ‘aws’, ‘gcp’, ‘azure’]
-keycloak¶
+keycloak¶
Interact with the Nebari Keycloak identity and access management tool.
nebari keycloak [OPTIONS] COMMAND [ARGS]...
-adduser¶
+adduser¶
Add a user to Keycloak. User will be automatically added to the [italic]analyst[/italic] group.
nebari keycloak adduser [OPTIONS]
@@ -429,19 +447,19 @@ adduser
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-export-users¶
+export-users¶
Export the users in Keycloak.
nebari keycloak export-users [OPTIONS]
@@ -449,24 +467,24 @@ export-users
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
---realm <realm>¶
+--realm <realm>¶
realm from which users are to be exported
- Default:
-nebari
+'nebari'
-listusers¶
+listusers¶
List the users in Keycloak.
nebari keycloak listusers [OPTIONS]
@@ -474,14 +492,28 @@ listusers
Options
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
+plugin¶
+Interact with nebari plugins
+
+nebari plugin [OPTIONS] COMMAND [ARGS]...
+
+list¶
+List installed plugins
+
+nebari plugin list [OPTIONS]
+
-render¶
+render¶
Dynamically render the Terraform scripts and other files from your [purple]nebari-config.yaml[/purple] file.
nebari render [OPTIONS]
@@ -489,24 +521,24 @@ render
Options
-
--o, --output <output_directory>¶
+-o, --output <output_directory>¶
output directory
- Default:
-./
+'./'
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration yaml file path
-
---dry-run¶
+--dry-run¶
simulate rendering files without actually writing or updating any files
- Default:
@@ -517,7 +549,7 @@
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
--o, --output <output>¶
+-o, --output <output>¶
output filename
- Default:
-./nebari-support-logs.zip
+'./nebari-support-logs.zip'
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration file path
-
---attempt-fixes¶
+--attempt-fixes¶
Attempt to fix the config for any incompatibilities between your old and new Nebari versions.
- Default:
@@ -571,7 +603,7 @@
-
--c, --config <config_filename>¶
+-c, --config <config_filename>¶
Required nebari configuration yaml file path, please pass in as -c/–config flag
-
---enable-commenting¶
+--enable-commenting¶
Toggle PR commenting on GitHub Actions
- Default:
@@ -603,7 +635,7 @@ validate
+
Nebari CLI documentation
@@ -614,7 +646,16 @@
Nebari CLI documentation
- ©2023, Nebari.
+ ©2023, Nebari. |
- Powered by Sphinx 6.1.3
- & Alabaster 0.7.13
+ Powered by Sphinx 8.1.3
+ & Alabaster 1.0.0 |
=3.9.0,<4.0.0",
     "questionary==2.0.0",
@@ -79,7 +81,7 @@ dependencies = [
     "ruamel.yaml==0.18.6",
     "typer==0.9.0",
     "packaging==23.2",
-    "typing-extensions==4.11.0",
+    "typing-extensions>=4.11.0",
 ]

 [project.optional-dependencies]
@@ -87,7 +89,6 @@ dev = [
     "black==22.3.0",
     "coverage[toml]",
     "dask-gateway",
-    "diagrams",
     "escapism",
     "importlib-metadata<5.0",
     "mypy==1.6.1",
diff --git a/src/_nebari/cli.py b/src/_nebari/cli.py
index de91cc1853..6bf030ae26 100644
--- a/src/_nebari/cli.py
+++ b/src/_nebari/cli.py
@@ -10,7 +10,7 @@ class OrderCommands(TyperGroup):
     def list_commands(self, ctx: typer.Context):
         """Return list of commands in the order appear."""
-        return list(self.commands)
+        return list(self.commands)[::-1]


 def version_callback(value: bool):
@@ -65,6 +65,7 @@ def common(
         [],
         "--import-plugin",
         help="Import nebari plugin",
+        callback=import_plugin,
     ),
     excluded_stages: typing.List[str] = typer.Option(
         [],
diff --git a/src/_nebari/config_set.py b/src/_nebari/config_set.py
new file mode 100644
index 0000000000..95413ea1a7
--- /dev/null
+++ b/src/_nebari/config_set.py
@@ -0,0 +1,54 @@
+import logging
+import pathlib
+from typing import Optional
+
+from packaging.requirements import SpecifierSet
+from pydantic import BaseModel, ConfigDict, field_validator
+
+from _nebari._version import __version__
+from _nebari.utils import yaml
+
+logger = logging.getLogger(__name__)
+
+
+class ConfigSetMetadata(BaseModel):
+    model_config: ConfigDict = ConfigDict(extra="allow", arbitrary_types_allowed=True)
+    name: str  # for use with guided init
+    description: Optional[str] = None
+    nebari_version: str | SpecifierSet
+
+    @field_validator("nebari_version")
+    @classmethod
+    def validate_version_requirement(cls, version_req):
+        if isinstance(version_req, str):
+            version_req = SpecifierSet(version_req, prereleases=True)
+
+        return version_req
+
+    def check_version(self, version):
+        if not self.nebari_version.contains(version, prereleases=True):
+            raise ValueError(
+                f'Nebari version "{version}" is not compatible with '
+                f'version requirement {self.nebari_version} for "{self.name}" config set.'
+            )
+
+
+class ConfigSet(BaseModel):
+    metadata: ConfigSetMetadata
+    config: dict
+
+
+def read_config_set(config_set_filepath: str):
+    """Read a config set from a config file."""
+
+    filename = pathlib.Path(config_set_filepath)
+
+    with filename.open() as f:
+        config_set_yaml = yaml.load(f)
+
+    config_set = ConfigSet(**config_set_yaml)
+
+    # validation
+    config_set.metadata.check_version(__version__)
+
+    return config_set
diff --git a/src/_nebari/constants.py b/src/_nebari/constants.py
index 6e57519fee..a4f81c354c 100644
--- a/src/_nebari/constants.py
+++ b/src/_nebari/constants.py
@@ -1,31 +1,27 @@
-CURRENT_RELEASE = "2024.9.1"
+CURRENT_RELEASE = "2025.2.1"

 HELM_VERSION = "v3.15.3"
 KUSTOMIZE_VERSION = "5.4.3"
-# NOTE: Terraform cannot be upgraded further due to Hashicorp licensing changes
-# implemented in August 2023.
-# https://www.hashicorp.com/license-faq
-TERRAFORM_VERSION = "1.5.7"
+OPENTOFU_VERSION = "1.8.3"

 KUBERHEALTHY_HELM_VERSION = "100"

 # 04-kubernetes-ingress
 DEFAULT_TRAEFIK_IMAGE_TAG = "2.9.1"

-HIGHEST_SUPPORTED_K8S_VERSION = ("1", "29", "2")
+HIGHEST_SUPPORTED_K8S_VERSION = ("1", "31")  # specify Major and Minor version
 DEFAULT_GKE_RELEASE_CHANNEL = "UNSPECIFIED"

 DEFAULT_NEBARI_DASK_VERSION = CURRENT_RELEASE
 DEFAULT_NEBARI_IMAGE_TAG = CURRENT_RELEASE
 DEFAULT_NEBARI_WORKFLOW_CONTROLLER_IMAGE_TAG = CURRENT_RELEASE

-DEFAULT_CONDA_STORE_IMAGE_TAG = "2024.3.1"
+DEFAULT_CONDA_STORE_IMAGE_TAG = "2025.2.1"

 LATEST_SUPPORTED_PYTHON_VERSION = "3.10"

 # DOCS
-DO_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-do"
 AZURE_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-azure"
 AWS_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-aws"
 GCP_ENV_DOCS = "https://www.nebari.dev/docs/how-tos/nebari-gcp"
@@ -34,4 +30,3 @@
 AWS_DEFAULT_REGION = "us-east-1"
 AZURE_DEFAULT_REGION = "Central US"
 GCP_DEFAULT_REGION = "us-central1"
-DO_DEFAULT_REGION = "nyc3"
diff --git a/src/_nebari/initialize.py b/src/_nebari/initialize.py
index 7745df2a98..7566fe7b44 100644
--- a/src/_nebari/initialize.py
+++ b/src/_nebari/initialize.py
@@ -8,21 +8,16 @@
 import pydantic
 import requests

-from _nebari import constants
+from _nebari import constants, utils
+from _nebari.config_set import read_config_set
 from _nebari.provider import git
 from _nebari.provider.cicd import github
-from _nebari.provider.cloud import (
-    amazon_web_services,
-    azure_cloud,
-    digital_ocean,
-    google_cloud,
-)
+from _nebari.provider.cloud import amazon_web_services, azure_cloud, google_cloud
 from _nebari.provider.oauth.auth0 import create_client
 from _nebari.stages.bootstrap import CiEnum
 from _nebari.stages.infrastructure import (
     DEFAULT_AWS_NODE_GROUPS,
     DEFAULT_AZURE_NODE_GROUPS,
-    DEFAULT_DO_NODE_GROUPS,
     DEFAULT_GCP_NODE_GROUPS,
     node_groups_to_dict,
 )
@@ -53,6 +48,7 @@ def render_config(
     region: str = None,
     disable_prompt: bool = False,
     ssl_cert_email: str = None,
+    config_set: str = None,
 ) -> Dict[str, Any]:
     config = {
         "provider": cloud_provider,
@@ -117,22 +113,7 @@ def render_config(
         ),
     }

-    if cloud_provider == ProviderEnum.do:
-        do_region = region or constants.DO_DEFAULT_REGION
-        do_kubernetes_versions = kubernetes_version or get_latest_kubernetes_version(
-            digital_ocean.kubernetes_versions()
-        )
-        config["digital_ocean"] = {
-            "kubernetes_version": do_kubernetes_versions,
-            "region": do_region,
-            "node_groups": node_groups_to_dict(DEFAULT_DO_NODE_GROUPS),
-        }
-
-        config["theme"]["jupyterhub"][
-            "hub_subtitle"
-        ] = f"{WELCOME_HEADER_TEXT} on Digital Ocean"
-
-    elif cloud_provider == ProviderEnum.gcp:
+    if cloud_provider == ProviderEnum.gcp:
         gcp_region = region or constants.GCP_DEFAULT_REGION
         gcp_kubernetes_version = kubernetes_version or get_latest_kubernetes_version(
             google_cloud.kubernetes_versions(gcp_region)
@@ -197,13 +178,17 @@ def render_config(
     config["certificate"] = {"type": CertificateEnum.letsencrypt.value}
     config["certificate"]["acme_email"] = ssl_cert_email

+    if config_set:
+        config_set = read_config_set(config_set)
+        config = utils.deep_merge(config, config_set.config)
+
     # validate configuration and convert to model
     from nebari.plugins import nebari_plugin_manager

     try:
         config_model = nebari_plugin_manager.config_schema.model_validate(config)
     except pydantic.ValidationError as e:
-        print(str(e))
+        raise e

     if repository_auto_provision:
         match = re.search(github_url_regex, repository)
@@ -245,16 +230,7 @@ def github_auto_provision(config: pydantic.BaseModel, owner: str, repo: str):

     try:
         # Secrets
-        if config.provider == ProviderEnum.do:
-            for name in {
-                "AWS_ACCESS_KEY_ID",
-                "AWS_SECRET_ACCESS_KEY",
-                "SPACES_ACCESS_KEY_ID",
-                "SPACES_SECRET_ACCESS_KEY",
-                "DIGITALOCEAN_TOKEN",
-            }:
-                github.update_secret(owner, repo, name, os.environ[name])
-        elif config.provider == ProviderEnum.aws:
+        if config.provider == ProviderEnum.aws:
             for name in {
                 "AWS_ACCESS_KEY_ID",
                 "AWS_SECRET_ACCESS_KEY",
diff --git a/src/_nebari/provider/cicd/github.py b/src/_nebari/provider/cicd/github.py
index d091d1d027..92d3b853e9 100644
--- a/src/_nebari/provider/cicd/github.py
+++ b/src/_nebari/provider/cicd/github.py
@@ -117,12 +117,6 @@ def gha_env_vars(config: schema.Main):
         env_vars["ARM_CLIENT_SECRET"] = "${{ secrets.ARM_CLIENT_SECRET }}"
         env_vars["ARM_SUBSCRIPTION_ID"] = "${{ secrets.ARM_SUBSCRIPTION_ID }}"
         env_vars["ARM_TENANT_ID"] = "${{ secrets.ARM_TENANT_ID }}"
-    elif config.provider == schema.ProviderEnum.do:
-        env_vars["AWS_ACCESS_KEY_ID"] = "${{ secrets.AWS_ACCESS_KEY_ID }}"
-        env_vars["AWS_SECRET_ACCESS_KEY"] = "${{ secrets.AWS_SECRET_ACCESS_KEY }}"
-        env_vars["SPACES_ACCESS_KEY_ID"] = "${{ secrets.SPACES_ACCESS_KEY_ID }}"
-        env_vars["SPACES_SECRET_ACCESS_KEY"] = "${{ secrets.SPACES_SECRET_ACCESS_KEY }}"
-        env_vars["DIGITALOCEAN_TOKEN"] = "${{ secrets.DIGITALOCEAN_TOKEN }}"
     elif config.provider == schema.ProviderEnum.gcp:
         env_vars["GOOGLE_CREDENTIALS"] = "${{ secrets.GOOGLE_CREDENTIALS }}"
         env_vars["PROJECT_ID"] = "${{ secrets.PROJECT_ID }}"
diff --git a/src/_nebari/provider/cloud/amazon_web_services.py b/src/_nebari/provider/cloud/amazon_web_services.py
index 1123c07fe0..dee4df891c 100644
--- a/src/_nebari/provider/cloud/amazon_web_services.py
+++ b/src/_nebari/provider/cloud/amazon_web_services.py
@@ -2,6 +2,7 @@
 import os
 import re
 import time
+from dataclasses import dataclass
 from typing import Dict, List, Optional

 import boto3
@@ -23,25 +24,19 @@ def check_credentials() -> None:

 @functools.lru_cache()
 def aws_session(
-    region: Optional[str] = None, digitalocean_region: Optional[str] = None
+    region: Optional[str] = None,
 ) -> boto3.Session:
     """Create a boto3 session."""
-    if digitalocean_region:
-        aws_access_key_id = os.environ["SPACES_ACCESS_KEY_ID"]
-        aws_secret_access_key = os.environ["SPACES_SECRET_ACCESS_KEY"]
-        region = digitalocean_region
-        aws_session_token = None
-    else:
-        check_credentials()
-        aws_access_key_id = os.environ["AWS_ACCESS_KEY_ID"]
-        aws_secret_access_key = os.environ["AWS_SECRET_ACCESS_KEY"]
-        aws_session_token = os.environ.get("AWS_SESSION_TOKEN")
-
-    if not region:
-        raise ValueError(
-            "Please specify `region` in the nebari-config.yaml or if initializing the nebari-config, set the region via the "
-            "`--region` flag or via the AWS_DEFAULT_REGION environment variable.\n"
-        )
+    check_credentials()
+    aws_access_key_id = os.environ["AWS_ACCESS_KEY_ID"]
+    aws_secret_access_key = os.environ["AWS_SECRET_ACCESS_KEY"]
+    aws_session_token = os.environ.get("AWS_SESSION_TOKEN")
+
+    if not region:
+        raise ValueError(
+            "Please specify `region` in the nebari-config.yaml or if initializing the nebari-config, set the region via the "
+            "`--region` flag or via the AWS_DEFAULT_REGION environment variable.\n"
+        )

     return boto3.Session(
         region_name=region,
@@ -121,6 +116,35 @@ def instances(region: str) -> Dict[str, str]:
     return {t: t for t in instance_types}


+@dataclass
+class Kms_Key_Info:
+    Arn: str
+    KeyUsage: str
+    KeySpec: str
+    KeyManager: str
+
+
+@functools.lru_cache()
+def kms_key_arns(region: str) -> Dict[str, Kms_Key_Info]:
+    """Return dict of available/enabled KMS key IDs and associated KeyMetadata for the AWS region."""
+    session = aws_session(region=region)
+    client = session.client("kms")
+    kms_keys = {}
+    # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kms/client/list_keys.html
+    for key in client.list_keys().get("Keys"):
+        key_id = key["KeyId"]
+        # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kms/client/describe_key.html#:~:text=Response%20Structure
+        key_data = client.describe_key(KeyId=key_id).get("KeyMetadata")
+        if key_data.get("Enabled"):
+            kms_keys[key_id] = Kms_Key_Info(
+                Arn=key_data.get("Arn"),
+                KeyUsage=key_data.get("KeyUsage"),
+                KeySpec=key_data.get("KeySpec"),
+                KeyManager=key_data.get("KeyManager"),
+            )
+    return kms_keys
+
+
 def aws_get_vpc_id(name: str, namespace: str, region: str) -> Optional[str]:
     """Return VPC ID for the EKS cluster namedd `{name}-{namespace}`."""
     cluster_name = f"{name}-{namespace}"
@@ -682,21 +706,17 @@ def aws_delete_s3_objects(
     bucket_name: str,
     endpoint: Optional[str] = None,
     region: Optional[str] = None,
-    digitalocean_region: Optional[str] = None,
 ):
     """
     Delete all objects in the S3 bucket.

-    NOTE: This method is shared with Digital Ocean as their "Spaces" is S3 compatible and uses the same API.
-
     Parameters:
         bucket_name (str): S3 bucket name
-        endpoint (str): S3 endpoint URL (required for Digital Ocean spaces)
+        endpoint (str): S3 endpoint URL
         region (str): AWS region
-        digitalocean_region (str): Digital Ocean region
     """
-    session = aws_session(region=region, digitalocean_region=digitalocean_region)
+    session = aws_session(region=region)
     s3 = session.client("s3", endpoint_url=endpoint)

     try:
@@ -749,22 +769,18 @@ def aws_delete_s3_bucket(
     bucket_name: str,
     endpoint: Optional[str] = None,
     region: Optional[str] = None,
-    digitalocean_region: Optional[str] = None,
 ):
     """
     Delete S3 bucket.

-    NOTE: This method is shared with Digital Ocean as their "Spaces" is S3 compatible and uses the same API.
-
     Parameters:
         bucket_name (str): S3 bucket name
-        endpoint (str): S3 endpoint URL (required for Digital Ocean spaces)
+        endpoint (str): S3 endpoint URL
         region (str): AWS region
-        digitalocean_region (str): Digital Ocean region
     """
-    aws_delete_s3_objects(bucket_name, endpoint, region, digitalocean_region)
+    aws_delete_s3_objects(bucket_name, endpoint, region)

-    session = aws_session(region=region, digitalocean_region=digitalocean_region)
+    session = aws_session(region=region)
     s3 = session.client("s3", endpoint_url=endpoint)

     try:
diff --git a/src/_nebari/provider/cloud/commons.py b/src/_nebari/provider/cloud/commons.py
index 566b2029a4..d2bed87c48 100644
--- a/src/_nebari/provider/cloud/commons.py
+++ b/src/_nebari/provider/cloud/commons.py
@@ -6,9 +6,7 @@ def filter_by_highest_supported_k8s_version(k8s_versions_list):
     filtered_k8s_versions_list = []
     for k8s_version in k8s_versions_list:
-        version = tuple(
-            filter(None, re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", k8s_version).groups())
-        )
+        version = tuple(filter(None, re.search(r"(\d+)\.(\d+)", k8s_version).groups()))
         if version <= HIGHEST_SUPPORTED_K8S_VERSION:
             filtered_k8s_versions_list.append(k8s_version)
     return filtered_k8s_versions_list
diff --git a/src/_nebari/provider/cloud/digital_ocean.py b/src/_nebari/provider/cloud/digital_ocean.py
deleted file mode 100644
index 3e4a507be6..0000000000
--- a/src/_nebari/provider/cloud/digital_ocean.py
+++ /dev/null
@@ -1,131 +0,0 @@
-import functools
-import os
-import tempfile
-import typing
-
-import kubernetes.client
-import kubernetes.config
-import requests
-
-from _nebari.constants import DO_ENV_DOCS
-from _nebari.provider.cloud.amazon_web_services import aws_delete_s3_bucket
-from _nebari.provider.cloud.commons import filter_by_highest_supported_k8s_version
-from _nebari.utils import check_environment_variables, set_do_environment
-from nebari import schema
-
-
-def check_credentials() -> None:
-    required_variables = {
-        "DIGITALOCEAN_TOKEN",
-        "SPACES_ACCESS_KEY_ID",
-        "SPACES_SECRET_ACCESS_KEY",
-    }
-    check_environment_variables(required_variables, DO_ENV_DOCS)
-
-
-def digital_ocean_request(url, method="GET", json=None):
-    BASE_DIGITALOCEAN_URL = "https://api.digitalocean.com/v2/"
-
-    for name in {"DIGITALOCEAN_TOKEN"}:
-        if name not in os.environ:
-            raise ValueError(
-                f"Digital Ocean api requests require environment variable={name} defined"
-            )
-
-    headers = {"Authorization": f'Bearer {os.environ["DIGITALOCEAN_TOKEN"]}'}
-
-    method_map = {
-        "GET": requests.get,
-        "DELETE": requests.delete,
-    }
-
-    response = method_map[method](
-        f"{BASE_DIGITALOCEAN_URL}{url}", headers=headers, json=json
-    )
-    response.raise_for_status()
-    return response
-
-
-@functools.lru_cache()
-def _kubernetes_options():
-    return digital_ocean_request("kubernetes/options").json()
-
-
-def instances():
-    return _kubernetes_options()["options"]["sizes"]
-
-
-def regions():
-    return _kubernetes_options()["options"]["regions"]
-
-
-def kubernetes_versions() -> typing.List[str]:
-    """Return list of available kubernetes supported by cloud provider. Sorted from oldest to latest."""
-    supported_kubernetes_versions = sorted(
-        [_["slug"].split("-")[0] for _ in _kubernetes_options()["options"]["versions"]]
-    )
-    filtered_versions = filter_by_highest_supported_k8s_version(
-        supported_kubernetes_versions
-    )
-    return [f"{v}-do.0" for v in filtered_versions]
-
-
-def digital_ocean_get_cluster_id(cluster_name):
-    clusters = digital_ocean_request("kubernetes/clusters").json()[
-        "kubernetes_clusters"
-    ]
-
-    cluster_id = None
-    for cluster in clusters:
-        if cluster["name"] == cluster_name:
-            cluster_id = cluster["id"]
-            break
-
-    return cluster_id
-
-
-def digital_ocean_get_kubeconfig(cluster_id: str):
-    kubeconfig_content = digital_ocean_request(
-        f"kubernetes/clusters/{cluster_id}/kubeconfig"
-    ).content
-
-    with tempfile.NamedTemporaryFile(delete=False) as temp_kubeconfig:
-        temp_kubeconfig.write(kubeconfig_content)
-
-    return temp_kubeconfig.name
-
-
-def digital_ocean_delete_kubernetes_cluster(cluster_name: str):
-    cluster_id = digital_ocean_get_cluster_id(cluster_name)
-    digital_ocean_request(f"kubernetes/clusters/{cluster_id}", method="DELETE")
-
-
-def digital_ocean_cleanup(config: schema.Main):
-    """Delete all Digital Ocean resources created by Nebari."""
-
-    name = config.project_name
-    namespace = config.namespace
-
-    cluster_name = f"{name}-{namespace}"
-    tf_state_bucket = f"{cluster_name}-terraform-state"
-    do_spaces_endpoint = "https://nyc3.digitaloceanspaces.com"
-
-    cluster_id = digital_ocean_get_cluster_id(cluster_name)
-    if cluster_id is None:
-        return
-
-    kubernetes.config.load_kube_config(digital_ocean_get_kubeconfig(cluster_id))
-    api = kubernetes.client.CoreV1Api()
-
-    labels = {"component": "singleuser-server", "app": "jupyterhub"}
-
-    api.delete_collection_namespaced_pod(
-        namespace=namespace,
-        label_selector=",".join([f"{k}={v}" for k, v in labels.items()]),
-    )
-
-    set_do_environment()
-    aws_delete_s3_bucket(
-        tf_state_bucket, digitalocean=True, endpoint=do_spaces_endpoint
-    )
-    digital_ocean_delete_kubernetes_cluster(cluster_name)
diff --git a/src/_nebari/provider/cloud/google_cloud.py b/src/_nebari/provider/cloud/google_cloud.py
index 6b54e40e9d..5317cb1528 100644
--- a/src/_nebari/provider/cloud/google_cloud.py
+++ b/src/_nebari/provider/cloud/google_cloud.py
@@ -51,19 +51,47 @@ def regions() -> Set[str]:
     return {region.name for region in response}


+@functools.lru_cache()
+def instances(region: str) -> set[str]:
+    """Return a set of available compute instances in a region."""
+    credentials, project_id = load_credentials()
+    zones_client = compute_v1.services.region_zones.RegionZonesClient(
+        credentials=credentials
+    )
+    instances_client = compute_v1.MachineTypesClient(credentials=credentials)
+    zone_list = zones_client.list(project=project_id, region=region)
+    zones = [zone for zone in zone_list]
+    instance_set: set[str] = set()
+    for zone in zones:
+        instance_list = instances_client.list(project=project_id, zone=zone.name)
+        for instance in instance_list:
+            instance_set.add(instance.name)
+    return instance_set
+
+
 @functools.lru_cache()
 def kubernetes_versions(region: str) -> List[str]:
     """Return list of available kubernetes supported by cloud provider. Sorted from oldest to latest."""
     credentials, project_id = load_credentials()
     client = container_v1.ClusterManagerClient(credentials=credentials)
     response = client.get_server_config(
-        name=f"projects/{project_id}/locations/{region}"
+        name=f"projects/{project_id}/locations/{region}", timeout=300
     )
     supported_kubernetes_versions = response.valid_master_versions
     return filter_by_highest_supported_k8s_version(supported_kubernetes_versions)


+def get_patch_version(full_version: str) -> str:
+    return full_version.split("-")[0]
+
+
+def get_minor_version(full_version: str) -> str:
+    patch_version = get_patch_version(full_version)
+    parts = patch_version.split(".")
+    return f"{parts[0]}.{parts[1]}"
+
+
 def cluster_exists(cluster_name: str, region: str) -> bool:
     """Check if a GKE cluster exists."""
     credentials, project_id = load_credentials()
diff --git a/src/_nebari/provider/terraform.py b/src/_nebari/provider/opentofu.py
similarity index 62%
rename from src/_nebari/provider/terraform.py
rename to src/_nebari/provider/opentofu.py
index 59d88e76dd..78936d1808 100644
--- a/src/_nebari/provider/terraform.py
+++ b/src/_nebari/provider/opentofu.py
@@ -18,39 +18,39 @@
 logger = logging.getLogger(__name__)


-class TerraformException(Exception):
+class OpenTofuException(Exception):
     pass


 def deploy(
     directory,
-    terraform_init: bool = True,
-    terraform_import: bool = False,
-    terraform_apply: bool = True,
-    terraform_destroy: bool = False,
+    tofu_init: bool = True,
+    tofu_import: bool = False,
+    tofu_apply: bool = True,
+    tofu_destroy: bool = False,
     input_vars: Dict[str, Any] = {},
     state_imports: List[Any] = [],
 ):
-    """Execute a given terraform directory.
+    """Execute a given directory with OpenTofu infrastructure configuration.
     Parameters:
-      directory: directory in which to run terraform operations on
+      directory: directory in which to run tofu operations on

-      terraform_init: whether to run `terraform init` default True
+      tofu_init: whether to run `tofu init` default True

-      terraform_import: whether to run `terraform import` default
+      tofu_import: whether to run `tofu import` default
       False for each `state_imports` supplied to function

-      terraform_apply: whether to run `terraform apply` default True
+      tofu_apply: whether to run `tofu apply` default True

-      terraform_destroy: whether to run `terraform destroy` default
+      tofu_destroy: whether to run `tofu destroy` default
       False

       input_vars: supply values for "variable" resources within
       terraform module

       state_imports: (addr, id) pairs for iterate through and attempt
-      to terraform import
+      to tofu import
     """
     with tempfile.NamedTemporaryFile(
         mode="w", encoding="utf-8", suffix=".tfvars.json"
@@ -58,25 +58,25 @@ def deploy(
         json.dump(input_vars, f.file)
         f.file.flush()

-        if terraform_init:
+        if tofu_init:
             init(directory)

-        if terraform_import:
+        if tofu_import:
             for addr, id in state_imports:
                 tfimport(
                     addr, id, directory=directory, var_files=[f.name], exist_ok=True
                 )

-        if terraform_apply:
+        if tofu_apply:
             apply(directory, var_files=[f.name])

-        if terraform_destroy:
+        if tofu_destroy:
             destroy(directory, var_files=[f.name])

         return output(directory)


-def download_terraform_binary(version=constants.TERRAFORM_VERSION):
+def download_opentofu_binary(version=constants.OPENTOFU_VERSION):
     os_mapping = {
         "linux": "linux",
         "win32": "windows",
@@ -94,73 +94,72 @@ def download_terraform_binary(version=constants.TERRAFORM_VERSION):
         "arm64": "arm64",
     }

-    download_url = f"https://releases.hashicorp.com/terraform/{version}/terraform_{version}_{os_mapping[sys.platform]}_{architecture_mapping[platform.machine()]}.zip"
-    filename_directory = Path(tempfile.gettempdir()) / "terraform" / version
-    filename_path = filename_directory / "terraform"
+    download_url = f"https://github.com/opentofu/opentofu/releases/download/v{version}/tofu_{version}_{os_mapping[sys.platform]}_{architecture_mapping[platform.machine()]}.zip"
+
+    filename_directory = Path(tempfile.gettempdir()) / "opentofu" / version
+    filename_path = filename_directory / "tofu"

     if not filename_path.is_file():
         logger.info(
-            f"downloading and extracting terraform binary from url={download_url} to path={filename_path}"
+            f"downloading and extracting opentofu binary from url={download_url} to path={filename_path}"
         )
         with urllib.request.urlopen(download_url) as f:
             bytes_io = io.BytesIO(f.read())
         download_file = zipfile.ZipFile(bytes_io)
-        download_file.extract("terraform", filename_directory)
+        download_file.extract("tofu", filename_directory)

     filename_path.chmod(0o555)
     return filename_path


-def run_terraform_subprocess(processargs, **kwargs):
-    terraform_path = download_terraform_binary()
-    logger.info(f" terraform at {terraform_path}")
-    exit_code, output = run_subprocess_cmd([terraform_path] + processargs, **kwargs)
+def run_tofu_subprocess(processargs, **kwargs):
+    tofu_path = download_opentofu_binary()
+    logger.info(f" tofu at {tofu_path}")
+    exit_code, output = run_subprocess_cmd([tofu_path] + processargs, **kwargs)
     if exit_code != 0:
-        raise TerraformException("Terraform returned an error")
+        raise OpenTofuException("OpenTofu returned an error")
     return output


 def version():
-    terraform_path = download_terraform_binary()
-    logger.info(f"checking terraform={terraform_path} version")
+    tofu_path = download_opentofu_binary()
+    logger.info(f"checking opentofu={tofu_path} version")

-    version_output = subprocess.check_output([terraform_path, "--version"]).decode(
-        "utf-8"
-    )
+    version_output = subprocess.check_output([tofu_path, "--version"]).decode("utf-8")
     return re.search(r"(\d+)\.(\d+).(\d+)", version_output).group(0)


 def init(directory=None, upgrade=True):
-    logger.info(f"terraform init directory={directory}")
-    with timer(logger, "terraform init"):
+    logger.info(f"tofu init directory={directory}")
+    with timer(logger, "tofu init"):
         command = ["init"]
         if upgrade:
             command.append("-upgrade")
-        run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+        run_tofu_subprocess(command, cwd=directory, prefix="tofu")


 def apply(directory=None, targets=None, var_files=None):
     targets = targets or []
     var_files = var_files or []

-    logger.info(f"terraform apply directory={directory} targets={targets}")
+    logger.info(f"tofu apply directory={directory} targets={targets}")
     command = (
         ["apply", "-auto-approve"]
         + ["-target=" + _ for _ in targets]
         + ["-var-file=" + _ for _ in var_files]
     )
-    with timer(logger, "terraform apply"):
-        run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+    with timer(logger, "tofu apply"):
+        run_tofu_subprocess(command, cwd=directory, prefix="tofu")


 def output(directory=None):
-    terraform_path = download_terraform_binary()
+    tofu_path = download_opentofu_binary()

-    logger.info(f"terraform={terraform_path} output directory={directory}")
-    with timer(logger, "terraform output"):
+    logger.info(f"tofu={tofu_path} output directory={directory}")
+    with timer(logger, "tofu output"):
         return json.loads(
             subprocess.check_output(
-                [terraform_path, "output", "-json"], cwd=directory
+                [tofu_path, "output", "-json"], cwd=directory
             ).decode("utf8")[:-1]
         )

@@ -168,61 +167,61 @@ def output(directory=None):
 def tfimport(addr, id, directory=None, var_files=None, exist_ok=False):
     var_files = var_files or []

-    logger.info(f"terraform import directory={directory} addr={addr} id={id}")
+    logger.info(f"tofu import directory={directory} addr={addr} id={id}")
     command = ["import"] + ["-var-file=" + _ for _ in var_files] + [addr, id]
     logger.error(str(command))
-    with timer(logger, "terraform import"):
+    with timer(logger, "tofu import"):
         try:
-            run_terraform_subprocess(
+            run_tofu_subprocess(
                 command,
                 cwd=directory,
-                prefix="terraform",
+                prefix="tofu",
                 strip_errors=True,
                 timeout=30,
             )
-        except TerraformException as e:
+        except OpenTofuException as e:
             if not exist_ok:
                 raise e


-def show(directory=None, terraform_init: bool = True) -> dict:
+def show(directory=None, tofu_init: bool = True) -> dict:

-    if terraform_init:
+    if tofu_init:
         init(directory)

-    logger.info(f"terraform show directory={directory}")
+    logger.info(f"tofu show directory={directory}")
     command = ["show", "-json"]
-    with timer(logger, "terraform show"):
+    with timer(logger, "tofu show"):
         try:
             output = json.loads(
-                run_terraform_subprocess(
+                run_tofu_subprocess(
                     command,
                     cwd=directory,
-                    prefix="terraform",
+                    prefix="tofu",
                     strip_errors=True,
                     capture_output=True,
                 )
             )
             return output
-        except TerraformException as e:
+        except OpenTofuException as e:
             raise e


 def refresh(directory=None, var_files=None):
     var_files = var_files or []

-    logger.info(f"terraform refresh directory={directory}")
+    logger.info(f"tofu refresh directory={directory}")
     command = ["refresh"] + ["-var-file=" + _ for _ in var_files]

-    with timer(logger, "terraform refresh"):
-        run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+    with timer(logger, "tofu refresh"):
+        run_tofu_subprocess(command, cwd=directory, prefix="tofu")


 def destroy(directory=None, targets=None, var_files=None):
     targets = targets or []
     var_files = var_files or []

-    logger.info(f"terraform destroy directory={directory} targets={targets}")
+    logger.info(f"tofu destroy directory={directory} targets={targets}")
     command = (
         [
             "destroy",
@@ -232,8 +231,8 @@ def destroy(directory=None, targets=None, var_files=None):
         + ["-var-file=" + _ for _ in var_files]
     )

-    with timer(logger, "terraform destroy"):
-        run_terraform_subprocess(command, cwd=directory, prefix="terraform")
+    with timer(logger, "tofu destroy"):
+        run_tofu_subprocess(command, cwd=directory, prefix="tofu")


 def rm_local_state(directory=None):
diff --git a/src/_nebari/stages/base.py b/src/_nebari/stages/base.py
index cef1322e95..bcc6bb82bf 100644
--- a/src/_nebari/stages/base.py
+++ 
b/src/_nebari/stages/base.py @@ -11,7 +11,7 @@ from kubernetes import client, config from kubernetes.client.rest import ApiException -from _nebari.provider import helm, kubernetes, kustomize, terraform +from _nebari.provider import helm, kubernetes, kustomize, opentofu from _nebari.stages.tf_objects import NebariTerraformState from nebari.hookspecs import NebariStage @@ -248,7 +248,7 @@ def tf_objects(self) -> List[Dict]: def render(self) -> Dict[pathlib.Path, str]: contents = { - (self.stage_prefix / "_nebari.tf.json"): terraform.tf_render_objects( + (self.stage_prefix / "_nebari.tf.json"): opentofu.tf_render_objects( self.tf_objects() ) } @@ -283,19 +283,19 @@ def deploy( self, stage_outputs: Dict[str, Dict[str, Any]], disable_prompt: bool = False, - terraform_init: bool = True, + tofu_init: bool = True, ): deploy_config = dict( directory=str(self.output_directory / self.stage_prefix), input_vars=self.input_vars(stage_outputs), - terraform_init=terraform_init, + tofu_init=tofu_init, ) state_imports = self.state_imports() if state_imports: - deploy_config["terraform_import"] = True + deploy_config["tofu_import"] = True deploy_config["state_imports"] = state_imports - self.set_outputs(stage_outputs, terraform.deploy(**deploy_config)) + self.set_outputs(stage_outputs, opentofu.deploy(**deploy_config)) self.post_deploy(stage_outputs, disable_prompt) yield @@ -318,27 +318,27 @@ def destroy( ): self.set_outputs( stage_outputs, - terraform.deploy( + opentofu.deploy( directory=str(self.output_directory / self.stage_prefix), input_vars=self.input_vars(stage_outputs), - terraform_init=True, - terraform_import=True, - terraform_apply=False, - terraform_destroy=False, + tofu_init=True, + tofu_import=True, + tofu_apply=False, + tofu_destroy=False, ), ) yield try: - terraform.deploy( + opentofu.deploy( directory=str(self.output_directory / self.stage_prefix), input_vars=self.input_vars(stage_outputs), - terraform_init=True, - terraform_import=True, - terraform_apply=False, - 
terraform_destroy=True, + tofu_init=True, + tofu_import=True, + tofu_apply=False, + tofu_destroy=True, ) status["stages/" + self.name] = True - except terraform.TerraformException as e: + except opentofu.OpenTofuException as e: if not ignore_errors: raise e status["stages/" + self.name] = False diff --git a/src/_nebari/stages/infrastructure/__init__.py b/src/_nebari/stages/infrastructure/__init__.py index 559f17bd53..553e520e3a 100644 --- a/src/_nebari/stages/infrastructure/__init__.py +++ b/src/_nebari/stages/infrastructure/__init__.py @@ -6,18 +6,14 @@ import re import sys import tempfile +import warnings from typing import Annotated, Any, Dict, List, Literal, Optional, Tuple, Type, Union -from pydantic import Field, field_validator, model_validator +from pydantic import ConfigDict, Field, field_validator, model_validator from _nebari import constants -from _nebari.provider import terraform -from _nebari.provider.cloud import ( - amazon_web_services, - azure_cloud, - digital_ocean, - google_cloud, -) +from _nebari.provider import opentofu +from _nebari.provider.cloud import amazon_web_services, azure_cloud, google_cloud from _nebari.stages.base import NebariTerraformStage from _nebari.stages.kubernetes_services import SharedFsEnum from _nebari.stages.tf_objects import NebariTerraformState @@ -43,22 +39,6 @@ class ExistingInputVars(schema.Base): kube_context: str -class DigitalOceanNodeGroup(schema.Base): - instance: str - min_nodes: int - max_nodes: int - - -class DigitalOceanInputVars(schema.Base): - name: str - environment: str - region: str - tags: List[str] - kubernetes_version: str - node_groups: Dict[str, DigitalOceanNodeGroup] - kubeconfig_filename: str = get_kubeconfig_filename() - - class GCPNodeGroupInputVars(schema.Base): name: str instance_type: str @@ -115,6 +95,7 @@ class AzureInputVars(schema.Base): name: str environment: str region: str + authorized_ip_ranges: List[str] = ["0.0.0.0/0"] kubeconfig_filename: str = get_kubeconfig_filename() 
kubernetes_version: str node_groups: Dict[str, AzureNodeGroupInputVars] @@ -125,6 +106,7 @@ class AzureInputVars(schema.Base): tags: Dict[str, str] = {} max_pods: Optional[int] = None network_profile: Optional[Dict[str, str]] = None + azure_policy_enabled: Optional[bool] = None workload_identity_enabled: bool = False @@ -152,10 +134,23 @@ class AWSNodeGroupInputVars(schema.Base): launch_template: Optional[AWSNodeLaunchTemplate] = None -def construct_aws_ami_type(gpu_enabled: bool, launch_template: AWSNodeLaunchTemplate): - """Construct the AWS AMI type based on the provided parameters.""" +def construct_aws_ami_type( + gpu_enabled: bool, launch_template: AWSNodeLaunchTemplate +) -> str: + """ + This function selects the Amazon Machine Image (AMI) type for AWS nodes by evaluating + the provided parameters. The selection logic prioritizes the launch template over the + GPU flag. + + Returns the AMI type (str) determined by the following rules: + - Returns "CUSTOM" if a `launch_template` is provided and it includes a valid `ami_id`. + - Returns "AL2_x86_64_GPU" if `gpu_enabled` is True and no valid + `launch_template` is provided (None). + - Returns "AL2_x86_64" as the default AMI type if `gpu_enabled` is False and no + valid `launch_template` is provided (None). 
+ """ - if launch_template and launch_template.ami_id: + if launch_template and getattr(launch_template, "ami_id", None): return "CUSTOM" if gpu_enabled: @@ -174,6 +169,7 @@ class AWSInputVars(schema.Base): eks_endpoint_access: Optional[ Literal["private", "public", "public_and_private"] ] = "public" + eks_kms_arn: Optional[str] = None node_groups: List[AWSNodeGroupInputVars] availability_zones: List[str] vpc_cidr_block: str @@ -210,11 +206,6 @@ def _calculate_node_groups(config: schema.Main): group: {"key": "azure-node-pool", "value": group} for group in ["general", "user", "worker"] } - elif config.provider == schema.ProviderEnum.do: - return { - group: {"key": "doks.digitalocean.com/node-pool", "value": group} - for group in ["general", "user", "worker"] - } elif config.provider == schema.ProviderEnum.existing: return config.existing.model_dump()["node_selectors"] else: @@ -253,67 +244,6 @@ class KeyValueDict(schema.Base): value: str -class DigitalOceanNodeGroup(schema.Base): - """Representation of a node group with Digital Ocean - - - Kubernetes limits: https://docs.digitalocean.com/products/kubernetes/details/limits/ - - Available instance types: https://slugs.do-api.dev/ - """ - - instance: str - min_nodes: Annotated[int, Field(ge=1)] = 1 - max_nodes: Annotated[int, Field(ge=1)] = 1 - - -DEFAULT_DO_NODE_GROUPS = { - "general": DigitalOceanNodeGroup(instance="g-8vcpu-32gb", min_nodes=1, max_nodes=1), - "user": DigitalOceanNodeGroup(instance="g-4vcpu-16gb", min_nodes=1, max_nodes=5), - "worker": DigitalOceanNodeGroup(instance="g-4vcpu-16gb", min_nodes=1, max_nodes=5), -} - - -class DigitalOceanProvider(schema.Base): - region: str - kubernetes_version: Optional[str] = None - # Digital Ocean image slugs are listed here https://slugs.do-api.dev/ - node_groups: Dict[str, DigitalOceanNodeGroup] = DEFAULT_DO_NODE_GROUPS - tags: Optional[List[str]] = [] - - @model_validator(mode="before") - @classmethod - def _check_input(cls, data: Any) -> Any: - 
digital_ocean.check_credentials() - - # check if region is valid - available_regions = set(_["slug"] for _ in digital_ocean.regions()) - if data["region"] not in available_regions: - raise ValueError( - f"Digital Ocean region={data['region']} is not one of {available_regions}" - ) - - # check if kubernetes version is valid - available_kubernetes_versions = digital_ocean.kubernetes_versions() - if len(available_kubernetes_versions) == 0: - raise ValueError( - "Request to Digital Ocean for available Kubernetes versions failed." - ) - if data["kubernetes_version"] is None: - data["kubernetes_version"] = available_kubernetes_versions[-1] - elif data["kubernetes_version"] not in available_kubernetes_versions: - raise ValueError( - f"\nInvalid `kubernetes-version` provided: {data['kubernetes_version']}.\nPlease select from one of the following supported Kubernetes versions: {available_kubernetes_versions} or omit flag to use latest Kubernetes version available." - ) - - available_instances = {_["slug"] for _ in digital_ocean.instances()} - if "node_groups" in data: - for _, node_group in data["node_groups"].items(): - if node_group["instance"] not in available_instances: - raise ValueError( - f"Digital Ocean instance {node_group.instance} not one of available instance types={available_instances}" - ) - return data - - class GCPIPAllocationPolicy(schema.Base): cluster_secondary_range_name: str services_secondary_range_name: str @@ -358,6 +288,9 @@ class GCPNodeGroup(schema.Base): class GoogleCloudPlatformProvider(schema.Base): + # If you pass a major and minor version without a patch version + # yaml will pass it as a float, so we need to coerce it to a string + model_config = ConfigDict(coerce_numbers_to_str=True) region: str project: str kubernetes_version: str @@ -372,6 +305,12 @@ class GoogleCloudPlatformProvider(schema.Base): master_authorized_networks_config: Optional[Union[GCPCIDRBlock, None]] = None private_cluster_config: Optional[Union[GCPPrivateClusterConfig, 
None]] = None + @field_validator("kubernetes_version", mode="before") + @classmethod + def transform_version_to_str(cls, value) -> str: + """Transforms the version to a string if it is not already.""" + return str(value) + @model_validator(mode="before") @classmethod def _check_input(cls, data: Any) -> Any: @@ -382,11 +321,28 @@ def _check_input(cls, data: Any) -> Any: ) available_kubernetes_versions = google_cloud.kubernetes_versions(data["region"]) - print(available_kubernetes_versions) - if data["kubernetes_version"] not in available_kubernetes_versions: + if not any( + v.startswith(str(data["kubernetes_version"])) + for v in available_kubernetes_versions + ): raise ValueError( f"\nInvalid `kubernetes-version` provided: {data['kubernetes_version']}.\nPlease select from one of the following supported Kubernetes versions: {available_kubernetes_versions} or omit flag to use latest Kubernetes version available." ) + + # check if instances are valid + available_instances = google_cloud.instances(data["region"]) + if "node_groups" in data: + for _, node_group in data["node_groups"].items(): + instance = ( + node_group["instance"] + if hasattr(node_group, "__getitem__") + else node_group.instance + ) + if instance not in available_instances: + raise ValueError( + f"Google Cloud Platform instance {instance} not one of available instance types={available_instances}" + ) + return data @@ -407,6 +363,7 @@ class AzureProvider(schema.Base): region: str kubernetes_version: Optional[str] = None storage_account_postfix: str + authorized_ip_ranges: Optional[List[str]] = ["0.0.0.0/0"] resource_group_name: Optional[str] = None node_groups: Dict[str, AzureNodeGroup] = DEFAULT_AZURE_NODE_GROUPS storage_account_postfix: str @@ -417,6 +374,7 @@ class AzureProvider(schema.Base): network_profile: Optional[Dict[str, str]] = None max_pods: Optional[int] = None workload_identity_enabled: bool = False + azure_policy_enabled: Optional[bool] = None @model_validator(mode="before") @classmethod 
@@ -468,7 +426,16 @@ class AWSNodeGroup(schema.Base): gpu: bool = False single_subnet: bool = False permissions_boundary: Optional[str] = None - launch_template: Optional[AWSNodeLaunchTemplate] = None + # Disabled as part of 2024.11.1 until #2832 is resolved + # launch_template: Optional[AWSNodeLaunchTemplate] = None + + @model_validator(mode="before") + def check_launch_template(cls, values): + if "launch_template" in values: + raise ValueError( + "The 'launch_template' field is currently unavailable and has been removed from the configuration schema.\nPlease omit this field until it is reintroduced in a future update.", + ) + return values DEFAULT_AWS_NODE_GROUPS = { @@ -490,6 +457,7 @@ class AmazonWebServicesProvider(schema.Base): eks_endpoint_access: Optional[ Literal["private", "public", "public_and_private"] ] = "public" + eks_kms_arn: Optional[str] = None existing_subnet_ids: Optional[List[str]] = None existing_security_group_id: Optional[str] = None vpc_cidr_block: str = "10.10.0.0/16" @@ -546,6 +514,42 @@ def _check_input(cls, data: Any) -> Any: f"Amazon Web Services instance {node_group.instance} not one of available instance types={available_instances}" ) + # check if kms key is valid + available_kms_keys = amazon_web_services.kms_key_arns(data["region"]) + if "eks_kms_arn" in data and data["eks_kms_arn"] is not None: + key_id = [ + id for id in available_kms_keys.keys() if id in data["eks_kms_arn"] + ] + # Raise error if key_id is not found in available_kms_keys + if ( + len(key_id) != 1 + or available_kms_keys[key_id[0]].Arn != data["eks_kms_arn"] + ): + raise ValueError( + f"Amazon Web Services KMS Key with ARN {data['eks_kms_arn']} not one of available/enabled keys={[v.Arn for v in available_kms_keys.values() if v.KeyManager=='CUSTOMER' and v.KeySpec=='SYMMETRIC_DEFAULT']}" + ) + key_id = key_id[0] + # Raise error if key is not a customer managed key + if available_kms_keys[key_id].KeyManager != "CUSTOMER": + raise ValueError( + f"Amazon Web Services 
KMS Key with ID {key_id} is not a customer managed key" + ) + # Symmetric KMS keys with Encrypt and decrypt key-usage have the SYMMETRIC_DEFAULT key-spec + # EKS cluster encryption requires a Symmetric key that is set to encrypt and decrypt data + if available_kms_keys[key_id].KeySpec != "SYMMETRIC_DEFAULT": + if available_kms_keys[key_id].KeyUsage == "GENERATE_VERIFY_MAC": + raise ValueError( + f"Amazon Web Services KMS Key with ID {key_id} does not have KeyUsage set to 'Encrypt and decrypt' data" + ) + elif available_kms_keys[key_id].KeyUsage != "ENCRYPT_DECRYPT": + raise ValueError( + f"Amazon Web Services KMS Key with ID {key_id} is not of type Symmetric, and KeyUsage not set to 'Encrypt and decrypt' data" + ) + else: + raise ValueError( + f"Amazon Web Services KMS Key with ID {key_id} is not of type Symmetric" + ) + return data @@ -573,7 +577,6 @@ class ExistingProvider(schema.Base): schema.ProviderEnum.gcp: GoogleCloudPlatformProvider, schema.ProviderEnum.aws: AmazonWebServicesProvider, schema.ProviderEnum.azure: AzureProvider, - schema.ProviderEnum.do: DigitalOceanProvider, } provider_enum_name_map: Dict[schema.ProviderEnum, str] = { @@ -582,7 +585,6 @@ class ExistingProvider(schema.Base): schema.ProviderEnum.gcp: "google_cloud_platform", schema.ProviderEnum.aws: "amazon_web_services", schema.ProviderEnum.azure: "azure", - schema.ProviderEnum.do: "digital_ocean", } provider_name_abbreviation_map: Dict[str, str] = { @@ -593,7 +595,6 @@ class ExistingProvider(schema.Base): schema.ProviderEnum.gcp: node_groups_to_dict(DEFAULT_GCP_NODE_GROUPS), schema.ProviderEnum.aws: node_groups_to_dict(DEFAULT_AWS_NODE_GROUPS), schema.ProviderEnum.azure: node_groups_to_dict(DEFAULT_AZURE_NODE_GROUPS), - schema.ProviderEnum.do: node_groups_to_dict(DEFAULT_DO_NODE_GROUPS), } @@ -603,7 +604,6 @@ class InputSchema(schema.Base): google_cloud_platform: Optional[GoogleCloudPlatformProvider] = None amazon_web_services: Optional[AmazonWebServicesProvider] = None azure: 
Optional[AzureProvider] = None - digital_ocean: Optional[DigitalOceanProvider] = None @model_validator(mode="before") @classmethod @@ -618,11 +618,23 @@ def check_provider(cls, data: Any) -> Any: data[provider] = provider_enum_model_map[provider]() else: # if the provider field is invalid, it won't be set when this validator is called - # so we need to check for it explicitly here, and set the `pre` to True + # so we need to check for it explicitly here, and set mode to "before" # TODO: this is a workaround, check if there is a better way to do this in Pydantic v2 raise ValueError( - f"'{provider}' is not a valid enumeration member; permitted: local, existing, do, aws, gcp, azure" + f"'{provider}' is not a valid enumeration member; permitted: local, existing, aws, gcp, azure" + ) + set_providers = { + provider + for provider in provider_name_abbreviation_map.keys() + if provider in data and data[provider] + } + expected_provider_config = provider_enum_name_map[provider] + extra_provider_config = set_providers - {expected_provider_config} + if extra_provider_config: + warnings.warn( + f"Provider is set to {getattr(provider, 'value', provider)}, but configuration defined for other providers: {extra_provider_config}" ) + else: set_providers = [ provider @@ -636,6 +648,7 @@ def check_provider(cls, data: Any) -> Any: data["provider"] = provider_name_abbreviation_map[set_providers[0]] elif num_providers == 0: data["provider"] = schema.ProviderEnum.local.value + return data @@ -721,26 +734,20 @@ def state_imports(self) -> List[Tuple[str, str]]: def tf_objects(self) -> List[Dict]: if self.config.provider == schema.ProviderEnum.gcp: return [ - terraform.Provider( + opentofu.Provider( "google", project=self.config.google_cloud_platform.project, region=self.config.google_cloud_platform.region, ), NebariTerraformState(self.name, self.config), ] - elif self.config.provider == schema.ProviderEnum.do: - return [ - NebariTerraformState(self.name, self.config), - ] elif 
self.config.provider == schema.ProviderEnum.azure: return [ NebariTerraformState(self.name, self.config), ] elif self.config.provider == schema.ProviderEnum.aws: return [ - terraform.Provider( - "aws", region=self.config.amazon_web_services.region - ), + opentofu.Provider("aws", region=self.config.amazon_web_services.region), NebariTerraformState(self.name, self.config), ] else: @@ -755,15 +762,6 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]): return ExistingInputVars( kube_context=self.config.existing.kube_context ).model_dump() - elif self.config.provider == schema.ProviderEnum.do: - return DigitalOceanInputVars( - name=self.config.escaped_project_name, - environment=self.config.namespace, - region=self.config.digital_ocean.region, - tags=self.config.digital_ocean.tags, - kubernetes_version=self.config.digital_ocean.kubernetes_version, - node_groups=self.config.digital_ocean.node_groups, - ).model_dump() elif self.config.provider == schema.ProviderEnum.gcp: return GCPInputVars( name=self.config.escaped_project_name, @@ -804,6 +802,7 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]): environment=self.config.namespace, region=self.config.azure.region, kubernetes_version=self.config.azure.kubernetes_version, + authorized_ip_ranges=self.config.azure.authorized_ip_ranges, node_groups={ name: AzureNodeGroupInputVars( instance=node_group.instance, @@ -829,12 +828,14 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]): network_profile=self.config.azure.network_profile, max_pods=self.config.azure.max_pods, workload_identity_enabled=self.config.azure.workload_identity_enabled, + azure_policy_enabled=self.config.azure.azure_policy_enabled, ).model_dump() elif self.config.provider == schema.ProviderEnum.aws: return AWSInputVars( name=self.config.escaped_project_name, environment=self.config.namespace, eks_endpoint_access=self.config.amazon_web_services.eks_endpoint_access, + 
eks_kms_arn=self.config.amazon_web_services.eks_kms_arn, existing_subnet_ids=self.config.amazon_web_services.existing_subnet_ids, existing_security_group_id=self.config.amazon_web_services.existing_security_group_id, region=self.config.amazon_web_services.region, @@ -849,10 +850,10 @@ def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]): max_size=node_group.max_nodes, single_subnet=node_group.single_subnet, permissions_boundary=node_group.permissions_boundary, - launch_template=node_group.launch_template, + launch_template=None, ami_type=construct_aws_ami_type( gpu_enabled=node_group.gpu, - launch_template=node_group.launch_template, + launch_template=None, ), ) for name, node_group in self.config.amazon_web_services.node_groups.items() diff --git a/src/_nebari/stages/infrastructure/template/aws/main.tf b/src/_nebari/stages/infrastructure/template/aws/main.tf index feffd35291..ec0cbb6606 100644 --- a/src/_nebari/stages/infrastructure/template/aws/main.tf +++ b/src/_nebari/stages/infrastructure/template/aws/main.tf @@ -99,6 +99,7 @@ module "kubernetes" { endpoint_public_access = var.eks_endpoint_access == "private" ? false : true endpoint_private_access = var.eks_endpoint_access == "public" ? false : true + eks_kms_arn = var.eks_kms_arn public_access_cidrs = var.eks_public_access_cidrs permissions_boundary = var.permissions_boundary } diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf index 5b66201f83..2537b12dad 100644 --- a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf +++ b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/main.tf @@ -14,8 +14,20 @@ resource "aws_eks_cluster" "main" { public_access_cidrs = var.public_access_cidrs } + # Only set encryption_config if eks_kms_arn is not null + dynamic "encryption_config" { + for_each = var.eks_kms_arn != null ? 
[1] : [] + content { + provider { + key_arn = var.eks_kms_arn + } + resources = ["secrets"] + } + } + depends_on = [ aws_iam_role_policy_attachment.cluster-policy, + aws_iam_role_policy_attachment.cluster_encryption, ] tags = merge({ Name = var.name }, var.tags) @@ -135,6 +147,9 @@ resource "aws_eks_addon" "aws-ebs-csi-driver" { "eks.amazonaws.com/nodegroup" = "general" } } + defaultStorageClass = { + enabled = true + } }) # Ensure cluster and node groups are created diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf index 6916bc6532..d72b64edaa 100644 --- a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf +++ b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/policy.tf @@ -32,6 +32,33 @@ resource "aws_iam_role_policy_attachment" "cluster-policy" { role = aws_iam_role.cluster.name } +data "aws_iam_policy_document" "cluster_encryption" { + count = var.eks_kms_arn != null ? 1 : 0 + statement { + actions = [ + "kms:Encrypt", + "kms:Decrypt", + "kms:ListGrants", + "kms:DescribeKey" + ] + resources = [var.eks_kms_arn] + } +} + +resource "aws_iam_policy" "cluster_encryption" { + count = var.eks_kms_arn != null ? 1 : 0 + name = "${var.name}-eks-encryption-policy" + description = "IAM policy for EKS cluster encryption" + policy = data.aws_iam_policy_document.cluster_encryption[count.index].json +} + +# Grant the EKS Cluster role KMS permissions if a key-arn is specified +resource "aws_iam_role_policy_attachment" "cluster_encryption" { + count = var.eks_kms_arn != null ? 
1 : 0 + policy_arn = aws_iam_policy.cluster_encryption[count.index].arn + role = aws_iam_role.cluster.name +} + # ======================================================= # Kubernetes Node Group Policies # ======================================================= diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf index 4d38d10a19..63558e550f 100644 --- a/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf +++ b/src/_nebari/stages/infrastructure/template/aws/modules/kubernetes/variables.tf @@ -72,6 +72,12 @@ variable "endpoint_private_access" { default = false } +variable "eks_kms_arn" { + description = "kms key arn for EKS cluster encryption_config" + type = string + default = null +} + variable "public_access_cidrs" { type = list(string) default = ["0.0.0.0/0"] diff --git a/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf b/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf index da42767976..326da1e4bb 100644 --- a/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf +++ b/src/_nebari/stages/infrastructure/template/aws/modules/network/main.tf @@ -55,6 +55,7 @@ resource "aws_security_group" "main" { vpc_id = aws_vpc.main.id ingress { + description = "Allow all ports and protocols to enter the security group" from_port = 0 to_port = 0 protocol = "-1" @@ -62,6 +63,7 @@ resource "aws_security_group" "main" { } egress { + description = "Allow all ports and protocols to exit the security group" from_port = 0 to_port = 0 protocol = "-1" diff --git a/src/_nebari/stages/infrastructure/template/aws/variables.tf b/src/_nebari/stages/infrastructure/template/aws/variables.tf index a3f37b9eb9..a71df81d0f 100644 --- a/src/_nebari/stages/infrastructure/template/aws/variables.tf +++ b/src/_nebari/stages/infrastructure/template/aws/variables.tf @@ -69,6 +69,12 @@ variable 
"eks_endpoint_private_access" { default = false } +variable "eks_kms_arn" { + description = "kms key arn for EKS cluster encryption_config" + type = string + default = null +} + variable "eks_public_access_cidrs" { type = list(string) default = ["0.0.0.0/0"] diff --git a/src/_nebari/stages/infrastructure/template/azure/main.tf b/src/_nebari/stages/infrastructure/template/azure/main.tf index 2d6e2e2afa..960b755f8c 100644 --- a/src/_nebari/stages/infrastructure/template/azure/main.tf +++ b/src/_nebari/stages/infrastructure/template/azure/main.tf @@ -28,6 +28,7 @@ module "kubernetes" { kubernetes_version = var.kubernetes_version tags = var.tags max_pods = var.max_pods + authorized_ip_ranges = var.authorized_ip_ranges network_profile = var.network_profile @@ -43,4 +44,5 @@ module "kubernetes" { vnet_subnet_id = var.vnet_subnet_id private_cluster_enabled = var.private_cluster_enabled workload_identity_enabled = var.workload_identity_enabled + azure_policy_enabled = var.azure_policy_enabled } diff --git a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf index f093f048c6..f97f1f6383 100644 --- a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf +++ b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/main.tf @@ -4,6 +4,9 @@ resource "azurerm_kubernetes_cluster" "main" { location = var.location resource_group_name = var.resource_group_name tags = var.tags + api_server_access_profile { + authorized_ip_ranges = var.authorized_ip_ranges + } # To enable Azure AD Workload Identity oidc_issuer_enabled must be set to true. oidc_issuer_enabled = var.workload_identity_enabled @@ -15,6 +18,9 @@ resource "azurerm_kubernetes_cluster" "main" { # Azure requires that a new, non-existent Resource Group is used, as otherwise the provisioning of the Kubernetes Service will fail. 
node_resource_group = var.node_resource_group_name private_cluster_enabled = var.private_cluster_enabled + # https://learn.microsoft.com/en-ie/azure/governance/policy/concepts/policy-for-kubernetes + azure_policy_enabled = var.azure_policy_enabled + dynamic "network_profile" { for_each = var.network_profile != null ? [var.network_profile] : [] diff --git a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf index b93a9fae2d..95d2045420 100644 --- a/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf +++ b/src/_nebari/stages/infrastructure/template/azure/modules/kubernetes/variables.tf @@ -76,3 +76,15 @@ variable "workload_identity_enabled" { type = bool default = false } + +variable "authorized_ip_ranges" { + description = "The ip range allowed to access the Kubernetes API server, defaults to 0.0.0.0/0" + type = list(string) + default = ["0.0.0.0/0"] +} + +variable "azure_policy_enabled" { + description = "Enable Azure Policy" + type = bool + default = false +} diff --git a/src/_nebari/stages/infrastructure/template/azure/variables.tf b/src/_nebari/stages/infrastructure/template/azure/variables.tf index dcef2c97cb..44ef90463f 100644 --- a/src/_nebari/stages/infrastructure/template/azure/variables.tf +++ b/src/_nebari/stages/infrastructure/template/azure/variables.tf @@ -82,3 +82,15 @@ variable "workload_identity_enabled" { type = bool default = false } + +variable "authorized_ip_ranges" { + description = "The ip range allowed to access the Kubernetes API server, defaults to 0.0.0.0/0" + type = list(string) + default = ["0.0.0.0/0"] +} + +variable "azure_policy_enabled" { + description = "Enable Azure Policy" + type = bool + default = false +} diff --git a/src/_nebari/stages/infrastructure/template/do/main.tf b/src/_nebari/stages/infrastructure/template/do/main.tf deleted file mode 100644 index 30a7aa2966..0000000000 --- 
a/src/_nebari/stages/infrastructure/template/do/main.tf +++ /dev/null @@ -1,25 +0,0 @@ -module "kubernetes" { - source = "./modules/kubernetes" - - name = "${var.name}-${var.environment}" - - region = var.region - kubernetes_version = var.kubernetes_version - - node_groups = [ - for name, config in var.node_groups : { - name = name - auto_scale = true - size = config.instance - min_nodes = config.min_nodes - max_nodes = config.max_nodes - } - ] - - tags = concat([ - "provision::terraform", - "project::${var.name}", - "namespace::${var.environment}", - "owner::nebari", - ], var.tags) -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/locals.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/locals.tf deleted file mode 100644 index d88a874c5c..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/locals.tf +++ /dev/null @@ -1,5 +0,0 @@ -locals { - master_node_group = var.node_groups[0] - - additional_node_groups = slice(var.node_groups, 1, length(var.node_groups)) -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/main.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/main.tf deleted file mode 100644 index 0d1ce76a35..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/main.tf +++ /dev/null @@ -1,35 +0,0 @@ -resource "digitalocean_kubernetes_cluster" "main" { - name = var.name - region = var.region - - # Grab the latest from `doctl kubernetes options versions` - version = var.kubernetes_version - - node_pool { - name = local.master_node_group.name - # List available regions `doctl kubernetes options sizes` - size = lookup(local.master_node_group, "size", "s-1vcpu-2gb") - node_count = lookup(local.master_node_group, "node_count", 1) - } - - tags = var.tags -} - -resource "digitalocean_kubernetes_node_pool" "main" { - count = length(local.additional_node_groups) - - cluster_id = 
digitalocean_kubernetes_cluster.main.id - - name = local.additional_node_groups[count.index].name - size = lookup(local.additional_node_groups[count.index], "size", "s-1vcpu-2gb") - - auto_scale = lookup(local.additional_node_groups[count.index], "auto_scale", true) - min_nodes = lookup(local.additional_node_groups[count.index], "min_nodes", 1) - max_nodes = lookup(local.additional_node_groups[count.index], "max_nodes", 1) - - labels = { - "nebari.dev/node_group" : local.additional_node_groups[count.index].name - } - - tags = var.tags -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/outputs.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/outputs.tf deleted file mode 100644 index e2e1c2c6be..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/outputs.tf +++ /dev/null @@ -1,16 +0,0 @@ -output "credentials" { - description = "Credentials needs to connect to kubernetes instance" - value = { - endpoint = digitalocean_kubernetes_cluster.main.endpoint - token = digitalocean_kubernetes_cluster.main.kube_config[0].token - cluster_ca_certificate = base64decode( - digitalocean_kubernetes_cluster.main.kube_config[0].cluster_ca_certificate - ) - } -} - - -output "kubeconfig" { - description = "Kubeconfig for connecting to kubernetes cluster" - value = digitalocean_kubernetes_cluster.main.kube_config.0.raw_config -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/variables.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/variables.tf deleted file mode 100644 index 67843a7820..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/variables.tf +++ /dev/null @@ -1,29 +0,0 @@ -variable "name" { - description = "Prefix name to assign to digital ocean kubernetes cluster" - type = string -} - -variable "tags" { - description = "Additional tags to apply to each kubernetes resource" - type = set(string) - default = [] -} - -# 
`doctl kubernetes options regions` -variable "region" { - description = "Region to deploy digital ocean kubernetes resource" - type = string - default = "nyc1" -} - -# `doctl kubernetes options versions` -variable "kubernetes_version" { - description = "Version of digital ocean kubernetes resource" - type = string - default = "1.18.8-do.0" -} - -variable "node_groups" { - description = "List of node groups to include in digital ocean kubernetes cluster" - type = list(map(any)) -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/versions.tf b/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/versions.tf deleted file mode 100644 index b320a102dd..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/kubernetes/versions.tf +++ /dev/null @@ -1,9 +0,0 @@ -terraform { - required_providers { - digitalocean = { - source = "digitalocean/digitalocean" - version = "2.29.0" - } - } - required_version = ">= 1.0" -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/registry/main.tf b/src/_nebari/stages/infrastructure/template/do/modules/registry/main.tf deleted file mode 100644 index 14e6896030..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/registry/main.tf +++ /dev/null @@ -1,4 +0,0 @@ -resource "digitalocean_container_registry" "registry" { - name = var.name - subscription_tier_slug = "starter" -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/registry/variable.tf b/src/_nebari/stages/infrastructure/template/do/modules/registry/variable.tf deleted file mode 100644 index fce96bef08..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/registry/variable.tf +++ /dev/null @@ -1,4 +0,0 @@ -variable "name" { - description = "Prefix name to git container registry" - type = string -} diff --git a/src/_nebari/stages/infrastructure/template/do/modules/registry/versions.tf b/src/_nebari/stages/infrastructure/template/do/modules/registry/versions.tf 
deleted file mode 100644 index b320a102dd..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/modules/registry/versions.tf +++ /dev/null @@ -1,9 +0,0 @@ -terraform { - required_providers { - digitalocean = { - source = "digitalocean/digitalocean" - version = "2.29.0" - } - } - required_version = ">= 1.0" -} diff --git a/src/_nebari/stages/infrastructure/template/do/outputs.tf b/src/_nebari/stages/infrastructure/template/do/outputs.tf deleted file mode 100644 index 53aae17634..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/outputs.tf +++ /dev/null @@ -1,21 +0,0 @@ -output "kubernetes_credentials" { - description = "Parameters needed to connect to kubernetes cluster" - sensitive = true - value = { - host = module.kubernetes.credentials.endpoint - cluster_ca_certificate = module.kubernetes.credentials.cluster_ca_certificate - token = module.kubernetes.credentials.token - } -} - -resource "local_file" "kubeconfig" { - count = var.kubeconfig_filename != null ? 1 : 0 - - content = module.kubernetes.kubeconfig - filename = var.kubeconfig_filename -} - -output "kubeconfig_filename" { - description = "filename for nebari kubeconfig" - value = var.kubeconfig_filename -} diff --git a/src/_nebari/stages/infrastructure/template/do/providers.tf b/src/_nebari/stages/infrastructure/template/do/providers.tf deleted file mode 100644 index a877aca363..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/providers.tf +++ /dev/null @@ -1,3 +0,0 @@ -provider "digitalocean" { - -} diff --git a/src/_nebari/stages/infrastructure/template/do/variables.tf b/src/_nebari/stages/infrastructure/template/do/variables.tf deleted file mode 100644 index b31a1ab039..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/variables.tf +++ /dev/null @@ -1,40 +0,0 @@ -variable "name" { - description = "Prefix name to assign to nebari resources" - type = string -} - -variable "environment" { - description = "Environment to create Kubernetes resources" - type = 
string -} - -variable "region" { - description = "DigitalOcean region" - type = string -} - -variable "tags" { - description = "DigitalOcean tags to assign to resources" - type = list(string) - default = [] -} - -variable "kubernetes_version" { - description = "DigitalOcean kubernetes version" - type = string -} - -variable "node_groups" { - description = "DigitalOcean node groups" - type = map(object({ - instance = string - min_nodes = number - max_nodes = number - })) -} - -variable "kubeconfig_filename" { - description = "Kubernetes kubeconfig written to filesystem" - type = string - default = null -} diff --git a/src/_nebari/stages/infrastructure/template/do/versions.tf b/src/_nebari/stages/infrastructure/template/do/versions.tf deleted file mode 100644 index b320a102dd..0000000000 --- a/src/_nebari/stages/infrastructure/template/do/versions.tf +++ /dev/null @@ -1,9 +0,0 @@ -terraform { - required_providers { - digitalocean = { - source = "digitalocean/digitalocean" - version = "2.29.0" - } - } - required_version = ">= 1.0" -} diff --git a/src/_nebari/stages/infrastructure/template/gcp/main.tf b/src/_nebari/stages/infrastructure/template/gcp/main.tf index 3d23af5571..ec80cefe16 100644 --- a/src/_nebari/stages/infrastructure/template/gcp/main.tf +++ b/src/_nebari/stages/infrastructure/template/gcp/main.tf @@ -5,6 +5,9 @@ data "google_compute_zones" "gcpzones" { module "registry-jupyterhub" { source = "./modules/registry" + + repository_id = "${var.name}-${var.environment}" + location = var.region } diff --git a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf index a4e35bf1a3..9403872737 100644 --- a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf +++ b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/main.tf @@ -1,3 +1,6 @@ -resource "google_container_registry" "registry" { - location = var.location +resource 
"google_artifact_registry_repository" "registry" { + # https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/artifact_registry_repository#argument-reference + repository_id = var.repository_id + location = var.location + format = var.format } diff --git a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf index 39f6d5ed28..9162425fa1 100644 --- a/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf +++ b/src/_nebari/stages/infrastructure/template/gcp/modules/registry/variables.tf @@ -1,6 +1,17 @@ variable "location" { - # https://cloud.google.com/container-registry/docs/pushing-and-pulling#pushing_an_image_to_a_registry + # https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling description = "Location of registry" type = string - default = "US" +} + +variable "format" { + # https://cloud.google.com/artifact-registry/docs/reference/rest/v1/projects.locations.repositories#Format + description = "The format of packages that are stored in the repository" + type = string + default = "DOCKER" +} + +variable "repository_id" { + description = "Name of repository" + type = string } diff --git a/src/_nebari/stages/infrastructure/template/gcp/versions.tf b/src/_nebari/stages/infrastructure/template/gcp/versions.tf index ddea3c185c..92bd117367 100644 --- a/src/_nebari/stages/infrastructure/template/gcp/versions.tf +++ b/src/_nebari/stages/infrastructure/template/gcp/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { google = { source = "hashicorp/google" - version = "4.8.0" + version = "6.14.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/infrastructure/template/local/main.tf b/src/_nebari/stages/infrastructure/template/local/main.tf index fb0d0997e1..77aa799cbd 100644 --- a/src/_nebari/stages/infrastructure/template/local/main.tf +++ 
b/src/_nebari/stages/infrastructure/template/local/main.tf @@ -1,7 +1,7 @@ terraform { required_providers { kind = { - source = "tehcyx/kind" + source = "registry.terraform.io/tehcyx/kind" version = "0.4.0" } docker = { diff --git a/src/_nebari/stages/kubernetes_ingress/__init__.py b/src/_nebari/stages/kubernetes_ingress/__init__.py index ea5f8fa335..df70e12b1e 100644 --- a/src/_nebari/stages/kubernetes_ingress/__init__.py +++ b/src/_nebari/stages/kubernetes_ingress/__init__.py @@ -43,7 +43,6 @@ def provision_ingress_dns( record_name = ".".join(record_name) zone_name = ".".join(zone_name) if config.provider in { - schema.ProviderEnum.do, schema.ProviderEnum.gcp, schema.ProviderEnum.azure, }: diff --git a/src/_nebari/stages/kubernetes_ingress/template/versions.tf b/src/_nebari/stages/kubernetes_ingress/template/versions.tf index 341def1365..d1e5f8acfb 100644 --- a/src/_nebari/stages/kubernetes_ingress/template/versions.tf +++ b/src/_nebari/stages/kubernetes_ingress/template/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/kubernetes_initialize/template/versions.tf b/src/_nebari/stages/kubernetes_initialize/template/versions.tf index 341def1365..d1e5f8acfb 100644 --- a/src/_nebari/stages/kubernetes_initialize/template/versions.tf +++ b/src/_nebari/stages/kubernetes_initialize/template/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/kubernetes_keycloak/template/versions.tf b/src/_nebari/stages/kubernetes_keycloak/template/versions.tf index 341def1365..d1e5f8acfb 100644 --- a/src/_nebari/stages/kubernetes_keycloak/template/versions.tf +++ b/src/_nebari/stages/kubernetes_keycloak/template/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - 
version = "2.20.0" + version = "2.35.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf b/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf index 00353a6d2f..d3f87478e2 100644 --- a/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf +++ b/src/_nebari/stages/kubernetes_keycloak_configuration/template/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } keycloak = { source = "mrparkers/keycloak" diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf index 341def1365..d1e5f8acfb 100644 --- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf +++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/argo-workflows/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py index ad9b79843a..3136d891bd 100644 --- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py +++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/conda-store/config/conda_store_config.py @@ -10,9 +10,10 @@ from pathlib import Path import requests -from conda_store_server import api, orm, schema +from conda_store_server import api +from conda_store_server._internal.server.dependencies import get_conda_store +from 
conda_store_server.server import schema as auth_schema from conda_store_server.server.auth import GenericOAuthAuthentication -from conda_store_server.server.dependencies import get_conda_store from conda_store_server.storage import S3Storage @@ -356,7 +357,7 @@ def _get_conda_store_client_roles_for_user( return client_roles_rich def _get_current_entity_bindings(self, username): - entity = schema.AuthenticationToken( + entity = auth_schema.AuthenticationToken( primary_namespace=username, role_bindings={} ) self.log.info(f"entity: {entity}") @@ -386,7 +387,7 @@ async def authenticate(self, request): # superadmin gets access to everything if "conda_store_superadmin" in user_data.get("roles", []): - return schema.AuthenticationToken( + return auth_schema.AuthenticationToken( primary_namespace=username, role_bindings={"*/*": {"admin"}}, ) @@ -422,10 +423,9 @@ async def authenticate(self, request): for namespace in namespaces: _namespace = api.get_namespace(db, name=namespace) if _namespace is None: - db.add(orm.Namespace(name=namespace)) - db.commit() + api.ensure_namespace(db, name=namespace) - return schema.AuthenticationToken( + return auth_schema.AuthenticationToken( primary_namespace=username, role_bindings=role_bindings, ) diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf index bfee219e9e..23f2ac9334 100644 --- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf +++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/configmaps.tf @@ -60,6 +60,17 @@ resource "local_file" "overrides_json" { filename = "${path.module}/files/jupyterlab/overrides.json" } +resource "local_file" "page_config_json" { + content = jsonencode({ + "disabledExtensions" : { + "jupyterlab-jhub-apps" : !var.jhub-apps-enabled + }, + # 
`lockedExtensions` is an empty dict to signify that `jupyterlab-jhub-apps` is not being disabled and locked (but only disabled) + # which means users are still allowed to disable the jupyterlab-jhub-apps extension (if they have write access to page_config). + "lockedExtensions" : {} + }) + filename = "${path.module}/files/jupyterlab/page_config.json" +} resource "kubernetes_config_map" "etc-ipython" { metadata { @@ -92,6 +103,9 @@ locals { etc-jupyterlab-settings = { "overrides.json" = local_file.overrides_json.content } + etc-jupyterlab-page-config = { + "page_config.json" = local_file.page_config_json.content + } } resource "kubernetes_config_map" "etc-jupyter" { @@ -136,6 +150,20 @@ resource "kubernetes_config_map" "jupyterlab-settings" { data = local.etc-jupyterlab-settings } + +resource "kubernetes_config_map" "jupyterlab-page-config" { + depends_on = [ + local_file.page_config_json + ] + + metadata { + name = "jupyterlab-page-config" + namespace = var.namespace + } + + data = local.etc-jupyterlab-page-config +} + resource "kubernetes_config_map" "git_clone_update" { metadata { name = "git-clone-update" diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py index 09bb649c01..2557a497a7 100644 --- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py +++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/files/jupyterhub/02-spawner.py @@ -10,6 +10,9 @@ from kubespawner import KubeSpawner # noqa: E402 +# conda-store default page size +DEFAULT_PAGE_SIZE_LIMIT = 100 + @gen.coroutine def get_username_hook(spawner): @@ -23,25 +26,66 @@ def get_username_hook(spawner): ) +def get_total_records(url: str, token: str) -> int: + import urllib3 + + http = urllib3.PoolManager() + 
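The `local_file.page_config_json` resource above renders a JupyterLab `page_config.json` with Terraform's `jsonencode()`. A minimal Python sketch of the same payload (not taken from the PR; the `page_config` function name is an assumption for illustration) makes the disable/lock distinction concrete:

```python
import json

# Sketch of the payload rendered by the new `local_file.page_config_json`
# resource. The function name `page_config` is illustrative, not from the PR.
def page_config(jhub_apps_enabled: bool) -> str:
    return json.dumps({
        # the extension is disabled whenever jhub-apps is turned off
        "disabledExtensions": {"jupyterlab-jhub-apps": not jhub_apps_enabled},
        # left empty so the extension is disabled but not locked: users with
        # write access to page_config may still toggle it themselves
        "lockedExtensions": {},
    })

print(page_config(False))
```

With `jhub-apps-enabled = false`, `disabledExtensions` marks `jupyterlab-jhub-apps` as `true` (disabled), while the empty `lockedExtensions` leaves the setting user-overridable.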
response = http.request("GET", url, headers={"Authorization": f"Bearer {token}"}) + decoded_response = json.loads(response.data.decode("UTF-8")) + return decoded_response.get("count", 0) + + +def generate_paged_urls(base_url: str, total_records: int, page_size: int) -> list[str]: + import math + + urls = [] + # pages starts at 1 + for page in range(1, math.ceil(total_records / page_size) + 1): + urls.append(f"{base_url}?size={page_size}&page={page}") + + return urls + + +# TODO: this should get unit tests. Currently, since this is not a python module, +# adding tests in a traditional sense is not possible. See https://github.com/soapy1/nebari/tree/try-unit-test-spawner +# for a demo on one approach to adding test. def get_conda_store_environments(user_info: dict): + import os + import urllib3 - import yarl + + # Check for the environment variable `CONDA_STORE_API_PAGE_SIZE_LIMIT`. Fall + # back to using the default page size limit if not set. + page_size = os.environ.get( + "CONDA_STORE_API_PAGE_SIZE_LIMIT", DEFAULT_PAGE_SIZE_LIMIT + ) external_url = z2jh.get_config("custom.conda-store-service-name") token = z2jh.get_config("custom.conda-store-jhub-apps-token") endpoint = "conda-store/api/v1/environment" - url = yarl.URL(f"http://{external_url}/{endpoint}/") - + base_url = f"http://{external_url}/{endpoint}/" http = urllib3.PoolManager() - response = http.request( - "GET", str(url), headers={"Authorization": f"Bearer {token}"} - ) - # parse response - j = json.loads(response.data.decode("UTF-8")) + # get total number of records from the endpoint + total_records = get_total_records(base_url, token) + + # will contain all the environment info returned from the api + env_data = [] + + # generate a list of urls to hit to build the response + urls = generate_paged_urls(base_url, total_records, page_size) + + # get content from urls + for url in urls: + response = http.request( + "GET", url, headers={"Authorization": f"Bearer {token}"} + ) + decoded_response = 
json.loads(response.data.decode("UTF-8")) + env_data += decoded_response.get("data", []) + # Filter and return conda environments for the user - return [f"{env['namespace']['name']}-{env['name']}" for env in j.get("data", [])] + return [f"{env['namespace']['name']}-{env['name']}" for env in env_data] c.Spawner.pre_spawn_hook = get_username_hook diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf index a36090f41c..9a0675fc85 100644 --- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf +++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/jupyterhub/main.tf @@ -104,6 +104,11 @@ resource "helm_release" "jupyterhub" { kind = "configmap" } + "/etc/jupyter/labconfig" = { + name = kubernetes_config_map.jupyterlab-page-config.metadata.0.name + namespace = kubernetes_config_map.jupyterlab-page-config.metadata.0.namespace + kind = "configmap" + } } ) environments = var.conda-store-environments diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf index 341def1365..d1e5f8acfb 100644 --- a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf +++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/monitoring/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf index 341def1365..d1e5f8acfb 100644 --- 
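The spawner change above replaces a single unpaged request to the conda-store environments endpoint with a count query followed by one request per page. The page-splitting arithmetic can be sketched in isolation (mirroring `generate_paged_urls` and `DEFAULT_PAGE_SIZE_LIMIT` from the diff; the example URL is made up):

```python
import math

# conda-store default page size, as in the diff
DEFAULT_PAGE_SIZE_LIMIT = 100

def generate_paged_urls(base_url: str, total_records: int, page_size: int) -> list[str]:
    # conda-store pages start at 1, so ceil(total/size) pages cover all records
    return [
        f"{base_url}?size={page_size}&page={page}"
        for page in range(1, math.ceil(total_records / page_size) + 1)
    ]

# 250 records at page size 100 -> 3 pages
print(generate_paged_urls("http://conda-store/api/v1/environment/", 250,
                          DEFAULT_PAGE_SIZE_LIMIT))
```

Note the edge case: when `total_records` is 0, `math.ceil(0 / page_size)` is 0 and the range is empty, so no requests are issued at all.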
a/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf +++ b/src/_nebari/stages/kubernetes_services/template/modules/kubernetes/services/rook-ceph/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/kubernetes_services/template/versions.tf b/src/_nebari/stages/kubernetes_services/template/versions.tf index 00353a6d2f..d3f87478e2 100644 --- a/src/_nebari/stages/kubernetes_services/template/versions.tf +++ b/src/_nebari/stages/kubernetes_services/template/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } keycloak = { source = "mrparkers/keycloak" diff --git a/src/_nebari/stages/nebari_tf_extensions/template/versions.tf b/src/_nebari/stages/nebari_tf_extensions/template/versions.tf index 00353a6d2f..d3f87478e2 100644 --- a/src/_nebari/stages/nebari_tf_extensions/template/versions.tf +++ b/src/_nebari/stages/nebari_tf_extensions/template/versions.tf @@ -6,7 +6,7 @@ terraform { } kubernetes = { source = "hashicorp/kubernetes" - version = "2.20.0" + version = "2.35.1" } keycloak = { source = "mrparkers/keycloak" diff --git a/src/_nebari/stages/terraform_state/__init__.py b/src/_nebari/stages/terraform_state/__init__.py index e0f643ed3d..e9a18ba7c5 100644 --- a/src/_nebari/stages/terraform_state/__init__.py +++ b/src/_nebari/stages/terraform_state/__init__.py @@ -9,7 +9,7 @@ from pydantic import BaseModel, field_validator from _nebari import utils -from _nebari.provider import terraform +from _nebari.provider import opentofu from _nebari.provider.cloud import azure_cloud from _nebari.stages.base import NebariTerraformStage from _nebari.stages.tf_objects import NebariConfig @@ -22,12 +22,6 @@ from nebari.hookspecs import NebariStage, hookimpl -class DigitalOceanInputVars(schema.Base): - name: str - 
namespace: str - region: str - - class GCPInputVars(schema.Base): name: str namespace: str @@ -117,14 +111,7 @@ def stage_prefix(self): return pathlib.Path("stages") / self.name / self.config.provider.value def state_imports(self) -> List[Tuple[str, str]]: - if self.config.provider == schema.ProviderEnum.do: - return [ - ( - "module.terraform-state.module.spaces.digitalocean_spaces_bucket.main", - f"{self.config.digital_ocean.region},{self.config.project_name}-{self.config.namespace}-terraform-state", - ) - ] - elif self.config.provider == schema.ProviderEnum.gcp: + if self.config.provider == schema.ProviderEnum.gcp: return [ ( "module.terraform-state.module.gcs.google_storage_bucket.static-site", @@ -175,7 +162,7 @@ def tf_objects(self) -> List[Dict]: resources = [NebariConfig(self.config)] if self.config.provider == schema.ProviderEnum.gcp: return resources + [ - terraform.Provider( + opentofu.Provider( "google", project=self.config.google_cloud_platform.project, region=self.config.google_cloud_platform.region, @@ -183,21 +170,13 @@ def tf_objects(self) -> List[Dict]: ] elif self.config.provider == schema.ProviderEnum.aws: return resources + [ - terraform.Provider( - "aws", region=self.config.amazon_web_services.region - ), + opentofu.Provider("aws", region=self.config.amazon_web_services.region), ] else: return resources def input_vars(self, stage_outputs: Dict[str, Dict[str, Any]]): - if self.config.provider == schema.ProviderEnum.do: - return DigitalOceanInputVars( - name=self.config.project_name, - namespace=self.config.namespace, - region=self.config.digital_ocean.region, - ).model_dump() - elif self.config.provider == schema.ProviderEnum.gcp: + if self.config.provider == schema.ProviderEnum.gcp: return GCPInputVars( name=self.config.project_name, namespace=self.config.namespace, @@ -236,19 +215,10 @@ def deploy( ): self.check_immutable_fields() - # No need to run terraform init here as it's being called when running the + # No need to run tofu init here as 
it's being called when running the # terraform show command, inside check_immutable_fields - with super().deploy(stage_outputs, disable_prompt, terraform_init=False): + with super().deploy(stage_outputs, disable_prompt, tofu_init=False): env_mapping = {} - # DigitalOcean terraform remote state using Spaces Bucket - # assumes aws credentials thus we set them to match spaces credentials - if self.config.provider == schema.ProviderEnum.do: - env_mapping.update( - { - "AWS_ACCESS_KEY_ID": os.environ["SPACES_ACCESS_KEY_ID"], - "AWS_SECRET_ACCESS_KEY": os.environ["SPACES_SECRET_ACCESS_KEY"], - } - ) with modified_environ(**env_mapping): yield @@ -292,7 +262,7 @@ def check_immutable_fields(self): def get_nebari_config_state(self) -> dict: directory = str(self.output_directory / self.stage_prefix) - tf_state = terraform.show(directory) + tf_state = opentofu.show(directory) nebari_config_state = None # get nebari config from state @@ -310,15 +280,6 @@ def destroy( ): with super().destroy(stage_outputs, status): env_mapping = {} - # DigitalOcean terraform remote state using Spaces Bucket - # assumes aws credentials thus we set them to match spaces credentials - if self.config.provider == schema.ProviderEnum.do: - env_mapping.update( - { - "AWS_ACCESS_KEY_ID": os.environ["SPACES_ACCESS_KEY_ID"], - "AWS_SECRET_ACCESS_KEY": os.environ["SPACES_SECRET_ACCESS_KEY"], - } - ) with modified_environ(**env_mapping): yield diff --git a/src/_nebari/stages/terraform_state/template/do/main.tf b/src/_nebari/stages/terraform_state/template/do/main.tf deleted file mode 100644 index a6db74f74d..0000000000 --- a/src/_nebari/stages/terraform_state/template/do/main.tf +++ /dev/null @@ -1,35 +0,0 @@ -variable "name" { - description = "Prefix name to assign to Nebari resources" - type = string -} - -variable "namespace" { - description = "Namespace to create Kubernetes resources" - type = string -} - -variable "region" { - description = "Region for Digital Ocean deployment" - type = string -} - 
-provider "digitalocean" { - -} - -module "terraform-state" { - source = "./modules/terraform-state" - - name = "${var.name}-${var.namespace}" - region = var.region -} - -terraform { - required_providers { - digitalocean = { - source = "digitalocean/digitalocean" - version = "2.29.0" - } - } - required_version = ">= 1.0" -} diff --git a/src/_nebari/stages/terraform_state/template/do/modules/spaces/main.tf b/src/_nebari/stages/terraform_state/template/do/modules/spaces/main.tf deleted file mode 100644 index fc2d34c604..0000000000 --- a/src/_nebari/stages/terraform_state/template/do/modules/spaces/main.tf +++ /dev/null @@ -1,12 +0,0 @@ -resource "digitalocean_spaces_bucket" "main" { - name = var.name - region = var.region - - force_destroy = var.force_destroy - - acl = (var.public ? "public-read" : "private") - - versioning { - enabled = false - } -} diff --git a/src/_nebari/stages/terraform_state/template/do/modules/spaces/variables.tf b/src/_nebari/stages/terraform_state/template/do/modules/spaces/variables.tf deleted file mode 100644 index db24a3dce5..0000000000 --- a/src/_nebari/stages/terraform_state/template/do/modules/spaces/variables.tf +++ /dev/null @@ -1,21 +0,0 @@ -variable "name" { - description = "Prefix name for bucket resource" - type = string -} - -variable "region" { - description = "Region for Digital Ocean bucket" - type = string -} - -variable "force_destroy" { - description = "force_destroy all bucket contents when bucket is deleted" - type = bool - default = false -} - -variable "public" { - description = "Digital Ocean s3 bucket is exposed publicly" - type = bool - default = false -} diff --git a/src/_nebari/stages/terraform_state/template/do/modules/spaces/versions.tf b/src/_nebari/stages/terraform_state/template/do/modules/spaces/versions.tf deleted file mode 100644 index b320a102dd..0000000000 --- a/src/_nebari/stages/terraform_state/template/do/modules/spaces/versions.tf +++ /dev/null @@ -1,9 +0,0 @@ -terraform { - required_providers { - 
digitalocean = { - source = "digitalocean/digitalocean" - version = "2.29.0" - } - } - required_version = ">= 1.0" -} diff --git a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/main.tf b/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/main.tf deleted file mode 100644 index e3445f362d..0000000000 --- a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/main.tf +++ /dev/null @@ -1,9 +0,0 @@ -module "spaces" { - source = "../spaces" - - name = "${var.name}-terraform-state" - region = var.region - public = false - - force_destroy = true -} diff --git a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/variables.tf b/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/variables.tf deleted file mode 100644 index 8010647d39..0000000000 --- a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/variables.tf +++ /dev/null @@ -1,9 +0,0 @@ -variable "name" { - description = "Prefix name for terraform state" - type = string -} - -variable "region" { - description = "Region for terraform state" - type = string -} diff --git a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/versions.tf b/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/versions.tf deleted file mode 100644 index b320a102dd..0000000000 --- a/src/_nebari/stages/terraform_state/template/do/modules/terraform-state/versions.tf +++ /dev/null @@ -1,9 +0,0 @@ -terraform { - required_providers { - digitalocean = { - source = "digitalocean/digitalocean" - version = "2.29.0" - } - } - required_version = ">= 1.0" -} diff --git a/src/_nebari/stages/terraform_state/template/gcp/main.tf b/src/_nebari/stages/terraform_state/template/gcp/main.tf index dea6c03ac0..34a45d354a 100644 --- a/src/_nebari/stages/terraform_state/template/gcp/main.tf +++ b/src/_nebari/stages/terraform_state/template/gcp/main.tf @@ -24,7 +24,7 @@ terraform { required_providers { google = 
{ source = "hashicorp/google" - version = "4.83.0" + version = "6.14.1" } } required_version = ">= 1.0" diff --git a/src/_nebari/stages/tf_objects.py b/src/_nebari/stages/tf_objects.py index 04c6d434aa..28884d4789 100644 --- a/src/_nebari/stages/tf_objects.py +++ b/src/_nebari/stages/tf_objects.py @@ -1,4 +1,4 @@ -from _nebari.provider.terraform import Data, Provider, Resource, TerraformBackend +from _nebari.provider.opentofu import Data, Provider, Resource, TerraformBackend from _nebari.utils import ( AZURE_TF_STATE_RESOURCE_GROUP_SUFFIX, construct_azure_resource_group_name, @@ -69,16 +69,6 @@ def NebariTerraformState(directory: str, nebari_config: schema.Main): bucket=f"{nebari_config.escaped_project_name}-{nebari_config.namespace}-terraform-state", prefix=f"terraform/{nebari_config.escaped_project_name}/{directory}", ) - elif nebari_config.provider == "do": - return TerraformBackend( - "s3", - endpoint=f"{nebari_config.digital_ocean.region}.digitaloceanspaces.com", - region="us-west-1", # fake aws region required by terraform - bucket=f"{nebari_config.escaped_project_name}-{nebari_config.namespace}-terraform-state", - key=f"terraform/{nebari_config.escaped_project_name}-{nebari_config.namespace}/{directory}.tfstate", - skip_credentials_validation=True, - skip_metadata_api_check=True, - ) elif nebari_config.provider == "azure": return TerraformBackend( "azurerm", diff --git a/src/_nebari/subcommands/info.py b/src/_nebari/subcommands/info.py index 1a36afceb1..3f5999e300 100644 --- a/src/_nebari/subcommands/info.py +++ b/src/_nebari/subcommands/info.py @@ -10,12 +10,19 @@ @hookimpl def nebari_subcommand(cli: typer.Typer): + EXTERNAL_PLUGIN_STYLE = "cyan" + @cli.command() def info(ctx: typer.Context): + """ + Display information about installed Nebari plugins and their configurations. 
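The `tf_objects.py` hunk above drops the DigitalOcean branch from `NebariTerraformState`, leaving per-provider remote-state backends such as `gcs` and `azurerm`. A minimal sketch of that dispatch pattern (plain dicts and a hypothetical function name stand in for the real `TerraformBackend` objects):

```python
# Hypothetical sketch of the per-provider backend dispatch; the real code
# returns TerraformBackend objects rather than plain dicts.
def terraform_backend(provider: str, project: str, namespace: str, directory: str) -> dict:
    bucket = f"{project}-{namespace}-terraform-state"
    if provider == "gcp":
        return {
            "backend": "gcs",
            "bucket": bucket,
            "prefix": f"terraform/{project}/{directory}",
        }
    if provider == "azure":
        # Resource-group and storage-account details elided for brevity.
        return {"backend": "azurerm", "container_name": bucket}
    raise ValueError(f"no remote state backend for provider {provider!r}")
```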
+ """ from nebari.plugins import nebari_plugin_manager rich.print(f"Nebari version: {__version__}") + external_plugins = nebari_plugin_manager.get_external_plugins() + hooks = collections.defaultdict(list) for plugin in nebari_plugin_manager.plugin_manager.get_plugins(): for hook in nebari_plugin_manager.plugin_manager.get_hookcallers(plugin): @@ -27,7 +34,8 @@ def info(ctx: typer.Context): for hook_name, modules in hooks.items(): for module in modules: - table.add_row(hook_name, module) + style = EXTERNAL_PLUGIN_STYLE if module in external_plugins else None + table.add_row(hook_name, module, style=style) rich.print(table) @@ -36,8 +44,14 @@ def info(ctx: typer.Context): table.add_column("priority") table.add_column("module") for stage in nebari_plugin_manager.ordered_stages: + style = ( + EXTERNAL_PLUGIN_STYLE if stage.__module__ in external_plugins else None + ) table.add_row( - stage.name, str(stage.priority), f"{stage.__module__}.{stage.__name__}" + stage.name, + str(stage.priority), + f"{stage.__module__}.{stage.__name__}", + style=style, ) rich.print(table) diff --git a/src/_nebari/subcommands/init.py b/src/_nebari/subcommands/init.py index 743d30cb40..c2f8d416e9 100644 --- a/src/_nebari/subcommands/init.py +++ b/src/_nebari/subcommands/init.py @@ -13,16 +13,10 @@ from _nebari.constants import ( AWS_DEFAULT_REGION, AZURE_DEFAULT_REGION, - DO_DEFAULT_REGION, GCP_DEFAULT_REGION, ) from _nebari.initialize import render_config -from _nebari.provider.cloud import ( - amazon_web_services, - azure_cloud, - digital_ocean, - google_cloud, -) +from _nebari.provider.cloud import amazon_web_services, azure_cloud, google_cloud from _nebari.stages.bootstrap import CiEnum from _nebari.stages.kubernetes_keycloak import AuthenticationEnum from _nebari.stages.terraform_state import TerraformStateEnum @@ -44,18 +38,13 @@ CREATE_GCP_CREDS = ( "https://cloud.google.com/iam/docs/creating-managing-service-accounts" ) -CREATE_DO_CREDS = ( - 
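The `info.py` changes above pass a per-row `style` to `rich`'s `Table.add_row` so that modules belonging to external plugins are highlighted. The selection logic reduces to a small pure function (a sketch; `external_plugins` is assumed to be a collection of module names):

```python
EXTERNAL_PLUGIN_STYLE = "cyan"

def row_style(module, external_plugins):
    # rich accepts style=None for default rendering, so built-in modules
    # fall through unstyled while external plugin modules render in cyan.
    return EXTERNAL_PLUGIN_STYLE if module in external_plugins else None
```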
"https://docs.digitalocean.com/reference/api/create-personal-access-token" -) CREATE_AZURE_CREDS = "https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#creating-a-service-principal-in-the-azure-portal" CREATE_AUTH0_CREDS = "https://auth0.com/docs/get-started/auth0-overview/create-applications/machine-to-machine-apps" CREATE_GITHUB_OAUTH_CREDS = "https://docs.github.com/en/developers/apps/building-oauth-apps/creating-an-oauth-app" AWS_REGIONS = "https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions" GCP_REGIONS = "https://cloud.google.com/compute/docs/regions-zones" AZURE_REGIONS = "https://azure.microsoft.com/en-us/explore/global-infrastructure/geographies/#overview" -DO_REGIONS = ( - "https://docs.digitalocean.com/products/platform/availability-matrix/#regions" -) + # links to Nebari docs DOCS_HOME = "https://nebari.dev/docs/" @@ -78,7 +67,6 @@ CLOUD_PROVIDER_FULL_NAME = { "Local": ProviderEnum.local.name, "Existing": ProviderEnum.existing.name, - "Digital Ocean": ProviderEnum.do.name, "Amazon Web Services": ProviderEnum.aws.name, "Google Cloud Platform": ProviderEnum.gcp.name, "Microsoft Azure": ProviderEnum.azure.name, @@ -105,6 +93,7 @@ class InitInputs(schema.Base): region: Optional[str] = None ssl_cert_email: Optional[schema.email_pydantic] = None disable_prompt: bool = False + config_set: Optional[str] = None output: pathlib.Path = pathlib.Path("nebari-config.yaml") explicit: int = 0 @@ -120,8 +109,6 @@ def get_region_docs(cloud_provider: str): return GCP_REGIONS elif cloud_provider == ProviderEnum.azure.value.lower(): return AZURE_REGIONS - elif cloud_provider == ProviderEnum.do.value.lower(): - return DO_REGIONS def handle_init(inputs: InitInputs, config_schema: BaseModel): @@ -148,6 +135,7 @@ def handle_init(inputs: InitInputs, config_schema: BaseModel): terraform_state=inputs.terraform_state, ssl_cert_email=inputs.ssl_cert_email, 
disable_prompt=inputs.disable_prompt, + config_set=inputs.config_set, ) try: @@ -312,36 +300,6 @@ def check_cloud_provider_creds(cloud_provider: ProviderEnum, disable_prompt: boo hide_input=True, ) - # DO - elif cloud_provider == ProviderEnum.do.value.lower() and ( - not os.environ.get("DIGITALOCEAN_TOKEN") - or not os.environ.get("SPACES_ACCESS_KEY_ID") - or not os.environ.get("SPACES_SECRET_ACCESS_KEY") - ): - rich.print( - MISSING_CREDS_TEMPLATE.format( - provider="Digital Ocean", link_to_docs=CREATE_DO_CREDS - ) - ) - - os.environ["DIGITALOCEAN_TOKEN"] = typer.prompt( - "Paste your DIGITALOCEAN_TOKEN", - hide_input=True, - ) - os.environ["SPACES_ACCESS_KEY_ID"] = typer.prompt( - "Paste your SPACES_ACCESS_KEY_ID", - hide_input=True, - ) - os.environ["SPACES_SECRET_ACCESS_KEY"] = typer.prompt( - "Paste your SPACES_SECRET_ACCESS_KEY", - hide_input=True, - ) - # Set spaces credentials. Spaces are API compatible with s3 - # Setting spaces credentials to AWS credentials allows us to - # reuse s3 code - os.environ["AWS_ACCESS_KEY_ID"] = os.getenv("SPACES_ACCESS_KEY_ID") - os.environ["AWS_SECRET_ACCESS_KEY"] = os.getenv("SPACES_SECRET_ACCESS_KEY") - # AZURE elif cloud_provider == ProviderEnum.azure.value.lower() and ( not os.environ.get("ARM_CLIENT_ID") @@ -409,29 +367,17 @@ def check_cloud_provider_kubernetes_version( versions = google_cloud.kubernetes_versions(region) if not kubernetes_version or kubernetes_version == LATEST: - kubernetes_version = get_latest_kubernetes_version(versions) - rich.print( - DEFAULT_KUBERNETES_VERSION_MSG.format( - kubernetes_version=kubernetes_version - ) + kubernetes_version = google_cloud.get_patch_version( + get_latest_kubernetes_version(versions) ) - if kubernetes_version not in versions: - raise ValueError( - f"Invalid Kubernetes version `{kubernetes_version}`. 
Please refer to the GCP docs for a list of valid versions: {versions}" - ) - elif cloud_provider == ProviderEnum.do.value.lower(): - versions = digital_ocean.kubernetes_versions() - - if not kubernetes_version or kubernetes_version == LATEST: - kubernetes_version = get_latest_kubernetes_version(versions) rich.print( DEFAULT_KUBERNETES_VERSION_MSG.format( kubernetes_version=kubernetes_version ) ) - if kubernetes_version not in versions: + if not any(v.startswith(kubernetes_version) for v in versions): raise ValueError( - f"Invalid Kubernetes version `{kubernetes_version}`. Please refer to the DO docs for a list of valid versions: {versions}" + f"Invalid Kubernetes version `{kubernetes_version}`. Please refer to the GCP docs for a list of valid versions: {versions}" ) return kubernetes_version @@ -462,15 +408,7 @@ def check_cloud_provider_region(region: str, cloud_provider: str) -> str: raise ValueError( f"Invalid region `{region}`. Please refer to the GCP docs for a list of valid regions: {GCP_REGIONS}" ) - elif cloud_provider == ProviderEnum.do.value.lower(): - if not region: - region = DO_DEFAULT_REGION - rich.print(DEFAULT_REGION_MSG.format(region=region)) - if region not in set(_["slug"] for _ in digital_ocean.regions()): - raise ValueError( - f"Invalid region `{region}`. 
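The GCP branch above now resolves `latest` to a concrete patch release and validates the requested version by prefix rather than exact match, so a `major.minor` request such as `1.29` matches a reported `1.29.3-gke.100`. A sketch of that check, assuming versions are plain strings:

```python
def validate_k8s_version(requested: str, available: list) -> str:
    # Prefix matching lets a user request a major.minor version while the
    # cloud API reports full, vendor-suffixed patch versions.
    if not any(v.startswith(requested) for v in available):
        raise ValueError(f"Invalid Kubernetes version `{requested}`: {available}")
    return requested
```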
Please refer to the DO docs for a list of valid regions: {DO_REGIONS}" - ) return region @@ -560,6 +498,12 @@ def init( False, is_eager=True, ), + config_set: str = typer.Option( + None, + "--config-set", + "-s", + help="Apply a pre-defined set of nebari configuration options.", + ), output: str = typer.Option( pathlib.Path("nebari-config.yaml"), "--output", @@ -596,10 +540,10 @@ def init( cloud_provider, disable_prompt ) - # Digital Ocean deprecation warning -- Nebari 2024.7.1 - if inputs.cloud_provider == ProviderEnum.do.value.lower(): + # DigitalOcean is no longer supported + if inputs.cloud_provider == "do": rich.print( - ":warning: Digital Ocean support is being deprecated and support will be removed in the future. :warning:\n" + ":warning: DigitalOcean is no longer supported. You'll need to deploy to an existing k8s cluster if you plan to use Nebari on DigitalOcean :warning:\n" ) inputs.region = check_cloud_provider_region(region, inputs.cloud_provider) @@ -618,6 +562,7 @@ def init( inputs.terraform_state = terraform_state inputs.ssl_cert_email = ssl_cert_email inputs.disable_prompt = disable_prompt + inputs.config_set = config_set inputs.output = output inputs.explicit = explicit diff --git a/src/_nebari/subcommands/plugin.py b/src/_nebari/subcommands/plugin.py new file mode 100644 index 0000000000..28305848cd --- /dev/null +++ b/src/_nebari/subcommands/plugin.py @@ -0,0 +1,42 @@ +from importlib.metadata import version + +import rich +import typer +from rich.table import Table + +from nebari.hookspecs import hookimpl + + +@hookimpl +def nebari_subcommand(cli: typer.Typer): + plugin_cmd = typer.Typer( + add_completion=False, + no_args_is_help=True, + rich_markup_mode="rich", + context_settings={"help_option_names": ["-h", "--help"]}, + ) + + cli.add_typer( + plugin_cmd, + name="plugin", + help="Interact with nebari plugins", + rich_help_panel="Additional Commands", + ) + + @plugin_cmd.command() + def list(ctx: typer.Context): + """ + List installed plugins + 
""" + from nebari.plugins import nebari_plugin_manager + + external_plugins = nebari_plugin_manager.get_external_plugins() + + table = Table(title="Plugins") + table.add_column("name", justify="left", no_wrap=True) + table.add_column("version", justify="left", no_wrap=True) + + for plugin in external_plugins: + table.add_row(plugin, version(plugin)) + + rich.print(table) diff --git a/src/_nebari/upgrade.py b/src/_nebari/upgrade.py index 6536612f2d..18e75c1827 100644 --- a/src/_nebari/upgrade.py +++ b/src/_nebari/upgrade.py @@ -6,6 +6,7 @@ import json import logging +import os import re import secrets import string @@ -20,7 +21,7 @@ import rich from packaging.version import Version from pydantic import ValidationError -from rich.prompt import Prompt +from rich.prompt import Confirm, Prompt from typing_extensions import override from _nebari.config import backup_configuration @@ -47,6 +48,20 @@ UPGRADE_KUBERNETES_MESSAGE = "Please see the [green][link=https://www.nebari.dev/docs/how-tos/kubernetes-version-upgrade]Kubernetes upgrade docs[/link][/green] for more information." DESTRUCTIVE_UPGRADE_WARNING = "-> This version upgrade will result in your cluster being completely torn down and redeployed. Please ensure you have backed up any data you wish to keep before proceeding!!!" +TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION = ( + "Nebari needs to generate an updated set of Terraform scripts for your deployment and delete the old scripts.\n" + "Do you want Nebari to remove your [green]stages[/green] directory automatically for you? It will be recreated the next time Nebari is run.\n" + "[red]Warning:[/red] This will remove everything in the [green]stages[/green] directory.\n" + "If you do not have Nebari do it automatically here, you will need to remove the [green]stages[/green] manually with a command " + "like [green]rm -rf stages[/green]."
+) +DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE = ( + "⚠️ CAUTION ⚠️\n" + "Nebari would like to remove your old Terraform/OpenTofu [green]stages[/green] files. Your [blue]terraform_state[/blue] configuration is not set to [blue]remote[/blue], so destroying your [green]stages[/green] files could potentially be very destructive.\n" + "If you don't have active Terraform/OpenTofu deployment state files contained within your [green]stages[/green] directory, you may proceed by entering [red]y[/red] at the prompt.\n" + "If you have an active Terraform/OpenTofu deployment with active state files in your [green]stages[/green] folder, you will need to either bring Nebari down temporarily to redeploy or pursue some other means to upgrade. Enter [red]n[/red] at the prompt.\n\n" + "Do you want to proceed by deleting your [green]stages[/green] directory and everything in it? ([red]POTENTIALLY VERY DESTRUCTIVE[/red])" +) def do_upgrade(config_filename, attempt_fixes=False): @@ -213,6 +228,54 @@ def upgrade( return config + @classmethod + def _rm_rf_stages(cls, config_filename, dry_run: bool = False, verbose=False): + """ + Remove stage files during an upgrade step + + Usually used when you need files in your `stages` directory to be + removed in order to avoid resource conflicts + + Args: + config_filename (str): The path to the configuration file.
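The traversal inside the new `_rm_rf_stages` helper deletes every file first, then removes directories deepest-first, since plain `rmdir` only succeeds on empty directories. A standalone sketch of the same idea (hypothetical name; it returns the removed paths purely for illustration):

```python
from pathlib import Path

def rm_rf(tree: Path, dry_run: bool = False) -> list:
    """Delete every file under `tree`, then every directory deepest-first,
    then `tree` itself. With dry_run=True nothing is touched."""
    removed = []
    for f in (p for p in tree.rglob("*") if p.is_file()):
        removed.append(str(f))
        if not dry_run:
            f.unlink(missing_ok=True)
    # Reverse-sorting the paths puts children before their parents,
    # so each rmdir only ever sees an already-emptied directory.
    for d in sorted((p for p in tree.rglob("*") if p.is_dir()), reverse=True):
        removed.append(str(d))
        if not dry_run:
            d.rmdir()
    removed.append(str(tree))
    if not dry_run and tree.is_dir():
        tree.rmdir()
    return removed
```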
+ Returns: + None + """ + config_dir = Path(config_filename).resolve().parent + + if Path.is_dir(config_dir): + stage_dir = config_dir / "stages" + + stage_filenames = [d for d in stage_dir.rglob("*") if d.is_file()] + + for stage_filename in stage_filenames: + if dry_run and verbose: + rich.print(f"Dry run: Would remove {stage_filename}") + else: + stage_filename.unlink(missing_ok=True) + if verbose: + rich.print(f"Removed {stage_filename}") + + stage_filedirs = sorted( + (d for d in stage_dir.rglob("*") if d.is_dir()), + reverse=True, + ) + + for stage_filedir in stage_filedirs: + if dry_run and verbose: + rich.print(f"Dry run: Would remove {stage_filedir}") + else: + stage_filedir.rmdir() + if verbose: + rich.print(f"Removed {stage_filedir}") + + if dry_run and verbose: + rich.print(f"Dry run: Would remove {stage_dir}") + elif stage_dir.is_dir(): + stage_dir.rmdir() + if verbose: + rich.print(f"Removed {stage_dir}") + def get_version(self): """ Returns: @@ -306,7 +369,9 @@ def replace_image_tag_legacy( return ":".join([m.groups()[0], f"v{new_version}"]) return None - def replace_image_tag(s: str, new_version: str, config_path: str) -> str: + def replace_image_tag( + s: str, new_version: str, config_path: str, attempt_fixes: bool + ) -> str: """ Replace the image tag with the new version. @@ -328,11 +393,11 @@ def replace_image_tag(s: str, new_version: str, config_path: str) -> str: if current_tag == new_version: return s loc = f"{config_path}: {image_name}" - response = Prompt.ask( - f"\nDo you want to replace current tag [green]{current_tag}[/green] with [green]{new_version}[/green] for:\n[purple]{loc}[/purple]? 
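Throughout this diff, yes/no `Prompt.ask` calls become `Confirm.ask` gated as `attempt_fixes or Confirm.ask(...)`. Because `or` short-circuits, the prompt is never evaluated, and stdin is never touched, when `--attempt-fixes` is set. A sketch of the pattern with an injectable prompt callable (names hypothetical):

```python
def confirm_or_auto(attempt_fixes, ask):
    # `or` short-circuits: when attempt_fixes is True, ask() is never
    # invoked, so automated runs proceed without blocking on input.
    return bool(attempt_fixes or ask())
```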
[Y/n] ", - default="Y", + response = attempt_fixes or Confirm.ask( + f"\nDo you want to replace current tag [green]{current_tag}[/green] with [green]{new_version}[/green] for:\n[purple]{loc}[/purple]?", + default=True, ) - if response.lower() in ["y", "yes", ""]: + if response: return s.replace(current_tag, new_version) else: return s @@ -363,7 +428,11 @@ def set_nested_item(config: dict, config_path: list, value: str): config[config_path[-1]] = value def update_image_tag( - config: dict, config_path: str, current_image: str, new_version: str + config: dict, + config_path: str, + current_image: str, + new_version: str, + attempt_fixes: bool, ) -> dict: """ Update the image tag in the configuration. @@ -377,7 +446,12 @@ def update_image_tag( Returns: dict: The updated configuration dictionary. """ - new_image = replace_image_tag(current_image, new_version, config_path) + new_image = replace_image_tag( + current_image, + new_version, + config_path, + attempt_fixes, + ) if new_image != current_image: set_nested_item(config, config_path, new_image) @@ -387,7 +461,11 @@ def update_image_tag( for k, v in config.get("default_images", {}).items(): config_path = f"default_images.{k}" config = update_image_tag( - config, config_path, v, __rounded_finish_version__ + config, + config_path, + v, + __rounded_finish_version__, + kwargs.get("attempt_fixes", False), ) # update profiles.jupyterlab images @@ -399,6 +477,7 @@ def update_image_tag( f"profiles.jupyterlab.{i}.kubespawner_override.image", current_image, __rounded_finish_version__, + kwargs.get("attempt_fixes", False), ) # update profiles.dask_worker images @@ -410,11 +489,16 @@ def update_image_tag( f"profiles.dask_worker.{k}.image", current_image, __rounded_finish_version__, + kwargs.get("attempt_fixes", False), ) # Run any version-specific tasks return self._version_specific_upgrade( - config, start_version, config_filename, *args, **kwargs + config, + start_version, + config_filename, + *args, + **kwargs, ) def 
_version_specific_upgrade( @@ -628,27 +712,93 @@ def _version_specific_upgrade( """ Prompt users to delete Argo CRDs """ + argo_crds = [ + "clusterworkflowtemplates.argoproj.io", + "cronworkflows.argoproj.io", + "workfloweventbindings.argoproj.io", + "workflows.argoproj.io", + "workflowtasksets.argoproj.io", + "workflowtemplates.argoproj.io", + ] - kubectl_delete_argo_crds_cmd = "kubectl delete crds clusterworkflowtemplates.argoproj.io cronworkflows.argoproj.io workfloweventbindings.argoproj.io workflows.argoproj.io workflowtasksets.argoproj.io workflowtemplates.argoproj.io" + argo_sa = ["argo-admin", "argo-dev", "argo-view"] - kubectl_delete_argo_sa_cmd = ( - f"kubectl delete sa -n {config['namespace']} argo-admin argo-dev argo-view" - ) + namespace = config.get("namespace", "default") - rich.print( - f"\n\n[bold cyan]Note:[/] Upgrading requires a one-time manual deletion of the Argo Workflows Custom Resource Definitions (CRDs) and service accounts. \n\n[red bold]Warning: [link=https://{config['domain']}/argo/workflows]Workflows[/link] and [link=https://{config['domain']}/argo/workflows]CronWorkflows[/link] created before deleting the CRDs will be erased when the CRDs are deleted and will not be restored.[/red bold] \n\nThe updated CRDs will be installed during the next [cyan bold]nebari deploy[/cyan bold] step. Argo Workflows will not function after deleting the CRDs until the updated CRDs and service accounts are installed in the next nebari deploy. You must delete the Argo Workflows CRDs and service accounts before upgrading to {self.version} (or later) or the deploy step will fail. 
Please delete them before proceeding by generating a kubeconfig (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari/#generating-the-kubeconfig]docs[/link]), installing kubectl (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari#installing-kubectl]docs[/link]), and running the following two commands:\n\n\t[cyan bold]{kubectl_delete_argo_crds_cmd} [/cyan bold]\n\n\t[cyan bold]{kubectl_delete_argo_sa_cmd} [/cyan bold]" - "" - ) + if kwargs.get("attempt_fixes", False): + try: + kubernetes.config.load_kube_config() + except kubernetes.config.config_exception.ConfigException: + rich.print( + "[red bold]No default kube configuration file was found. Make sure to [link=https://www.nebari.dev/docs/how-tos/debug-nebari#generating-the-kubeconfig]have one pointing to your Nebari cluster[/link] before upgrading.[/red bold]" + ) + exit() - continue_ = Prompt.ask( - "Have you deleted the Argo Workflows CRDs and service accounts? [y/N] ", - default="N", - ) - if not continue_ == "y": + for crd in argo_crds: + api_instance = kubernetes.client.ApiextensionsV1Api() + try: + api_instance.delete_custom_resource_definition( + name=crd, + ) + except kubernetes.client.exceptions.ApiException as e: + if e.status == 404: + rich.print(f"CRD [yellow]{crd}[/yellow] not found. Ignoring.") + else: + raise e + else: + rich.print(f"Successfully removed CRD [green]{crd}[/green]") + + for sa in argo_sa: + api_instance = kubernetes.client.CoreV1Api() + try: + api_instance.delete_namespaced_service_account( + sa, + namespace, + ) + except kubernetes.client.exceptions.ApiException as e: + if e.status == 404: + rich.print( + f"Service account [yellow]{sa}[/yellow] not found. Ignoring." 
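The upgrade step above keeps the Argo CRD and service-account names in plain lists (`argo_crds`, `argo_sa`), so the same data drives both the Kubernetes-API deletion loop and the printed `kubectl` fallback, which is assembled with `" ".join`. A sketch of the command builder (hypothetical helper name):

```python
def kubectl_delete_cmd(kind, names, namespace=""):
    # One list of names feeds both the printed manual instructions and
    # any programmatic deletion loop.
    parts = ["kubectl", "delete", kind]
    if namespace:
        parts.append(f"-n {namespace}")
    parts.extend(names)
    return " ".join(parts)
```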
+ ) + else: + raise e + else: + rich.print( + f"Successfully removed service account [green]{sa}[/green]" + ) + else: + kubectl_delete_argo_crds_cmd = " ".join( + ( + *("kubectl delete crds",), + *argo_crds, + ), + ) + kubectl_delete_argo_sa_cmd = " ".join( + ( + *( + "kubectl delete sa", + f"-n {namespace}", + ), + *argo_sa, + ), + ) rich.print( - f"You must delete the Argo Workflows CRDs and service accounts before upgrading to [green]{self.version}[/green] (or later)." + f"\n\n[bold cyan]Note:[/] Upgrading requires a one-time manual deletion of the Argo Workflows Custom Resource Definitions (CRDs) and service accounts. \n\n[red bold]" + f"Warning: [link=https://{config['domain']}/argo/workflows]Workflows[/link] and [link=https://{config['domain']}/argo/workflows]CronWorkflows[/link] created before deleting the CRDs will be erased when the CRDs are deleted and will not be restored.[/red bold] \n\n" + f"The updated CRDs will be installed during the next [cyan bold]nebari deploy[/cyan bold] step. Argo Workflows will not function after deleting the CRDs until the updated CRDs and service accounts are installed in the next nebari deploy. " + f"You must delete the Argo Workflows CRDs and service accounts before upgrading to {self.version} (or later) or the deploy step will fail. 
" + f"Please delete them before proceeding by generating a kubeconfig (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari/#generating-the-kubeconfig]docs[/link]), installing kubectl (see [link=https://www.nebari.dev/docs/how-tos/debug-nebari#installing-kubectl]docs[/link]), and running the following two commands:\n\n\t[cyan bold]{kubectl_delete_argo_crds_cmd} [/cyan bold]\n\n\t[cyan bold]{kubectl_delete_argo_sa_cmd} [/cyan bold]" ) - exit() + + continue_ = Confirm.ask( + "Have you deleted the Argo Workflows CRDs and service accounts?", + default=False, + ) + if not continue_: + rich.print( + f"You must delete the Argo Workflows CRDs and service accounts before upgrading to [green]{self.version}[/green] (or later)." + ) + exit() return config @@ -681,11 +831,11 @@ def _version_specific_upgrade( ): argo = config.get("argo_workflows", {}) if argo.get("enabled"): - response = Prompt.ask( - f"\nDo you want to enable the [green][link={NEBARI_WORKFLOW_CONTROLLER_DOCS}]Nebari Workflow Controller[/link][/green], required for [green][link={ARGO_JUPYTER_SCHEDULER_REPO}]Argo-Jupyter-Scheduler[/link][green]? [Y/n] ", - default="Y", + response = kwargs.get("attempt_fixes", False) or Confirm.ask( + f"\nDo you want to enable the [green][link={NEBARI_WORKFLOW_CONTROLLER_DOCS}]Nebari Workflow Controller[/link][/green], required for [green][link={ARGO_JUPYTER_SCHEDULER_REPO}]Argo-Jupyter-Scheduler[/link][green]?", + default=True, ) - if response.lower() in ["y", "yes", ""]: + if response: argo["nebari_workflow_controller"] = {"enabled": True} rich.print("\n ⚠️ Deprecation Warnings ⚠️") @@ -725,9 +875,6 @@ def _version_specific_upgrade( rich.print( "-> Data should be backed up before performing this upgrade ([green][link=https://www.nebari.dev/docs/how-tos/manual-backup]see docs[/link][/green]) The 'prevent_deploy' flag has been set in your config file and must be manually removed to deploy." 
) - rich.print( - "-> Please also run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment." - ) # Setting the following flag will prevent deployment and display guidance to the user # which they can override if they are happy they understand the situation. @@ -811,6 +958,26 @@ def _version_specific_upgrade( rich.print("\n ⚠️ DANGER ⚠️") rich.print(DESTRUCTIVE_UPGRADE_WARNING) + if kwargs.get("attempt_fixes", False) or Confirm.ask( + TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION, + default=False, + ): + if ( + (_terraform_state_config := config.get("terraform_state")) + and (_terraform_state_config.get("type") != "remote") + and not Confirm.ask( + DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE, + default=False, + ) + ): + exit() + + self._rm_rf_stages( + config_filename, + dry_run=kwargs.get("dry_run", False), + verbose=True, + ) + return config @@ -828,15 +995,31 @@ class Upgrade_2023_11_1(UpgradeStep): def _version_specific_upgrade( self, config, start_version, config_filename: Path, *args, **kwargs ): - rich.print("\n ⚠️ Warning ⚠️") - rich.print( - "-> Please run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment." - ) rich.print("\n ⚠️ Deprecation Warning ⚠️") rich.print( f"-> ClearML, Prefect and kbatch are no longer supported in Nebari version [green]{self.version}[/green] and will be uninstalled." 
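The guard above uses an assignment expression to fetch `terraform_state` once and test its `type` in the same condition; non-`remote` state can live inside `stages/`, hence the second, more alarming confirmation before deletion. The predicate in isolation (hypothetical function name):

```python
def needs_extra_confirmation(config: dict) -> bool:
    # Walrus operator: bind the sub-dict and test it in one expression.
    # A missing or None terraform_state short-circuits to False.
    return bool(
        (ts := config.get("terraform_state")) and ts.get("type") != "remote"
    )
```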
) + if kwargs.get("attempt_fixes", False) or Confirm.ask( + TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION, + default=False, + ): + if ( + (_terraform_state_config := config.get("terraform_state")) + and (_terraform_state_config.get("type") != "remote") + and not Confirm.ask( + DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE, + default=False, + ) + ): + exit() + + self._rm_rf_stages( + config_filename, + dry_run=kwargs.get("dry_run", False), + verbose=True, + ) + return config @@ -854,16 +1037,32 @@ class Upgrade_2023_12_1(UpgradeStep): def _version_specific_upgrade( self, config, start_version, config_filename: Path, *args, **kwargs ): - rich.print("\n ⚠️ Warning ⚠️") - rich.print( - "-> Please run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment." - ) rich.print("\n ⚠️ Deprecation Warning ⚠️") rich.print( f"-> [green]{self.version}[/green] is the last Nebari version that supports the jupyterlab-videochat extension." ) rich.print() + if kwargs.get("attempt_fixes", False) or Confirm.ask( + TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION, + default=False, + ): + if ( + (_terraform_state_config := config.get("terraform_state")) + and (_terraform_state_config.get("type") != "remote") + and not Confirm.ask( + DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE, + default=False, + ) + ): + exit() + + self._rm_rf_stages( + config_filename, + dry_run=kwargs.get("dry_run", False), + verbose=True, + ) + return config @@ -881,10 +1080,6 @@ class Upgrade_2024_1_1(UpgradeStep): def _version_specific_upgrade( self, config, start_version, config_filename: Path, *args, **kwargs ): - rich.print("\n ⚠️ Warning ⚠️") - rich.print( - "-> Please run the [green]rm -rf stages[/green] so that we can regenerate an updated set of Terraform scripts for your deployment." 
- ) rich.print("\n ⚠️ Deprecation Warning ⚠️") rich.print( "-> jupyterlab-videochat, retrolab, jupyter-tensorboard, jupyterlab-conda-store and jupyter-nvdashboard", @@ -892,6 +1087,26 @@ def _version_specific_upgrade( ) rich.print() + if kwargs.get("attempt_fixes", False) or Confirm.ask( + TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION, + default=False, + ): + if ( + (_terraform_state_config := config.get("terraform_state")) + and (_terraform_state_config.get("type") != "remote") + and not Confirm.ask( + DESTROY_STAGE_FILES_WITH_TF_STATE_NOT_REMOTE, + default=False, + ) + ): + exit() + + self._rm_rf_stages( + config_filename, + dry_run=kwargs.get("dry_run", False), + verbose=True, + ) + return config @@ -957,12 +1172,11 @@ def _version_specific_upgrade( default_node_groups = provider_enum_default_node_groups_map[ provider ] - continue_ = Prompt.ask( + continue_ = kwargs.get("attempt_fixes", False) or Confirm.ask( f"Would you like to include the default configuration for the node groups in [purple]{config_filename}[/purple]?", - choices=["y", "N"], - default="N", + default=False, ) - if continue_ == "y": + if continue_: config[provider_full_name]["node_groups"] = default_node_groups except KeyError: pass @@ -999,7 +1213,6 @@ def _version_specific_upgrade( ): # Prompt users to manually update kube-prometheus-stack CRDs if monitoring is enabled if config.get("monitoring", {}).get("enabled", True): - crd_urls = [ "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagerconfigs.yaml", "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v0.73.0/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml", @@ -1029,10 +1242,9 @@ def _version_specific_upgrade( "\n-> [red bold]Nebari version 2024.6.1 comes with a new version of Grafana. Any custom dashboards that you created will be deleted after upgrading Nebari. 
Make sure to [link=https://grafana.com/docs/grafana/latest/dashboards/share-dashboards-panels/#export-a-dashboard-as-json]export them as JSON[/link] so you can [link=https://grafana.com/docs/grafana/latest/dashboards/build-dashboards/import-dashboards/#import-a-dashboard]import them[/link] again afterwards.[/red bold]" f"\n-> [red bold]Before upgrading, kube-prometheus-stack CRDs need to be updated and the {daemonset_name} daemonset needs to be deleted.[/red bold]" ) - run_commands = Prompt.ask( + run_commands = kwargs.get("attempt_fixes", False) or Confirm.ask( "\nDo you want Nebari to update the kube-prometheus-stack CRDs and delete the prometheus-node-exporter for you? If not, you'll have to do it manually.", - choices=["y", "N"], - default="N", + default=False, ) # By default, rich wraps lines by splitting them into multiple lines. This is @@ -1040,7 +1252,7 @@ def _version_specific_upgrade( # To avoid this, we use a rich console with a larger width to print the entire commands # and let the terminal wrap them if needed. console = rich.console.Console(width=220) - if run_commands == "y": + if run_commands: try: kubernetes.config.load_kube_config() except kubernetes.config.config_exception.ConfigException: @@ -1053,10 +1265,14 @@ def _version_specific_upgrade( rich.print( f"The following commands will be run for the [cyan bold]{cluster_name}[/cyan bold] cluster" ) - Prompt.ask("Hit enter to show the commands") + _ = kwargs.get("attempt_fixes", False) or Prompt.ask( + "Hit enter to show the commands" + ) console.print(commands) - Prompt.ask("Hit enter to continue") + _ = kwargs.get("attempt_fixes", False) or Prompt.ask( + "Hit enter to continue" + ) # We need to add a special constructor to the yaml loader to handle a specific # tag as otherwise the kubernetes API will fail when updating the CRD. 
yaml.constructor.add_constructor( @@ -1098,16 +1314,15 @@ def _version_specific_upgrade( rich.print( "[red bold]Before upgrading, you need to manually delete the prometheus-node-exporter daemonset and update the kube-prometheus-stack CRDs. To do that, please run the following commands.[/red bold]" ) - Prompt.ask("Hit enter to show the commands") + _ = Prompt.ask("Hit enter to show the commands") console.print(commands) - Prompt.ask("Hit enter to continue") - continue_ = Prompt.ask( + _ = Prompt.ask("Hit enter to continue") + continue_ = Confirm.ask( f"Have you backed up your custom dashboards (if necessary), deleted the {daemonset_name} daemonset and updated the kube-prometheus-stack CRDs?", - choices=["y", "N"], - default="N", + default=False, ) - if not continue_ == "y": + if not continue_: rich.print( f"[red bold]You must back up your custom dashboards (if necessary), delete the {daemonset_name} daemonset and update the kube-prometheus-stack CRDs before upgrading to [green]{self.version}[/green] (or later).[/bold red]" ) @@ -1132,12 +1347,11 @@ def _version_specific_upgrade( If not, select "N" and the old default node groups will be added to the nebari config file. 
""" ) - continue_ = Prompt.ask( + continue_ = kwargs.get("attempt_fixes", False) or Confirm.ask( text, - choices=["y", "N"], - default="y", + default=True, ) - if continue_ == "N": + if not continue_: config[provider_full_name]["node_groups"] = { "general": { "instance": "n1-standard-8", @@ -1178,8 +1392,9 @@ def _version_specific_upgrade( }, indent=4, ) - text += "\n\nHit enter to continue" - Prompt.ask(text) + rich.print(text) + if not kwargs.get("attempt_fixes", False): + _ = Prompt.ask("\n\nHit enter to continue") return config @@ -1197,7 +1412,7 @@ class Upgrade_2024_7_1(UpgradeStep): def _version_specific_upgrade( self, config, start_version, config_filename: Path, *args, **kwargs ): - if config.get("provider", "") == ProviderEnum.do.value: + if config.get("provider", "") == "do": rich.print("\n ⚠️ Deprecation Warning ⚠️") rich.print( "-> Digital Ocean support is currently being deprecated and will be removed in a future release.", @@ -1214,6 +1429,22 @@ class Upgrade_2024_9_1(UpgradeStep): version = "2024.9.1" + # Nebari version 2024.9.1 has been marked as broken, and will be skipped: + # https://github.com/nebari-dev/nebari/issues/2798 + @override + def _version_specific_upgrade( + self, config, start_version, config_filename: Path, *args, **kwargs + ): + return config + + +class Upgrade_2024_11_1(UpgradeStep): + """ + Upgrade step for Nebari version 2024.11.1 + """ + + version = "2024.11.1" + @override def _version_specific_upgrade( self, config, start_version, config_filename: Path, *args, **kwargs @@ -1229,7 +1460,7 @@ def _version_specific_upgrade( ), ) rich.print("") - elif config.get("provider", "") == ProviderEnum.do.value: + elif config.get("provider", "") == "do": rich.print("\n ⚠️ Deprecation Warning ⚠️") rich.print( "-> Digital Ocean support is currently being deprecated and will be removed in a future release.", @@ -1243,16 +1474,16 @@ def _version_specific_upgrade( Please ensure no users are currently logged in prior to deploying this update. 
- Nebari [green]2024.9.1[/green] introduces changes to how group - directories are mounted in JupyterLab pods. + This release introduces changes to how group directories are mounted in + JupyterLab pods. Previously, every Keycloak group in the Nebari realm automatically created a shared directory at ~/shared/, accessible to all group members in their JupyterLab pods. - Starting with Nebari [green]2024.9.1[/green], only groups assigned the - JupyterHub client role [magenta]allow-group-directory-creation[/magenta] will have their - directories mounted. + Moving forward, only groups assigned the JupyterHub client role + [magenta]allow-group-directory-creation[/magenta] or its affiliated scope + [magenta]write:shared-mount[/magenta] will have their directories mounted. By default, the admin, analyst, and developer groups will have this role assigned during the upgrade. For other groups, you'll now need to @@ -1266,13 +1497,10 @@ def _version_specific_upgrade( keycloak_admin = None # Prompt the user for role assignment (if yes, transforms the response into bool) - assign_roles = ( - Prompt.ask( - "[bold]Would you like Nebari to assign the corresponding role to all of your current groups automatically?[/bold]", - choices=["y", "N"], - default="N", - ).lower() - == "y" + # This needs to be monkeypatched and will be addressed in a future PR. Until then, this causes test failures. 
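The upgrade notes above say shared directories are now mounted only for groups carrying the `allow-group-directory-creation` client role or its affiliated `write:shared-mount` scope. The rule can be sketched as a simple predicate (the field names are illustrative, not Keycloak's actual group representation):

```python
ALLOWED_ROLE = "allow-group-directory-creation"
ALLOWED_SCOPE = "write:shared-mount"


def mounts_shared_directory(group: dict) -> bool:
    """True if this group's ~/shared/<name> directory should be mounted."""
    return (
        ALLOWED_ROLE in group.get("client_roles", [])
        or ALLOWED_SCOPE in group.get("scopes", [])
    )


admin = {"client_roles": ["allow-group-directory-creation"], "scopes": []}
custom = {"client_roles": [], "scopes": []}
results = (mounts_shared_directory(admin), mounts_shared_directory(custom))
```

Under this rule the `custom` group above would lose its shared mount after the upgrade unless the role is assigned, which is what the interactive prompt offers to do for all existing groups.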
+ assign_roles = kwargs.get("attempt_fixes", False) or Confirm.ask( + "[bold]Would you like Nebari to assign the corresponding role/scopes to all of your current groups automatically?[/bold]", + default=False, ) if assign_roles: @@ -1281,18 +1509,63 @@ def _version_specific_upgrade( urllib3.disable_warnings() - keycloak_admin = get_keycloak_admin( - server_url=f"https://{config['domain']}/auth/", - username="root", - password=config["security"]["keycloak"]["initial_root_password"], + keycloak_username = os.environ.get("KEYCLOAK_ADMIN_USERNAME", "root") + keycloak_password = os.environ.get( + "KEYCLOAK_ADMIN_PASSWORD", + config["security"]["keycloak"]["initial_root_password"], ) - # Proceed with updating group permissions + try: + # Quick test to connect to Keycloak + keycloak_admin = get_keycloak_admin( + server_url=f"https://{config['domain']}/auth/", + username=keycloak_username, + password=keycloak_password, + ) + except ValueError as e: + if "invalid_grant" in str(e): + rich.print( + textwrap.dedent( + """ + [red bold]Failed to connect to the Keycloak server.[/red bold]\n + [yellow]Please set the [bold]KEYCLOAK_ADMIN_USERNAME[/bold] and [bold]KEYCLOAK_ADMIN_PASSWORD[/bold] + environment variables with the Keycloak root credentials and try again.[/yellow] + """ + ) + ) + exit() + else: + # Handle other exceptions + rich.print( + f"[red bold]An unexpected error occurred: {repr(e)}[/red bold]" + ) + exit() + + # Get client ID as role is bound to the JupyterHub client client_id = keycloak_admin.get_client_id("jupyterhub") - role_name = "allow-group-directory-creation-role" + role_name = "legacy-group-directory-creation-role" + + # Create role with shared scopes + keycloak_admin.create_client_role( + client_role_id=client_id, + skip_exists=True, + payload={ + "name": role_name, + "attributes": { + "scopes": ["write:shared-mount"], + "component": ["shared-directory"], + }, + "description": ( + "Role to allow group directory creation, created as part of the " + 
"Nebari 2024.11.1 upgrade workflow." + ), + }, + ) + role_id = keycloak_admin.get_client_role_id( client_id=client_id, role_name=role_name ) + role_representation = keycloak_admin.get_role_by_id(role_id=role_id) # Fetch all groups and groups with the role @@ -1328,6 +1601,61 @@ def _version_specific_upgrade( return config +class Upgrade_2024_12_1(UpgradeStep): + """ + Upgrade step for Nebari version 2024.12.1 + """ + + version = "2024.12.1" + + @override + def _version_specific_upgrade( + self, config, start_version, config_filename: Path, *args, **kwargs + ): + if config.get("provider", "") == "do": + rich.print( + "\n[red bold]Error: DigitalOcean is no longer supported as a provider[/red bold].", + ) + rich.print( + "You can still deploy Nebari to a Kubernetes cluster on DigitalOcean by using 'existing' as the provider in the config file." + ) + exit() + + rich.print("Ready to upgrade to Nebari version [green]2024.12.1[/green].") + + return config + + +class Upgrade_2025_2_1(UpgradeStep): + version = "2025.2.1" + + @override + def _version_specific_upgrade( + self, config, start_version, config_filename: Path, *args, **kwargs + ): + rich.print("\n ⚠️ Upgrade Warning ⚠️") + + text = textwrap.dedent( + """ + In this release, we have updated our maximum supported Kubernetes version from 1.29 to 1.31. + Please note that Nebari will NOT automatically upgrade your running Kubernetes version as part of + the redeployment process. + + After completing this upgrade step, we strongly recommend updating the Kubernetes version + specified in your nebari-config YAML file and redeploying to apply the changes. Remember that + Kubernetes minor versions must be upgraded incrementally (1.29 → 1.30 → 1.31). 
+ + For more information on upgrading Kubernetes for your specific cloud provider, please visit: + https://www.nebari.dev/docs/how-tos/kubernetes-version-upgrade + """ + ) + + rich.print(text) + rich.print("Ready to upgrade to Nebari version [green]2025.2.1[/green].") + + return config + + __rounded_version__ = str(rounded_ver_parse(__version__)) # Manually-added upgrade steps must go above this line diff --git a/src/_nebari/utils.py b/src/_nebari/utils.py index 5f0877666a..48b8a91e9b 100644 --- a/src/_nebari/utils.py +++ b/src/_nebari/utils.py @@ -160,7 +160,7 @@ def modified_environ(*remove: List[str], **update: Dict[str, str]): def deep_merge(*args): - """Deep merge multiple dictionaries. + """Deep merge multiple dictionaries. Preserves order in dicts and lists. >>> value_1 = { 'a': [1, 2], @@ -190,7 +190,7 @@ def deep_merge(*args): if isinstance(d1, dict) and isinstance(d2, dict): d3 = {} - for key in d1.keys() | d2.keys(): + for key in tuple(d1.keys()) + tuple(d2.keys()): if key in d1 and key in d2: d3[key] = deep_merge(d1[key], d2[key]) elif key in d1: @@ -286,11 +286,6 @@ def random_secure_string( return "".join(secrets.choice(chars) for i in range(length)) -def set_do_environment(): - os.environ["AWS_ACCESS_KEY_ID"] = os.environ["SPACES_ACCESS_KEY_ID"] - os.environ["AWS_SECRET_ACCESS_KEY"] = os.environ["SPACES_SECRET_ACCESS_KEY"] - - def set_docker_image_tag() -> str: """Set docker image tag for `jupyterlab`, `jupyterhub`, and `dask-worker`.""" return os.environ.get("NEBARI_IMAGE_TAG", constants.DEFAULT_NEBARI_IMAGE_TAG) @@ -348,7 +343,6 @@ def get_provider_config_block_name(provider): PROVIDER_CONFIG_NAMES = { "aws": "amazon_web_services", "azure": "azure", - "do": "digital_ocean", "gcp": "google_cloud_platform", } diff --git a/src/nebari/plugins.py b/src/nebari/plugins.py index 71db0ade96..a6cb1aa688 100644 --- a/src/nebari/plugins.py +++ b/src/nebari/plugins.py @@ -19,6 +19,7 @@ "_nebari.subcommands.deploy", "_nebari.subcommands.destroy", 
"_nebari.subcommands.keycloak", + "_nebari.subcommands.plugin", "_nebari.subcommands.render", "_nebari.subcommands.support", "_nebari.subcommands.upgrade", @@ -121,6 +122,14 @@ def read_config(self, config_path: typing.Union[str, Path], **kwargs): return read_configuration(config_path, self.config_schema, **kwargs) + def get_external_plugins(self): + external_plugins = [] + all_plugins = DEFAULT_SUBCOMMAND_PLUGINS + DEFAULT_STAGES_PLUGINS + for plugin in self.plugin_manager.get_plugins(): + if plugin.__name__ not in all_plugins: + external_plugins.append(plugin.__name__) + return external_plugins + @property def ordered_stages(self): return self.get_available_stages() diff --git a/src/nebari/schema.py b/src/nebari/schema.py index 6a809842d7..b45af521be 100644 --- a/src/nebari/schema.py +++ b/src/nebari/schema.py @@ -35,7 +35,6 @@ class Base(pydantic.BaseModel): class ProviderEnum(str, enum.Enum): local = "local" existing = "existing" - do = "do" aws = "aws" gcp = "gcp" azure = "azure" diff --git a/tests/common/handlers.py b/tests/common/handlers.py index 51964d3ac5..5485059141 100644 --- a/tests/common/handlers.py +++ b/tests/common/handlers.py @@ -86,20 +86,31 @@ def _dismiss_kernel_popup(self): def _shutdown_all_kernels(self): """Shutdown all running kernels.""" logger.debug(">>> Shutting down all kernels") - kernel_menu = self.page.get_by_role("menuitem", name="Kernel") - kernel_menu.click() + + # Open the "Kernel" menu + self.page.get_by_role("menuitem", name="Kernel").click() + + # Locate the "Shut Down All Kernels…" menu item shut_down_all = self.page.get_by_role("menuitem", name="Shut Down All Kernels…") - logger.debug( - f">>> Shut down all kernels visible: {shut_down_all.is_visible()} enabled: {shut_down_all.is_enabled()}" - ) - if shut_down_all.is_visible() and shut_down_all.is_enabled(): - shut_down_all.click() - self.page.get_by_role("button", name="Shut Down All").click() - else: + + # If it's not visible or is disabled, there's nothing to shut down + 
if not shut_down_all.is_visible() or shut_down_all.is_disabled(): logger.debug(">>> No kernels to shut down") + return + + # Otherwise, click to shut down all kernels and confirm + shut_down_all.click() + self.page.get_by_role("button", name="Shut Down All").click() def _navigate_to_root_folder(self): """Navigate back to the root folder in JupyterLab.""" + # Make sure the home directory is selected in the sidebar + if not self.page.get_by_role( + "region", name="File Browser Section" + ).is_visible(): + file_browser_tab = self.page.get_by_role("tab", name="File Browser") + file_browser_tab.click() + logger.debug(">>> Navigating to root folder") self.page.get_by_title(f"/home/{self.nav.username}", exact=True).locator( "path" ) @@ -298,13 +309,24 @@ def _open_conda_store_service(self): def _open_new_environment_tab(self): self.page.get_by_label("Create a new environment in").click() - expect(self.page.get_by_text("Create Environment")).to_be_visible() - - def _assert_user_namespace(self): expect( - self.page.get_by_role("button", name=f"{self.nav.username} Create a new") + self.page.get_by_role("button", name="Create", exact=True) ).to_be_visible() + def _assert_user_namespace(self): + user_namespace_dropdown = self.page.get_by_role( + "button", name=f"{self.nav.username} Create a new" + ) + + if not ( + expect( + user_namespace_dropdown + ).to_be_visible() # this asserts the user namespace shows in the UI + or self.nav.username + in user_namespace_dropdown.text_content() # this checks that the namespace corresponds to the logged-in user + ): + raise ValueError(f"User namespace {self.nav.username} not found") + def _get_shown_namespaces(self): _envs = self.page.locator("#environmentsScroll").get_by_role("button") _env_contents = [env.text_content() for env in _envs.all()] diff --git a/tests/common/navigator.py b/tests/common/navigator.py index 04e019a7a6..e0b404fd26 100644 --- a/tests/common/navigator.py +++ b/tests/common/navigator.py @@ -5,6 +5,7 @@ from pathlib
import Path from playwright.sync_api import expect, sync_playwright +from yarl import URL logger = logging.getLogger() @@ -50,7 +51,7 @@ def setup(self): self.page = self.context.new_page() self.initialized = True - def _rename_test_video_path(self, video_path): + def _rename_test_video_path(self, video_path: Path): """Rename the test video file to the test unique identifier.""" video_file_name = ( f"{self.video_name_prefix}.mp4" if self.video_name_prefix else None @@ -62,7 +63,7 @@ def teardown(self) -> None: """Teardown Playwright browser and context.""" if self.initialized: # Rename the video file to the test unique identifier - current_video_path = self.page.video.path() + current_video_path = Path(self.page.video.path()) self._rename_test_video_path(current_video_path) self.context.close() @@ -87,10 +88,17 @@ class LoginNavigator(NavigatorMixin): def __init__(self, nebari_url, username, password, auth="password", **kwargs): super().__init__(**kwargs) - self.nebari_url = nebari_url + self._nebari_url = URL(nebari_url) self.username = username self.password = password self.auth = auth + logger.debug( + f"LoginNavigator initialized with {self.auth} auth method. 
:: {self.nebari_url}" + ) + + @property + def nebari_url(self): + return self._nebari_url.human_repr() def login(self): """Login to Nebari deployment using the provided authentication method.""" @@ -110,7 +118,7 @@ def logout(self): def _login_google(self): logger.debug(">>> Sign in via Google and start the server") - self.page.goto(self.nebari_url) + self.page.goto(url=self.nebari_url) expect(self.page).to_have_url(re.compile(f"{self.nebari_url}*")) self.page.get_by_role("button", name="Sign in with Keycloak").click() @@ -123,7 +131,7 @@ def _login_google(self): def _login_password(self): logger.debug(">>> Sign in via Username/Password") - self.page.goto(self.nebari_url) + self.page.goto(url=self.nebari_url) expect(self.page).to_have_url(re.compile(f"{self.nebari_url}*")) self.page.get_by_role("button", name="Sign in with Keycloak").click() diff --git a/tests/common/playwright_fixtures.py b/tests/common/playwright_fixtures.py index 35ea36baad..581d9347f8 100644 --- a/tests/common/playwright_fixtures.py +++ b/tests/common/playwright_fixtures.py @@ -23,17 +23,43 @@ def load_env_vars(): def build_params(request, pytestconfig, extra_params=None): """Construct and return parameters for navigator instances.""" env_vars = load_env_vars() + + # Retrieve values from request or environment + nebari_url = request.param.get("nebari_url") or env_vars.get("nebari_url") + username = request.param.get("keycloak_username") or env_vars.get("username") + password = request.param.get("keycloak_password") or env_vars.get("password") + + # Validate that required fields are present + if not nebari_url: + raise ValueError( + "Error: 'nebari_url' is required but was not provided in " + "'request.param' or environment variables." + ) + if not username: + raise ValueError( + "Error: 'username' is required but was not provided in " + "'request.param' or environment variables." 
+ ) + if not password: + raise ValueError( + "Error: 'password' is required but was not provided in " + "'request.param' or environment variables." + ) + + # Build the params dictionary once all required fields are validated params = { - "nebari_url": request.param.get("nebari_url") or env_vars["nebari_url"], - "username": request.param.get("keycloak_username") or env_vars["username"], - "password": request.param.get("keycloak_password") or env_vars["password"], + "nebari_url": nebari_url, + "username": username, + "password": password, "auth": "password", "video_dir": "videos/", "headless": pytestconfig.getoption("--headed"), "slow_mo": pytestconfig.getoption("--slowmo"), } + if extra_params: params.update(extra_params) + return params diff --git a/tests/tests_deployment/test_jupyterhub_ssh.py b/tests/tests_deployment/test_jupyterhub_ssh.py index d65bd4800f..f21247162b 100644 --- a/tests/tests_deployment/test_jupyterhub_ssh.py +++ b/tests/tests_deployment/test_jupyterhub_ssh.py @@ -1,5 +1,6 @@ import re import string +import time import uuid import paramiko @@ -14,9 +15,14 @@ TIMEOUT_SECS = 300 -@pytest.fixture(scope="function") +@pytest.fixture(scope="session") def paramiko_object(jupyterhub_access_token): - """Connects to JupyterHub ssh cluster from outside the cluster.""" + """Connects to JupyterHub SSH cluster from outside the cluster. + + Yields an unconnected SSH client together with its connection parameters, + so each test can open (and close) its own shell channel once the + JupyterLab pod is ready.
+ """ params = { "hostname": constants.NEBARI_HOSTNAME, "port": 8022, @@ -24,54 +30,65 @@ def paramiko_object(jupyterhub_access_token): "password": jupyterhub_access_token, "allow_agent": constants.PARAMIKO_SSH_ALLOW_AGENT, "look_for_keys": constants.PARAMIKO_SSH_LOOK_FOR_KEYS, - "auth_timeout": 5 * 60, } ssh_client = paramiko.SSHClient() ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) - try: - ssh_client.connect(**params) - yield ssh_client - finally: - ssh_client.close() - - -def run_command(command, stdin, stdout, stderr): - delimiter = uuid.uuid4().hex - stdin.write(f"echo {delimiter}start; {command}; echo {delimiter}end\n") - - output = [] - - line = stdout.readline() - while not re.match(f"^{delimiter}start$", line.strip()): - line = stdout.readline() - line = stdout.readline() - if delimiter not in line: - output.append(line) - - while not re.match(f"^{delimiter}end$", line.strip()): - line = stdout.readline() - if delimiter not in line: - output.append(line) - - return "".join(output).strip() - - -@pytest.mark.timeout(TIMEOUT_SECS) -@pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning") -@pytest.mark.filterwarnings("ignore::ResourceWarning") -def test_simple_jupyterhub_ssh(paramiko_object): - stdin, stdout, stderr = paramiko_object.exec_command("") + yield ssh_client, params + + ssh_client.close() + + +def invoke_shell( + client: paramiko.SSHClient, params: dict[str, any] +) -> paramiko.Channel: + client.connect(**params) + return client.invoke_shell() + + +def extract_output(delimiter: str, output: str) -> str: + # Extract the command output between the start and end delimiters + match = re.search(rf"{delimiter}start\n(.*)\n{delimiter}end", output, re.DOTALL) + if match: + print(match.group(1).strip()) + return match.group(1).strip() + else: + return output.strip() + + +def run_command_list( + commands: list[str], channel: paramiko.Channel, wait_time: int = 0 +) -> dict[str, str]: + command_delimiters = {} + for 
command in commands: + delimiter = uuid.uuid4().hex + command_delimiters[command] = delimiter + b = channel.send(f"echo {delimiter}start; {command}; echo {delimiter}end\n") + if b == 0: + print(f"Command '{command}' failed to send") + # Wait for the output to be ready before reading + time.sleep(wait_time) + while not channel.recv_ready(): + time.sleep(1) + print("Waiting for output") + output = "" + while channel.recv_ready(): + output += channel.recv(65535).decode("utf-8") + outputs = {} + for command, delimiter in command_delimiters.items(): + command_output = extract_output(delimiter, output) + outputs[command] = command_output + return outputs @pytest.mark.timeout(TIMEOUT_SECS) @pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning") @pytest.mark.filterwarnings("ignore::ResourceWarning") def test_print_jupyterhub_ssh(paramiko_object): - stdin, stdout, stderr = paramiko_object.exec_command("") - - # commands to run and just print the output + client, params = paramiko_object + channel = invoke_shell(client, params) + # Commands to run and just print the output commands_print = [ "id", "env", @@ -80,52 +97,60 @@ def test_print_jupyterhub_ssh(paramiko_object): "ls -la", "umask", ] - - for command in commands_print: - print(f'COMMAND: "{command}"') - print(run_command(command, stdin, stdout, stderr)) + outputs = run_command_list(commands_print, channel) + for command, output in outputs.items(): + print(f"COMMAND: {command}") + print(f"OUTPUT: {output}") + channel.close() @pytest.mark.timeout(TIMEOUT_SECS) @pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning") @pytest.mark.filterwarnings("ignore::ResourceWarning") def test_exact_jupyterhub_ssh(paramiko_object): - stdin, stdout, stderr = paramiko_object.exec_command("") - - # commands to run and exactly match output - commands_exact = [ - ("id -u", "1000"), - ("id -g", "100"), - ("whoami", constants.KEYCLOAK_USERNAME), - ("pwd", 
f"/home/{constants.KEYCLOAK_USERNAME}"), - ("echo $HOME", f"/home/{constants.KEYCLOAK_USERNAME}"), - ("conda activate default && echo $CONDA_PREFIX", "/opt/conda/envs/default"), - ( - "hostname", - f"jupyter-{escape_string(constants.KEYCLOAK_USERNAME, safe=set(string.ascii_lowercase + string.digits), escape_char='-').lower()}", - ), - ] + client, params = paramiko_object + channel = invoke_shell(client, params) + # Commands to run and exactly match output + commands_exact = { + "id -u": "1000", + "id -g": "100", + "whoami": constants.KEYCLOAK_USERNAME, + "pwd": f"/home/{constants.KEYCLOAK_USERNAME}", + "echo $HOME": f"/home/{constants.KEYCLOAK_USERNAME}", + "conda activate default && echo $CONDA_PREFIX": "/opt/conda/envs/default", + "hostname": f"jupyter-{escape_string(constants.KEYCLOAK_USERNAME, safe=set(string.ascii_lowercase + string.digits), escape_char='-').lower()}", + } + outputs = run_command_list(list(commands_exact.keys()), channel) + for command, output in outputs.items(): + assert ( + output == outputs[command] + ), f"Command '{command}' output '{outputs[command]}' does not match expected '{output}'" - for command, output in commands_exact: - assert output == run_command(command, stdin, stdout, stderr) + channel.close() @pytest.mark.timeout(TIMEOUT_SECS) @pytest.mark.filterwarnings("ignore::urllib3.exceptions.InsecureRequestWarning") @pytest.mark.filterwarnings("ignore::ResourceWarning") def test_contains_jupyterhub_ssh(paramiko_object): - stdin, stdout, stderr = paramiko_object.exec_command("") - - # commands to run and string need to be contained in output - commands_contain = [ - ("ls -la", ".bashrc"), - ("cat ~/.bashrc", "Managed by Nebari"), - ("cat ~/.profile", "Managed by Nebari"), - ("cat ~/.bash_logout", "Managed by Nebari"), - # ensure we don't copy over extra files from /etc/skel in init container - ("ls -la ~/..202*", "No such file or directory"), - ("ls -la ~/..data", "No such file or directory"), - ] + client, params = paramiko_object + 
channel = invoke_shell(client, params) + + # Commands to run and check if the output contains specific strings + commands_contain = { + "ls -la": ".bashrc", + "cat ~/.bashrc": "Managed by Nebari", + "cat ~/.profile": "Managed by Nebari", + "cat ~/.bash_logout": "Managed by Nebari", + # Ensure we don't copy over extra files from /etc/skel in init container + "ls -la ~/..202*": "No such file or directory", + "ls -la ~/..data": "No such file or directory", + } + + outputs = run_command_list(commands_contain.keys(), channel, 30) + for command, expected_output in commands_contain.items(): + assert ( + expected_output in outputs[command] + ), f"Command '{command}' output does not contain expected substring '{expected_output}'. Instead got '{outputs[command]}'" - for command, output in commands_contain: - assert output in run_command(command, stdin, stdout, stderr) + channel.close() diff --git a/tests/tests_e2e/playwright/.env.tpl b/tests/tests_e2e/playwright/.env.tpl index 399eff80c7..d1fad0a084 100644 --- a/tests/tests_e2e/playwright/.env.tpl +++ b/tests/tests_e2e/playwright/.env.tpl @@ -1,3 +1,3 @@ KEYCLOAK_USERNAME="USERNAME_OR_GOOGLE_EMAIL" KEYCLOAK_PASSWORD="PASSWORD" -NEBARI_FULL_URL="https://nebari.quansight.dev/" +NEBARI_FULL_URL="https://localhost/" diff --git a/tests/tests_e2e/playwright/Makefile b/tests/tests_e2e/playwright/Makefile new file mode 100644 index 0000000000..429a8a4ac5 --- /dev/null +++ b/tests/tests_e2e/playwright/Makefile @@ -0,0 +1,10 @@ +.PHONY: setup + +setup: + @echo "Setting up correct pins for playwright user-journey tests" + pip install -r requirements.txt + @echo "Setting up playwright browser dependencies" + playwright install + @echo "Setting up .env file" + cp .env.tpl .env + @echo "Please fill in the .env file with the correct values" diff --git a/tests/tests_e2e/playwright/README.md b/tests/tests_e2e/playwright/README.md index c328681273..bb3592c9b2 100644 --- a/tests/tests_e2e/playwright/README.md +++ 
b/tests/tests_e2e/playwright/README.md @@ -33,48 +33,57 @@ tests - `handlers.py`: Contains classes for handling the different levels of access to services a User might encounter, such as Notebook, Conda-store and others. -## Setup - -1. **Install Nebari with Development Requirements** - Install Nebari including development requirements (which include Playwright): - ```bash - pip install -e ".[dev]" - ``` +## Setup -2. **Install Playwright** +1. **Use the provided Makefile to install dependencies** - Install Playwright: + Navigate to the Playwright tests directory and run the `setup` target: ```bash - playwright install + cd tests_e2e/playwright + make setup ``` - *Note:* If you see the warning `BEWARE: your OS is not officially supported by Playwright; downloading fallback build`, it is not critical. Playwright should still work (see microsoft/playwright#15124). + This command will: -3. **Create Environment Vars** + - Install the pinned dependencies from `requirements.txt`. + - Install Playwright and its required browser dependencies. + - Create a new `.env` file from `.env.tpl`. - Fill in your execution space environment with the following values: +2. **Fill in the `.env` file** - - `KEYCLOAK_USERNAME`: Nebari username for username/password login or Google email address/Google sign-in. - - `KEYCLOAK_PASSWORD`: Password associated with `KEYCLOAK_USERNAME`. - - `NEBARI_FULL_URL`: Full URL path including scheme to the Nebari instance (e.g., "https://nebari.quansight.dev/"). + Open the newly created `.env` file and fill in the following values: - This user can be created with the following command (or use an existing non-root user): + - `KEYCLOAK_USERNAME`: Nebari username for username/password login (or Google email for Google sign-in).
+ - `KEYCLOAK_PASSWORD`: Password associated with the above username. + - `NEBARI_FULL_URL`: Full URL (including `https://`) to the Nebari instance (e.g., `https://nebari.quansight.dev/`). + + If you need to create a user for testing, you can do so with: ```bash nebari keycloak adduser --user --config ``` -## Running the Playwright Tests +*Note:* If you see the warning: +``` +BEWARE: your OS is not officially supported by Playwright; downloading fallback build +``` +it is not critical. Playwright should still work despite the warning. -Playwright tests are run inside of pytest using: +## Running the Playwright Tests +You can run the Playwright tests with `pytest`. ```bash -pytest tests_e2e/playwright/test_playwright.py +pytest tests_e2e/playwright/test_playwright.py --numprocesses auto ``` +> **Important**: Due to how Pytest manages async code, Playwright’s sync calls can conflict with default Pytest concurrency settings; using `--numprocesses auto` helps mitigate potential thread-blocking issues. + + Videos of the test playback will be available in `$PWD/videos/`. To disable the browser runtime preview of what is happening while the test runs, pass the `--headed` option to `pytest`. You can also add the `--slowmo=$MILLI_SECONDS` option to introduce a delay before each @@ -188,3 +197,17 @@ If your test suite presents a need for a more complex sequence of actions or specific parsing around the contents present in each page, you can create your own handler to execute the auxiliary actions while the test is running. Check `handlers.py` for some examples of how that's being done. + + +## Debugging Playwright tests + +Playwright supports a debug mode called +[Inspector](https://playwright.dev/python/docs/debug#playwright-inspector) that can be +used to inspect the browser and the page while the test is running. To enable this +debugging option, set the `PWDEBUG=1` environment variable within +your test execution command.
+ +For example, to run a single test with the debug mode enabled, you can use the following +```bash +PWDEBUG=1 pytest -s test_playwright.py::test_notebook --numprocesses 1 +``` diff --git a/tests/tests_e2e/playwright/requirements.txt b/tests/tests_e2e/playwright/requirements.txt new file mode 100644 index 0000000000..0e5093a62d --- /dev/null +++ b/tests/tests_e2e/playwright/requirements.txt @@ -0,0 +1,4 @@ +playwright==1.50.0 +pytest==8.0.0 +pytest-playwright==0.7.0 +pytest-xdist==3.6.1 diff --git a/tests/tests_e2e/playwright/test_playwright.py b/tests/tests_e2e/playwright/test_playwright.py index 9d04a4e027..0a835c8413 100644 --- a/tests/tests_e2e/playwright/test_playwright.py +++ b/tests/tests_e2e/playwright/test_playwright.py @@ -30,7 +30,8 @@ def test_login_logout(navigator): ) @login_parameterized() def test_navbar_services(navigator, services): - navigator.page.goto(navigator.nebari_url + "hub/home") + home_url = navigator._nebari_url / "hub/home" + navigator.page.goto(home_url.human_repr()) navigator.page.wait_for_load_state("networkidle") navbar_items = navigator.page.locator("#thenavbar").get_by_role("link") navbar_items_names = [item.text_content() for item in navbar_items.all()] diff --git a/tests/tests_integration/README.md b/tests/tests_integration/README.md index 759a70a594..79c037a390 100644 --- a/tests/tests_integration/README.md +++ b/tests/tests_integration/README.md @@ -3,26 +3,6 @@ These tests are designed to test things on Nebari deployed on cloud. - -## Digital Ocean - -```bash -DIGITALOCEAN_TOKEN -NEBARI_K8S_VERSION -SPACES_ACCESS_KEY_ID -SPACES_SECRET_ACCESS_KEY -CLOUDFLARE_TOKEN -``` - -Assuming you're in the `tests_integration` directory, run: - -```bash -pytest -vvv -s --cloud do -``` - -This will deploy on Nebari on Digital Ocean, run tests on the deployment -and then teardown the cluster. 
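The reworked SSH tests earlier in this diff bracket every command between unique start/end delimiters so that several commands can share one shell channel; the extraction step can be exercised on its own, independent of paramiko (the sample stream below is fabricated):

```python
import re
import uuid


def extract_output(delimiter: str, raw: str) -> str:
    """Recover one command's output from a combined shell stream by
    matching the text between `<delimiter>start` and `<delimiter>end`."""
    match = re.search(rf"{delimiter}start\n(.*)\n{delimiter}end", raw, re.DOTALL)
    return match.group(1).strip() if match else raw.strip()


delim = uuid.uuid4().hex
stream = f"$ echo {delim}start; whoami; echo {delim}end\n{delim}start\njovyan\n{delim}end\n$ "
result = extract_output(delim, stream)
```

Using a fresh UUID per command keeps the markers from colliding with anything the command itself prints, which is why the tests can safely interleave many commands on one channel.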
- ## Amazon Web Services ```bash diff --git a/tests/tests_integration/conftest.py b/tests/tests_integration/conftest.py index 4a64fd4274..b4b7a9af79 100644 --- a/tests/tests_integration/conftest.py +++ b/tests/tests_integration/conftest.py @@ -7,5 +7,5 @@ # argparse under-the-hood def pytest_addoption(parser): parser.addoption( - "--cloud", action="store", help="Cloud to deploy on: aws/do/gcp/azure" + "--cloud", action="store", help="Cloud to deploy on: aws/gcp/azure" ) diff --git a/tests/tests_integration/deployment_fixtures.py b/tests/tests_integration/deployment_fixtures.py index f5752d4c24..4ece916667 100644 --- a/tests/tests_integration/deployment_fixtures.py +++ b/tests/tests_integration/deployment_fixtures.py @@ -16,10 +16,8 @@ from _nebari.destroy import destroy_configuration from _nebari.provider.cloud.amazon_web_services import aws_cleanup from _nebari.provider.cloud.azure_cloud import azure_cleanup -from _nebari.provider.cloud.digital_ocean import digital_ocean_cleanup from _nebari.provider.cloud.google_cloud import gcp_cleanup from _nebari.render import render_template -from _nebari.utils import set_do_environment from nebari import schema from tests.common.config_mod_utils import add_gpu_config, add_preemptible_node_group from tests.tests_unit.utils import render_config_partial @@ -98,10 +96,7 @@ def _cleanup_nebari(config: schema.Main): cloud_provider = config.provider - if cloud_provider == schema.ProviderEnum.do.value.lower(): - logger.info("Forcefully clean up Digital Ocean resources") - digital_ocean_cleanup(config) - elif cloud_provider == schema.ProviderEnum.aws.lower(): + if cloud_provider == schema.ProviderEnum.aws.lower(): logger.info("Forcefully clean up AWS resources") aws_cleanup(config) elif cloud_provider == schema.ProviderEnum.gcp.lower(): @@ -119,9 +114,6 @@ def deploy(request): cloud = request.config.getoption("--cloud") # initialize - if cloud == "do": - set_do_environment() - deployment_dir = 
_get_or_create_deployment_directory(cloud) config = render_config_partial( project_name=deployment_dir.name, diff --git a/tests/tests_integration/test_all_clouds.py b/tests/tests_integration/test_all_clouds.py index 8a163fb7b6..6a9bf87dd4 100644 --- a/tests/tests_integration/test_all_clouds.py +++ b/tests/tests_integration/test_all_clouds.py @@ -2,7 +2,6 @@ def test_service_status(deploy): - """Tests if deployment on DigitalOcean succeeds""" service_urls = deploy["stages/07-kubernetes-services"]["service_urls"]["value"] assert ( requests.get(service_urls["jupyterhub"]["health_url"], verify=False).status_code diff --git a/tests/tests_unit/cli_validate/do.happy.yaml b/tests/tests_unit/cli_validate/do.happy.yaml deleted file mode 100644 index 4ca0b2e62f..0000000000 --- a/tests/tests_unit/cli_validate/do.happy.yaml +++ /dev/null @@ -1,28 +0,0 @@ -provider: do -namespace: dev -nebari_version: 2023.7.2.dev23+g53d17964.d20230824 -project_name: test -domain: test.example.com -ci_cd: - type: none -terraform_state: - type: local -security: - keycloak: - initial_root_password: m1s25vc4k43dxbk5jaxubxcq39n4vmjq - authentication: - type: password -theme: - jupyterhub: - hub_title: Nebari - test - welcome: Welcome! Learn about Nebari's features and configurations in the - documentation. If you have any questions or feedback, reach the team on - Nebari's support - forums. 
- hub_subtitle: Your open source data science platform, hosted on Azure -certificate: - type: lets-encrypt - acme_email: test@example.com -digital_ocean: - kubernetes_version: '1.20.2-do.0' - region: nyc3 diff --git a/tests/tests_unit/conftest.py b/tests/tests_unit/conftest.py index ce60e44799..54528cbd23 100644 --- a/tests/tests_unit/conftest.py +++ b/tests/tests_unit/conftest.py @@ -7,7 +7,6 @@ from _nebari.constants import ( AWS_DEFAULT_REGION, AZURE_DEFAULT_REGION, - DO_DEFAULT_REGION, GCP_DEFAULT_REGION, ) from _nebari.initialize import render_config @@ -56,6 +55,18 @@ def _mock_return_value(return_value): "m5.xlarge": "m5.xlarge", "m5.2xlarge": "m5.2xlarge", }, + "_nebari.provider.cloud.amazon_web_services.kms_key_arns": { + "xxxxxxxx-east-zzzz": { + "Arn": "arn:aws:kms:us-east-1:100000:key/xxxxxxxx-east-zzzz", + "KeyUsage": "ENCRYPT_DECRYPT", + "KeySpec": "SYMMETRIC_DEFAULT", + }, + "xxxxxxxx-west-zzzz": { + "Arn": "arn:aws:kms:us-west-2:100000:key/xxxxxxxx-west-zzzz", + "KeyUsage": "ENCRYPT_DECRYPT", + "KeySpec": "SYMMETRIC_DEFAULT", + }, + }, # Azure "_nebari.provider.cloud.azure_cloud.kubernetes_versions": [ "1.18", @@ -63,22 +74,6 @@ def _mock_return_value(return_value): "1.20", ], "_nebari.provider.cloud.azure_cloud.check_credentials": None, - # Digital Ocean - "_nebari.provider.cloud.digital_ocean.kubernetes_versions": [ - "1.19.2-do.3", - "1.20.2-do.0", - "1.21.5-do.0", - ], - "_nebari.provider.cloud.digital_ocean.check_credentials": None, - "_nebari.provider.cloud.digital_ocean.regions": [ - {"name": "New York 3", "slug": "nyc3"}, - ], - "_nebari.provider.cloud.digital_ocean.instances": [ - {"name": "s-2vcpu-4gb", "slug": "s-2vcpu-4gb"}, - {"name": "g-2vcpu-8gb", "slug": "g-2vcpu-8gb"}, - {"name": "g-8vcpu-32gb", "slug": "g-8vcpu-32gb"}, - {"name": "g-4vcpu-16gb", "slug": "g-4vcpu-16gb"}, - ], # Google Cloud "_nebari.provider.cloud.google_cloud.kubernetes_versions": [ "1.18", @@ -90,6 +85,11 @@ def _mock_return_value(return_value): "us-central1", 
"us-east1", ], + "_nebari.provider.cloud.google_cloud.instances": [ + "e2-standard-4", + "e2-standard-8", + "e2-highmem-4", + ], } for attribute_path, return_value in MOCK_VALUES.items(): @@ -101,15 +101,6 @@ def _mock_return_value(return_value): @pytest.fixture( params=[ # project, namespace, domain, cloud_provider, region, ci_provider, auth_provider - ( - "pytestdo", - "dev", - "do.nebari.dev", - schema.ProviderEnum.do, - DO_DEFAULT_REGION, - CiEnum.github_actions, - AuthenticationEnum.password, - ), ( "pytestaws", "dev", diff --git a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml similarity index 85% rename from tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml rename to tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml index 50a2b89af4..28877bf1bc 100644 --- a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml +++ b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml @@ -1,6 +1,6 @@ -project_name: do-pytest -provider: do -domain: do.nebari.dev +project_name: aws-pytest +provider: aws +domain: aws.nebari.dev certificate: type: self-signed security: @@ -32,7 +32,7 @@ storage: theme: jupyterhub: hub_title: Nebari - do-pytest - hub_subtitle: Autoscaling Compute Environment on Digital Ocean + hub_subtitle: Autoscaling Compute Environment on AWS welcome: Welcome to do.nebari.dev. It is maintained by Quansight staff. The hub's configuration is stored in a github repository based on https://github.com/Quansight/nebari/. 
@@ -48,22 +48,31 @@ theme: terraform_state: type: remote namespace: dev -digital_ocean: - region: nyc3 - kubernetes_version: 1.21.5-do.0 +amazon_web_services: + kubernetes_version: '1.20' + region: us-east-1 node_groups: general: - instance: s-2vcpu-4gb + instance: m5.2xlarge min_nodes: 1 max_nodes: 1 + gpu: false + single_subnet: false + permissions_boundary: user: - instance: g-2vcpu-8gb - min_nodes: 1 + instance: m5.xlarge + min_nodes: 0 max_nodes: 5 + gpu: false + single_subnet: false + permissions_boundary: worker: - instance: g-2vcpu-8gb - min_nodes: 1 + instance: m5.xlarge + min_nodes: 0 max_nodes: 5 + gpu: false + single_subnet: false + permissions_boundary: profiles: jupyterlab: - display_name: Small Instance diff --git a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml similarity index 85% rename from tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml rename to tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml index a3a06da6a2..874de58b61 100644 --- a/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml +++ b/tests/tests_unit/qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml @@ -1,6 +1,6 @@ -project_name: do-pytest -provider: do -domain: do.nebari.dev +project_name: aws-pytest +provider: aws +domain: aws.nebari.dev certificate: type: self-signed security: @@ -29,7 +29,7 @@ storage: theme: jupyterhub: hub_title: Nebari - do-pytest - hub_subtitle: Autoscaling Compute Environment on Digital Ocean + hub_subtitle: Autoscaling Compute Environment on AWS welcome: Welcome to do.nebari.dev. It is maintained by Quansight staff. The hub's configuration is stored in a github repository based on https://github.com/Quansight/nebari/. 
@@ -45,22 +45,31 @@ theme: terraform_state: type: remote namespace: dev -digital_ocean: - region: nyc3 - kubernetes_version: 1.21.5-do.0 +amazon_web_services: + kubernetes_version: '1.20' + region: us-east-1 node_groups: general: - instance: s-2vcpu-4gb + instance: m5.2xlarge min_nodes: 1 max_nodes: 1 + gpu: false + single_subnet: false + permissions_boundary: user: - instance: g-2vcpu-8gb - min_nodes: 1 + instance: m5.xlarge + min_nodes: 0 max_nodes: 5 + gpu: false + single_subnet: false + permissions_boundary: worker: - instance: g-2vcpu-8gb - min_nodes: 1 + instance: m5.xlarge + min_nodes: 0 max_nodes: 5 + gpu: false + single_subnet: false + permissions_boundary: profiles: jupyterlab: - display_name: Small Instance diff --git a/tests/tests_unit/test_cli_init.py b/tests/tests_unit/test_cli_init.py index 9afab5ddc5..03b22557ae 100644 --- a/tests/tests_unit/test_cli_init.py +++ b/tests/tests_unit/test_cli_init.py @@ -17,13 +17,11 @@ "aws": ["1.20"], "azure": ["1.20"], "gcp": ["1.20"], - "do": ["1.21.5-do.0"], } MOCK_CLOUD_REGIONS = { "aws": ["us-east-1"], "azure": [AZURE_DEFAULT_REGION], "gcp": ["us-central1"], - "do": ["nyc3"], } @@ -70,7 +68,7 @@ def generate_test_data_test_cli_init_happy_path(): """ test_data = [] - for provider in ["local", "aws", "azure", "gcp", "do", "existing"]: + for provider in ["local", "aws", "azure", "gcp", "existing"]: for region in get_cloud_regions(provider): for project_name in ["testproject"]: for domain_name in [f"{project_name}.example.com"]: @@ -265,9 +263,6 @@ def get_provider_section_header(provider: str): return "google_cloud_platform" if provider == "azure": return "azure" - if provider == "do": - return "digital_ocean" - return "" @@ -278,8 +273,6 @@ def get_cloud_regions(provider: str): return MOCK_CLOUD_REGIONS["gcp"] if provider == "azure": return MOCK_CLOUD_REGIONS["azure"] - if provider == "do": - return MOCK_CLOUD_REGIONS["do"] return "" @@ -291,7 +284,4 @@ def get_kubernetes_versions(provider: str): return 
MOCK_KUBERNETES_VERSIONS["gcp"] if provider == "azure": return MOCK_KUBERNETES_VERSIONS["azure"] - if provider == "do": - return MOCK_KUBERNETES_VERSIONS["do"] - return "" diff --git a/tests/tests_unit/test_cli_plugin.py b/tests/tests_unit/test_cli_plugin.py new file mode 100644 index 0000000000..2f6257050e --- /dev/null +++ b/tests/tests_unit/test_cli_plugin.py @@ -0,0 +1,64 @@ +from typing import List +from unittest.mock import Mock, patch + +import pytest +from typer.testing import CliRunner + +from _nebari.cli import create_cli + +runner = CliRunner() + + +@pytest.mark.parametrize( + "args, exit_code, content", + [ + # --help + ([], 0, ["Usage:"]), + (["--help"], 0, ["Usage:"]), + (["-h"], 0, ["Usage:"]), + (["list", "--help"], 0, ["Usage:"]), + (["list", "-h"], 0, ["Usage:"]), + (["list"], 0, ["Plugins"]), + ], +) +def test_cli_plugin_stdout(args: List[str], exit_code: int, content: List[str]): + app = create_cli() + result = runner.invoke(app, ["plugin"] + args) + assert result.exit_code == exit_code + for c in content: + assert c in result.stdout + + +def mock_get_plugins(): + mytestexternalplugin = Mock() + mytestexternalplugin.__name__ = "mytestexternalplugin" + + otherplugin = Mock() + otherplugin.__name__ = "otherplugin" + + return [mytestexternalplugin, otherplugin] + + +def mock_version(pkg): + pkg_version_map = { + "mytestexternalplugin": "0.4.4", + "otherplugin": "1.1.1", + } + return pkg_version_map.get(pkg) + + +@patch( + "nebari.plugins.NebariPluginManager.plugin_manager.get_plugins", mock_get_plugins +) +@patch("_nebari.subcommands.plugin.version", mock_version) +def test_cli_plugin_list_external_plugins(): + app = create_cli() + result = runner.invoke(app, ["plugin", "list"]) + assert result.exit_code == 0 + expected_output = [ + "Plugins", + "mytestexternalplugin │ 0.4.4", + "otherplugin │ 1.1.1", + ] + for c in expected_output: + assert c in result.stdout diff --git a/tests/tests_unit/test_cli_upgrade.py b/tests/tests_unit/test_cli_upgrade.py 
index aa79838bee..364b51b23b 100644 --- a/tests/tests_unit/test_cli_upgrade.py +++ b/tests/tests_unit/test_cli_upgrade.py @@ -5,6 +5,7 @@ import pytest import yaml +from rich.prompt import Confirm, Prompt from typer.testing import CliRunner import _nebari.upgrade @@ -18,13 +19,11 @@ "aws": ["1.20"], "azure": ["1.20"], "gcp": ["1.20"], - "do": ["1.21.5-do.0"], } MOCK_CLOUD_REGIONS = { "aws": ["us-east-1"], "azure": [AZURE_DEFAULT_REGION], "gcp": ["us-central1"], - "do": ["nyc3"], } @@ -106,7 +105,7 @@ def test_cli_upgrade_2023_4_1_to_2023_5_1(monkeypatch: pytest.MonkeyPatch): @pytest.mark.parametrize( "provider", - ["aws", "azure", "do", "gcp"], + ["aws", "azure", "gcp"], ) def test_cli_upgrade_2023_5_1_to_2023_7_1( monkeypatch: pytest.MonkeyPatch, provider: str @@ -434,9 +433,6 @@ def test_cli_upgrade_to_2023_10_1_cdsdashboard_removed(monkeypatch: pytest.Monke ("azure", "compatible"), ("azure", "incompatible"), ("azure", "invalid"), - ("do", "compatible"), - ("do", "incompatible"), - ("do", "invalid"), ("gcp", "compatible"), ("gcp", "incompatible"), ("gcp", "invalid"), @@ -452,14 +448,27 @@ def test_cli_upgrade_to_2023_10_1_kubernetes_validations( kubernetes_configs = { "aws": {"incompatible": "1.19", "compatible": "1.26", "invalid": "badname"}, "azure": {"incompatible": "1.23", "compatible": "1.26", "invalid": "badname"}, - "do": { - "incompatible": "1.19.2-do.3", - "compatible": "1.26.0-do.custom", - "invalid": "badname", - }, "gcp": {"incompatible": "1.23", "compatible": "1.26", "invalid": "badname"}, } + def mock_input_ask(prompt, *args, **kwargs): + from _nebari.upgrade import TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION + + # For more about structural pattern matching, see: + # https://peps.python.org/pep-0636/ + match prompt: + case str(s) if s == TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION: + return kwargs.get("attempt_fixes", False) + case _: + return kwargs.get("default", False) + + monkeypatch.setattr(Confirm, "ask", mock_input_ask) + 
monkeypatch.setattr( + Prompt, + "ask", + lambda x, *args, **kwargs: "", + ) + with tempfile.TemporaryDirectory() as tmp: tmp_file = Path(tmp).resolve() / "nebari-config.yaml" assert tmp_file.exists() is False diff --git a/tests/tests_unit/test_cli_validate.py b/tests/tests_unit/test_cli_validate.py index faf2efa8a1..b12d3cfea0 100644 --- a/tests/tests_unit/test_cli_validate.py +++ b/tests/tests_unit/test_cli_validate.py @@ -221,7 +221,6 @@ def test_cli_validate_error_from_env( } }, ), - ("do", {"digital_ocean": {"kubernetes_version": "1.20", "region": "nyc3"}}), pytest.param( "local", {"security": {"authentication": {"type": "Auth0"}}}, @@ -248,7 +247,6 @@ def test_cli_validate_error_missing_cloud_env( "ARM_TENANT_ID", "ARM_CLIENT_ID", "ARM_CLIENT_SECRET", - "DIGITALOCEAN_TOKEN", "SPACES_ACCESS_KEY_ID", "SPACES_SECRET_ACCESS_KEY", "AUTH0_CLIENT_ID", diff --git a/tests/tests_unit/test_config_set.py b/tests/tests_unit/test_config_set.py new file mode 100644 index 0000000000..81f5a8a11c --- /dev/null +++ b/tests/tests_unit/test_config_set.py @@ -0,0 +1,73 @@ +from unittest.mock import patch + +import pytest +from packaging.requirements import SpecifierSet + +from _nebari.config_set import ConfigSetMetadata, read_config_set + +test_version = "2024.12.2" + + +@pytest.mark.parametrize( + "version_input,test_version,should_pass", + [ + # Standard version tests + (">=2024.12.0,<2025.0.0", "2024.12.2", True), + (SpecifierSet(">=2024.12.0,<2025.0.0"), "2024.12.2", True), + # Pre-release version requirement tests + (">=2024.12.0rc1,<2025.0.0", "2024.12.0rc1", True), + (SpecifierSet(">=2024.12.0rc1"), "2024.12.0rc2", True), + # Pre-release test version against standard requirement + (">=2024.12.0,<2025.0.0", "2024.12.1rc1", True), + (SpecifierSet(">=2024.12.0,<2025.0.0"), "2024.12.1rc1", True), + # Failing cases + (">=2025.0.0", "2024.12.2rc1", False), + (SpecifierSet(">=2025.0.0rc1"), "2024.12.2", False), + ], +) +def test_version_requirement(version_input, test_version, 
should_pass): + metadata = ConfigSetMetadata(name="test-config", nebari_version=version_input) + + if should_pass: + metadata.check_version(test_version) + else: + with pytest.raises(ValueError) as exc_info: + metadata.check_version(test_version) + assert "Nebari version" in str(exc_info.value) + + +def test_read_config_set_valid(tmp_path): + config_set_yaml = """ + metadata: + name: test-config + nebari_version: ">=2024.12.0" + config: + key: value + """ + config_set_filepath = tmp_path / "config_set.yaml" + config_set_filepath.write_text(config_set_yaml) + with patch("_nebari.config_set.__version__", "2024.12.2"): + config_set = read_config_set(str(config_set_filepath)) + assert config_set.metadata.name == "test-config" + assert config_set.config["key"] == "value" + + +def test_read_config_set_invalid_version(tmp_path): + config_set_yaml = """ + metadata: + name: test-config + nebari_version: ">=2025.0.0" + config: + key: value + """ + config_set_filepath = tmp_path / "config_set.yaml" + config_set_filepath.write_text(config_set_yaml) + + with patch("_nebari.config_set.__version__", "2024.12.2"): + with pytest.raises(ValueError) as exc_info: + read_config_set(str(config_set_filepath)) + assert "Nebari version" in str(exc_info.value) + + +if __name__ == "__main__": + pytest.main() diff --git a/tests/tests_unit/test_dependencies.py b/tests/tests_unit/test_dependencies.py deleted file mode 100644 index bcde584e08..0000000000 --- a/tests/tests_unit/test_dependencies.py +++ /dev/null @@ -1,18 +0,0 @@ -import urllib - -from _nebari.provider import terraform - - -def test_terraform_open_source_license(): - tf_version = terraform.version() - license_url = ( - f"https://raw.githubusercontent.com/hashicorp/terraform/v{tf_version}/LICENSE" - ) - - request = urllib.request.Request(license_url) - with urllib.request.urlopen(request) as response: - assert 200 == response.getcode() - - license = str(response.read()) - assert "Mozilla Public License" in license - assert 
"Business Source License" not in license diff --git a/tests/tests_unit/test_links.py b/tests/tests_unit/test_links.py index a393391ce9..6e8529149e 100644 --- a/tests/tests_unit/test_links.py +++ b/tests/tests_unit/test_links.py @@ -1,10 +1,9 @@ import pytest import requests -from _nebari.constants import AWS_ENV_DOCS, AZURE_ENV_DOCS, DO_ENV_DOCS, GCP_ENV_DOCS +from _nebari.constants import AWS_ENV_DOCS, AZURE_ENV_DOCS, GCP_ENV_DOCS LINKS_TO_TEST = [ - DO_ENV_DOCS, AWS_ENV_DOCS, GCP_ENV_DOCS, AZURE_ENV_DOCS, diff --git a/tests/tests_unit/test_schema.py b/tests/tests_unit/test_schema.py index fa6a0c747c..e445ba37da 100644 --- a/tests/tests_unit/test_schema.py +++ b/tests/tests_unit/test_schema.py @@ -62,12 +62,11 @@ def test_render_schema(nebari_config): "fake", pytest.raises( ValueError, - match="'fake' is not a valid enumeration member; permitted: local, existing, do, aws, gcp, azure", + match="'fake' is not a valid enumeration member; permitted: local, existing, aws, gcp, azure", ), ), ("aws", nullcontext()), ("gcp", nullcontext()), - ("do", nullcontext()), ("azure", nullcontext()), ("existing", nullcontext()), ("local", nullcontext()), @@ -102,11 +101,6 @@ def test_provider_validation(config_schema, provider, exception): "kubernetes_version": "1.18", }, ), - ( - "do", - "digital_ocean", - {"region": "nyc3", "kubernetes_version": "1.19.2-do.3"}, - ), ( "azure", "azure", @@ -167,3 +161,13 @@ def test_set_provider(config_schema, provider): result_config_dict = config.model_dump() assert provider in result_config_dict assert result_config_dict[provider]["kube_context"] == "some_context" + + +def test_provider_config_mismatch_warning(config_schema): + config_dict = { + "project_name": "test", + "provider": "local", + "existing": {"kube_context": "some_context"}, # <-- Doesn't match the provider + } + with pytest.warns(UserWarning, match="configuration defined for other providers"): + config_schema(**config_dict) diff --git a/tests/tests_unit/test_stages.py 
b/tests/tests_unit/test_stages.py index c716d93030..c15aa6d9fc 100644 --- a/tests/tests_unit/test_stages.py +++ b/tests/tests_unit/test_stages.py @@ -53,6 +53,7 @@ def test_check_immutable_fields_immutable_change( mock_model_fields, mock_get_state, terraform_state_stage, mock_config ): old_config = mock_config.model_copy(deep=True) + old_config.local = None old_config.provider = schema.ProviderEnum.gcp mock_get_state.return_value = old_config.model_dump() diff --git a/tests/tests_unit/test_upgrade.py b/tests/tests_unit/test_upgrade.py index f6e3f80348..8f4a62630b 100644 --- a/tests/tests_unit/test_upgrade.py +++ b/tests/tests_unit/test_upgrade.py @@ -2,7 +2,7 @@ from pathlib import Path import pytest -from rich.prompt import Prompt +from rich.prompt import Confirm, Prompt from _nebari.upgrade import do_upgrade from _nebari.version import __version__, rounded_ver_parse @@ -21,21 +21,51 @@ def qhub_users_import_json(): ) +class MockKeycloakAdmin: + @staticmethod + def get_client_id(*args, **kwargs): + return "test-client" + + @staticmethod + def create_client_role(*args, **kwargs): + return "test-client-role" + + @staticmethod + def get_client_role_id(*args, **kwargs): + return "test-client-role-id" + + @staticmethod + def get_role_by_id(*args, **kwargs): + return bytearray("test-role-id", "utf-8") + + @staticmethod + def get_groups(*args, **kwargs): + return [] + + @staticmethod + def get_client_role_groups(*args, **kwargs): + return [] + + @staticmethod + def assign_group_client_roles(*args, **kwargs): + pass + + @pytest.mark.parametrize( "old_qhub_config_path_str,attempt_fixes,expect_upgrade_error", [ ( - "./qhub-config-yaml-files-for-upgrade/qhub-config-do-310.yaml", + "./qhub-config-yaml-files-for-upgrade/qhub-config-aws-310.yaml", False, False, ), ( - "./qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml", + "./qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml", False, True, ), ( - 
"./qhub-config-yaml-files-for-upgrade/qhub-config-do-310-customauth.yaml", + "./qhub-config-yaml-files-for-upgrade/qhub-config-aws-310-customauth.yaml", True, False, ), @@ -49,34 +79,100 @@ def test_upgrade_4_0( qhub_users_import_json, monkeypatch, ): - def mock_input(prompt, **kwargs): + from _nebari.upgrade import TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION + # Mock different upgrade steps prompt answers - if ( - prompt - == "Have you deleted the Argo Workflows CRDs and service accounts? [y/N] " - ): - return "y" + if prompt == "Have you deleted the Argo Workflows CRDs and service accounts?": + return True elif ( prompt == "\nDo you want Nebari to update the kube-prometheus-stack CRDs and delete the prometheus-node-exporter for you? If not, you'll have to do it manually." ): - return "N" + return False elif ( prompt == "Have you backed up your custom dashboards (if necessary), deleted the prometheus-node-exporter daemonset and updated the kube-prometheus-stack CRDs?" ): - return "y" + return True elif ( prompt - == "[bold]Would you like Nebari to assign the corresponding role to all of your current groups automatically?[/bold]" + == "[bold]Would you like Nebari to assign the corresponding role/scopes to all of your current groups automatically?[/bold]" ): - return "N" + return False + elif prompt == TERRAFORM_REMOVE_TERRAFORM_STAGE_FILES_CONFIRMATION: + return attempt_fixes # All other prompts will be answered with "y" else: - return "y" + return True + + monkeypatch.setattr(Confirm, "ask", mock_input) + monkeypatch.setattr(Prompt, "ask", lambda x: "") + + from kubernetes import config as _kube_config + from kubernetes.client import ApiextensionsV1Api as _ApiextensionsV1Api + from kubernetes.client import AppsV1Api as _AppsV1Api + from kubernetes.client import CoreV1Api as _CoreV1Api + from kubernetes.client import V1Status as _V1Status + + def monkey_patch_delete_crd(*args, **kwargs): + return _V1Status(code=200) - monkeypatch.setattr(Prompt, "ask", 
mock_input) + def monkey_patch_delete_namespaced_sa(*args, **kwargs): + return _V1Status(code=200) + + def monkey_patch_list_namespaced_daemon_set(*args, **kwargs): + class MonkeypatchApiResponse: + items = False + + return MonkeypatchApiResponse + + monkeypatch.setattr( + _kube_config, + "load_kube_config", + lambda *args, **kwargs: None, + ) + monkeypatch.setattr( + _kube_config, + "list_kube_config_contexts", + lambda *args, **kwargs: [None, {"context": {"cluster": "test"}}], + ) + monkeypatch.setattr( + _ApiextensionsV1Api, + "delete_custom_resource_definition", + monkey_patch_delete_crd, + ) + monkeypatch.setattr( + _CoreV1Api, + "delete_namespaced_service_account", + monkey_patch_delete_namespaced_sa, + ) + monkeypatch.setattr( + _ApiextensionsV1Api, + "read_custom_resource_definition", + lambda *args, **kwargs: True, + ) + monkeypatch.setattr( + _ApiextensionsV1Api, + "patch_custom_resource_definition", + lambda *args, **kwargs: True, + ) + monkeypatch.setattr( + _AppsV1Api, + "list_namespaced_daemon_set", + monkey_patch_list_namespaced_daemon_set, + ) + + from _nebari import upgrade as _upgrade + + def monkey_patch_get_keycloak_admin(*args, **kwargs): + return MockKeycloakAdmin() + + monkeypatch.setattr( + _upgrade, + "get_keycloak_admin", + monkey_patch_get_keycloak_admin, + ) old_qhub_config_path = Path(__file__).parent / old_qhub_config_path_str diff --git a/tests/tests_unit/test_utils.py b/tests/tests_unit/test_utils.py index 678cd1f230..88b911ff60 100644 --- a/tests/tests_unit/test_utils.py +++ b/tests/tests_unit/test_utils.py @@ -1,6 +1,6 @@ import pytest -from _nebari.utils import JsonDiff, JsonDiffEnum, byte_unit_conversion +from _nebari.utils import JsonDiff, JsonDiffEnum, byte_unit_conversion, deep_merge @pytest.mark.parametrize( @@ -64,3 +64,75 @@ def test_JsonDiff_modified(): diff = JsonDiff(obj1, obj2) modifieds = diff.modified() assert sorted(modifieds) == sorted([(["b", "!"], 2, 3), (["+"], 4, 5)]) + + +def 
test_deep_merge_order_preservation_dict(): + value_1 = { + "a": [1, 2], + "b": {"c": 1, "z": [5, 6]}, + "e": {"f": {"g": {}}}, + "m": 1, + } + + value_2 = { + "a": [3, 4], + "b": {"d": 2, "z": [7]}, + "e": {"f": {"h": 1}}, + "m": [1], + } + + expected_result = { + "a": [1, 2, 3, 4], + "b": {"c": 1, "z": [5, 6, 7], "d": 2}, + "e": {"f": {"g": {}, "h": 1}}, + "m": 1, + } + + result = deep_merge(value_1, value_2) + assert result == expected_result + assert list(result.keys()) == list(expected_result.keys()) + assert list(result["b"].keys()) == list(expected_result["b"].keys()) + assert list(result["e"]["f"].keys()) == list(expected_result["e"]["f"].keys()) + + +def test_deep_merge_order_preservation_list(): + value_1 = { + "a": [1, 2], + "b": {"c": 1, "z": [5, 6]}, + } + + value_2 = { + "a": [3, 4], + "b": {"d": 2, "z": [7]}, + } + + expected_result = { + "a": [1, 2, 3, 4], + "b": {"c": 1, "z": [5, 6, 7], "d": 2}, + } + + result = deep_merge(value_1, value_2) + assert result == expected_result + assert result["a"] == expected_result["a"] + assert result["b"]["z"] == expected_result["b"]["z"] + + +def test_deep_merge_single_dict(): + value_1 = { + "a": [1, 2], + "b": {"c": 1, "z": [5, 6]}, + } + + expected_result = value_1 + + result = deep_merge(value_1) + assert result == expected_result + assert list(result.keys()) == list(expected_result.keys()) + assert list(result["b"].keys()) == list(expected_result["b"].keys()) + + +def test_deep_merge_empty(): + expected_result = {} + + result = deep_merge() + assert result == expected_result diff --git a/tests/tests_unit/utils.py b/tests/tests_unit/utils.py index 82dffdcd3c..eddc66f52f 100644 --- a/tests/tests_unit/utils.py +++ b/tests/tests_unit/utils.py @@ -15,7 +15,6 @@ ) INIT_INPUTS = [ # project, namespace, domain, cloud_provider, ci_provider, auth_provider - ("pytestdo", "dev", "do.nebari.dev", "do", "github-actions", "github"), ("pytestaws", "dev", "aws.nebari.dev", "aws", "github-actions", "github"), ("pytestgcp", 
"dev", "gcp.nebari.dev", "gcp", "github-actions", "github"), ("pytestazure", "dev", "azure.nebari.dev", "azure", "github-actions", "github"), diff --git a/tests/utils.py b/tests/utils.py index 82dffdcd3c..eddc66f52f 100644 --- a/tests/utils.py +++ b/tests/utils.py @@ -15,7 +15,6 @@ ) INIT_INPUTS = [ # project, namespace, domain, cloud_provider, ci_provider, auth_provider - ("pytestdo", "dev", "do.nebari.dev", "do", "github-actions", "github"), ("pytestaws", "dev", "aws.nebari.dev", "aws", "github-actions", "github"), ("pytestgcp", "dev", "gcp.nebari.dev", "gcp", "github-actions", "github"), ("pytestazure", "dev", "azure.nebari.dev", "azure", "github-actions", "github"),
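The new `test_deep_merge_*` tests in `tests/tests_unit/test_utils.py` pin down three behaviors: lists concatenate, dicts merge recursively with first-seen key order preserved, and on any other conflict the left-hand value wins (`"m": 1` vs. `"m": [1]` yields `1`). A minimal sketch consistent with those tests — illustrative only; the real `_nebari.utils.deep_merge` may differ in detail:

```python
def _merge_two(a, b):
    """Merge b into a, returning a new value; a takes precedence on conflicts."""
    if isinstance(a, dict) and isinstance(b, dict):
        # Copy a first so its key order is preserved; new keys from b append.
        merged = dict(a)
        for key, value in b.items():
            merged[key] = _merge_two(merged[key], value) if key in merged else value
        return merged
    if isinstance(a, list) and isinstance(b, list):
        return a + b
    # Mismatched types or scalars: the left (earlier) value wins.
    return a


def deep_merge(*values):
    """Merge any number of config mappings left-to-right; no args yields {}."""
    result = {}
    for value in values:
        result = _merge_two(result, value)
    return result
```

Under this sketch, `deep_merge({"a": [1, 2]}, {"a": [3, 4]})` produces `{"a": [1, 2, 3, 4]}`, matching the order-preservation assertions in the diff.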
upgrade
-validate¶
+validate¶
Validate the values in the [purple]nebari-config.yaml[/purple] file are acceptable.
@@ -579,13 +611,13 @@
nebari validate [OPTIONS]
validateOptions
render
-support¶
+support¶
Support tool to write all Kubernetes logs locally and compress them into a zip file.
The Nebari team recommends k9s to manage and inspect the state of the cluster. However, this command is occasionally helpful for debugging purposes should the logs need to be shared.
@@ -527,24 +559,24 @@
supportOptions
-upgrade¶
+upgrade¶
Upgrade your [purple]nebari-config.yaml[/purple].
Upgrade your [purple]nebari-config.yaml[/purple] after a nebari upgrade. If necessary, prompts users to perform manual upgrade steps required for the deploy process.
See the project [green]RELEASE.md[/green] for details.
@@ -554,13 +586,13 @@
upgradeOptions
init
destroy
-dev¶
+dev¶
Development tools and advanced features.
nebari dev [OPTIONS] COMMAND [ARGS]...
-keycloak-api¶
+keycloak-api¶
Interact with the Keycloak REST API directly.
This is an advanced tool which can have potentially destructive consequences. Please use this at your own risk.
@@ -232,26 +228,27 @@
keycloak-apiOptions
-info¶
+info¶
+Display information about installed Nebari plugins and their configurations.
nebari info [OPTIONS]
-init¶
+init¶
Create and initialize your [purple]nebari-config.yaml[/purple] file.
This command will create and initialize your [purple]nebari-config.yaml[/purple] :sparkles:
This file contains all your Nebari cluster configuration details and,
@@ -261,13 +258,13 @@
init
nebari init [OPTIONS] [CLOUD_PROVIDER]:[local|existing|do|aws|gcp|azure]
+nebari init [OPTIONS] [CLOUD_PROVIDER]:[local|existing|aws|gcp|azure]
Options
destroy
deploy
-destroy¶
+destroy¶
Destroy the Nebari cluster from your [purple]nebari-config.yaml[/purple] file.
@@ -177,24 +173,24 @@
nebari destroy [OPTIONS]
destroyOptions
deploy
nebari
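The `test_schema.py` hunk updates the expected error to "permitted: local, existing, aws, gcp, azure" once `do` is dropped. A minimal illustrative sketch of that validation — the real `nebari.schema.ProviderEnum` is a Pydantic-integrated enum; the standalone `validate_provider` helper here is hypothetical, written only to mirror the error message the test matches:

```python
from enum import Enum


class ProviderEnum(str, Enum):
    # Provider list after the Digital Ocean removal in this diff.
    local = "local"
    existing = "existing"
    aws = "aws"
    gcp = "gcp"
    azure = "azure"


def validate_provider(value: str) -> ProviderEnum:
    """Coerce a string to a ProviderEnum, with the test's error wording."""
    try:
        return ProviderEnum(value)
    except ValueError:
        permitted = ", ".join(p.value for p in ProviderEnum)
        raise ValueError(
            f"'{value}' is not a valid enumeration member; permitted: {permitted}"
        )
```

With this sketch, `validate_provider("do")` now fails the same way `validate_provider("fake")` does, which is exactly the behavior change the updated parametrized test asserts.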