From be39b6ce7d8cfd80c0ca79c7a2676d29b8a0e811 Mon Sep 17 00:00:00 2001 From: <> Date: Mon, 20 May 2024 00:13:04 +0000 Subject: [PATCH] Deployed c48b04e with MkDocs version: 1.6.0 --- .nojekyll | 0 404.html | 1452 ++++ CNAME | 1 + addons/fleetlock/index.html | 1576 ++++ addons/grafana/index.html | 1561 ++++ addons/ingress/index.html | 1783 +++++ addons/overview/index.html | 1703 +++++ addons/prometheus/index.html | 1605 ++++ advanced/arm64/index.html | 1777 +++++ advanced/customization/index.html | 1850 +++++ advanced/nodes/index.html | 1705 +++++ advanced/overview/index.html | 1505 ++++ advanced/worker-pools/index.html | 2370 ++++++ announce/index.html | 1680 +++++ architecture/aws/index.html | 1771 +++++ architecture/azure/index.html | 1731 +++++ architecture/bare-metal/index.html | 1671 ++++ architecture/concepts/index.html | 1813 +++++ architecture/digitalocean/index.html | 1766 +++++ architecture/google-cloud/index.html | 1749 +++++ architecture/operating-systems/index.html | 1750 +++++ assets/images/favicon.png | Bin 0 -> 1870 bytes assets/javascripts/bundle.ebd0bdb7.min.js | 29 + assets/javascripts/bundle.ebd0bdb7.min.js.map | 7 + assets/javascripts/lunr/min/lunr.ar.min.js | 1 + assets/javascripts/lunr/min/lunr.da.min.js | 18 + assets/javascripts/lunr/min/lunr.de.min.js | 18 + assets/javascripts/lunr/min/lunr.du.min.js | 18 + assets/javascripts/lunr/min/lunr.el.min.js | 1 + assets/javascripts/lunr/min/lunr.es.min.js | 18 + assets/javascripts/lunr/min/lunr.fi.min.js | 18 + assets/javascripts/lunr/min/lunr.fr.min.js | 18 + assets/javascripts/lunr/min/lunr.he.min.js | 1 + assets/javascripts/lunr/min/lunr.hi.min.js | 1 + assets/javascripts/lunr/min/lunr.hu.min.js | 18 + assets/javascripts/lunr/min/lunr.hy.min.js | 1 + assets/javascripts/lunr/min/lunr.it.min.js | 18 + assets/javascripts/lunr/min/lunr.ja.min.js | 1 + assets/javascripts/lunr/min/lunr.jp.min.js | 1 + assets/javascripts/lunr/min/lunr.kn.min.js | 1 + assets/javascripts/lunr/min/lunr.ko.min.js | 1 + assets/javascripts/lunr/min/lunr.multi.min.js | 1 + assets/javascripts/lunr/min/lunr.nl.min.js | 18 + assets/javascripts/lunr/min/lunr.no.min.js | 18 + assets/javascripts/lunr/min/lunr.pt.min.js | 18 + assets/javascripts/lunr/min/lunr.ro.min.js | 18 + assets/javascripts/lunr/min/lunr.ru.min.js | 18 + assets/javascripts/lunr/min/lunr.sa.min.js | 1 + .../lunr/min/lunr.stemmer.support.min.js | 1 + assets/javascripts/lunr/min/lunr.sv.min.js | 18 + assets/javascripts/lunr/min/lunr.ta.min.js | 1 + assets/javascripts/lunr/min/lunr.te.min.js | 1 + assets/javascripts/lunr/min/lunr.th.min.js | 1 + assets/javascripts/lunr/min/lunr.tr.min.js | 18 + assets/javascripts/lunr/min/lunr.vi.min.js | 1 + assets/javascripts/lunr/min/lunr.zh.min.js | 1 + assets/javascripts/lunr/tinyseg.js | 206 + assets/javascripts/lunr/wordcut.js | 6708 +++++++++++++++++ .../workers/search.b8dbb3d2.min.js | 42 + .../workers/search.b8dbb3d2.min.js.map | 7 + assets/stylesheets/main.6543a935.min.css | 1 + assets/stylesheets/main.6543a935.min.css.map | 1 + assets/stylesheets/palette.06af60db.min.css | 1 + .../stylesheets/palette.06af60db.min.css.map | 1 + fedora-coreos/aws/index.html | 2106 ++++++ fedora-coreos/azure/index.html | 2124 ++++++ fedora-coreos/bare-metal/index.html | 2256 ++++++ fedora-coreos/digitalocean/index.html | 2070 +++++ fedora-coreos/google-cloud/index.html | 2076 +++++ flatcar-linux/aws/index.html | 2106 ++++++ flatcar-linux/azure/index.html | 2118 ++++++ flatcar-linux/bare-metal/index.html | 2275 ++++++ flatcar-linux/digitalocean/index.html | 2082 
+++++ flatcar-linux/google-cloud/index.html | 2076 +++++ img/favicon.ico | Bin 0 -> 3520 bytes img/grafana-etcd.png | Bin 0 -> 90362 bytes img/grafana-resources-cluster.png | Bin 0 -> 88285 bytes img/grafana-usage-cluster.png | Bin 0 -> 94682 bytes img/grafana-usage-node.png | Bin 0 -> 105880 bytes img/prometheus-alerts.png | Bin 0 -> 140828 bytes img/prometheus-graph.png | Bin 0 -> 229583 bytes img/prometheus-targets.png | Bin 0 -> 185744 bytes img/spin.png | Bin 0 -> 2324 bytes img/typhoon-aws-load-balancing.png | Bin 0 -> 38362 bytes img/typhoon-azure-load-balancing.png | Bin 0 -> 39794 bytes img/typhoon-digitalocean-load-balancing.png | Bin 0 -> 50263 bytes img/typhoon-gcp-load-balancing.png | Bin 0 -> 70481 bytes img/typhoon-logo.png | Bin 0 -> 20301 bytes img/typhoon.png | Bin 0 -> 22831 bytes index.html | 1921 +++++ search/search_index.json | 1 + sitemap.xml | 3 + sitemap.xml.gz | Bin 0 -> 127 bytes topics/faq/index.html | 1601 ++++ topics/hardware/index.html | 1935 +++++ topics/maintenance/index.html | 2170 ++++++ topics/performance/index.html | 1702 +++++ topics/security/index.html | 1874 +++++ 98 files changed, 72605 insertions(+) create mode 100644 .nojekyll create mode 100644 404.html create mode 100644 CNAME create mode 100644 addons/fleetlock/index.html create mode 100644 addons/grafana/index.html create mode 100644 addons/ingress/index.html create mode 100644 addons/overview/index.html create mode 100644 addons/prometheus/index.html create mode 100644 advanced/arm64/index.html create mode 100644 advanced/customization/index.html create mode 100644 advanced/nodes/index.html create mode 100644 advanced/overview/index.html create mode 100644 advanced/worker-pools/index.html create mode 100644 announce/index.html create mode 100644 architecture/aws/index.html create mode 100644 architecture/azure/index.html create mode 100644 architecture/bare-metal/index.html create mode 100644 architecture/concepts/index.html create mode 100644 architecture/digitalocean/index.html create mode 100644 architecture/google-cloud/index.html create mode 100644 architecture/operating-systems/index.html create mode 100644 assets/images/favicon.png create mode 100644 assets/javascripts/bundle.ebd0bdb7.min.js create mode 100644 assets/javascripts/bundle.ebd0bdb7.min.js.map create mode 100644 assets/javascripts/lunr/min/lunr.ar.min.js create mode 100644 assets/javascripts/lunr/min/lunr.da.min.js create mode 100644 assets/javascripts/lunr/min/lunr.de.min.js create mode 100644 assets/javascripts/lunr/min/lunr.du.min.js create mode 100644 assets/javascripts/lunr/min/lunr.el.min.js create mode 100644 assets/javascripts/lunr/min/lunr.es.min.js create mode 100644 assets/javascripts/lunr/min/lunr.fi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.fr.min.js create mode 100644 assets/javascripts/lunr/min/lunr.he.min.js create mode 100644 assets/javascripts/lunr/min/lunr.hi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.hu.min.js create mode 100644 assets/javascripts/lunr/min/lunr.hy.min.js create mode 100644 assets/javascripts/lunr/min/lunr.it.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ja.min.js create mode 100644 assets/javascripts/lunr/min/lunr.jp.min.js create mode 100644 assets/javascripts/lunr/min/lunr.kn.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ko.min.js create mode 100644 assets/javascripts/lunr/min/lunr.multi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.nl.min.js create mode 100644 assets/javascripts/lunr/min/lunr.no.min.js create mode 
100644 assets/javascripts/lunr/min/lunr.pt.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ro.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ru.min.js create mode 100644 assets/javascripts/lunr/min/lunr.sa.min.js create mode 100644 assets/javascripts/lunr/min/lunr.stemmer.support.min.js create mode 100644 assets/javascripts/lunr/min/lunr.sv.min.js create mode 100644 assets/javascripts/lunr/min/lunr.ta.min.js create mode 100644 assets/javascripts/lunr/min/lunr.te.min.js create mode 100644 assets/javascripts/lunr/min/lunr.th.min.js create mode 100644 assets/javascripts/lunr/min/lunr.tr.min.js create mode 100644 assets/javascripts/lunr/min/lunr.vi.min.js create mode 100644 assets/javascripts/lunr/min/lunr.zh.min.js create mode 100644 assets/javascripts/lunr/tinyseg.js create mode 100644 assets/javascripts/lunr/wordcut.js create mode 100644 assets/javascripts/workers/search.b8dbb3d2.min.js create mode 100644 assets/javascripts/workers/search.b8dbb3d2.min.js.map create mode 100644 assets/stylesheets/main.6543a935.min.css create mode 100644 assets/stylesheets/main.6543a935.min.css.map create mode 100644 assets/stylesheets/palette.06af60db.min.css create mode 100644 assets/stylesheets/palette.06af60db.min.css.map create mode 100644 fedora-coreos/aws/index.html create mode 100644 fedora-coreos/azure/index.html create mode 100644 fedora-coreos/bare-metal/index.html create mode 100644 fedora-coreos/digitalocean/index.html create mode 100644 fedora-coreos/google-cloud/index.html create mode 100644 flatcar-linux/aws/index.html create mode 100644 flatcar-linux/azure/index.html create mode 100644 flatcar-linux/bare-metal/index.html create mode 100644 flatcar-linux/digitalocean/index.html create mode 100644 flatcar-linux/google-cloud/index.html create mode 100644 img/favicon.ico create mode 100644 img/grafana-etcd.png create mode 100644 img/grafana-resources-cluster.png create mode 100644 img/grafana-usage-cluster.png create mode 100644 img/grafana-usage-node.png create mode 100644 img/prometheus-alerts.png create mode 100644 img/prometheus-graph.png create mode 100644 img/prometheus-targets.png create mode 100644 img/spin.png create mode 100644 img/typhoon-aws-load-balancing.png create mode 100644 img/typhoon-azure-load-balancing.png create mode 100644 img/typhoon-digitalocean-load-balancing.png create mode 100644 img/typhoon-gcp-load-balancing.png create mode 100644 img/typhoon-logo.png create mode 100644 img/typhoon.png create mode 100644 index.html create mode 100644 search/search_index.json create mode 100644 sitemap.xml create mode 100644 sitemap.xml.gz create mode 100644 topics/faq/index.html create mode 100644 topics/hardware/index.html create mode 100644 topics/maintenance/index.html create mode 100644 topics/performance/index.html create mode 100644 topics/security/index.html diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 000000000..e69de29bb diff --git a/404.html b/404.html new file mode 100644 index 000000000..4db7b6193 --- /dev/null +++ b/404.html @@ -0,0 +1,1452 @@ + + + +
+fleetlock is a reboot coordinator for Fedora CoreOS nodes. It implements the FleetLock protocol for use as a Zincati lock strategy backend.
+Declare a Zincati fleet_lock
strategy when provisioning Fedora CoreOS nodes via snippets.
variant: fcos
+version: 1.5.0
+storage:
+ files:
+ - path: /etc/zincati/config.d/55-update-strategy.toml
+ contents:
+ inline: |
+ [updates]
+ strategy = "fleet_lock"
+ [updates.fleet_lock]
+ base_url = "http://10.3.0.15/"
+
module "nemo" {
+ ...
+ controller_snippets = [
+ file("./snippets/zincati-strategy.yaml"),
+ ]
+ worker_snippets = [
+ file("./snippets/zincati-strategy.yaml"),
+ ]
+}
+
Apply fleetlock based on the example manifests.
+git clone git@github.com:poseidon/fleetlock.git
+kubectl apply -f examples/k8s
+
Grafana can be used to build dashboards and visualizations that use Prometheus as the datasource. Create the grafana deployment and service.
+kubectl apply -f addons/grafana -R
+
Use kubectl
to authenticate to the apiserver and create a local port-forward to the Grafana pod.
kubectl port-forward grafana-POD-ID 8080 -n monitoring
+
Visit 127.0.0.1:8080 to view the bundled dashboards.
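If you'd rather not look up the pod name first, kubectl can resolve a pod from the Deployment for you (this assumes the addon's Deployment is named grafana in the monitoring namespace, matching the manifests above):
kubectl port-forward deployment/grafana 8080 -n monitoring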
++ + +
+Nginx Ingress controller pods accept and demultiplex HTTP, HTTPS, TCP, or UDP traffic to backend services. Ingress controllers watch the Kubernetes API for Ingress resources and update their configuration accordingly. Ingress resources for HTTP(S) applications support virtual hosts (FQDNs), path rules, TLS termination, and SNI.
+On AWS, a network load balancer (NLB) distributes TCP traffic across two target groups (ports 80 and 443) of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health checks ensure only workers with a healthy Ingress controller receive traffic.
+Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, and namespace.
+kubectl apply -R -f addons/nginx-ingress/aws
+
For each application, add a DNS CNAME resolving to the NLB's DNS record.
+app1.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
+app2.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
+app3.example.com -> tempest-ingress.123456.us-west2.elb.amazonaws.com
+
Find the NLB's DNS name through the console or use the Typhoon module's output ingress_dns_name
. For example, you might use Terraform to manage a Google Cloud DNS record:
resource "google_dns_record_set" "some-application" {
+ # DNS zone name
+ managed_zone = "example-zone"
+
+ # DNS record
+ name = "app.example.com."
+ type = "CNAME"
+ ttl = 300
+ rrdatas = ["${module.tempest.ingress_dns_name}."]
+}
+
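If the application's DNS zone lives in Route53 rather than Google Cloud DNS, an equivalent record can be managed with the AWS provider. A sketch reusing the module output above (the hosted zone ID is a placeholder):
resource "aws_route53_record" "some-application" {
  # Route53 hosted zone ID (placeholder)
  zone_id = "Z3PAABBCFAKEC0"

  # DNS record
  name    = "app.example.com."
  type    = "CNAME"
  ttl     = 300
  records = [module.tempest.ingress_dns_name]
}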
On Azure, a load balancer distributes traffic across a backend address pool of worker nodes running an Ingress controller deployment. Security group rules allow traffic to ports 80 and 443. Health probes ensure only workers with a healthy Ingress controller receive traffic.
+Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, and namespace.
+kubectl apply -R -f addons/nginx-ingress/azure
+
For each application, add a DNS record resolving to the load balancer's IPv4 address.
+app1.example.com -> 11.22.33.44
+app2.example.com -> 11.22.33.44
+app3.example.com -> 11.22.33.44
+
Find the load balancer's IPv4 address with the Azure console or use the Typhoon module's output ingress_static_ipv4
. For example, you might use Terraform to manage a Google Cloud DNS record:
resource "google_dns_record_set" "some-application" {
+ # DNS zone name
+ managed_zone = "example-zone"
+
+ # DNS record
+ name = "app.example.com."
+ type = "A"
+ ttl = 300
+ rrdatas = [module.ramius.ingress_static_ipv4]
+}
+
On bare-metal, routing traffic to Ingress controller pods can be done in a number of ways.
+Create the Ingress controller deployment, service, RBAC roles, and RBAC bindings. The service should use a fixed ClusterIP (e.g. 10.3.0.12) in the Kubernetes service IPv4 CIDR range.
+kubectl apply -R -f addons/nginx-ingress/bare-metal
+
There is no need for pods to use host networking or for the ingress service to use NodePort or LoadBalancer. Nodes already proxy packets destined for the service's ClusterIP to node(s) with a pod endpoint.
+Configure the network router or load balancer with a static route for the Kubernetes service range and set the next hop to a node. Repeat for each node, as desired, and set the metric (i.e. cost) of each. Finally, DNAT traffic destined for the WAN on ports 80 or 443 to the service's fixed ClusterIP.
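As a rough sketch, on a Linux-based router this might look like the following (node IPs and the WAN interface name are hypothetical; the range shown is Typhoon's default service CIDR and 10.3.0.12 is the example fixed ClusterIP above):
# static routes for the Kubernetes service range, one per node, with different metrics
ip route add 10.3.0.0/16 via 192.168.1.21 metric 10
ip route add 10.3.0.0/16 via 192.168.1.22 metric 20

# DNAT inbound WAN traffic on 80/443 to the Ingress service's fixed ClusterIP
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 10.3.0.12:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.3.0.12:443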
+For each application, add a DNS record resolving to the WAN(s).
+resource "google_dns_record_set" "some-application" {
+ # Managed DNS Zone name
+ managed_zone = "zone-name"
+
+ # Name of the DNS record
+ name = "app.example.com."
+ type = "A"
+ ttl = 300
+ rrdatas = ["SOME-WAN-IP"]
+}
+
On DigitalOcean, DNS A and AAAA records (e.g. FQDN nemo-workers.example.com
) resolve to each worker1 running an Ingress controller DaemonSet on host ports 80 and 443. Firewall rules allow IPv4 and IPv6 traffic to ports 80 and 443.
Create the Ingress controller daemonset, service, RBAC roles, RBAC bindings, and namespace.
+kubectl apply -R -f addons/nginx-ingress/digital-ocean
+
For each application, add a CNAME record resolving to the worker(s) DNS record. Use the Typhoon module's output workers_dns
to find the worker DNS value. For example, you might use Terraform to manage a Google Cloud DNS record:
resource "google_dns_record_set" "some-application" {
+ # DNS zone name
+ managed_zone = "example-zone"
+
+ # DNS record
+ name = "app.example.com."
+ type = "CNAME"
+ ttl = 300
+ rrdatas = ["${module.nemo.workers_dns}."]
+}
+
Note
+Hosting IPv6 apps is possible, but requires editing the nginx-ingress addon to use hostNetwork: true
.
On Google Cloud, a TCP Proxy load balancer distributes IPv4 and IPv6 TCP traffic across a backend service of worker nodes running an Ingress controller deployment. Firewall rules allow traffic to ports 80 and 443. Health check rules ensure only workers with a healthy Ingress controller receive traffic.
+Create the Ingress controller deployment, service, RBAC roles, RBAC bindings, and namespace.
+kubectl apply -R -f addons/nginx-ingress/google-cloud
+
For each application, add DNS A records resolving to the load balancer's IPv4 address and DNS AAAA records resolving to the load balancer's IPv6 address.
+app1.example.com -> 11.22.33.44
+app2.example.com -> 11.22.33.44
+app3.example.com -> 11.22.33.44
+
Find the IPv4 address with gcloud compute addresses list
or use the Typhoon module's outputs ingress_static_ipv4
and ingress_static_ipv6
. For example, you might use Terraform to manage a Google Cloud DNS record:
resource "google_dns_record_set" "app-record-a" {
+ # DNS zone name
+ managed_zone = "example-zone"
+
+ # DNS record
+ name = "app.example.com."
+ type = "A"
+ ttl = 300
+ rrdatas = [module.yavin.ingress_static_ipv4]
+}
+
+resource "google_dns_record_set" "app-record-aaaa" {
+ # DNS zone name
+ managed_zone = "example-zone"
+
+ # DNS record
+ name = "app.example.com."
+ type = "AAAA"
+ ttl = 300
+ rrdatas = [module.yavin.ingress_static_ipv6]
+}
+
DigitalOcean does offer load balancers. We've opted not to use them to keep the DigitalOcean cluster cheap for developers. ↩
+Typhoon's component model allows for managing cluster components independent from the cluster's lifecycle, upgrading in a rolling or automated fashion, or customizing components in advanced ways.
+Typhoon clusters install core components like CoreDNS
, kube-proxy
, and a chosen CNI provider (flannel
, calico
, or cilium
) by default. Since v1.30.1, pre-installed components are optional. Other "addon" components like Nginx Ingress, Prometheus, or Grafana may be optionally applied through the component model (after cluster creation).
Pre-installed by default:
CoreDNS
kube-proxy
CNI provider (set via var.networking)
Addons:
Nginx Ingress Controller
Prometheus
Grafana
fleetlock
+By default, Typhoon clusters install CoreDNS
, kube-proxy
, and a chosen CNI provider (flannel
, calico
, or cilium
). Disable any or all of these components using the components
system.
module "yavin" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"
+
+ # Google Cloud
+ cluster_name = "yavin"
+ region = "us-central1"
+ dns_zone = "example.com"
+ dns_zone_name = "example-zone"
+
+ # configuration
+ ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
+
+ # pre-installed components (defaults shown)
+ components = {
+ enable = true
+ coredns = {
+ enable = true
+ }
+ kube_proxy = {
+ enable = true
+ }
+ # Only the CNI set in var.networking will be installed
+ flannel = {
+ enable = true
+ }
+ calico = {
+ enable = true
+ }
+ cilium = {
+ enable = true
+ }
+ }
+}
+
Warn
+Disabling pre-installed components is for advanced users who intend to manage these components separately. Without a CNI provider, cluster nodes will be NotReady and wait for the CNI provider to be applied.
+If you choose to manage components yourself, a recommended pattern is to use a separate Terraform workspace per component, like you would any application.
+mkdir -p infra/components/{coredns,cilium}
+
+tree components/coredns
+components/coredns/
+├── backend.tf
+├── manifests.tf
+└── providers.tf
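Of these files, backend.tf only declares where the workspace's Terraform state is stored and isn't shown below; a minimal sketch using a local backend (swap in s3, gcs, or another backend as you prefer):
# backend.tf
terraform {
  backend "local" {
    path = "terraform.tfstate"
  }
}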
+
Let's consider managing CoreDNS resources. Configure the kubernetes
provider to use the kubeconfig credentials of your Typhoon cluster(s) in a providers.tf
file. Here we show provider blocks for interacting with Typhoon clusters on AWS, Azure, or Google Cloud, assuming each cluster's kubeconfig-admin
output was written to a local file.
provider "kubernetes" {
+ alias = "aws"
+ config_path = "~/.kube/configs/aws-config"
+}
+
+provider "kubernetes" {
+ alias = "google"
+ config_path = "~/.kube/configs/google-config"
+}
+
+...
+
Typhoon maintains Terraform modules for most addon components. You can reference main
, a tagged release, a SHA revision, or custom module of your own. Define the CoreDNS manifests using the addons/coredns
module in a manifests.tf
file.
# CoreDNS manifests for the aws cluster
+module "aws" {
+ source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.30.1"
+ providers = {
+ kubernetes = kubernetes.aws
+ }
+}
+
+# CoreDNS manifests for the google cloud cluster
+module "aws" {
+ source = "git::https://github.com/poseidon/typhoon//addons/coredns?ref=v1.30.1"
+ providers = {
+ kubernetes = kubernetes.google
+ }
+}
+...
+
Plan and apply the CoreDNS Kubernetes resources to cluster(s).
+terraform plan
+terraform apply
+...
+module.aws.kubernetes_service_account.coredns: Refreshing state... [id=kube-system/coredns]
+module.aws.kubernetes_config_map.coredns: Refreshing state... [id=kube-system/coredns]
+module.aws.kubernetes_cluster_role.coredns: Refreshing state... [id=system:coredns]
+module.aws.kubernetes_cluster_role_binding.coredns: Refreshing state... [id=system:coredns]
+module.aws.kubernetes_service.coredns: Refreshing state... [id=kube-system/coredns]
+...
+
Prometheus collects metrics (e.g. node_memory_usage_bytes
) from targets by scraping their HTTP metrics endpoints. Targets are organized into jobs, defined in the Prometheus config. Targets may expose counter, gauge, histogram, or summary metrics.
Here's a simple config from the Prometheus tutorial.
+global:
+ scrape_interval: 15s
+scrape_configs:
+ - job_name: 'prometheus'
+ scrape_interval: 5s
+ static_configs:
+ - targets: ['localhost:9090']
+
On Kubernetes clusters, Prometheus is run as a Deployment, configured with a ConfigMap, and accessed via a Service or Ingress.
+kubectl apply -f addons/prometheus -R
+
The ConfigMap configures Prometheus to discover apiservers, kubelets, cAdvisor, services, endpoints, and exporters. By default, data is kept in an emptyDir
so it is persisted until the pod is rescheduled.
Exporters expose metrics for 3rd-party systems that don't natively expose Prometheus metrics.
+Prometheus provides a basic UI for querying metrics and viewing alerts. Use kubectl
to authenticate to the apiserver and create a local port-forward to the Prometheus pod.
kubectl get pods -n monitoring
+kubectl port-forward prometheus-POD-ID 9090 -n monitoring
+
Visit 127.0.0.1:9090 to query expressions, view targets, or check alerts.
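For example, the up series shows which scrape targets are healthy, and summing apiserver request rates is a quick sanity check (exact metric names can vary across Kubernetes and exporter versions):
up
sum(rate(apiserver_request_total[5m])) by (code)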
+
+
+
+
+
Use Grafana to view or build dashboards that use Prometheus as the datasource.
+Typhoon supports ARM64 Kubernetes clusters with ARM64 controller and worker nodes (full-cluster) or adding worker pools of ARM64 nodes to clusters with an x86/amd64 control plane for a hybrid (mixed-arch) cluster.
+Typhoon ARM64 clusters (full-cluster or mixed-arch) are available on:
+Create a cluster on AWS with ARM64 controller and worker nodes. Container workloads must be arm64
compatible and use arm64
(or multi-arch) container images.
module "gravitas" {
+ source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.1"
+
+ # AWS
+ cluster_name = "gravitas"
+ dns_zone = "aws.example.com"
+ dns_zone_id = "Z3PAABBCFAKEC0"
+
+ # configuration
+ ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
+
+ # optional
+ arch = "arm64"
+ networking = "cilium"
+ worker_count = 2
+ worker_price = "0.0168"
+
+ controller_type = "t4g.small"
+ worker_type = "t4g.small"
+}
+
module "gravitas" {
+ source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.1"
+
+ # AWS
+ cluster_name = "gravitas"
+ dns_zone = "aws.example.com"
+ dns_zone_id = "Z3PAABBCFAKEC0"
+
+ # configuration
+ ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
+
+ # optional
+ arch = "arm64"
+ networking = "cilium"
+ worker_count = 2
+ worker_price = "0.0168"
+
+ controller_type = "t4g.small"
+ worker_type = "t4g.small"
+}
+
Verify the cluster has only arm64 (aarch64
) nodes. For Flatcar Linux, describe nodes.
$ kubectl get nodes -o wide
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+ip-10-0-21-119 Ready <none> 77s v1.30.1 10.0.21.119 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
+ip-10-0-32-166 Ready <none> 80s v1.30.1 10.0.32.166 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
+ip-10-0-5-79 Ready <none> 77s v1.30.1 10.0.5.79 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.aarch64 containerd://1.5.8
+
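Architecture is also reflected in the standard kubernetes.io/arch node label, which can be printed as a column:
kubectl get nodes -L kubernetes.io/arch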
Create a hybrid/mixed arch cluster by defining an AWS cluster. Then define a worker pool with ARM64 workers. Optional taints are added to aid in scheduling.
+module "gravitas" {
+ source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes?ref=v1.30.1"
+
+ # AWS
+ cluster_name = "gravitas"
+ dns_zone = "aws.example.com"
+ dns_zone_id = "Z3PAABBCFAKEC0"
+
+ # configuration
+ ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
+
+ # optional
+ networking = "cilium"
+ worker_count = 2
+ worker_price = "0.021"
+
+ daemonset_tolerations = ["arch"] # important
+}
+
module "gravitas" {
+ source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes?ref=v1.30.1"
+
+ # AWS
+ cluster_name = "gravitas"
+ dns_zone = "aws.example.com"
+ dns_zone_id = "Z3PAABBCFAKEC0"
+
+ # configuration
+ ssh_authorized_key = "ssh-ed25519 AAAAB3Nz..."
+
+ # optional
+ networking = "cilium"
+ worker_count = 2
+ worker_price = "0.021"
+
+ daemonset_tolerations = ["arch"] # important
+}
+
module "gravitas-arm64" {
+ source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.30.1"
+
+ # AWS
+ vpc_id = module.gravitas.vpc_id
+ subnet_ids = module.gravitas.subnet_ids
+ security_groups = module.gravitas.worker_security_groups
+
+ # configuration
+ name = "gravitas-arm64"
+ kubeconfig = module.gravitas.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ arch = "arm64"
+ instance_type = "t4g.small"
+ spot_price = "0.0168"
+ node_taints = ["arch=arm64:NoSchedule"]
+}
+
module "gravitas-arm64" {
+ source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.30.1"
+
+ # AWS
+ vpc_id = module.gravitas.vpc_id
+ subnet_ids = module.gravitas.subnet_ids
+ security_groups = module.gravitas.worker_security_groups
+
+ # configuration
+ name = "gravitas-arm64"
+ kubeconfig = module.gravitas.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ arch = "arm64"
+ instance_type = "t4g.small"
+ spot_price = "0.0168"
+ node_taints = ["arch=arm64:NoSchedule"]
+}
+
Verify amd64 (x86_64) and arm64 (aarch64) nodes are present.
+$ kubectl get nodes -o wide
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+ip-10-0-1-73 Ready <none> 111m v1.30.1 10.0.1.73 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
+ip-10-0-22-79... Ready <none> 111m v1.30.1 10.0.22.79 <none> Flatcar Container Linux by Kinvolk 3033.2.0 (Oklo) 5.10.84-flatcar containerd://1.5.8
+ip-10-0-24-130 Ready <none> 111m v1.30.1 10.0.24.130 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
+ip-10-0-39-19 Ready <none> 111m v1.30.1 10.0.39.19 <none> Fedora CoreOS 35.20211215.3.0 5.15.7-200.fc35.x86_64 containerd://1.5.8
+
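In a mixed-arch cluster, workloads that aren't multi-arch should pin themselves to a matching architecture and, if you set node taints as above, tolerate them. A minimal Pod sketch (the name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: arm64-example
spec:
  # schedule only onto arm64 nodes and tolerate the example taint above
  nodeSelector:
    kubernetes.io/arch: arm64
  tolerations:
    - key: arch
      operator: Equal
      value: arm64
      effect: NoSchedule
  containers:
    - name: app
      image: docker.io/library/alpine:3
      command: ["sleep", "infinity"]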
Create a cluster on Azure with ARM64 controller and worker nodes. Container workloads must be arm64
compatible and use arm64
(or multi-arch) container images.
module "ramius" {
+ source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes?ref=v1.30.1"
+
+ # Azure
+ cluster_name = "ramius"
+ region = "centralus"
+ dns_zone = "azure.example.com"
+ dns_zone_group = "example-group"
+
+ # configuration
+ ssh_authorized_key = "ssh-rsa AAAAB3Nz..."
+
+ # optional
+ arch = "arm64"
+ controller_type = "Standard_D2pls_v5"
+ worker_type = "Standard_D2pls_v5"
+ worker_count = 2
+ host_cidr = "10.0.0.0/20"
+}
+
Typhoon provides Kubernetes clusters with defaults recommended for production. Terraform variables expose supported customization options. Advanced options are available for customizing the architecture or hosts as well.
+Typhoon modules accept Terraform input variables for customizing clusters in meritorious ways (e.g. worker_count
, etc). Variables are carefully considered to provide essentials, while limiting complexity and test matrix burden. See each platform's tutorial for options.
Clusters are kept to a minimal Kubernetes control plane by offering components like Nginx Ingress Controller, Prometheus, and Grafana as optional post-install addons. Customize addons by modifying a copy of our addon manifests.
+Typhoon uses the Ignition system of Fedora CoreOS and Flatcar Linux to immutably declare a system via first-boot disk provisioning. Human-friendly Butane Configs define disk partitions, filesystems, systemd units, dropins, config files, mount units, raid arrays, users, and more before being converted to Ignition.
+Controller and worker instances form a minimal and secure Kubernetes cluster on each platform. Typhoon provides the snippets feature to accept custom Butane Configs that are merged with instance declarations. This allows advanced host customization and experimentation.
+Note
+Snippets cannot be used to modify an already existing instance, the antithesis of immutable provisioning. Ignition fully declares a system on first boot only.
+Danger
+Snippets provide the powerful host customization abilities of Ignition. You are responsible for additional units, configs, files, and conflicts.
+Danger
+Edits to snippets for controller instances can (correctly) cause Terraform to observe a diff (if not otherwise suppressed) and propose destroying and recreating controller(s). Recognize that this is destructive since controllers run etcd and are stateful. See blue/green clusters.
+Define a Butane Config (docs, config) in version control near your Terraform workspace directory (e.g. perhaps in a snippets
subdirectory). You may organize snippets into multiple files, if desired.
For example, ensure an /opt/hello
file is created with permissions 0644 before boot.
# custom-files.yaml
+variant: fcos
+version: 1.5.0
+storage:
+ files:
+ - path: /opt/hello
+ contents:
+ inline: |
+ Hello World
+ mode: 0644
+
# custom-files.yaml
+variant: flatcar
+version: 1.0.0
+storage:
+ files:
+ - path: /opt/hello
+ contents:
+ inline: |
+ Hello World
+ mode: 0644
+
Or ensure a systemd unit hello.service
is created.
# custom-units.yaml
+variant: fcos
+version: 1.5.0
+systemd:
+ units:
+ - name: hello.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Hello World
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/echo Hello World!
+ [Install]
+ WantedBy=multi-user.target
+
# custom-units.yaml
+variant: flatcar
+version: 1.0.0
+systemd:
+ units:
+ - name: hello.service
+ enabled: true
+ contents: |
+ [Unit]
+ Description=Hello World
+ [Service]
+ Type=oneshot
+ ExecStart=/usr/bin/echo Hello World!
+ [Install]
+ WantedBy=multi-user.target
+
Reference the Butane contents by location (e.g. file("./custom-units.yaml")
). On AWS, Azure, DigitalOcean, or Google Cloud extend the controller_snippets
or worker_snippets
list variables.
module "nemo" {
+ ...
+ worker_count = 2
+ controller_snippets = [
+ file("./custom-files.yaml"),
+ file("./custom-units.yaml"),
+ ]
+ worker_snippets = [
+ file("./custom-files.yaml"),
+ file("./custom-units.yaml")",
+ ]
+ ...
+}
+
On Bare-Metal, different Butane configs may be used for each node (since hardware may be heterogeneous). Extend the snippets
map variable by mapping a controller or worker name key to a list of snippets.
module "mercury" {
+ ...
+ snippets = {
+ "node2" = [file("./units/hello.yaml")]
+ "node3" = [
+ file("./units/world.yaml"),
+ file("./units/hello.yaml"),
+ ]
+ }
+ ...
+}
+
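Since snippets are plain Butane configs, you can sanity-check them locally before an apply by transpiling with the upstream butane tool (validation only; provisioning converts snippets for you):
podman run -i --rm quay.io/coreos/butane:release --strict < custom-units.yaml > /dev/null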
Typhoon chooses variables to expose with purpose. If you must customize clusters in ways that aren't supported by input variables, fork Typhoon and maintain a repository with customizations. Reference the repository by changing the username.
+module "nemo" {
+ source = "git::https://github.com/USERNAME/typhoon//digital-ocean/flatcar-linux/kubernetes?ref=myspecialcase"
+ ...
+}
+
To customize low-level Kubernetes control plane bootstrapping, see the poseidon/terraform-render-bootstrap Terraform module.
+Typhoon publishes Kubelet container images to Quay.io (default) and to Dockerhub (in case of a Quay outage or breach). Quay automated builds also provide the option for fully verifiable tagged images (build-{short_sha}
).
To set an alternative etcd image or Kubelet image, use a snippet to set a systemd dropin.
+# kubelet-image-override.yaml
+variant: fcos <- remove for Flatcar Linux
+version: 1.5.0 <- remove for Flatcar Linux
+systemd:
+ units:
+ - name: kubelet.service
+ dropins:
+ - name: 10-image-override.conf
+ contents: |
+ [Service]
+ Environment=KUBELET_IMAGE=docker.io/psdn/kubelet:v1.18.3
+
# etcd-image-override.yaml
+variant: fcos <- remove for Flatcar Linux
+version: 1.5.0 <- remove for Flatcar Linux
+systemd:
+ units:
+ - name: etcd-member.service
+ dropins:
+ - name: 10-image-override.conf
+ contents: |
+ [Service]
+ Environment=ETCD_IMAGE=quay.io/mymirror/etcd:v3.4.12
+
Then reference the snippet in the cluster or worker pool definition.
+module "nemo" {
+ ...
+
+ worker_snippets = [
+ file("./snippets/kubelet-image-override.yaml")
+ ]
+ ...
+}
+
Typhoon clusters consist of controller node(s) and a (default) set of worker nodes.
+Typhoon nodes use the standard set of Kubernetes node labels.
+Labels: kubernetes.io/arch=amd64
+ kubernetes.io/hostname=node-name
+ kubernetes.io/os=linux
+
Controller node(s) are labeled to allow node selection (for rare components that run on controllers) and tainted to prevent ordinary workloads running on controllers.
+Labels: node.kubernetes.io/controller=true
+Taints: node-role.kubernetes.io/controller:NoSchedule
+
Worker nodes are labeled to allow node selection and untainted. Workloads will schedule on worker nodes by default, barring any contraindications.
+Labels: node.kubernetes.io/node=
+Taints: <none>
+
On auto-scaling cloud platforms, you may add worker pools with different groups of nodes with their own labels and taints. On platforms like bare-metal, with heterogeneous machines, you may manage node labels and taints per node.
+Add custom initial worker node labels to default workers or worker pool nodes to allow workloads to select among nodes that differ.
+module "yavin" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"
+
+ # Google Cloud
+ cluster_name = "yavin"
+ region = "us-central1"
+ dns_zone = "example.com"
+ dns_zone_name = "example-zone"
+
+ # configuration
+ ssh_authorized_key = local.ssh_key
+
+ # optional
+ worker_count = 2
+ worker_node_labels = ["pool=default"]
+}
+
module "yavin-pool" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.1"
+
+ # Google Cloud
+ cluster_name = "yavin"
+ region = "europe-west2"
+ network = module.yavin.network_name
+
+ # configuration
+ name = "yavin-16x"
+ kubeconfig = module.yavin.kubeconfig
+ ssh_authorized_key = local.ssh_key
+
+ # optional
+ worker_count = 1
+ machine_type = "n1-standard-16"
+ node_labels = ["pool=big"]
+}
+
In the example above, the two default workers would be labeled pool: default
and the additional worker would be labeled pool: big
.
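You can confirm the labels landed by filtering nodes on them:
kubectl get nodes -l pool=default
kubectl get nodes -l pool=big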
Add custom initial taints on worker pool nodes to indicate a node is unique and should only schedule workloads that explicitly tolerate a given taint key.
+Warning
+Since taints prevent workloads scheduling onto a node, you must decide whether kube-system
DaemonSets (e.g. flannel, Calico, Cilium) should tolerate your custom taint by setting daemonset_tolerations
. If you don't list your custom taint(s), important components won't run on these nodes.
module "yavin" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=v1.30.1"
+
+ # Google Cloud
+ cluster_name = "yavin"
+ region = "us-central1"
+ dns_zone = "example.com"
+ dns_zone_name = "example-zone"
+
+ # configuration
+ ssh_authorized_key = local.ssh_key
+
+ # optional
+ worker_count = 2
+ daemonset_tolerations = ["role"]
+}
+
module "yavin-pool" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.1"
+
+ # Google Cloud
+ cluster_name = "yavin"
+ region = "europe-west2"
+ network = module.yavin.network_name
+
+ # configuration
+ name = "yavin-16x"
+ kubeconfig = module.yavin.kubeconfig
+ ssh_authorized_key = local.ssh_key
+
+ # optional
+ worker_count = 1
+ accelerator_type = "nvidia-tesla-p100"
+ accelerator_count = 1
+ node_taints = ["role=gpu:NoSchedule"]
+}
+
In the example above, the additional worker would be tainted with role=gpu:NoSchedule
to prevent workloads scheduling, but kube-system
components like flannel, Calico, or Cilium would tolerate that custom taint to run there.
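To check which taints ended up on which nodes, custom columns work well:
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'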
Typhoon clusters offer several advanced features for skilled users.
+Typhoon AWS, Azure, and Google Cloud allow additional groups of workers to be defined and joined to a cluster. For example, add worker pools of instances with different types, disk sizes, Container Linux channels, or preemptibility modes.
+Internal Terraform Modules:
+aws/flatcar-linux/kubernetes/workers
aws/fedora-coreos/kubernetes/workers
azure/flatcar-linux/kubernetes/workers
azure/fedora-coreos/kubernetes/workers
google-cloud/flatcar-linux/kubernetes/workers
google-cloud/fedora-coreos/kubernetes/workers
Create a cluster following the AWS tutorial. Define a worker pool using the AWS internal workers
module.
module "tempest-worker-pool" {
+ source = "git::https://github.com/poseidon/typhoon//aws/fedora-coreos/kubernetes/workers?ref=v1.30.1"
+
+ # AWS
+ vpc_id = module.tempest.vpc_id
+ subnet_ids = module.tempest.subnet_ids
+ security_groups = module.tempest.worker_security_groups
+
+ # configuration
+ name = "tempest-pool"
+ kubeconfig = module.tempest.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ worker_count = 2
+ instance_type = "m5.large"
+ os_stream = "next"
+}
+
module "tempest-worker-pool" {
+ source = "git::https://github.com/poseidon/typhoon//aws/flatcar-linux/kubernetes/workers?ref=v1.30.1"
+
+ # AWS
+ vpc_id = module.tempest.vpc_id
+ subnet_ids = module.tempest.subnet_ids
+ security_groups = module.tempest.worker_security_groups
+
+ # configuration
+ name = "tempest-pool"
+ kubeconfig = module.tempest.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ worker_count = 2
+ instance_type = "m5.large"
+ os_image = "flatcar-beta"
+}
+
Apply the change.
+terraform apply
+
Verify an auto-scaling group of workers joins the cluster within a few minutes.
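For example, check that the new instances register as nodes (the Google Cloud section later on this page shows similar output):
kubectl get nodes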
+The AWS internal workers
module supports a number of variables.
Name | +Description | +Example | +
---|---|---|
name | +Unique name (distinct from cluster name) | +"tempest-m5s" | +
vpc_id | +Must be set to vpc_id output by cluster |
+module.cluster.vpc_id | +
subnet_ids | +Must be set to subnet_ids output by cluster |
+module.cluster.subnet_ids | +
security_groups | +Must be set to worker_security_groups output by cluster |
+module.cluster.worker_security_groups | +
kubeconfig | +Must be set to kubeconfig output by cluster |
+module.cluster.kubeconfig | +
ssh_authorized_key | +SSH public key for user 'core' | +"ssh-ed25519 AAAAB3NZ..." | +
Name | +Description | +Default | +Example | +
---|---|---|---|
worker_count | +Number of instances | +1 | +3 | +
instance_type | +EC2 instance type | +"t3.small" | +"t3.medium" | +
os_image | +AMI channel for a Container Linux derivative | +"flatcar-stable" | +flatcar-stable, flatcar-beta, flatcar-alpha | +
os_stream | +Fedora CoreOS stream for compute instances | +"stable" | +"testing", "next" | +
disk_size | +Size of the EBS volume in GB | +40 | +100 | +
disk_type | +Type of the EBS volume | +"gp3" | +standard, gp2, gp3, io1 | +
disk_iops | +IOPS of the EBS volume | +0 (i.e. auto) | +400 | +
spot_price | +Spot price in USD for worker instances or 0 to use on-demand instances | +0 | +0.10 | +
snippets | +Fedora CoreOS or Container Linux Config snippets | +[] | +examples | +
service_cidr | +Must match service_cidr of cluster |
+"10.3.0.0/16" | +"10.3.0.0/24" | +
node_labels | +List of initial node labels | +[] | +["worker-pool=foo"] | +
node_taints | +List of initial node taints | +[] | +["role=gpu:NoSchedule"] | +
Check the list of valid instance types or per-region and per-type spot prices.
+Create a cluster following the Azure tutorial. Define a worker pool using the Azure internal workers
module.
module "ramius-worker-pool" {
+ source = "git::https://github.com/poseidon/typhoon//azure/fedora-coreos/kubernetes/workers?ref=v1.30.1"
+
+ # Azure
+ region = module.ramius.region
+ resource_group_name = module.ramius.resource_group_name
+ subnet_id = module.ramius.subnet_id
+ security_group_id = module.ramius.security_group_id
+ backend_address_pool_id = module.ramius.backend_address_pool_id
+
+ # configuration
+ name = "ramius-spot"
+ kubeconfig = module.ramius.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ worker_count = 2
+ vm_type = "Standard_F4"
+ priority = "Spot"
+ os_image = "/subscriptions/some/path/Microsoft.Compute/images/fedora-coreos-31.20200323.3.2"
+}
+
module "ramius-worker-pool" {
+ source = "git::https://github.com/poseidon/typhoon//azure/flatcar-linux/kubernetes/workers?ref=v1.30.1"
+
+ # Azure
+ region = module.ramius.region
+ resource_group_name = module.ramius.resource_group_name
+ subnet_id = module.ramius.subnet_id
+ security_group_id = module.ramius.security_group_id
+ backend_address_pool_id = module.ramius.backend_address_pool_id
+
+ # configuration
+ name = "ramius-spot"
+ kubeconfig = module.ramius.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ worker_count = 2
+ vm_type = "Standard_F4"
+ priority = "Spot"
+ os_image = "flatcar-beta"
+}
+
Apply the change.
+terraform apply
+
Verify a scale set of workers joins the cluster within a few minutes.
+The Azure internal workers
module supports a number of variables.
Name | +Description | +Example | +
---|---|---|
name | +Unique name (distinct from cluster name) | +"ramius-f4" | +
region | +Must be set to region output by cluster |
+module.cluster.region | +
resource_group_name | +Must be set to resource_group_name output by cluster |
+module.cluster.resource_group_name | +
subnet_id | +Must be set to subnet_id output by cluster |
+module.cluster.subnet_id | +
security_group_id | +Must be set to security_group_id output by cluster |
+module.cluster.security_group_id | +
backend_address_pool_id | +Must be set to backend_address_pool_id output by cluster |
+module.cluster.backend_address_pool_id | +
kubeconfig | +Must be set to kubeconfig output by cluster |
+module.cluster.kubeconfig | +
ssh_authorized_key | +SSH public key for user 'core' | +"ssh-ed25519 AAAAB3NZ..." | +
Name | +Description | +Default | +Example | +
---|---|---|---|
worker_count | +Number of instances | +1 | +3 | +
vm_type | +Machine type for instances | +"Standard_D2as_v5" | +See below | +
os_image | +Channel for a Container Linux derivative | +"flatcar-stable" | +flatcar-stable, flatcar-beta, flatcar-alpha | +
priority | +Set priority to Spot to use reduced cost surplus capacity, with the tradeoff that instances can be deallocated at any time | +"Regular" | +"Spot" | +
snippets | +Container Linux Config snippets | +[] | +examples | +
service_cidr | +CIDR IPv4 range to assign to Kubernetes services | +"10.3.0.0/16" | +"10.3.0.0/24" | +
node_labels | +List of initial node labels | +[] | +["worker-pool=foo"] | +
node_taints | +List of initial node taints | +[] | +["role=gpu:NoSchedule"] | +
Check the list of valid machine types and their specs. Use az vm list-skus
to get the identifier.
Create a cluster following the Google Cloud tutorial. Define a worker pool using the Google Cloud internal workers
module.
module "yavin-worker-pool" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes/workers?ref=v1.30.1"
+
+ # Google Cloud
+ region = "europe-west2"
+ network = module.yavin.network_name
+ cluster_name = "yavin"
+
+ # configuration
+ name = "yavin-16x"
+ kubeconfig = module.yavin.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ worker_count = 2
+ machine_type = "n1-standard-16"
+ os_stream = "testing"
+ preemptible = true
+}
+
module "yavin-worker-pool" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/flatcar-linux/kubernetes/workers?ref=v1.30.1"
+
+ # Google Cloud
+ region = "europe-west2"
+ network = module.yavin.network_name
+ cluster_name = "yavin"
+
+ # configuration
+ name = "yavin-16x"
+ kubeconfig = module.yavin.kubeconfig
+ ssh_authorized_key = var.ssh_authorized_key
+
+ # optional
+ worker_count = 2
+ machine_type = "n1-standard-16"
+ os_image = "flatcar-stable"
+ preemptible = true
+}
+
Apply the change.
+terraform apply
+
Verify a managed instance group of workers joins the cluster within a few minutes.
+$ kubectl get nodes
+NAME STATUS AGE VERSION
+yavin-controller-0.c.example-com.internal Ready 6m v1.30.1
+yavin-worker-jrbf.c.example-com.internal Ready 5m v1.30.1
+yavin-worker-mzdm.c.example-com.internal Ready 5m v1.30.1
+yavin-16x-worker-jrbf.c.example-com.internal Ready 3m v1.30.1
+yavin-16x-worker-mzdm.c.example-com.internal Ready 3m v1.30.1
+
The Google Cloud internal workers
module supports a number of variables.
Name | +Description | +Example | +
---|---|---|
name | +Unique name (distinct from cluster name) | +"yavin-16x" | +
cluster_name | +Must be set to cluster_name of cluster |
+"yavin" | +
region | +Region for the worker pool instances. May differ from the cluster's region | +"europe-west2" | +
network | +Must be set to network_name output by cluster |
+module.cluster.network_name | +
kubeconfig | +Must be set to kubeconfig output by cluster |
+module.cluster.kubeconfig | +
os_image | +Container Linux image for compute instances | +"uploaded-flatcar-image" | +
ssh_authorized_key | +SSH public key for user 'core' | +"ssh-ed25519 AAAAB3NZ..." | +
Check the list of regions docs or with gcloud compute regions list
.
Name | +Description | +Default | +Example | +
---|---|---|---|
worker_count | +Number of instances | +1 | +3 | +
machine_type | +Compute instance machine type | +"n1-standard-1" | +See below | +
os_stream | +Fedora CoreOS stream for compute instances | +"stable" | +"testing", "next" | +
disk_size | +Size of the disk in GB | +40 | +100 | +
preemptible | +If true, Compute Engine will terminate instances randomly within 24 hours | +false | +true | +
snippets | +Container Linux Config snippets | +[] | +examples | +
service_cidr | +Must match service_cidr of cluster |
+"10.3.0.0/16" | +"10.3.0.0/24" | +
node_labels | +List of initial node labels | +[] | +["worker-pool=foo"] | +
node_taints | +List of initial node taints | +[] | +["role=gpu:NoSchedule"] | +
Check the list of valid machine types.
+ + + + + + + + + + + + + +Typhoon for Fedora CoreOS promoted to alpha!
+Last summer, Typhoon released the first preview of Kubernetes on Fedora CoreOS for bare-metal and AWS, developing many ideas and patterns from Typhoon for Container Linux and Fedora Atomic. Since then, Typhoon for Fedora CoreOS has evolved and gained features alongside Typhoon, while Fedora CoreOS itself has evolved and improved too.
+Fedora recently announced that Fedora CoreOS is available for general use. To align with that change and to better indicate the maturing status, Typhoon for Fedora CoreOS has been promoted to alpha. Many thanks to folks who have worked to make this possbile!
+About: For newcomers, Typhoon is a minimal and free (cost and freedom) Kubernetes distribution providing upstream Kubernetes, declarative configuration via Terraform, and support for AWS, Azure, Google Cloud, DigitalOcean, and bare-metal. It is run by former CoreOS engineer @dghubble to power his clusters, with freedom motivations.
+Introducing a preview of Typhoon Kubernetes clusters with Fedora CoreOS!
+Fedora recently announced the first preview release of Fedora CoreOS, aiming to blend the best of CoreOS and Fedora for containerized workloads. To spur testing, Typhoon is sharing preview modules for Kubernetes v1.15 on AWS and bare-metal using the new Fedora CoreOS preview. What better way to test drive than by running Kubernetes?
+While Typhoon uses Container Linux (or Flatcar Linux) for stable modules, the project hasn't been a stranger to Fedora ideas, once developing a Fedora Atomic variant in 2018. That makes the Fedora CoreOS fushion both exciting and familiar. Typhoon with Fedora CoreOS uses Ignition v3 for provisioning, uses rpm-ostree for layering and updates, tries swapping system containers for podman, and brings SELinux enforcement (table). This is an early preview (don't go to prod), but do try it out and help identify and solve issues (getting started links above).
+Last April, Typhoon introduced alpha support for creating Kubernetes clusters with Fedora Atomic on AWS, Google Cloud, DigitalOcean, and bare-metal. Fedora Atomic shared many of Container Linux's aims for a container-optimized operating system, introduced novel ideas, and provided technical diversification for an uncertain future. However, Project Atomic efforts were merged into Fedora CoreOS and future Fedora Atomic releases are not expected. Typhoon modules for Fedora Atomic will not be updated much beyond Kubernetes v1.13. They may later be removed.
+Typhoon for Fedora Atomic fell short of goals to provide a consistent, practical experience across operating systems and platforms. The modules have remained alpha, despite improvements. Features like coordinated OS updates and boot-time declarative customization were not realized. Inelegance of Cloud-Init/kickstart loomed large. With that brief but obligatory summary, I'd like to change gears and celebrate the many positives.
+Fedora Atomic showcased rpm-ostree as a different approach to Container Linux's AB update scheme. It provided a viable route toward CRI-O to replace Docker as the container engine. And Fedora Atomic devised system containers as a way to package and run raw OCI images through runc for host-level containers1. Many of these ideas will live on in Fedora CoreOS, which is exciting!
+For Typhoon, Fedora Atomic brought fresh ideas and broader perspectives about different container-optimized base operating systems and related tools. Its sad to let go of so much work, but I think its time. Many of the concepts and technologies that were explored will surface again and Typhoon is better positioned as a result.
+Thank you Project Atomic team members for your work! - dghubble
+Starting in v1.10.3, Typhoon AWS and bare-metal container-linux
modules allow picking between the Red Hat Container Linux (formerly CoreOS Container Linux) and Kinvolk Flatcar Linux operating system. Flatcar Linux serves as a drop-in compatible "friendly fork" of Container Linux. Flatcar Linux publishes the same channels and versions as Container Linux and gets provisioned, managed, and operated in an identical way (e.g. login as user "core").
On AWS, pick the Container Linux derivative channel by setting os_image
to coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, or flatcar-alpha.
On bare-metal, pick the Container Linux derivative channel by setting os_channel
to coreos-stable, coreos-beta, coreos-alpha, flatcar-stable, flatcar-beta, or flatcar-alpha. Set the os_version
number to PXE boot and install. Variables container_linux_channel
and container_linux_version
have been dropped.
Flatcar Linux provides a familar Container Linux experience, with support from Kinvolk as an alternative to Red Hat. Typhoon offers the choice of Container Linux vendor to satisfy differing preferences and to diversify technology underpinnings, while providing a consistent Kubernetes experience across operating systems, clouds, and on-premise.
+Introducing Typhoon Kubernetes clusters for Fedora Atomic!
+Fedora Atomic is a container-optimized operating system designed for large-scale clustered operation, immutable infrastructure, and atomic operating system upgrades. Its part of Fedora and Project Atomic, a Red Hat sponsored project working on rpm-ostree, buildah, skopeo, CRI-O, and the related CentOS/RHEL Atomic.
+For newcomers, Typhoon is a free (cost and freedom) Kubernetes distribution providing upstream Kubernetes, declarative configuration via Terraform, and support for AWS, Google Cloud, DigitalOcean, and bare-metal. Typhoon clusters use a self-hosted control plane, support Calico and flannel CNI networking, and enable etcd TLS, RBAC, and network policy.
+Typhoon for Fedora Atomic reflects many of the same principles that created Typhoon for Container Linux. Clusters are declared using plain Terraform configs that can be versioned. In lieu of Ignition, instances are declaratively provisioned with Cloud-Init and kickstart (bare-metal only). TLS assets are generated. Hosts run only a kubelet service, other components are scheduled (i.e. self-hosted). The upstream hyperkube is used directly2. And clusters are kept minimal by offering optional addons for Ingress, Prometheus, and Grafana. Typhoon compliments and enhances Fedora Atomic as a choice of operating system for Kubernetes.
+Meanwhile, Fedora Atomic adds some promising new low-level technologies:
+ostree & rpm-ostree - a hybrid, layered, image and package system that lets you perform atomic updates and rollbacks, layer on packages, "rebase" your system, or manage a remote tree repo. See Dusty Mabe's great intro.
+system containers - OCI container images that embed systemd and runc metadata for starting low-level host services before container runtimes are ready. Typhoon uses system containers under runc for etcd
, kubelet
, and bootkube
on Fedora Atomic (instead of rkt-fly).
CRI-O - CRI-O is a kubernetes-incubator implementation of the Kubernetes Container Runtime Interface. Typhoon uses Docker as the container runtime today, but its a goal to gradually introduce CRI-O as an alternative runtime as it matures.
+Typhoon has long aspired to add a dissimilar operating system to compliment Container Linux. Operating Typhoon clusters across colocations and multiple clouds was driven by our own real need and has provided healthy perspective and clear direction. Adding Fedora Atomic is exciting for the same reasons. Fedora Atomic diversifies Typhoon's technology underpinnings, uniting the Container Linux and Fedora Atomic ecosystems to provide a consistent Kubernetes experience across operating systems, clouds, and on-premise.
+Get started with the basics or read the OS comparison. If you're familiar with Terraform, follow the new tutorials for Fedora Atomic on AWS, Google Cloud, DigitalOcean, and bare-metal.
+Typhoon is not affiliated with Red Hat or Project Atomic.
+Warning
+Heed the warnings. Typhoon for Fedora Atomic is still alpha. Container Linux continues to be the recommended flavor for production clusters. Atomic is not meant to detract from efforts on Container Linux or its derivatives.
+Tip
+For bare-metal, you may continue to use your v0.7+ Matchbox service and terraform-provider-matchbox
plugin to provision both Container Linux and Fedora Atomic clusters. No changes needed.
Container Linux's own primordial rkt-fly shim dates back to the pre-OCI era. In some ways, rkt drove the OCI standards that made newer ideas, like system containers, appealing. ↩
+Using etcd
, kubelet
, and bootkube
as system containers required metadata files be added in system-containers ↩
A network load balancer (NLB) distributes IPv4 TCP/6443 traffic across a target group of controller nodes with a healthy kube-apiserver
. Clusters with multiple controllers span zones in a region to tolerate zone outages.
A network load balancer (NLB) distributes IPv4 TCP/80 and TCP/443 traffic across two target groups of worker nodes with a healthy Ingress controller. Workers span the zones in a region to tolerate zone outages.
+The AWS NLB has a DNS alias record (regional) resolving to 3 zonal IPv4 addresses. The alias record is output as ingress_dns_name
for use in application DNS CNAME records. See Ingress on AWS.
Load balance TCP applications by adding a listener and target group. A listener and target group may map different ports (e.g 3333 external, 30333 internal).
+# Forward TCP traffic to a target group
+resource "aws_lb_listener" "some-app" {
+ load_balancer_arn = module.tempest.nlb_id
+ protocol = "TCP"
+ port = "3333"
+
+ default_action {
+ type = "forward"
+ target_group_arn = aws_lb_target_group.some-app.arn
+ }
+}
+
+# Target group of workers for some-app
+resource "aws_lb_target_group" "some-app" {
+ name = "some-app"
+ vpc_id = module.tempest.vpc_id
+ target_type = "instance"
+
+ protocol = "TCP"
+ port = 3333
+
+ health_check {
+ protocol = "TCP"
+ port = 30333
+ }
+}
+
Pass `worker_target_groups` to the cluster to register worker instances into custom target groups.
module "tempest" {
+...
+ worker_target_groups = [
+ aws_lb_target_group.some-app.id,
+ ]
+}
+
Notes:
+Add firewall rules to the worker security group.
+resource "aws_security_group_rule" "some-app" {
+ security_group_id = module.tempest.worker_security_groups[0]
+
+ type = "ingress"
+ protocol = "tcp"
+ from_port = 3333
+ to_port = 30333
+ cidr_blocks = ["0.0.0.0/0"]
+}
+
Add a custom route to the VPC route table.
+data "aws_route_table" "default" {
+ vpc_id = module.tempest.vpc_id
+ subnet_id = module.tempest.subnet_ids[0]
+}
+
+resource "aws_route" "peering" {
+ route_table_id = data.aws_route_table.default.id
+ destination_cidr_block = "192.168.4.0/24"
+ ...
+}
+
| IPv6 Feature | Supported |
|---|---|
| Node IPv6 address | Yes |
| Node Outbound IPv6 | Yes |
| Kubernetes Ingress IPv6 | Yes |
A load balancer distributes IPv4 TCP/6443 traffic across a backend address pool of controllers with a healthy `kube-apiserver`. Clusters with multiple controllers use an availability set with 2 fault domains to tolerate hardware failures within Azure.
A load balancer distributes IPv4 TCP/80 and TCP/443 traffic across a backend address pool of workers with a healthy Ingress controller.
The Azure LB IPv4 address is output as `ingress_static_ipv4` for use in DNS A records. See Ingress on Azure.
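For example, an Azure DNS A record can point at this output (a minimal sketch; the DNS zone and its resource group are illustrative assumptions):

# Hypothetical Azure DNS record; `module.ramius` matches the Azure examples below
resource "azurerm_dns_a_record" "some-app" {
  resource_group_name = "example-dns"     # assumed resource group containing the zone
  zone_name           = "example.com"     # assumed Azure DNS zone
  name                = "app"
  ttl                 = 300
  records             = [module.ramius.ingress_static_ipv4]
}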
Load balance TCP/UDP applications by adding rules to the Azure LB (output). A rule may map different ports (e.g. 3333 external, 30333 internal).
+# Forward traffic to the worker backend address pool
+resource "azurerm_lb_rule" "some-app-tcp" {
+ resource_group_name = module.ramius.resource_group_name
+
+ name = "some-app-tcp"
+ loadbalancer_id = module.ramius.loadbalancer_id
+ frontend_ip_configuration_name = "ingress"
+
+ protocol = "Tcp"
+ frontend_port = 3333
+ backend_port = 30333
+ backend_address_pool_id = module.ramius.backend_address_pool_id
+ probe_id = azurerm_lb_probe.some-app.id
+}
+
+# Health check some-app
+resource "azurerm_lb_probe" "some-app" {
+ resource_group_name = module.ramius.resource_group_name
+
+ name = "some-app"
+ loadbalancer_id = module.ramius.loadbalancer_id
+ protocol = "Tcp"
+ port = 30333
+}
+
Add firewall rules to the worker security group.
+resource "azurerm_network_security_rule" "some-app" {
+ resource_group_name = module.ramius.resource_group_name
+
+ name = "some-app"
+ network_security_group_name = module.ramius.worker_security_group_name
+ priority = "3001"
+ access = "Allow"
+ direction = "Inbound"
+ protocol = "Tcp"
+ source_port_range = "*"
+ destination_port_range = "30333"
+ source_address_prefix = "*"
+ destination_address_prefixes = module.ramius.worker_address_prefixes
+}
+
Azure does not provide public IPv6 addresses at the standard SKU.
| IPv6 Feature | Supported |
|---|---|
| Node IPv6 address | No |
| Node Outbound IPv6 | No |
| Kubernetes Ingress IPv6 | No |
Load balancing across controller nodes with a healthy `kube-apiserver` is determined by your unique bare-metal environment and its capabilities.
Load balancing across worker nodes with a healthy Ingress Controller is determined by your unique bare-metal environment and its capabilities.
See the `nginx-ingress` addon to run Nginx as the Ingress Controller for bare-metal.
Load balancing across worker nodes with TCP/UDP services is determined by your unique bare-metal environment and its capabilities.
+Status of IPv6 on Typhoon bare-metal clusters.
| IPv6 Feature | Supported |
|---|---|
| Node IPv6 address | Yes |
| Node Outbound IPv6 | Yes |
| Kubernetes Ingress IPv6 | Possible |
IPv6 support depends upon the bare-metal network environment.
Let's cover the concepts you'll need to get started.
+Kubernetes is an open-source cluster system for deploying, scaling, and managing containerized applications across a pool of compute nodes (bare-metal, droplets, instances).
All cluster nodes provision themselves from a declarative configuration upfront. Nodes run a `kubelet` service and register themselves with the control plane to join the cluster. All nodes run `kube-proxy` and `calico` or `flannel` pods.
Controller nodes are scheduled to run the Kubernetes `apiserver`, `scheduler`, `controller-manager`, `coredns`, and `kube-proxy`. A fully qualified domain name (e.g. cluster_name.domain.com) resolving to a network load balancer or round-robin DNS (depends on platform) is used to refer to the control plane.
Worker nodes register with the control plane and run application workloads.
+Terraform config files declare resources that Terraform should manage. Resources include infrastructure components created through a provider API (e.g. Compute instances, DNS records) or local assets like TLS certificates and config files.
+# Declare an instance
+resource "google_compute_instance" "pet" {
+ # ...
+}
+
The `terraform` tool parses configs, reconciles the desired state with actual state, and updates resources to reach desired state.
$ terraform plan
+Plan: 4 to add, 0 to change, 0 to destroy.
+$ terraform apply
+Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
+
With Typhoon, you'll be able to manage clusters with Terraform.
+Terraform modules allow a collection of resources to be configured and managed together. Typhoon provides a Kubernetes cluster Terraform module for each supported platform and operating system.
+Clusters are declared in Terraform by referencing the module.
+module "yavin" {
+ source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes"
+ cluster_name = "yavin"
+ ...
+}
+
Modules are updated regularly; set the version to a release tag or commit hash.
+...
+source = "git::https://github.com/poseidon/typhoon//google-cloud/fedora-coreos/kubernetes?ref=hash"
+
Module versioning ensures `terraform get --update` only fetches the desired version, so plan and apply don't change cluster resources unless the version is altered.
Maintain Terraform configs for "live" infrastructure in a versioned repository. Seek to organize configs to reflect resources that should be managed together in a `terraform apply` invocation.
You may choose to organize resources all together, by team, by project, or some other scheme. Here's an example that manages clusters together:
+.git/
+infra/
+└── terraform
+ └── clusters
+ ├── aws-tempest.tf
+ ├── azure-ramius.tf
+ ├── bare-metal-mercury.tf
+ ├── google-cloud-yavin.tf
+ ├── digital-ocean-nemo.tf
+ ├── providers.tf
+ ├── terraform.tfvars
+ └── remote-backend.tf
+
By convention, `providers.tf` registers provider APIs, `terraform.tfvars` stores shared values, and state is written to a remote backend.
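For example, a minimal `providers.tf` might look like the following sketch (the providers, project, region, and credentials path are illustrative assumptions, not requirements):

# providers.tf (illustrative)
provider "google" {
  project     = "project-id"
  region      = "us-central1"
  credentials = file("/path/to/credentials.json")
}

provider "aws" {
  region = "eu-central-1"
}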
Terraform syncs its state with provider APIs to plan changes to reconcile to the desired state. By default, Terraform writes state data (including secrets!) to a `terraform.tfstate` file. At a minimum, add a `.gitignore` file (or equivalent) to prevent state from being committed to your infrastructure repository.
# .gitignore
+*.tfstate
+*.tfstate.backup
+.terraform/
+
Later, you may wish to check out Terraform remote backends, which store state in a remote bucket like Google Storage or S3.
+terraform {
+ backend "gcs" {
+ credentials = "/path/to/credentials.json"
+ project = "project-id"
+ bucket = "bucket-id"
+ path = "metal.tfstate"
+ }
+}
+
DNS A records round-robin¹ resolve IPv4 TCP/6443 traffic to controller droplets (regardless of whether their `kube-apiserver` is healthy). Clusters with multiple controllers are supported, but round-robin means that if ⅓ of controllers are down, roughly ⅓ of apiserver requests will fail.
DNS records (A and AAAA) round-robin¹ resolve the `workers_dns` name (e.g. nemo-workers.example.com) to a worker droplet's IPv4 and IPv6 address. This allows running an Ingress controller DaemonSet across workers (resolved regardless of whether the Ingress controller is healthy).

The DNS record name is output as `workers_dns` for use in application DNS CNAME records. See Ingress on DigitalOcean.
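For example, an application CNAME can point at this name (a minimal sketch; the domain and hostname are illustrative assumptions; `module.nemo` matches the DigitalOcean examples in these docs):

# Hypothetical CNAME for an app served by the Ingress controller on workers
resource "digitalocean_record" "some-app" {
  domain = "example.com"                   # assumed domain managed on DigitalOcean
  type   = "CNAME"
  name   = "app"
  ttl    = 300
  value  = "${module.nemo.workers_dns}."   # workers_dns output as an FQDN (trailing dot)
}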
DNS records (A and AAAA) round-robin¹ resolve the `workers_dns` name (e.g. nemo-workers.example.com) to a worker droplet's IPv4 and IPv6 address. The DNS record name is output as `workers_dns` for use in application DNS CNAME records.
With round-robin as "load balancing", TCP/UDP services can be served via the same CNAME. Don't forget to add a firewall rule for the application.
Add a DigitalOcean load balancer to distribute IPv4 TCP traffic (HTTP/HTTPS Ingress or TCP service) across worker droplets (tagged with `worker_tag`) with a healthy Ingress controller. A load balancer adds cost, but adds redundancy against worker failures (closer to Typhoon clusters on other platforms).
resource "digitalocean_loadbalancer" "ingress" {
+ name = "ingress"
+ region = "fra1"
+ vpc_uuid = module.nemo.vpc_id
+ droplet_tag = module.nemo.worker_tag
+
+ healthcheck {
+ protocol = "http"
+ port = "10254"
+ path = "/healthz"
+ healthy_threshold = 2
+ }
+
+ forwarding_rule {
+ entry_protocol = "tcp"
+ entry_port = 80
+ target_protocol = "tcp"
+ target_port = 80
+ }
+
+ forwarding_rule {
+ entry_protocol = "tcp"
+ entry_port = 443
+ target_protocol = "tcp"
+ target_port = 443
+ }
+
+ forwarding_rule {
+ entry_protocol = "tcp"
+ entry_port = 3333
+ target_protocol = "tcp"
+ target_port = 30300
+ }
+}
+
Define DNS A records to `digitalocean_loadbalancer.ingress.ip` instead of CNAMEs.
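For example (a minimal sketch; the domain and hostname are illustrative assumptions):

# Hypothetical A record pointing at the load balancer's IPv4 address
resource "digitalocean_record" "some-app-lb" {
  domain = "example.com"       # assumed domain managed on DigitalOcean
  type   = "A"
  name   = "app"
  ttl    = 300
  value  = digitalocean_loadbalancer.ingress.ip
}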
Add firewall rules matching worker droplets with `worker_tag`.
resource "digitalocean_firewall" "some-app" {
+ name = "some-app"
+ tags = [module.nemo.worker_tag]
+ inbound_rule {
+ protocol = "tcp"
+ port_range = "30300"
+ source_addresses = ["0.0.0.0/0"]
+ }
+}
+
DigitalOcean load balancers do not have an IPv6 address. Resolving individual droplets' IPv6 addresses and using an Ingress controller with `hostNetwork: true` is a possible way to serve IPv6 traffic, if one must.
| IPv6 Feature | Supported |
|---|---|
| Node IPv6 address | Yes |
| Node Outbound IPv6 | Yes |
| Kubernetes Ingress IPv6 | Possible |
A global forwarding rule (IPv4 anycast) and TCP Proxy distribute IPv4 TCP/443 traffic across a backend service with zonal instance groups of controller(s) with a healthy `kube-apiserver` (TCP/6443). Clusters with multiple controllers span zones in a region to tolerate zone outages.
Notes:
+Global forwarding rules and a TCP Proxy distribute IPv4/IPv6 TCP/80 and TCP/443 traffic across a managed instance group of workers with a healthy Ingress Controller. Workers span zones in a region to tolerate zone outages.
The IPv4 and IPv6 anycast addresses are output as `ingress_static_ipv4` and `ingress_static_ipv6` for use in DNS A and AAAA records. See Ingress on Google Cloud.
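For example, Cloud DNS A and AAAA records can reference these outputs (a minimal sketch; the managed zone and hostname are illustrative assumptions; `module.yavin` matches the Google Cloud examples below):

# Hypothetical Cloud DNS records for an app served via Ingress
resource "google_dns_record_set" "some-app-ipv4" {
  managed_zone = "example-zone"       # assumed Cloud DNS managed zone
  name         = "app.example.com."
  type         = "A"
  ttl          = 300
  rrdatas      = [module.yavin.ingress_static_ipv4]
}

resource "google_dns_record_set" "some-app-ipv6" {
  managed_zone = "example-zone"
  name         = "app.example.com."
  type         = "AAAA"
  ttl          = 300
  rrdatas      = [module.yavin.ingress_static_ipv6]
}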
Load balance TCP/UDP applications by adding a forwarding rule to the worker target pool (output).
+# Static IPv4 address for some-app Load Balancing
+resource "google_compute_address" "some-app-ipv4" {
+ name = "some-app-ipv4"
+}
+
+# Forward IPv4 TCP traffic to the target pool
+resource "google_compute_forwarding_rule" "some-app-tcp" {
+ name = "some-app-tcp"
+ ip_address = google_compute_address.some-app-ipv4.address
+ ip_protocol = "TCP"
+ port_range = "3333"
+ target = module.yavin.worker_target_pool
+}
+
+
+# Forward IPv4 UDP traffic to the target pool
+resource "google_compute_forwarding_rule" "some-app-udp" {
+ name = "some-app-udp"
+ ip_address = google_compute_address.some-app-ipv4.address
+ ip_protocol = "UDP"
+ port_range = "3333"
+ target = module.yavin.worker_target_pool
+}
+
Notes:
- Forwarding rules use static IPv4 addresses (e.g. `google_compute_address`); no IPv6.
- Workers in the target pool are health checked via `HTTP:10254/healthz` (i.e. `nginx-ingress`).

Add firewall rules to the cluster's network.
+resource "google_compute_firewall" "some-app" {
+ name = "some-app"
+ network = module.yavin.network_self_link
+
+ allow {
+ protocol = "tcp"
+ ports = [3333]
+ }
+
+ allow {
+ protocol = "udp"
+ ports = [3333]
+ }
+
+ source_ranges = ["0.0.0.0/0"]
+ target_tags = ["yavin-worker"]
+}
+
Applications exposed via HTTP/HTTPS Ingress can be served over IPv6.
| IPv6 Feature | Supported |
|---|---|
| Node IPv6 address | No |
| Node Outbound IPv6 | No |
| Kubernetes Ingress IPv6 | Yes |
Typhoon supports Fedora CoreOS and Flatcar Linux. These operating systems were chosen because they offer minimal, immutable, container-optimized hosts with automated, atomic updates. Together, they diversify Typhoon to support a range of container technologies.
| Property | Flatcar Linux | Fedora CoreOS |
|---|---|---|
| Kernel | ~5.15.x | ~6.5.x |
| systemd | 252 | 254 |
| Username | core | core |
| Ignition system | Ignition v3.x spec | Ignition v3.x spec |
| storage driver | overlay2 (extfs) | overlay2 (xfs) |
| logging driver | json-file | journald |
| cgroup driver | systemd | systemd |
| cgroup version | v2 | v2 |
| Networking | systemd-networkd | NetworkManager |
| Resolver | systemd-resolved | systemd-resolved |
| Property | Flatcar Linux | Fedora CoreOS |
|---|---|---|
| single-master | all platforms | all platforms |
| multi-master | all platforms | all platforms |
| control plane | static pods | static pods |
| Container Runtime | containerd 1.5.9 | containerd 1.6.0 |
| kubelet image | kubelet image with upstream binary | kubelet image with upstream binary |
| control plane images | upstream images | upstream images |
| on-host etcd | docker | podman |
| on-host kubelet | docker | podman |
| CNI plugins | calico, cilium, flannel | calico, cilium, flannel |
| coordinated drain & OS update | FLUO addon | fleetlock |
Typhoon conventional directories.
| Kubelet setting | Host location |
|---|---|
| cni-conf-dir | /etc/cni/net.d |
| pod-manifest-path | /etc/kubernetes/manifests |
| volume-plugin-dir | /var/lib/kubelet/volumeplugins |