Understanding Helm Components

  • Helm Tool: The helm command-line tool, the core component that needs to be installed.
  • Chart: The package to be installed, containing:
    • A description of the package (in Chart.yaml; a minimal sketch follows this list).
    • One or more templates that render into Kubernetes manifest files.
  • Charts can be stored locally or accessed from remote Helm repositories.
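
As an illustration, a minimal Chart.yaml (the file carrying the package description) might look like the sketch below; the name, description, and versions are placeholders. The templates that render into Kubernetes manifests live in the chart's templates/ directory, next to values.yaml.

# Chart.yaml - package description at the root of a chart (placeholder values)
apiVersion: v2            # chart API version used by Helm 3
name: my-app              # name of the chart
description: A minimal example chart
version: 0.1.0            # version of the chart itself
appVersion: "1.0.0"       # version of the application being packaged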

Understanding Helm Charts

  • Helm revolves around Helm charts for application management.

Exploring Helm Chart Repositories

  • ArtifactHub.io: A registry for Helm charts, aiding in finding Helm repository names.
  • Search and browse through various categories to discover Helm charts.
  • For instance, to install the Kubernetes dashboard:
  • Add the Helm repository: helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard
  • Install the chart: helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard (release name first, then repo/chart).

Adding Helm Repositories

  • To add a Helm repository, use the command: helm repo add <repo-name> <repo-url>.
  • Example: helm repo add bitnami https://charts.bitnami.com/bitnami.

Exploring Repository Content

  • Use helm repo list to list the repositories that have been added.
  • To search within a specific repository: helm search repo <repository-name>.

Updating Repository Information

  • Periodically update repository information using: helm repo update.

The commands below install Helm itself on Debian/Ubuntu from its official apt repository, then add the Bitnami repository and refresh the local cache:
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

$ helm version
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm search repo bitnami | less
$ helm repo update

Updating Helm Repositories

  • After adding Helm repositories, use helm repo update to ensure you have the latest information cached locally.

Installing Helm Charts

  • Helm provides a straightforward way to install charts with default parameters using the helm install command.
  • Helm Charts serve as an easy starting point for deploying applications.

Managing Installed Charts

  • After installing at least one chart, you can list currently installed charts using helm list.
  • Optionally, you can remove installed charts with helm uninstall (helm delete works as an alias).

Installing a Helm Chart

  • To install a Helm Chart, use the helm install command followed by a release name (or --generate-name) and the chart name. For example: helm install bitnami/mysql --generate-name.
  • Helm Charts involve templating, which we'll explore further later.

Understanding Helm Chart Output

  • After installing a Helm Chart, you may receive instructions and usage information.
  • It's crucial to follow these instructions, as they provide guidance on configuring and connecting to your application.

Examining Deployed Resources

  • To inspect the resources created by a Helm Chart, use kubectl get all.
  • Helm Charts often create various Kubernetes resources such as Pods, Services, and StatefulSets.

Exploring Helm Chart Details

  • You can retrieve detailed information about a Helm Chart using helm show chart <chart-name>.
  • For even more details, use helm show all <chart-name> to understand what choices the Helm Chart makes for you.

Customizing Helm Charts

  • Helm Charts can be customized before installation, allowing you to tailor settings to your requirements.
  • This customization is done via YAML files within the Helm Chart.

Modifying Chart Values

  • Modify Helm Chart values in the values.yaml file within the chart.
  • It's advisable to customize values according to your application's needs rather than relying on the default settings; a sketch of a small override file follows.
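
As a sketch, a custom values file for the bitnami/nginx chart might override only a couple of parameters; the parameter names used here (replicaCount and service.type) should be verified against helm show values bitnami/nginx before use:

# my-values.yaml - overrides merged on top of the chart's defaults (assumed parameters)
replicaCount: 2       # run two replicas instead of the default
service:
  type: ClusterIP     # do not request a LoadBalancer service

Such a file is passed at install time with the -f flag, as shown later in this section.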

Using helm pull

  • Fetch a local copy of a Helm Chart using helm pull <chart-name>.
  • Extract the chart using tar xvf <chart-name>.tgz.

Previewing Chart Templates

  • Use helm template --debug <chart-name> to preview the Kubernetes manifests generated by the Helm Chart.
  • The argument is the chart directory, for example the nginx/ directory extracted with helm pull.

Installing a Customized Chart

  • To install a customized Helm Chart, use the -f flag followed by the path to your customized values.yaml file.
  • Example: helm install -f nginx/values.yaml my-nginx nginx/.
$ helm install bitnami/mysql --generate-name
$ kubectl get all
$ helm show chart bitnami/mysql
$ helm show all bitnami/mysql
$ helm show values bitnami/nginx | less
$ helm list
$ helm pull bitnami/nginx
$ tar xvf nginx-15.2.0.tgz
$ cd nginx/
$ cat values.yaml
$ helm template --debug nginx
$ helm install -f nginx/values.yaml my-nginx nginx/

What is Kustomize?

  • Kustomize is a Kubernetes-native tool, built into kubectl, for managing resource customization.
  • It uses a kustomization.yaml file to define customization rules for a set of resources.
  • It allows decoupling of resource configuration from the source files.

Applying Kustomizations

  • You can apply Kustomizations using kubectl apply -k ./directory, where directory contains the kustomization.yaml file and the resources it refers to.
  • To delete resources created by the customization, you can use kubectl delete -k ./directory.

Kustomization YAML Structure

  • A kustomization.yaml file defines customization rules for resources.
  • Key features include (illustrated in the sketch after this list):
    • resources: A list of generic resource definitions.
    • namePrefix: Prefix added to resource names.
    • namespace: The target namespace for resources.
    • commonLabels: Labels applied to all resources.
    • Many other features available in Kustomize for advanced customization.
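
A minimal kustomization.yaml combining these features could look like this; the resource file names and values are placeholders:

# kustomization.yaml
resources:                 # plain Kubernetes manifests to customize
  - deployment.yaml
  - service.yaml
namePrefix: dev-           # prepended to every resource name
namespace: development     # target namespace for all resources
commonLabels:              # labels applied to all resources
  app: myapp
  environment: development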

Using Kustomization Overlays

  • Kustomize overlays let you manage different deployment scenarios, such as the stages of a CI/CD pipeline.
  • Overlays define variations of the base configuration for different environments (e.g., dev, staging, prod).

Example Kustomization Structure

  • A typical Kustomization structure includes a base configuration and overlays for different environments:
    • base: Contains shared resource definitions.
    • overlays (e.g., dev, staging, prod): Each overlay has its own kustomization.yaml.

Customizing Resources in Overlays

  • In overlay-specific kustomization.yaml files, you can set parameters for that specific environment.
  • For example, you might set a namePrefix for dev, a different namespace, and common labels to distinguish the resources in that environment (see the sketch after this list).
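
Sketching the layout described above, the base lists the shared manifests and each overlay layers environment-specific settings on top of it; directory and file names are illustrative:

# base/kustomization.yaml - shared resource definitions
resources:
  - deployment.yaml
  - service.yaml

# overlays/dev/kustomization.yaml - dev-specific settings layered on the base
resources:
  - ../../base
namePrefix: dev-
namespace: dev
commonLabels:
  environment: dev

Applying an overlay with kubectl apply -k overlays/dev then renders the base resources with the dev-specific prefix, namespace, and labels.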

Hands-on Example

  • A simple example in the Git repository is a directory named kustomization.
  • It contains a deployment.yaml, a service.yaml, and a kustomization.yaml.
  • The kustomization.yaml applies a namePrefix of "test" and a common label of "environment: testing" (sketched below).
  • Running kubectl apply -k . from this directory creates the resources with these customizations.
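
Based on that description, the kustomization.yaml in that directory presumably looks roughly like this (the exact values come from the repository):

# kustomization/kustomization.yaml - as described above
resources:
  - deployment.yaml
  - service.yaml
namePrefix: test          # prefix added to resource names
commonLabels:
  environment: testing    # label used by the selector in the commands below
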
$ cd kustomization/
$ kubectl apply -k .
$ kubectl get all --selector environment=testing
$ kubectl delete -k .

Understanding Blue/Green Deployments

  • Blue/Green deployments ensure smooth application upgrades with zero downtime.
  • In this strategy, you can test a new application version before taking it into production while simulating real usage.
  • Key components: Blue deployment (current app) and Green deployment (new app).
  • Traffic is initially routed to the Blue deployment; once Green is tested and ready, traffic is switched to it.

Procedure Overview

  1. Start with the running Blue application.
  2. Create a new Green deployment with the new version.
  3. Test it with a temporary service resource.
  4. If tests pass, remove the temporary service.
  5. Delete the old Blue service and create a new service to expose the Green deployment.
  6. After a successful transition, remove the Blue deployment.
  7. Maintain the service name to ensure smooth transitions for frontend resources like Ingress.

Visualizing the Process

  • Users reach the application through a service with a stable name that is kept throughout the transition.
  • Initially, the service points to the Blue deployment.
  • For testing, a temporary test service exposes the Green deployment.
  • After testing, reconfigure the service to point to the Green deployment (see the sketch after this list).
  • User traffic experiences minimal disruption during the transition.
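
Conceptually, the switch amounts to changing which pods the bgnginx service selects; in the demo below the same effect is achieved by deleting and re-creating the service with kubectl expose. A stripped-down sketch, with label values assumed from the demo:

# bgnginx service - before the switch it selects the Blue pods
apiVersion: v1
kind: Service
metadata:
  name: bgnginx             # the name frontend resources such as Ingress keep pointing at
spec:
  selector:
    app: blue-nginx         # change to the Green deployment's label to switch traffic
  ports:
    - port: 80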

Demo: Working with Blue/Green Deployments

  1. Create the Blue deployment: kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3.
  2. Expose the Blue deployment as a service: kubectl expose deploy blue-nginx --port=80 --name=bgnginx.
  3. Create the Green deployment YAML: kubectl get deploy blue-nginx -o yaml > green-nginx.yaml.
  4. Edit green-nginx.yaml, change labels, and update the image version.
  5. Create the Green deployment: kubectl create -f green-nginx.yaml.
  6. Test using a temporary service: kubectl expose deploy green-nginx --port=80 --name=green.
  7. Test thoroughly and verify the endpoints with kubectl get endpoints.
  8. Delete the temporary Green service: kubectl delete svc green.
  9. Perform a quick transition: kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx.
  10. Test again if needed.
  11. When confident, delete the Blue deployment: kubectl delete deploy blue-nginx.
$ kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3
$ kubectl get all
$ kubectl expose deploy blue-nginx --port=80 --name=bgnginx
$ kubectl get deploy blue-nginx -o yaml > green-nginx.yaml
$ kubectl create -f green-nginx.yaml
$ kubectl get pods
$ kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
$ kubectl get pods -o wide
$ kubectl get endpoints
$ kubectl delete deploy blue-nginx

Understanding Canary Deployments

  • Canary Deployments are a deployment strategy where updates are initially rolled out to a small subset of users or resources.
  • The name comes from the practice of sending a "canary" into a mine to test for safety; if the canary survived, it was safe for miners to enter. Similarly, in Canary Deployments, a small portion of users or resources is used as a test group.
  • The goal is to detect issues or errors in new deployments with minimal impact. If issues arise, only a limited set of users or resources are affected.

Canary Deployment Procedure

  1. Start with an existing application or deployment (the "old" version).
  2. Create a new deployment for the updated version (the "canary" version).
  3. Ensure both deployments use the same label, which is essential for service configuration.
  4. Configure a service to use the label selector for both the old and canary deployments.
  5. The service load balances traffic between the old and canary deployments, with a small percentage directed to the canary.
  6. Test the canary deployment to identify issues.
  7. If issues are found, adjustments can be made or the canary deployment can be rolled back without impacting all users.
  8. If the canary deployment is successful, scale it up gradually by increasing the number of replicas.
  9. Eventually, once you're confident in the canary deployment, you can scale down or delete the old deployment.

Visualizing Canary Deployments

  • Canary deployments involve two deployments: the "old" version and the "canary" version.
  • A single service, whose label selector matches pods from both deployments, directs the traffic (see the sketch after this list).
  • Initially, only a small percentage of users or resources access the canary version.
  • This allows for testing and monitoring of the canary deployment's performance.
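
Expressed as manifests, the key point is that both pod templates carry the shared label that the service selects on. A stripped-down sketch, with names and the type: canary label taken from the demo below (the rest is assumed):

# old-nginx deployment - the pod template carries the shared label
apiVersion: apps/v1
kind: Deployment
metadata:
  name: old-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: old-nginx
  template:
    metadata:
      labels:
        app: old-nginx
        type: canary        # shared label, also set on the canary pods
    spec:
      containers:
        - name: nginx
          image: nginx:1.14
---
# service - selects on the shared label only, so it balances across both versions
apiVersion: v1
kind: Service
metadata:
  name: oldnginx
spec:
  selector:
    type: canary
  ports:
    - port: 80

With three replicas of the old version and a single canary replica, roughly one request in four reaches the new version.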

Demo: Implementing a Canary Deployment

This demonstration consists of four parts:

  1. Create an "old" deployment (initial application version).
  2. Expose the "old" deployment as a service.
  3. Create a "canary" deployment (new version) and mount a ConfigMap for uniqueness.
  4. Gradually scale up the canary deployment and eventually delete the old deployment.
$ kubectl create deploy old-nginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml > oldnginx.yaml
$ kubectl create -f oldnginx.yaml
$ kubectl get all
$ kubectl expose deploy old-nginx --name=oldnginx --port=80 --selector type=canary
$ kubectl get svc
$ kubectl get endpoints
$ kubectl get pods -o wide --selector type=canary
$ kubectl get svc
$ minikube ssh
docker@minikube:~$ curl http://10.110.8.145:80
$ kubectl cp old-nginx-654f595c5-9jh7p:/usr/share/nginx/html/index.html index.html
$ kubectl create cm canary --from-file=index.html
$ kubectl describe cm canary
$ cp oldnginx.yaml canary.yaml
$ kubectl create -f canary.yaml
$ kubectl get svc
$ kubectl get endpoints
$ kubectl get pods --show-labels
$ kubectl describe pod new-nginx-76b49d4df9-nxzkd
$ kubectl get svc
$ minikube ssh
docker@minikube:~$ curl http://10.110.8.145:80
$ kubectl get deploy
$ kubectl scale deploy new-nginx --replicas=3
$ kubectl describe svc
$ kubectl describe svc oldnginx
$ kubectl scale deployment old-nginx --replicas=0
$ kubectl get pods

What Are Custom Resource Definitions?

  • Custom Resource Definitions (CRDs) enable users to introduce custom resources into Kubernetes clusters.
  • They allow the integration of various resource types into a cloud-native environment, making it highly extensible.
  • CRDs simplify the process of adding custom resources to the Kubernetes API server without requiring programming skills.
  • CRDs provide an alternative to building custom resources through direct API integration, which requires programming skills; here we focus on CRDs exclusively.

How CRDs Work

CRDs follow a two-step procedure:

  1. Defining the Resource: Define the custom resource using the Custom Resource Definition API (CRD API). This step outlines the structure and attributes of the custom resource.

  2. Creating and Editing the Resource: After defining the resource type with the CRD, you can create, manage, and edit instances of the custom resource through its dedicated API endpoint.

Demonstrating CRDs

  • CRD Object (crd-object.yaml): Defines the CRD itself. It specifies the API group, kind, metadata, and schema. For example, it defines a custom resource named "backup" in the "stable.example.com" group.

  • CRD Backup (crd-backup.yaml): Demonstrates how to create an instance of the custom resource defined in the CRD Object. It specifies the API version, kind, metadata (name), and spec (attributes such as backup type, image, and replicas). Both files are sketched below.
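
A sketch of the two files, kept consistent with the kubectl output shown in the next section; the schema and the spec field names (backupType, image, replicas) are assumptions based on the description above:

# crd-object.yaml - defines the custom resource type itself
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com      # must be <plural>.<group>
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    kind: BackUp
    plural: backups
    singular: backup
    shortNames:
      - bks
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                backupType: { type: string }
                image: { type: string }
                replicas: { type: integer }
---
# crd-backup.yaml - an instance of the new resource type (placeholder values)
apiVersion: stable.example.com/v1
kind: BackUp
metadata:
  name: mybackup
spec:
  backupType: full
  image: backup-image
  replicas: 2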

Applying a CRD

To apply the CRD Object to the cluster, use the following command:

$ kubectl create -f crd-object.yaml
customresourcedefinition.apiextensions.k8s.io/backups.stable.example.com created

$ kubectl api-resources | grep back
backups                           bks          stable.example.com/v1                  true         BackUp

$ kubectl create -f crd-backup.yaml
backup.stable.example.com/mybackup created

$ kubectl get backups
NAME       AGE
mybackup   35s

$ kubectl describe backups.stable.example.com mybackup

What Is a Kubernetes Operator?

  • A Kubernetes Operator is a custom application designed around Custom Resource Definitions (CRDs).
  • Operators provide a way to package, run, and manage applications in Kubernetes.
  • Unlike Helm, which is a package manager, operators are specifically tailored for applications that introduce new functionalities not previously available in Kubernetes.
  • Operators are based on controllers, which are Kubernetes components that continuously manage dynamic systems.

The Role of Controllers

  • Controllers in Kubernetes operate within a controller loop.
  • The controller loop continually observes the current state of resources, compares it to the desired state, and makes necessary adjustments to maintain the desired state.
  • The Kubernetes controller manager runs a reconciliation loop that oversees these controllers.

Working with Operators

  • Operators are application-specific controllers that can be added to Kubernetes clusters.
  • While you can write your own operators, most users leverage operators available from community sources.
  • Websites like operatorhub.io provide a repository of operators for various purposes.

Popular Operators

Many essential solutions from the Kubernetes ecosystem are provided as operators, including:

  • Prometheus: A monitoring and alerting solution.
  • Calico: An operator for managing the Calico network plugin.
  • Jaeger: Used for tracing transactions between distributed systems.

Demo: Deploying a Tigera Operator

In this demonstration, we will rebuild Minikube and use a Tigera operator for networking. Please note that this demo will destroy existing configurations, so back up your work if needed.

  1. Stop and delete the existing Minikube cluster, then start a new one configured for CNI networking.
  2. Verify that Minikube is running and clean.
  3. Deploy the Tigera operator.
  4. Verify that the Tigera operator namespace has been created.
  5. Check the resources created by the Tigera operator in its namespace.
  6. Observe the new custom resource definitions (CRDs) added by the operator.
  7. Fetch the example custom resources provided for the operator.
  8. Edit the custom-resources.yaml file to match your desired CIDR settings.
  9. Apply the custom resources to the cluster.
  10. Monitor the installation process.
  11. Wait until all the pods in the calico-system namespace are up and running. This may take a few minutes:
$ minikube stop
$ minikube delete
$ minikube start --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.10.0.0/16
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
$ kubectl get ns
$ kubectl get all -n tigera-operator
$ kubectl api-resources | grep tigera
$ wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
$ nano custom-resources.yaml
>> cidr: 10.10.0.0/16

$ kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

$ kubectl get installation -o yaml
$ kubectl get pods -n calico-system

What is a StatefulSet?

  • A StatefulSet is a Kubernetes resource that provides persistent identity to pods.
  • It is similar to a Deployment but is designed for applications requiring stable network identities and persistent storage.
  • Each pod in a StatefulSet has a persistent identifier that remains consistent even during rescheduling.
  • StatefulSets offer ordering guarantees for pod creation and scaling.

Use Cases for StatefulSets

StatefulSets are necessary for applications with specific requirements:

  • Stable Network Identifier: Applications needing a consistent and unique network identifier.
  • Stable Persistent Storage: Applications that require persistent storage.
  • Ordered Deployment and Scaling: When pods must be deployed or scaled in a specific order.
  • Ordered Automated Rolling Updates: When updates must be performed in a defined sequence.

Limitations of StatefulSets

While powerful, StatefulSets have some limitations:

  • Storage Provisioning: Persistent storage must be provisioned through an available StorageClass.
  • Data Safety: Volumes created by the StatefulSet are not automatically deleted when the StatefulSet is removed.
  • Headless Service: A headless service (one without a cluster IP) must be created for application access.
  • Pod Removal: To guarantee pod removal, scale down the number of pods to zero before deleting the StatefulSet.

StatefulSet Example

  • In the Git repository, locate the sfs.yaml file.

Configuration Details

  1. Service:

    • The StatefulSet requires an associated service.
    • The service has clusterIP set to None, making it a headless service.
  2. StatefulSet Definition:

    • Kind: StatefulSet.
    • Name: web.
    • Service Name: The serviceName field is unique to StatefulSets, is required, and must match the headless service.
    • Replicas: Define the desired number of replicas.
  3. VolumeClaimTemplate:

    • StatefulSets use a volumeClaimTemplate to automatically create volume claims.
    • Metadata name: www.
    • Define the access mode, storageClassName, and resources.
    • Request 1Gi of storage. A sketch of the complete file follows this list.
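
Putting these pieces together, sfs.yaml presumably looks roughly like the sketch below; the container image, replica count, and storage class name are assumptions (on Minikube the default class is typically standard):

# headless service - clusterIP: None, required by the StatefulSet
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx          # unique to StatefulSets, must match the headless service
  replicas: 3                 # example value
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14   # assumed image
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard   # assumed; check kubectl get storageclass
        resources:
          requests:
            storage: 1Gi
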
$ kubectl get storageclass
$ kubectl get all
$ kubectl create -f sfs.yaml
service/nginx created
statefulset.apps/web created

$ kubectl get all
$ kubectl get pvc

The commands below revisit the canary demo, this time using ConfigMaps to tell the old and new versions apart:

$ echo this is old version > index.html
$ kubectl create configmap oldversion-cm --from-file=index.html
$ echo this is new version > index.html
$ kubectl create configmap newversion-cm --from-file=index.html
$ kubectl get cm -o yaml
$ kubectl create -f oldnginx-v-1-14.yaml
$ kubectl get all
$ kubectl get all --show-labels
$ kubectl expose deployment oldnginx --name=canary-svc --port=80 --selector=type=canary
$ kubectl get svc
$ kubectl describe svc canary-svc
$ minikube ssh
docker@minikube:~$ curl ....
this is old version
docker@minikube:~$ exit

$ kubectl create -f newnginx-v-latest.yaml
$ kubectl get svc
$ minikube ssh
docker@minikube:~$ curl ....
this is old version
docker@minikube:~$ curl ....
this is old version
docker@minikube:~$ curl ....
this is new version
docker@minikube:~$ exit
$ kubectl scale deployment oldnginx --replicas=0