- Helm Tool: The core component of Helm that needs to be installed.
- Chart: The package to be installed, containing:
  - A description of the package.
  - One or more templates comprising Kubernetes manifest files.
- Charts can be stored locally or accessed from remote Helm repositories.
- Helm revolves around Helm charts for application management.
- ArtifactHub.io: A registry for Helm charts, aiding in finding Helm repository names.
- Search and browse through various categories to discover Helm charts.
- For instance, to install the Kubernetes Dashboard:
  - Add the Helm repository: `helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard`
  - Install the software: `helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard`
- To add a Helm repository, use the command: `helm repo add <repo-name> <repo-url>`
  - Example: `helm repo add bitnami https://charts.bitnami.com/bitnami`
- Use `helm repo list` to list available repositories, including the ones you added.
- To search within a specific repository, use `helm search repo <repository-name>`.
- Periodically update repository information using `helm repo update`.
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
$ helm version
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm search repo bitnami | less
$ helm repo update
- After adding Helm repositories, use `helm repo update` to ensure you have the latest information cached locally.
- Helm provides a straightforward way to install charts with default parameters using the `helm install` command.
- Helm Charts serve as an easy starting point for deploying applications.
- After installing at least one chart, you can list currently installed charts using `helm list`.
- Optionally, you can remove installed charts with `helm delete` (see the example below).
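For example, a minimal install/list/remove cycle could look like this (the release name my-nginx and the bitnami/nginx chart are only illustrations; any chart from an added repository works):
$ helm install my-nginx bitnami/nginx
$ helm list
$ helm delete my-nginx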
- To install a Helm Chart, use the `helm install` command, followed by a release name and the chart name; if you don't want to pick a release name, add `--generate-name`. For example: `helm install bitnami/mysql --generate-name`
- Helm Charts involve templating, which we'll explore further later.
- After installing a Helm Chart, you may receive instructions and usage information.
- It's crucial to follow these instructions, as they provide guidance on configuring and connecting to your application.
- To inspect the resources created by a Helm Chart, use `kubectl get all`.
- Helm Charts often create various Kubernetes resources such as Pods, Services, and StatefulSets.
- You can retrieve detailed information about a Helm Chart using `helm show chart <chart-name>`.
- For even more details, use `helm show all <chart-name>` to understand what choices the Helm Chart makes for you.
- Helm Charts can be customized before installation, allowing you to tailor settings to your requirements.
- This customization is done via YAML files within the Helm Chart.
- Modify Helm Chart values in the `values.yaml` file within the chart.
- It's advisable to customize values according to your application's needs, rather than using default settings.
- Fetch a local copy of a Helm Chart using `helm pull <chart-name>`.
- Extract the chart using `tar xvf <chart-name>.tgz`.
- Use `helm template --debug <chart-name>` to preview the Kubernetes manifests generated by the Helm Chart; the argument corresponds to the name of the directory containing the extracted chart.
- To install a customized Helm Chart, use the `-f` flag followed by the path to your customized `values.yaml` file (see the sample override below).
- Example: `helm install -f nginx/values.yaml my-nginx nginx/`
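As a sketch of such a customization, the override file only needs the values you want to change. The key names below are assumptions for the bitnami/nginx chart, so confirm them first with `helm show values bitnami/nginx`:
# custom-values.yaml -- hypothetical overrides; verify key names with helm show values
replicaCount: 2
service:
  type: ClusterIP
$ helm install -f custom-values.yaml my-nginx nginx/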
$ helm install bitnami/mysql --generate-name
$ kubectl get all
$ helm show chart bitnami/mysql
$ helm show all bitnami/mysql
$ helm show values bitnami/nginx | less
$ helm list
$ helm pull bitnami/nginx
$ tar xvf nginx-15.2.0.tgz
$ cd nginx/
$ cat values.yaml
$ helm template --debug nginx
$ helm install -f nginx/values.yaml my-nginx nginx/
- Kustomize is a Kubernetes feature for managing resource customization.
- It uses a `kustomization.yaml` file to define customization rules for a set of resources.
- It allows decoupling of resource configuration from the source files.
- You can apply Kustomizations using `kubectl apply -k ./directory`, where `directory` contains the `kustomization.yaml` file and the resources it refers to.
- To delete resources created by the customization, use `kubectl delete -k ./directory`.
- A `kustomization.yaml` file defines customization rules for resources.
- Key features include (see the example below):
  - `resources`: A list of generic resource definitions.
  - `namePrefix`: A prefix added to resource names.
  - `namespace`: The target namespace for resources.
  - `commonLabels`: Labels applied to all resources.
  - Many other features are available in Kustomize for advanced customization.
- Kustomization overlays allow you to work with different deployment scenarios, similar to stages in a CI/CD pipeline.
- Overlays define variations of the base configuration for different environments (e.g., dev, staging, prod).
- A typical Kustomize structure includes a base configuration and overlays for different environments (see the layout below):
  - `base`: Contains shared resource definitions.
  - `overlays` (e.g., `dev`, `staging`, `prod`): Each overlay has its own `kustomization.yaml`.
- In overlay-specific `kustomization.yaml` files, you can set parameters for that specific environment.
- For example, you might set a `namePrefix` for dev, a different namespace, and common labels to distinguish resources in that environment.
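One possible layout, with a hypothetical dev overlay that pulls in the base and overrides a few fields (all names and values are illustrative):
base/
  deployment.yaml
  service.yaml
  kustomization.yaml
overlays/
  dev/
    kustomization.yaml
  staging/
    kustomization.yaml
  prod/
    kustomization.yaml

# overlays/dev/kustomization.yaml -- example overlay
resources:
  - ../../base
namePrefix: dev-
namespace: dev
commonLabels:
  environment: dev
You would then apply a single environment with `kubectl apply -k overlays/dev`.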
- A simple example in the Git repository includes a directory named `kustomization`.
- It contains a `deployment.yaml`, a `service.yaml`, and a `kustomization.yaml`.
- The `kustomization.yaml` applies a `namePrefix` of "test" and a common label of "environment: testing".
- Running `kubectl apply -k .` from this directory will create the resources with these customizations.
$ cd kustomization/
$ kubectl apply -k .
$ kubectl get all --selector environment=testing
$ kubectl delete -k .
- Blue/Green deployments ensure smooth application upgrades with zero downtime.
- In this strategy, you can test a new application version before taking it into production while simulating real usage.
- Key components: Blue deployment (current app) and Green deployment (new app).
- Traffic is initially routed to the Blue deployment; once Green is tested and ready, traffic is switched to it.
- Start with the running Blue application.
- Create a new Green deployment with the new version.
- Test it with a temporary service resource.
- If tests pass, remove the temporary service.
- Delete the old Blue service and create a new service to expose the Green deployment.
- After a successful transition, remove the Blue deployment.
- Maintain the service name to ensure smooth transitions for frontend resources like Ingress.
- Users reach the application through a service with a fixed name, which is kept from the old configuration.
- Initially, the service points to the Blue deployment.
- For testing, a temporary test service is used.
- After testing, reconfigure the service to point to the Green deployment.
- User traffic experiences minimal disruption during the transition.
- Create the Blue deployment: `kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3`
- Expose the Blue deployment as a service: `kubectl expose deploy blue-nginx --port=80 --name=bgnginx`
- Create the Green deployment YAML: `kubectl get deploy blue-nginx -o yaml > green-nginx.yaml`
- Edit `green-nginx.yaml`: change the deployment name to green-nginx, adjust the labels, and update the image version (see the sketch below).
- Create the Green deployment: `kubectl create -f green-nginx.yaml`
- Test using a temporary service: `kubectl expose deploy green-nginx --port=80 --name=green`
- Test thoroughly and verify the endpoints with `kubectl get endpoints`.
- Delete the temporary Green service: `kubectl delete svc green`
- Perform a quick transition: `kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx`
- Test again if needed.
- When confident, delete the Blue deployment: `kubectl delete deploy blue-nginx`
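A sketch of `green-nginx.yaml` after editing; the label key app follows what `kubectl create deploy` generates, and the newer image tag is only an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-nginx
  labels:
    app: green-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: green-nginx
  template:
    metadata:
      labels:
        app: green-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16    # newer version than the Blue deployment; exact tag is an example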
$ kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3
$ kubectl get all
$ kubectl expose deploy blue-nginx --port=80 --name=bgnginx
$ kubectl get deploy blue-nginx -o yaml > green-nginx.yaml
$ kubectl create -f green-nginx.yaml
$ kubectl get pods
$ kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --name=bgnginx
$ kubectl get pods -o wide
$ kubectl get endpoints
$ kubectl delete deploy blue-nginx
- Canary Deployments are a deployment strategy where updates are initially rolled out to a small subset of users or resources.
- The name comes from the practice of sending a "canary" into a mine to test for safety; if the canary survived, it was safe for miners to enter. Similarly, in Canary Deployments, a small portion of users or resources is used as a test group.
- The goal is to detect issues or errors in new deployments with minimal impact. If issues arise, only a limited set of users or resources are affected.
- Start with an existing application or deployment (the "old" version).
- Create a new deployment for the updated version (the "canary" version).
- Ensure both deployments use the same label, which is essential for service configuration.
- Configure a service to use the label selector for both the old and canary deployments.
- The service load balances traffic between the old and canary deployments, with a small percentage directed to the canary.
- Test the canary deployment to identify issues.
- If issues are found, adjustments can be made or the canary deployment can be rolled back without impacting all users.
- If the canary deployment is successful, scale it up gradually by increasing the number of replicas.
- Eventually, once you're confident in the canary deployment, you can scale down or delete the old deployment.
- Canary deployments involve two deployments: the "old" version and the "canary" version.
- A service, with a label selector that includes both deployments, directs traffic.
- Initially, only a small percentage of users or resources access the canary version.
- This allows for testing and monitoring of the canary deployment's performance.
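Conceptually, the shared label and the service selector fit together like this (the label type: canary and the service name oldnginx match the demo below; everything else is illustrative):
# Both the old and the canary deployment carry the same label in their pod template:
#   template:
#     metadata:
#       labels:
#         type: canary
# The service selects only on that shared label, so it load balances across both versions:
apiVersion: v1
kind: Service
metadata:
  name: oldnginx
spec:
  selector:
    type: canary
  ports:
  - port: 80
With, for example, three old replicas and a single canary replica, roughly one request in four reaches the canary version.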
This demonstration consists of four parts:
- Create an "old" deployment (initial application version).
- Expose the "old" deployment as a service.
- Create a "canary" deployment (new version) and mount a ConfigMap for uniqueness (sketched after this list).
- Gradually scale up the canary deployment and eventually delete the old deployment.
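For part three, canary.yaml is a copy of oldnginx.yaml edited to use the new image and mount the ConfigMap over the nginx document root. A possible result is sketched here (the names new-nginx and canary and the mount path follow the demo; the remaining values are assumptions):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  labels:
    type: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
        type: canary          # shared label that the service selects on
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        volumeMounts:
        - name: canary-index
          mountPath: /usr/share/nginx/html   # nginx document root
      volumes:
      - name: canary-index
        configMap:
          name: canary         # ConfigMap created from index.html in the demo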
$ kubectl create deploy old-nginx --image=nginx:1.14 --replicas=3 --dry-run=client -o yaml > oldnginx.yaml
$ kubectl create -f oldnginx.yaml
$ kubectl get all
$ kubectl expose deploy old-nginx --name=oldnginx --port=80 --selector type=canary
$ kubectl get svc
$ kubectl get endpoints
$ kubectl get pods -o wide --selector type=canary
$ kubectl get svc
$ minikube ssh
docker@minikube:~$ curl http://10.110.8.145:80
$ kubectl cp old-nginx-654f595c5-9jh7p:/usr/share/nginx/html/index.html index.html
$ kubectl create cm canary --from-file=index.html
$ kubectl describe cm canary
$ cp oldnginx.yaml canary.yaml
$ kubectl create -f canary.yaml
$ kubectl get svc
$ kubectl get endpoints
$ kubectl get pods --show-labels
$ kubectl describe pod new-nginx-76b49d4df9-nxzkd
$ kubectl get svc
$ minikube ssh
docker@minikube:~$ curl http://10.110.8.145:80
$ kubectl get deploy
$ kubectl scale deploy new-nginx --replicas=3
$ kubectl describe svc
$ kubectl describe svc oldnginx
$ kubectl scale deployment old-nginx --replicas=0
$ kubectl get pods
- Custom Resource Definitions (CRDs) enable users to introduce custom resources into Kubernetes clusters.
- They allow the integration of various resource types into a cloud-native environment, making it highly extensible.
- CRDs simplify the process of adding custom resources to the Kubernetes API server without requiring programming skills.
- CRDs provide an alternative to building custom resources via API integration, which necessitates programming skills. In this video, we focus on CRDs exclusively.
CRDs follow a two-step procedure:
- Defining the resource: Define the custom resource using the Custom Resource Definition API (CRD API). This step outlines the structure and attributes of the custom resource.
- Editing the resource: After defining the resource with the CRD, you can manage and edit instances of the custom resource through its dedicated API endpoint.
The example uses two files (sketched below):
- CRD object (crd-object.yaml): Defines the CRD itself. It specifies the API group, kind, metadata, and schema. For example, it defines a custom resource named "backup" in the "stable.example.com" group.
- CRD backup (crd-backup.yaml): Demonstrates how to create an instance of the custom resource defined in crd-object.yaml. It specifies the API version, kind, metadata (name), and spec (attributes such as backup type, image, and replicas).
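A sketch of what the two files might contain, based on the names shown in the output below; the spec field names and the values in crd-backup.yaml are assumptions:
# crd-object.yaml -- defines the CRD itself
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    shortNames:
      - bks
    kind: BackUp
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                backupType:
                  type: string
                image:
                  type: string
                replicas:
                  type: integer

# crd-backup.yaml -- an instance of the new resource type
apiVersion: stable.example.com/v1
kind: BackUp
metadata:
  name: mybackup
spec:
  backupType: full
  image: linux-backup
  replicas: 2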
To apply the CRD Object to the cluster, use the following command:
$ kubectl create -f crd-object.yaml
customresourcedefinition.apiextensions.k8s.io/backups.stable.example.com created
$ kubectl api-resources | grep back
backups bks stable.example.com/v1 true BackUp
$ kubectl create -f crd-backup.yaml
backup.stable.example.com/mybackup created
$ kubectl get backups
NAME AGE
mybackup 35s
$ kubectl describe backups.stable.example.com mybackup
- A Kubernetes Operator is a custom application designed around Custom Resource Definitions (CRDs).
- Operators provide a way to package, run, and manage applications in Kubernetes.
- Unlike Helm, which is a package manager, operators are specifically tailored for applications that introduce new functionalities not previously available in Kubernetes.
- Operators are based on controllers, which are Kubernetes components that continuously manage dynamic systems.
- Controllers in Kubernetes operate within a controller loop.
- The controller loop continually observes the current state of resources, compares it to the desired state, and makes necessary adjustments to maintain the desired state.
- The Kubernetes controller manager runs a reconciliation loop that oversees these controllers.
- Operators are application-specific controllers that can be added to Kubernetes clusters.
- While you can write your own operators, most users leverage operators available from community sources.
- Websites like operatorhub.io provide a repository of operators for various purposes.
Many essential solutions from the Kubernetes ecosystem are provided as operators, including:
- Prometheus: A monitoring and alerting solution.
- Calico: An operator for managing the Calico network plugin.
- Jaeger: Used for tracing transactions between distributed systems.
In this demonstration, we will rebuild Minikube and use a Tigera operator for networking. Please note that this demo will destroy existing configurations, so back up your work if needed.
- Stop Minikube to reconfigure networking
- Verify that Minikube is running and clean.
- Deploy the Tigera operator.
- Verify that the Tigera operator namespace has been created.
- Check the resources created by the Tigera operator in its namespace.
- Observe the new custom resource definitions (CRDs) added by the operator.
- Create custom resources for the operator. Fetch the custom resources.
- Edit the custom-resources.yaml file to match your desired CIDR settings (see the snippet after this list).
- Apply the custom resources to the cluster.
- Monitor the installation process.
- Wait until all the pods in the calico-system namespace are up and running. This may take a few minutes:
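The CIDR lives in the Installation resource inside custom-resources.yaml; the relevant fragment, with the cidr edited to match the pod network configured at minikube start, looks roughly like this (remaining fields as shipped in the Calico manifest):
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 10.10.0.0/16          # must match --extra-config=kubeadm.pod-network-cidr
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()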
$ minikube stop
$ minikube delete
$ minikube start --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.10.0.0/16
$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml
$ kubectl get ns
$ kubectl get all -n tigera-operator
$ kubectl api-resources | grep tigera
$ wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml
$ nano custom-resources.yaml
>> cidr: 10.10.0.0/16
$ kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
$ kubectl get installation -o yaml
$ kubectl get pods -n calico-system
- A StatefulSet is a Kubernetes resource that provides persistent identity to pods.
- It is similar to a Deployment but is designed for applications requiring stable network identities and persistent storage.
- Each pod in a StatefulSet has a persistent identifier that remains consistent even during rescheduling.
- StatefulSets offer ordering guarantees for pod creation and scaling.
StatefulSets are necessary for applications with specific requirements:
- Stable Network Identifier: Applications needing a consistent and unique network identifier.
- Stable Persistent Storage: Applications that require persistent storage.
- Ordered Deployment and Scaling: When pods must be deployed or scaled in a specific order.
- Ordered Automated Rolling Updates: When updates must be performed in a defined sequence.
While powerful, StatefulSets have some limitations:
- Storage Provisioning: Storage provisioning through a StorageClass must be available.
- Data Safety: Volumes created by the StatefulSet are not automatically deleted when the StatefulSet is removed.
- Headless Service: A headless service (without an IP address) must be created for application access.
- Pod Removal: To guarantee pod removal, scale down the number of pods to zero before deleting the StatefulSet.
- In the Git repository, locate the `sfs.yaml` file (a sketch is shown below).
- Service:
  - The StatefulSet requires an associated service.
  - The service has its cluster IP set to None, making it a headless service.
- StatefulSet definition:
  - Kind: StatefulSet.
  - Name: web.
  - Service name: unique to StatefulSets and required.
  - Replicas: define the desired number of replicas.
- VolumeClaimTemplate:
  - StatefulSets use a volumeClaimTemplate to automatically create volume claims.
  - Metadata name: www.
  - Define the access mode, storageClassName, and resources.
  - Request storage of one gigabyte.
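A sketch of what `sfs.yaml` contains, following the fields described above; the image, labels, replica count, and the "standard" storage class are assumptions, so adjust them to your cluster:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None          # headless service
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx       # required and unique to StatefulSets
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi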
$ kubectl get storageclass
$ kubectl get all
$ kubectl create -f sfs.yaml
service/nginx created
statefulset.apps/web created
$ kubectl get all
$ kubectl get pvc
$ echo this is old version > index.html
$ kubectl create configmap oldversion-cm --from-file=index.html
$ echo this is new version > index.html
$ kubectl create configmap newversion-cm --from-file=index.html
$ kubectl get cm -o yaml
$ kubectl create -f oldnginx-v-1-14.yaml
$ kubectl get all
$ kubectl get all --show-labels
$ kubectl expose deployment oldnginx --name=canary-svc --port=80 --selector=type=canary
$ kubectl get svc
$ kubectl describe svc canary-svc
$ minikube ssh
docker@minikube:~$ curl ....
this is old version
docker@minikube:~$ exit
$ kubectl create -f newnginx-v-latest.yaml
$ kubectl get svc
$ minikube ssh
docker@minikube:~$ curl ....
this is old version
docker@minikube:~$ curl ....
this is old version
docker@minikube:~$ curl ....
this is new version
docker@minikube:~$ exit
$ kubectl scale deployment oldnginx --replicas=0