
Deploy HA k8s cluster with kubeadm using Ansible and Vagrant

Introduction

Deploy a highly available Kubernetes cluster with 3 control-plane nodes, 3 worker nodes, and 2 gateway (GW) nodes for external access to the cluster. Vagrant provisions the VMs, and Ansible with kubeadm creates the cluster. Infrastructure components are deployed with Helm; Cilium is used as the CNI. A single-node deployment option is also available.

⚠️ This version works only with Ubuntu Server

Contents

  1. Requirements
  2. System Overview
  3. Cluster Configuration
  4. Cluster Installation

Requirements

NOTE: The Ansible playbooks and roles require a configured Ansible environment in which the managed nodes are reachable and properly set up with an IP address and a working package manager.

Host

  • 4 processor cores
  • 16 GB of RAM
  • 64-bit Linux OS (Ubuntu 20.04 or newer preferred)

Ansible

Vagrant

System Overview

Virtual Machines

This cluster is intended for education purposes only. ⚠️ DO NOT USE IN PRODUCTION

The cluster consists of 8 virtual machines:

  • 3 Control Plane nodes
  • 3 Worker nodes
  • 2 Gateway nodes

Cluster

Control-plane nodes don't allow any pods to run on them except the Kubernetes control-plane pods. Gateway nodes are reserved for the NGINX ingress-controller pods and likewise don't allow any other pods. Application pods and cluster service pods, such as Vault, Cert-Manager, Local Path Provisioner, EFK, Prometheus/Grafana, and the bare-metal load-balancer provisioner, run only on worker nodes.
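
As an illustration of this placement scheme, the sketch below pins an ingress-controller deployment to the gateway nodes with a node selector and toleration. The gateway label/taint name, namespace, and image tag are assumptions for illustration, not values taken from this repository.

```yaml
# Hypothetical sketch of the pod-placement scheme described above.
# Control-plane nodes keep the taint that kubeadm sets by default
# (node-role.kubernetes.io/control-plane:NoSchedule), so only control-plane
# pods run there. Gateway nodes are assumed to carry a dedicated taint and
# label, e.g.:
#   kubectl taint node gw-1 node-role.kubernetes.io/gateway=:NoSchedule
#   kubectl label node gw-1 node-role.kubernetes.io/gateway=""
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      nodeSelector:
        node-role.kubernetes.io/gateway: ""   # run only on gateway nodes
      tolerations:
        - key: node-role.kubernetes.io/gateway
          operator: Exists
          effect: NoSchedule                  # tolerate the gateway taint
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4
```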

Gateway nodes need an external load balancer for both cluster administration and application access.

Kubernetes High Availability

Each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager. The kube-apiserver is exposed to worker nodes using a load balancer.

From the Kubernetes documentation:

Each of master replicas will run the following components in the following mode:

  • etcd instance: all instances will be clustered together using consensus;
  • API server: each server will talk to local etcd - all API servers in the cluster will be available;
  • controllers, scheduler, and cluster auto-scaler: will use lease mechanism - only one instance of each of them will be active in the cluster;
  • add-on manager: each manager will work independently trying to keep add-ons in sync.
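
For illustration only, the lease mechanism mentioned above is visible in the coordination.k8s.io Lease objects in kube-system; the holder identity below is a made-up example of what an elected kube-scheduler replica might report.

```yaml
# Illustrative only: roughly what the kube-scheduler leader-election Lease
# looks like in kube-system. The holderIdentity value is a made-up example.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  holderIdentity: control-plane-1_1a2b3c   # the currently active replica
  leaseDurationSeconds: 15
  leaseTransitions: 3
```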

HA control-plane

For load balancing worker access to the control plane we use Kube-VIP. The leader within the cluster assumes the VIP and binds it to the interface declared in the configuration. When the leader changes, it evacuates the VIP first; in failure scenarios the VIP is assumed directly by the next elected leader. Kube-VIP is deployed as static pods. This configuration provides failure tolerance but not load balancing. For load balancing we deploy an HAProxy server on all control-plane nodes. HAProxy listens on 0.0.0.0:8443 on the control-plane nodes and proxies requests to the Kubernetes API servers using a round-robin algorithm.
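
A minimal sketch of such a Kube-VIP static pod is shown below, assuming ARP (Layer 2) mode and control-plane VIP handling. The VIP address, interface name, and image tag are placeholders, and the exact environment variables can differ between Kube-VIP releases, so treat this as an illustration rather than the manifest used by this repository.

```yaml
# Sketch of a Kube-VIP static pod, placed in /etc/kubernetes/manifests on
# each control-plane node. VIP, interface, and image tag are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.6.4
      args: ["manager"]
      env:
        - name: vip_arp            # announce the VIP via ARP (Layer 2)
          value: "true"
        - name: vip_interface      # interface the VIP is bound to
          value: eth1
        - name: address            # the control-plane VIP
          value: 192.168.0.100
        - name: port               # API server port behind the VIP
          value: "6443"
        - name: cp_enable          # enable control-plane VIP handling
          value: "true"
        - name: vip_leaderelection # use leader election to pick the VIP holder
          value: "true"
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
```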

HA control-plane

LoadBalancer Service

Kubernetes does not offer an implementation of network load-balancers (Services of type LoadBalancer) for bare metal clusters. The implementations of Network LB that Kubernetes does ship with are all glue code that calls out to various IaaS platforms (GCP, AWS, Azure…). If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created.

Bare metal cluster operators are left with two lesser tools to bring user traffic into their clusters, “NodePort” and “externalIPs” services. Both of these options have significant downsides for production use, which makes bare metal clusters second class citizens in the Kubernetes ecosystem.

MetalLB aims to redress this imbalance by offering a Network LB implementation that integrates with standard network equipment, so that external services on bare metal clusters also “just work” as much as possible.

MetalLB is a load-balancer implementation for bare-metal Kubernetes clusters. In this deployment, MetalLB runs in Layer 2 mode.
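
As a hedged example of what a Layer 2 configuration looks like with current MetalLB releases (v0.13+ uses CRDs; older versions used a ConfigMap), an address pool plus L2 advertisement might look like this. The address range is a placeholder, not the range used by this repository.

```yaml
# Example Layer 2 configuration for MetalLB >= v0.13 (CRD-based).
# The address range is a placeholder; it should be a free range in the
# same subnet as the nodes (vm_cidr in the Vagrantfile).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.200-192.168.0.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
```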

Storage class

  • Local Path Provisioner provides a way for Kubernetes users to utilize the local storage on each node. Based on the user configuration, the Local Path Provisioner automatically creates hostPath-based persistent volumes on the node. It builds on the Kubernetes Local Persistent Volume feature, but makes it a simpler solution than the built-in local volume feature, which currently cannot do dynamic provisioning.

With Local Path Provisioner we get dynamic provisioning of hostPath-based volumes. We don't have to create static persistent volumes; Local Path Provisioner does all the provisioning work for us.
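
For example, a PersistentVolumeClaim only needs to reference the provisioner's storage class to get a dynamically provisioned hostPath volume. "local-path" is the name used by the upstream manifests; the name configured in this cluster may differ.

```yaml
# PVC using the Local Path Provisioner's storage class; "local-path" is the
# upstream default name and may differ in this cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```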

Monitoring

  • Kube-prometheus-stack is a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
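
With the Prometheus Operator in place, scrape targets are described declaratively. The sketch below is a generic ServiceMonitor for a hypothetical application Service exposing a metrics port, not a manifest from this repository; the release label is assumed to match the Helm release name.

```yaml
# Hypothetical ServiceMonitor: tells the Prometheus Operator to scrape any
# Service labelled app=my-app on its "metrics" port.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
  labels:
    release: kube-prometheus-stack   # assumed Helm release name
spec:
  selector:
    matchLabels:
      app: my-app
  namespaceSelector:
    matchNames:
      - default
  endpoints:
    - port: metrics
      interval: 30s
```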

Logging

  • Grafana Loki is a set of components that can be composed into a fully featured logging stack. Loki is built around the idea of only indexing metadata about your logs: labels (just like Prometheus labels). Log data itself is then compressed and stored in chunks in object stores such as S3 or GCS, or even locally on the filesystem.
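
To make the label-only indexing concrete: whatever agent ships logs to Loki attaches labels at collection time, and only those labels are indexed. The fragment below is a minimal Promtail scrape config; using Promtail as the collector is an assumption, since this README does not state which log agent is deployed.

```yaml
# Hypothetical Promtail scrape_configs fragment: only the labels below
# (job, host, ...) are indexed by Loki; the log lines themselves are stored
# compressed in chunks.
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          host: worker-1
          __path__: /var/log/*.log   # files Promtail tails on this node
```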

Example application

TODO

Cluster Configuration

Hosts

Change the ansible_host variable in all files in inventories/ml-k8s/host_vars to the IP addresses you chose in the Cluster Installation section. The private key and user are generated by Vagrant and don't need to be changed.
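
For reference, a host_vars file typically looks like the hedged sketch below; the filename and the private-key path are examples following Vagrant's usual layout, not necessarily the exact values in this repository.

```yaml
# Hypothetical inventories/ml-k8s/host_vars/control-plane-1.yml
# Only ansible_host normally needs to change; the user and key are the ones
# Vagrant generates (path shown follows Vagrant's usual layout).
ansible_host: 192.168.0.130
ansible_user: vagrant
ansible_ssh_private_key_file: .vagrant/machines/control-plane-1/virtualbox/private_key
```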

Cluster Installation

HA installation

Configure Vagrantfile_ha with your variables:

Name                  Default value  Description
k8s_control_node_num  3              Number of control-plane nodes
k8s_worker_num        3              Number of worker nodes
k8s_gw_num            2              Number of gateway nodes
bridge                -              Ethernet interface with internet access that the nodes are bridged to
vm_cidr               192.168.0      First three octets of the nodes' IP addresses
vm_ip_addr_start      130            Last octet of the first node's IP address; incremented for each subsequent node

Create and start the VMs: VAGRANT_VAGRANTFILE=Vagrantfile_ha vagrant up

Provision the VMs with Ansible: ansible-playbook -i inventories/ml-k8s/hosts.yml deploy-cluster.yml

The kubectl config file for Kubernetes cluster access will be placed at /home/<ansible user>/.kube/admin.conf

Single node installation

Configure Vagrantfile with your variables:

Name                  Default value  Description
k8s_control_node_num  1              Number of control-plane nodes
bridge                -              Ethernet interface with internet access that the node is bridged to
vm_cidr               192.168.0      First three octets of the node's IP address
vm_ip_addr_start      130            Last octet of the node's IP address

Create and start the VM: VAGRANT_VAGRANTFILE=Vagrantfile vagrant up

Provision the VM with Ansible: ansible-playbook -i inventories/ml-k8s/hosts-single.yml deploy-cluster.yml

The kubectl config file for Kubernetes cluster access will be placed at /home/<ansible user>/.kube/admin.conf
