Mkdocs #11

Merged
merged 11 commits on Dec 22, 2023
1 change: 1 addition & 0 deletions .ansible-lint
@@ -1,2 +1,3 @@
exclude_paths:
- meta/
- .github/
12 changes: 12 additions & 0 deletions .github/renovate.json
@@ -0,0 +1,12 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json",
"extends": [
"config:base"
],
"github-actions": {
"fileMatch": [
"^\\.github\\\\/workflows\\\\/.*\\.ya?ml$"
]
}
}

26 changes: 26 additions & 0 deletions .github/workflows/mkdocs.yml
@@ -0,0 +1,26 @@
name: Deploy
on:
push:
branches:
- master
jobs:
build:
name: Deploy docs to GitHub Pages
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4

- name: Build documentation
uses: Tiryoh/actions-mkdocs@v0
with:
mkdocs_version: 'latest'
requirements: 'requirements.txt'
configfile: 'mkdocs/mkdocs.yml'

- name: Deploy docs to github pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./mkdocs/site

1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
mkdocs/site
437 changes: 16 additions & 421 deletions README.md

Large diffs are not rendered by default.

6 changes: 4 additions & 2 deletions defaults/main.yml
@@ -1,5 +1,5 @@
---
k3s_version: v1.28.4+k3s2
k3s_version: v1.29.0+k3s1
k3s_systemd_dir: /etc/systemd/system
k3s_master_ip: "{{ hostvars[groups[k3s_master_group][0]]['ansible_host'] | default(groups[k3s_master_group][0]) }}"
k3s_master_port: 6443
@@ -33,7 +33,9 @@ k3s_gvisor_platform: systrap
k3s_gvisor_create_runtimeclass: true
k3s_gvisor_config: {}
# https://github.com/google/gvisor/tags
k3s_gvisor_version: 20231204
k3s_gvisor_version: 20231218
k3s_crun: false
k3s_crun_version: 1.12
k3s_sysctl_config: {}
k3s_registries: ""
k3s_kubeconfig: false
27 changes: 27 additions & 0 deletions mkdocs/docs/advanced-configuration/additional-k8s-configs.md
@@ -0,0 +1,27 @@
# Creating additional kubernetes configs
Sometimes you need to create additional config files; for example, you may want to enable API server tracing, which requires a separate file for the tracing configuration.
The ```k3s_additional_config_files``` variable takes care of that.
All additional config files go to the ```/etc/rancher/k3s``` directory, with the filename taken from the ```name``` field and the contents from the ```content``` field.
This happens at the pre-configuration stage, before the k3s installation.

Example:
```yaml
k3s_additional_config_files:
- name: apiserver-tracing.yaml
content: |
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
endpoint: 127.0.0.1:4317
samplingRatePerMillion: 100
```

This will result in the file ```/etc/rancher/k3s/apiserver-tracing.yaml``` with the following content:
```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: TracingConfiguration
endpoint: 127.0.0.1:4317
samplingRatePerMillion: 100
```

Please note that no additional formatting or processing happens at this stage, so you need to take care of indentation and other formatting yourself.
Additionally, editing any of these files will trigger a k3s restart.
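
To have the API server actually pick up such a file, you still need to point it at the path, for example via ```k3s_master_additional_config``` (a minimal sketch, assuming the kube-apiserver ```tracing-config-file``` flag and the path from the example above):
```yaml
k3s_master_additional_config:
  kube-apiserver-arg:
    # hypothetical: pass the generated file to the API server's tracing flag
    - "tracing-config-file=/etc/rancher/k3s/apiserver-tracing.yaml"
```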
@@ -0,0 +1,10 @@
# Additional packages and services
Sometimes certain software requires extra packages on the host system. Examples are distributed filesystems like Longhorn and OpenEBS, which require iscsid.
While it's better to manage such software with dedicated roles, I included these variables for simplicity. If you want OpenEBS Jiva or Longhorn to work, you can specify
```yaml
k3s_additional_packages:
- open-iscsi
k3s_additional_services:
- iscsid
```
The open-iscsi package will be installed, and the iscsid service will be started and enabled at boot time, before the k3s installation.
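
Under the hood this is roughly equivalent to the following plain Ansible tasks (a simplified sketch, not the role's actual task file):
```yaml
- name: Install additional packages
  ansible.builtin.package:
    name: "{{ k3s_additional_packages }}"
    state: present

- name: Start and enable additional services
  ansible.builtin.service:
    name: "{{ item }}"
    state: started
    enabled: true
  loop: "{{ k3s_additional_services }}"
```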
6 changes: 6 additions & 0 deletions mkdocs/docs/advanced-configuration/containerd-template.md
@@ -0,0 +1,6 @@
# Customizing containerd config template
If you use a different version of k3s and/or want to customize the containerd template, you can override the path to the containerd template with the ```k3s_containerd_template``` variable, for example:
```yaml
k3s_containerd_template: "{{ inventory_dir }}/files/k3s/containerd.toml.tmpl.j2"
```
In that case, the role will look for the containerd template at ```files/k3s/containerd.toml.tmpl.j2``` inside the inventory folder you defined in ```ansible.cfg```.
11 changes: 11 additions & 0 deletions mkdocs/docs/advanced-configuration/custom-cni.md
@@ -0,0 +1,11 @@
# Using custom network plugin
If you want to use something different and self-managed instead of the default flannel, you can set the flannel backend to none, which will remove flannel completely:
```yaml
k3s_flannel_backend: none
```
Additionally, if you want to use something with the eBPF dataplane enabled (Calico, Cilium), you might need to disable kube-proxy and mount the bpffs filesystem on the host node:
```yaml
k3s_bpffs: true
k3s_master_additional_config:
disable-kube-proxy: true
```
23 changes: 23 additions & 0 deletions mkdocs/docs/advanced-configuration/custom-manifests.md
@@ -0,0 +1,23 @@
# Adding custom kubernetes manifests

If you need to create additional kubernetes objects after cluster creation, you can use the ```k3s_additional_manifests``` variable.<br>
Example with all possible parameters:

```yaml
k3s_additional_manifests:
- name: kata
state: present
definition:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: kata
handler: kata
```

You can supply the full definition in the "definition" block, including the resource name in metadata.name (kata in the example).<br>
If your object doesn't contain metadata.name, the name from Ansible will be used (kata in the example).<br>
The object name in .definition takes precedence and will be used if both .name and .definition.metadata.name exist.<br>
You can also control the resource state with the state parameter (```absent```, ```present```), which is set to ```present``` by default.<br>
Object creation is delegated to the first node in your ```k3s_master_group```; in a multi-master setup this will be your "initial" master node.<br>
For RBAC, it uses the k3s-generated ```/etc/rancher/k3s/k3s.yaml``` kubeconfig on the same master server, which has cluster-admin rights.<br>
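
For example, to later remove an object managed this way, you can flip its ```state``` to ```absent``` (a hypothetical snippet reusing the RuntimeClass from above):
```yaml
k3s_additional_manifests:
  - name: kata
    state: absent
    definition:
      apiVersion: node.k8s.io/v1
      kind: RuntimeClass
      metadata:
        name: kata
```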
20 changes: 20 additions & 0 deletions mkdocs/docs/advanced-configuration/custom-registries.md
@@ -0,0 +1,20 @@

# Adding custom registries
By using the ```k3s_registries``` variable you can configure custom registries, both origins and mirrors. The format follows the [official](https://rancher.com/docs/k3s/latest/en/installation/private-registry/) config format.
Example:
```yaml
k3s_registries:
mirrors:
docker.io:
endpoint:
- "https://mycustomreg.com:5000"
configs:
"mycustomreg:5000":
auth:
username: xxxxxx # this is the registry username
password: xxxxxx # this is the registry password
tls:
cert_file: # path to the cert file used in the registry
key_file: # path to the key file used in the registry
ca_file: # path to the ca file used in the registry
```
@@ -0,0 +1,9 @@
# Provisioning cluster using external cloud-controller-manager
By default, the cluster is installed with the k3s "dummy" cloud controller manager. If you deploy your k3s cluster on a supported cloud platform (for example Hetzner with their [ccm](https://github.com/hetznercloud/hcloud-cloud-controller-manager)), you need to specify the following parameters **before** the first cluster start, since the cloud controller can't be changed after cluster deployment:

```yaml
k3s_master_additional_config:
disable-cloud-controller: true
k3s_kubelet_additional_config:
- "cloud-provider=external"
```
23 changes: 23 additions & 0 deletions mkdocs/docs/advanced-configuration/getting-kubeconfig.md
@@ -0,0 +1,23 @@

# Getting kubeconfig via role
The role can download the kubeconfig file to the machine Ansible was run from. To use it, set the following variables:
```yaml
k3s_kubeconfig: true
k3s_kubeconfig_context: k3s-de1
```
The role will perform the following:

1. Copy ```/etc/rancher/k3s/k3s.yaml``` to ```~/.kube/config-${ k3s_kubeconfig_context }```

2. Patch it with your preferred context name from the ```k3s_kubeconfig_context``` variable instead of the stock ```default```

3. Patch it with the proper server URL (by default the ansible_host of the first master node in the group specified in ```k3s_master_group```, with port 6443, aka the "initial master"); you can override it with ```k3s_kubeconfig_server```

4. Download the resulting file to the machine running Ansible, at ```~/.kube/config-${ k3s_kubeconfig_context }```; in the current example this will be ```~/.kube/config-k3s-de1```

You can start using it right away.
However, if your master is configured differently (HA IP, load balancer, etc.), you might want to specify the server manually. For this, use the ```k3s_kubeconfig_server``` variable:
```yaml
k3s_kubeconfig_server: "master-ha.k8s.example.org:6443"
```
Please note that the role *will not* track changes to ```/etc/rancher/k3s/k3s.yaml```: if you redeploy your k3s cluster and need a new kubeconfig, just delete the existing local kubeconfig to get a new one.
@@ -0,0 +1,7 @@
# Setting kubelet arguments
To pass arguments to the kubelet, you can use the ```k3s_kubelet_additional_config``` variable:
```yaml
k3s_kubelet_additional_config:
- "image-gc-high-threshold=40"
- "image-gc-low-threshold=30"
```
8 changes: 8 additions & 0 deletions mkdocs/docs/advanced-configuration/setting-sysctl.md
@@ -0,0 +1,8 @@
# Setting sysctl

The role also allows setting arbitrary sysctl settings using the ```k3s_sysctl_config``` variable in dict format:
```yaml
k3s_sysctl_config:
fs.inotify.max_user_instances: 128
```
Settings defined with that variable are persisted in the ```/etc/sysctl.d/99-k3s.conf``` file, so they are loaded again after system reboots.
8 changes: 8 additions & 0 deletions mkdocs/docs/advanced-configuration/specifying-ip.md
@@ -0,0 +1,8 @@
# k3s and external ip
Sometimes k3s fails to properly detect the external and internal IP. For those cases, you can use the ```k3s_external_ip``` and ```k3s_internal_ip``` variables, for example:
```yaml
k3s_external_ip: "{{ ansible_default_ipv4['address'] }}"
k3s_internal_ip: "{{ ansible_vpn0.ipv4.address }}"
```
In this case the external IP will be the Ansible default IP, and the node IP (internal-ip) will be the IP address of the vpn0 interface.
12 changes: 12 additions & 0 deletions mkdocs/docs/gvisor/settings.md
@@ -0,0 +1,12 @@
# Additional configuration
The role supports passing additional settings for gvisor using ```k3s_gvisor_config```. For example, to enable host networking, use:
```yaml
k3s_gvisor_config:
network: host
```
This will become
```toml
[runsc_config]
network = "host"
```
in the gvisor config.
25 changes: 25 additions & 0 deletions mkdocs/docs/gvisor/usage.md
@@ -0,0 +1,25 @@
# Installation and usage
By setting ```k3s_gvisor``` to true, the role will install gvisor, Google's application kernel for containers.<br>
By default it will use the systrap mode; to switch to kvm, set ```k3s_gvisor_platform``` to kvm.<br>
If the platform is set to kvm, the role will also load (and persist) the corresponding kernel module.<br>
It will also create a RuntimeClass kubernetes object if the variable ```k3s_gvisor_create_runtimeclass``` is set to true (the default).<br>
If you want to create it manually:
```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
name: gvisor
handler: runsc
```
After that you should be able to launch gvisor-enabled pods by adding runtimeClassName to the pod spec, e.g.:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: gvisor-nginx
spec:
runtimeClassName: gvisor
containers:
- name: nginx
image: nginx
```
1 change: 1 addition & 0 deletions mkdocs/docs/index.md
@@ -0,0 +1 @@
Ansible role for managing Rancher [k3s](https://k3s.io), a lightweight, CNCF-certified kubernetes distribution.
12 changes: 12 additions & 0 deletions mkdocs/docs/installation/airgapped-install.md
@@ -0,0 +1,12 @@
# Airgapped installation
For environments without internet access, you can use
```yaml
k3s_install_mode: airgap
```
In this mode, the role downloads the k3s binary and bootstrap images locally and transfers them to the target server from the Ansible runner.
This also works for gvisor.
Please note that if you use [additional manifests installation](#adding-custom-kubernetes-manifests), you will need the python3-kubernetes package installed on the system. The role assumes you have an accessible OS distribution mirror configured on that airgapped node; otherwise the installation will fail.
If you can't get that package installed on your system, do not use automatic installation of manifests and set
```yaml
k3s_gvisor_create_runtimeclass: false
```
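Conversely, if your airgapped hosts do have a reachable internal package mirror, one way to get the dependency in place is the additional-packages mechanism described earlier (assuming your distribution ships the module as ```python3-kubernetes```):
```yaml
# hypothetical: install the kubernetes Python bindings from your internal mirror
k3s_additional_packages:
  - python3-kubernetes
```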
27 changes: 27 additions & 0 deletions mkdocs/docs/installation/basics.md
@@ -0,0 +1,27 @@
# Basic installation
This role discovers the installation mode from your ansible inventory.
For working with your inventory, it operates on two basic variables, ```k3s_master_group``` and ```k3s_agent_group```, which are set to ```k3s_master``` and ```k3s_agent``` by default.

The following is an example with a single master and two agents:
```ini
[k3s_master]
kube-master-1.example.org

[k3s_agent]
kube-node-1.example.org
kube-node-2.example.org
```

For the group with masters (k3s_master in this example), you should enable the master installation with the ```k3s_master``` variable:
```yaml
k3s_master: true
```

Accordingly, for agents, use the ```k3s_agent``` variable:
```yaml
k3s_agent: true
```

To select the master server to connect to, you can use the ```k3s_master_ip``` variable.
By default it is set to the first ansible_host in the ansible group specified in the ```k3s_master_group``` variable.
Of course, you can always redefine it manually.
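
A common way to wire this up is through group variables, e.g. (a hypothetical ```group_vars``` layout, not prescribed by the role):
```yaml
# group_vars/k3s_master.yml
k3s_master: true

# group_vars/k3s_agent.yml
k3s_agent: true
```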
89 changes: 89 additions & 0 deletions mkdocs/docs/installation/multi-master/high-availability/haproxy.md
@@ -0,0 +1,89 @@
# HA with haproxy
Using [this haproxy role](https://github.com/Oefenweb/ansible-haproxy). I run my cluster on top of an L3 VPN, so I can't use L2; I just install haproxy on each node, point it at all masters, and point the agents to the localhost haproxy. Dirty, but it works. Example config:

```yaml
haproxy_listen:
- name: stats
description: Global statistics
bind:
- listen: '0.0.0.0:1936'
mode: http
http_request:
- action: use-service
param: prometheus-exporter
cond: if { path /metrics }
stats:
enable: true
uri: /
options:
- hide-version
- show-node
admin: if LOCALHOST
refresh: 5s
auth:
- user: admin
passwd: 'yoursupersecretpassword'
haproxy_frontend:
- name: kubernetes_master_kube_api
description: frontend with k8s api masters
bind:
- listen: "127.0.0.1:16443"
mode: tcp
default_backend: k8s-de1-kube-api
haproxy_backend:
- name: k8s-de1-kube-api
description: backend with all kubernetes masters
mode: tcp
balance: roundrobin
option:
- httpchk GET /readyz
http_check: expect status 401
default_server_params:
- inter 1000
- rise 2
- fall 2
server:
- name: k8s-de1-master-1
listen: "master-1:6443"
param:
- check
- check-ssl
- verify none
- name: k8s-de1-master-2
listen: "master-2:6443"
param:
- check
- check-ssl
- verify none
- name: k8s-de1-master-3
listen: "master-3:6443"
param:
- check
- check-ssl
- verify none
```

That will start haproxy listening on 127.0.0.1:16443 for connections to the k8s masters. You can then redefine the master IP and port for agents with
```yaml
k3s_master_ip: 127.0.0.1
k3s_master_port: 16443
```

And now your connections are balanced between the masters and protected in case one or two masters go down. One downside of this config is that it checks for a 401 reply on the /readyz endpoint, because since a certain version of k8s (1.19, if I recall correctly) this endpoint requires authorization. So you have two options here:

* Continue to rely on the 401 check (not a great solution, since we're just checking that HTTP is up)
* Add ```anonymous-auth=true``` to the apiserver arguments:
```yaml
k3s_master_additional_config:
kube-apiserver-arg:
- "anonymous-auth=true"
```
This will open the /readyz, /healthz, /livez and /version endpoints to anonymous auth, and potentially expose version info. If that concerns you, it's possible to patch the system:public-info-viewer clusterrole to keep only the /readyz, /healthz and /livez endpoints open:
```
kubectl patch clusterrole system:public-info-viewer --type=json -p='[{"op": "replace", "path": "/rules/0/nonResourceURLs", "value":["/healthz","/livez","/readyz"]}]'
```

This proxy also works for the initial agent join, so it's better to set up haproxy before installing k3s and then switch to the HA config.
It will also expose prometheus metrics on 0.0.0.0:1936/metrics. Note that this endpoint (unlike the web UI) won't be protected by a username and password, so adjust your firewall accordingly if needed!

Of course you can use whatever you want: an external cloud LB, nginx, anything; all it needs is TCP support (because in this case we don't want to manage SSL on the load balancer side). But haproxy provides prometheus metrics, has a nice web UI for monitoring and management, and I'm simply familiar with it.