Preamble

It seems like many projects in the Kubernetes space today have a CLI tool, including linkerd, argocd, Cluster API, Velero, cilium, helm, and many more.
This "standard" makes a certain amount of sense within this ecosystem; Kubernetes itself is, in its most basic form after installation (and even for many clusters, their entire lifetimes), usually interacted with using kubectl. It is relevant to point out, however, that kubectl is a relatively complex wrapper for an HTTP REST API and is not required for interaction with Kubernetes beyond some initial configuration if other tools are used.
Cluster API's clusterctl, however, is different:
- It provides no direct interaction with any of its deployed components.
- It requires kubectl to do anything against either a management cluster or workload clusters.
- All it really does is generate manifests, which it then applies via kubectl.
In clusterctl's defence, it acts as a central tool for generating manifests from any provider, in much the same way that Terraform does. This is a strength, since it doesn't require a user to plumb through the depths of documentation for each and every provider to get the correct CRD syntax. However, I believe this is actually a symptom of a deeper underlying problem, which I'll get to below.
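For context, what clusterctl ultimately emits is a set of plain Cluster API objects. A minimal sketch of such a manifest, assuming the Docker infrastructure provider and hypothetical resource names (this is illustrative, not output from a real run):

```yaml
# Sketch of the kind of manifest clusterctl generates; the names and the
# Docker provider here are assumptions for illustration.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: demo
```

Applied with `kubectl apply -f`, a manifest like this is exactly the interaction clusterctl wraps.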
What's Wrong With That?
I do not wish to detract from the work that clusterctl does. However, its usefulness drops very quickly at any kind of scale, and in particular when we wish to leverage GitOps.
One can, of course, use clusterctl to spin out a set of manifests to serve as templates for a particular use case; once that is done, the manifests can be placed into GitOps and operated on from there.
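Once the rendered manifests are committed, a GitOps controller takes over and clusterctl is out of the loop. A minimal Flux sketch, assuming a hypothetical repo layout with the manifests under ./clusters and a GitRepository named fleet-repo:

```yaml
# Hypothetical Flux Kustomization reconciling committed Cluster API manifests.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: capi-clusters
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters        # assumed location of the generated manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: fleet-repo      # assumed Git source for the fleet repository
```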
But then clusterctl becomes YAFT (Yet Another Freaking Tool) that I have to install and keep up to date, or install once and then delete, or find a Docker container for, or something. And at the end of the day it's unnecessary, unlike kubectl, provided the documentation exists to back that up.
I Still Don't Understand The Problem You Have
I see the proliferation of, and preference towards, CLI-first tooling as a problem in the declarative landscape of Kubernetes and, more specifically, GitOps. However, my opinion on this is entirely beside the point (though it does inform it).
Ultimately, the issue is this: most Cluster API and provider documentation uses clusterctl but does not provide equivalent documentation using plain CRD manifests (which is ultimately all clusterctl is doing under the hood). Those of us who spin up clusters entirely declaratively, using GitOps from the get-go, have no choice but to install and use clusterctl, even if only once, which is incompatible with that model.
This is the underlying issue: documentation, or more specifically examples, are severely lacking for the operator/CRD model versus clusterctl.
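To illustrate the kind of example such documentation would need to cover, here is a hedged sketch of a KubeadmControlPlane object, again with hypothetical names, the Docker provider assumed, and the kubeadm configuration left empty:

```yaml
# Illustrative only; names, versions, and field values are assumptions.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: demo-control-plane
  namespace: default
spec:
  replicas: 3
  version: v1.29.0
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate
      name: demo-control-plane
  kubeadmConfigSpec: {}   # cluster-specific kubeadm settings go here
```

Getting fields like these right from provider docs alone, without clusterctl rendering them for you, is precisely where the examples are missing.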
Why Not Just Do It Yourself
The reason I'm submitting this as a discussion rather than adding to the documentation myself via PRs (which I may still do) and/or opening an issue is that I think it's problematic for this project and all of the related providers not to provide complete examples for the operator method/plain CRDs without the use of clusterctl. I propose, then, that this project amend its own practices to require equivalent documentation for clusterctl and plain YAML manifests, and also require (if that's possible) that related providers do the same.
As good as the Cluster API book is, I, as a new user wanting to go the Operator/GitOps route, found it impossible to actually do so without the use of clusterctl.
I look forward to your thoughts on this.