Support for taking ownership of resources managed by k0s #5815


Open · wants to merge 1 commit into main
1 change: 1 addition & 0 deletions docs/manifests.md
@@ -15,6 +15,7 @@ The use of Manifest Deployer is quite similar to the use of the `kubectl apply` command
- Each directory that is a direct descendant of `/var/lib/k0s/manifests` is considered to be its own "stack". Nested directories (further subfolders), however, are excluded from the stack mechanism and thus are not automatically deployed by the Manifest Deployer.

- k0s uses the independent stack mechanism for some of its internal in-cluster components, as well as for other resources. Be sure to only touch the manifests that are not managed by k0s.
- If you want to take ownership of certain resources, add the `k0s.k0sproject.io/managed: "false"` label to them. k0s then stops managing the labeled resources and will no longer update or prune them (see the example after this list).

- Explicitly define the namespace in the manifests (Manifest Deployer does not have a default namespace).
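As a sketch of this opt-out (the resource name and namespace below are hypothetical), the label can be set directly in a manifest; note the quotes, since Kubernetes label values must be strings:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # hypothetical resource originally deployed via a k0s stack
  namespace: example     # Manifest Deployer has no default namespace, so always set one
  labels:
    k0s.k0sproject.io/managed: "false"   # opts this resource out of k0s management
```

For a resource that is already deployed, `kubectl label configmap example-config -n example k0s.k0sproject.io/managed=false` achieves the same.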

3 changes: 3 additions & 0 deletions pkg/applier/meta.go
@@ -29,6 +29,9 @@ const (
// NameLabel stack label
NameLabel = MetaPrefix + "/stack"

// ManagedLabel defines the label key used to indicate whether a resource is managed by k0s.
ManagedLabel = MetaPrefix + "/managed"

// ChecksumAnnotation defines the annotation key used for stack checksums
ChecksumAnnotation = MetaPrefix + "/stack-checksum"

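Since the documentation above names the full label key `k0s.k0sproject.io/managed`, `MetaPrefix` must be `k0s.k0sproject.io`. A minimal sketch of how the constant resolves (values inferred from this diff, not copied from the full file):

```go
package main

import "fmt"

// Mirrors the constants in pkg/applier/meta.go; the MetaPrefix value is
// inferred from the label key documented in docs/manifests.md above.
const (
	MetaPrefix   = "k0s.k0sproject.io"
	ManagedLabel = MetaPrefix + "/managed"
)

func main() {
	fmt.Println(ManagedLabel) // k0s.k0sproject.io/managed
}
```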
8 changes: 7 additions & 1 deletion pkg/applier/stack.go
@@ -114,6 +114,11 @@ func (s *Stack) Apply(ctx context.Context, prune bool) error {
errs = append(errs, err)
continue
} else { // The resource already exists, we need to update/patch it
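// Honor the opt-out: a resource whose ManagedLabel value is "false" is no longer touched by the applier.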
if serverResource.GetLabels()[ManagedLabel] == "false" {
s.log.Debug("resource is not managed by k0s, skipping")
continue
}

localChecksum := resource.GetAnnotations()[ChecksumAnnotation]
if serverResource.GetAnnotations()[ChecksumAnnotation] == localChecksum {
s.log.Debug("resource checksums match, no need to update")
@@ -385,7 +390,8 @@ func (s *Stack) getPruneableResources(ctx context.Context, drClient dynamic.Reso
for _, resource := range resourceList.Items {
// We need to filter out objects that do not actually have the stack label set
// There are some cases where we get "extra" results, e.g.: https://github.com/kubernetes-sigs/metrics-server/issues/604
if !s.isInStack(resource) && len(resource.GetOwnerReferences()) == 0 && resource.GetLabels()[NameLabel] == s.Name {
labels := resource.GetLabels()
if !s.isInStack(resource) && len(resource.GetOwnerReferences()) == 0 && labels[NameLabel] == s.Name && labels[ManagedLabel] != "false" {
s.log.Debugf("adding prunable resource: %s", generateResourceID(resource))
pruneableResources = append(pruneableResources, resource)
}
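Taken together, the two stack.go changes make `ManagedLabel` an opt-out switch for both the update and the prune paths. A condensed sketch of the shared check as it might look inside `pkg/applier` (the helper name `isManagedByK0s` is hypothetical and not part of this PR):

```go
package applier

import "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

// isManagedByK0s reports whether the applier should still manage a resource.
// ManagedLabel is the constant added in meta.go above. Only the exact string
// "false" opts a resource out; a missing label (the zero value "" from the
// map lookup) keeps the resource managed by k0s.
func isManagedByK0s(resource unstructured.Unstructured) bool {
	return resource.GetLabels()[ManagedLabel] != "false"
}
```

Note that any other value (`False`, `no`, an empty string) keeps the resource under k0s management, which matches the string comparisons in both hunks above.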