
✨ Add ability to register runnables as pre-start hooks #2044

Open
wants to merge 1 commit into base: main from prestart-hook

Conversation

terinjokes
Contributor

@terinjokes terinjokes commented Nov 14, 2022

When implementing a controller that uses leader election, there may be
work that needs to be done after winning the election but before
processing enqueued requests. For example, a controller may need to
build up an internal mapping of the current state of the cluster before
it can begin reconciling.

This changeset adds support for adding prestart hooks to
controller-runtime's manager implementation. This hook runs after the
manager has been elected leader, immediately before the leader election
controllers are started.

Related #607

@k8s-ci-robot k8s-ci-robot added do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Nov 14, 2022
@k8s-ci-robot
Contributor

Hi @terinjokes. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Nov 14, 2022
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. labels Nov 14, 2022
@alvaroaleman
Member

For example, a controller may need to build up an internal mapping of the current state of the cluster before it can begin reconciling.

Can you elaborate? Why not use a sync.Once at the beginning of the reconciliation? What you want to do is definitely already possible today, albeit a bit cumbersome. I am not convinced this is a use case common enough to warrant adding first-class support.

@terinjokes
Contributor Author

terinjokes commented Nov 15, 2022

Thanks @alvaroaleman for taking a look at this PR. Unfortunately, I've found that using sync.Once inside of a reconciler function is not a great alternative on both the ergonomics and correctness fronts. Attempting to do so will, depending on the controller involved, cause incorrect reconciliations at best, or loss of data at worst. I'll explore these in turn.

Ergonomics

Currently the reconciler function has a clear scope: it processes a single request to reconcile objects in the cluster. This makes testing it very easy. By adding more functionality to the reconciler function, we make it more difficult to test: now we must ensure the once function is safe to run in tests, be able to swap it for one that is safe, reuse an already-completed once across tests, or split our reconcile function in two. I'm not a fan of any of these options; they all complicate testing the controller.

Correctness

Attempting to use sync.Once to perform work before starting reconciliation is difficult to do correctly, especially when coupled with the internal controller logic.

The sync.Once API does not have provisions for returning an error. The do function must be carefully written to properly lift the error out for all workers. Even if we do lift the error out of the do function, there is no provision for the reconcile function to stop the controller: returning an error only requeues the item.
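
For illustration, a minimal sketch of what that looks like inside a reconciler (the myReconciler type and buildClusterState helper below are hypothetical, not code from this PR):

package example

import (
    "context"
    "sync"

    ctrl "sigs.k8s.io/controller-runtime"
)

type myReconciler struct {
    initOnce sync.Once
    initErr  error
}

// buildClusterState stands in for whatever one-time setup is needed.
func (r *myReconciler) buildClusterState(ctx context.Context) error { return nil }

func (r *myReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    // Every worker funnels through the Once; the error has to be stashed on
    // the struct so that later calls can observe it.
    r.initOnce.Do(func() {
        r.initErr = r.buildClusterState(ctx)
    })
    if r.initErr != nil {
        // Returning an error only requeues the request; there is no way to
        // stop the controller from here.
        return ctrl.Result{}, r.initErr
    }
    // ... normal reconciliation ...
    return ctrl.Result{}, nil
}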

We can attempt to work around this by using panic in our do function. This will eventually stop the entire program by default, but now we're using panics for flow control. If we don't recover on the main goroutine, we lose the ability to handle these failures, such as doing any cleanup work or releasing any locks.

If one sets the controller option RecoverPanic, however, the panic is recovered by the internal controller and turned into an error, and the item is simply requeued.

This becomes even harder to do correctly if you run multiple workers with MaxConcurrentReconciles. As the Godoc for sync.Once explains:

If f panics, Do considers it to have returned; future calls of Do return without calling f.

This allows other workers to continue past the once while the panic is being handled in the panicking goroutine, and potentially take destructive actions due to incomplete work by the once function.
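
A small standalone example of that behavior: the second Do returns without running f, even though the first attempt panicked before completing its work.

package main

import (
    "fmt"
    "sync"
)

func main() {
    var once sync.Once

    // First attempt panics partway through; Do still marks the Once as done.
    func() {
        defer func() { _ = recover() }()
        once.Do(func() { panic("setup failed") })
    }()

    // A second worker reaching the Once proceeds immediately: f is never
    // retried, even though the first attempt never finished its work.
    ran := false
    once.Do(func() { ran = true })
    fmt.Println("second Do ran f:", ran) // prints: second Do ran f: false
}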


This pull request, which is roughly 15 lines, runs the prestart hooks before starting workers, so there are no synchronization issues across workers to deal with. A prestart function returning an error stops the controller: Start returns the error, which the manager can use to stop other Runnables and release resources before returning the error to the main package.

This follows the normal error flow that users of controller-runtime expect, and eliminates the possibility of handling the error case incorrectly.


For example, a controller may need to build up an internal mapping of the current state of the cluster before it can begin reconciling.

Can you elaborate?

I've seen this problem often when we need to compute things across the entire cluster before being able to reconcile individual resources. For the current project I'm working on, I need to compute address allocation maps so I don't allocate the same address twice and cause havoc in networking. I've had use cases where I would have used a hook, and due to their absence I usually end up using client-go instead, but that's been a miserable experience.

As people continue to build on Kubernetes, especially when combined with Crossplane and kcp, I expect even more people needing to do things between winning a leader election and starting reconcilers.

@alvaroaleman
Member

Right, a sync.Once doesn't work but you can make it work with a lock.
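
For example, roughly (a sketch only; ensureInitialized and buildClusterState are hypothetical helpers):

package example

import (
    "context"
    "sync"

    ctrl "sigs.k8s.io/controller-runtime"
)

type myReconciler struct {
    mu          sync.Mutex
    initialized bool
}

// buildClusterState stands in for whatever one-time setup is needed.
func (r *myReconciler) buildClusterState(ctx context.Context) error { return nil }

// ensureInitialized differs from sync.Once in two ways: a failed attempt is
// retried on the next reconcile, and the error is returned to the caller.
func (r *myReconciler) ensureInitialized(ctx context.Context) error {
    r.mu.Lock()
    defer r.mu.Unlock()
    if r.initialized {
        return nil
    }
    if err := r.buildClusterState(ctx); err != nil {
        return err
    }
    r.initialized = true
    return nil
}

func (r *myReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    if err := r.ensureInitialized(ctx); err != nil {
        return ctrl.Result{}, err
    }
    // ... normal reconciliation ...
    return ctrl.Result{}, nil
}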

I am open to the idea of running code after leader election was won and before a controller was started, but I dislike the idea of tying that to the controller.

I'd suggest that you implement something like

// LeaderElectionRunnable knows if a Runnable needs to be run in the leader election mode.
type LeaderElectionRunnable interface {
    // NeedLeaderElection returns true if the Runnable needs to be run in the leader election mode.
    // e.g. controllers need to be run in leader election mode, while webhook server doesn't.
    NeedLeaderElection() bool
}
in the manager, as the manager is the entity responsible for managing the lifecycle of components.
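
A minimal sketch of a runnable along those lines (the prestartWork type is hypothetical; manager.Runnable, LeaderElectionRunnable, and mgr.Add are existing controller-runtime API):

package example

import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/manager"
)

// prestartWork is a hypothetical runnable added via mgr.Add. Because
// NeedLeaderElection returns true, the manager only starts it after winning
// the leader election.
type prestartWork struct {
    setup func(context.Context) error
}

func (p *prestartWork) Start(ctx context.Context) error {
    return p.setup(ctx)
}

func (p *prestartWork) NeedLeaderElection() bool { return true }

// Compile-time checks against the existing manager interfaces.
var (
    _ manager.Runnable               = &prestartWork{}
    _ manager.LeaderElectionRunnable = &prestartWork{}
)

// Usage (sketch): mgr.Add(&prestartWork{setup: buildClusterState})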

Note that depending on your use case a source.SyncingSource might also be useful.

@alvaroaleman
Member

The easiest way to achieve what you are doing here (running code before a given controller starts) would btw be to construct one through NewUnmanaged and wrap it with something that executes your code in Start() before calling the controller's Start().
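
Roughly like this (a sketch only; wrappedController and setup are hypothetical, and the NewUnmanaged signature is the one from the controller-runtime versions discussed here):

package example

import (
    "context"

    "sigs.k8s.io/controller-runtime/pkg/controller"
    "sigs.k8s.io/controller-runtime/pkg/manager"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// wrappedController is a hypothetical runnable that runs prestart work in its
// Start() before handing off to the unmanaged controller's Start().
type wrappedController struct {
    inner    controller.Controller
    prestart func(context.Context) error
}

func (w *wrappedController) Start(ctx context.Context) error {
    if err := w.prestart(ctx); err != nil {
        return err
    }
    return w.inner.Start(ctx)
}

// NeedLeaderElection keeps the wrapper in the leader-election runnable group.
func (w *wrappedController) NeedLeaderElection() bool { return true }

func setup(mgr manager.Manager, r reconcile.Reconciler, prestart func(context.Context) error) error {
    // NewUnmanaged constructs a controller that is not added to the manager
    // automatically.
    c, err := controller.NewUnmanaged("my-controller", mgr, controller.Options{Reconciler: r})
    if err != nil {
        return err
    }
    // Watches would be configured on c here, before adding the wrapper.
    return mgr.Add(&wrappedController{inner: c, prestart: prestart})
}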

@terinjokes
Contributor Author

I put it in the controller because the content of a prestart hook, in my experience, is tied to the content of the reconciler. The controller is responsible for starting and maintaining watches, and gives us a spot to run between the caches being synced and the reconciler function being called.

Putting it in the manager, which I also considered, means it is running separately from a controller setting up watches and waiting for caches to synchronize, and would have to do everything by directly calling the API server. This seems like a waste if we're also trying to cache the very same resources. It would also be a waste if the controller fails to synchronize the caches: it will exit with an error.

@alvaroaleman
Member

Putting it in the manager, which I also considered, means it is running separately from a controller setting up watches and waiting for caches to synchronize, and would have to do everything by directly calling the API server.

No, it will start the caches first.

I put it in the controller because the content of a prestart hook, in my experience, is tied to the content of the reconciler. The controller is responsible for starting and maintaining watches, and gives us a spot to run between the caches being synced and the reconciler function being called.

Be that as it may for your use case, but then the next person is going to come along and want it independent of a specific controller, and then we have two options, people will ask which to use when, and I don't want that. Controller-Runtime is supposed to make the standard use case easy and the advanced use case possible. Running code in leader election before any controllers start is not possible today; doing it before a specific controller starts is possible by either wrapping it or using a SyncingSource, hence no need for this.

@terinjokes
Contributor Author

No, it will start the caches first.

My understanding is that a controller's watches are started and caches are synced in the controller's Start function. Before this point no informers have been set up.

// NB(directxman12): launch the sources *before* trying to wait for the
// caches to sync so that they have a chance to register their intendeded
// caches.
for _, watch := range c.startWatches {
    c.LogConstructor(nil).Info("Starting EventSource", "source", fmt.Sprintf("%s", watch.src))
    if err := watch.src.Start(ctx, watch.handler, c.Queue, watch.predicates...); err != nil {
        return err
    }
}

// Start the SharedIndexInformer factories to begin populating the SharedIndexInformer caches
c.LogConstructor(nil).Info("Starting Controller")
for _, watch := range c.startWatches {
    syncingSource, ok := watch.src.(source.SyncingSource)
    if !ok {
        continue
    }

    if err := func() error {
        // use a context with timeout for launching sources and syncing caches.
        sourceStartCtx, cancel := context.WithTimeout(ctx, c.CacheSyncTimeout)
        defer cancel()

        // WaitForSync waits for a definitive timeout, and returns if there
        // is an error or a timeout
        if err := syncingSource.WaitForSync(sourceStartCtx); err != nil {
            err := fmt.Errorf("failed to wait for %s caches to sync: %w", c.Name, err)
            c.LogConstructor(nil).Error(err, "Could not wait for Cache to sync")
            return err
        }

        return nil
    }(); err != nil {
        return err
    }
}

While the manager has a "Caches" runnable group, which would get started before leader election controllers, I can't find anything in controller-runtime that uses it. The only two structs that implement the hasCache interface are cluster and controllerManager. In all my controllers the Caches runnable group always has an empty startQueue.

Running code in leader election before any controllers start is not possible today; doing it before a specific controller starts is possible by either wrapping it or using a SyncingSource, hence no need for this.

Since the watches are started and the caches synchronized by the controller, I do not understand your proposed solution of wrapping the controller. From my understanding, it's not currently possible to run code between the watches being started and synchronized and the reconcile function being called. This pull request adds that possibility.

If I'm misunderstanding something, do you have pseudo-code to point me in the right direction?

we have two options, people will ask which to use when, and I don't want that

I'm happy to rename the function and expand the Godocs to reduce confusion.

@alvaroaleman
Member

My understanding is that a controller's watches are started and caches are synced in the controller's Start function. Before this point no informers have been set up.

After the cache has started, getting any informer from it will result in an informer for that resource being created if it doesn't exist yet. The client you get from the manager uses the cache for reading and auto-creates any missing informers, meaning you can just use it after the cache has started and everything will work.
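
For example, a sketch of a prestart function that relies on that behavior (buildClusterState is hypothetical):

package example

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

// buildClusterState is a hypothetical prestart function. Reading through the
// manager's client is served from the cache; if no informer for the listed
// type exists yet, one is created and synced before List returns.
func buildClusterState(ctx context.Context, mgr manager.Manager) error {
    var pods corev1.PodList
    if err := mgr.GetClient().List(ctx, &pods); err != nil {
        return err
    }
    // ... compute whatever cluster-wide state is needed from pods ...
    return nil
}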

While the manager has a "Caches" runnable group, which would get started before leader election controllers, I can't find anything in controller-runtime that uses it.

It is used by the cluster.Cluster, which among other things contains a cache. It gets started before leader election but won't contain any informers yet, as described above.

@terinjokes
Contributor Author

Okay, now I understand what you were trying to explain. I can take this pull request in two different directions now:

  1. Move this hook to the manager so that it can run before the LeaderElection runnable group, or
  2. Extend package builder to be able to construct unmanaged controllers.

Do you have a preference?

@alvaroaleman
Member

Move this hook to the manager so that it can run before the LeaderElection runnable group

That sounds fine to me

Extend package builder to be able to construct unmanaged controllers.

No. The builder is a helper that should make the standard cases easy. Making all cases possible through it would contradict that goal. You can construct an unmanaged controller without the builder.

@terinjokes
Contributor Author

I disagree that this would be a radical change to builder: the use case is already there for its tests. But I'll table it and open a proposal at a future time. It's not needed for me right now since you've selected option one.

@terinjokes
Contributor Author

terinjokes commented Nov 16, 2022

I pushed one version of this change for review. Would you prefer a new runnable group be made?

Edit: I'm not entirely happy with the current way these hooks work, so I'm going to take another go at it.

@terinjokes terinjokes force-pushed the prestart-hook branch 3 times, most recently from de6797a to 3b513d7 Compare November 17, 2022 13:20
@terinjokes
Contributor Author

This is ready for another review, when you have a moment.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Dec 27, 2022
@terinjokes
Contributor Author

Rebased changes on top of 0.17 and the tests are passing locally.

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@terinjokes
Contributor Author

/retest

@terinjokes terinjokes force-pushed the prestart-hook branch 2 times, most recently from e48fcc2 to 00fb43a Compare January 29, 2024 23:49
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2024
@terinjokes
Contributor Author

I've rebased again onto the main branch.

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 6, 2024
@terinjokes terinjokes changed the title ⚠️ Add ability to register runnables as pre-start hooks ✨ Add ability to register runnables as pre-start hooks May 6, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Aug 8, 2024
@briansok

@terinjokes Can I help in any way to get this PR merged? It has already been a while since you opened this PR, but this feature is also needed for a situation I'm currently in. I could get some more hands if necessary. Please let me know, and thanks for your work :)

@terinjokes
Contributor Author

I was about to rebase for 0.19; I've been busy the last 24 hours.

Other than that, I'm not sure why it's not being reviewed or merged.

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Aug 25, 2024
@k8s-ci-robot
Contributor

@terinjokes: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

  • Test name: pull-controller-runtime-apidiff
  • Commit: 07216dd
  • Details: link
  • Required: false
  • Rerun command: /test pull-controller-runtime-apidiff

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 24, 2024
@terinjokes
Contributor Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 24, 2024
Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.