CA-MCM overhaul #251
/assign @elankath @unmarshall @rishabh-11
/assign
I was wondering whether we can leverage the "ground truth" a.k.a. the kube-scheduler more directly (no CA at all), e.g.:
The point is, every simulation that is not based on the real kube-scheduler will be flawed, so why not find a way to trick it into doing what we need instead of an approximation (like the CA tries)?
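A rough sketch of what "tricking" the real kube-scheduler could look like from MCM's side, assuming a hypothetical SchedulerSimulation wrapper (not an existing API) around an in-process scheduler that is fed simulated cluster state instead of the live cluster:

```go
// Illustrative sketch only: none of these names are an existing API. The idea
// is to ask the real scheduler whether pending pods would fit on nodes built
// from a machine template, instead of approximating its behaviour.
package simulation

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// SchedulerSimulation is an assumed wrapper around an in-process
// kube-scheduler operating on a simulated (not live) cluster state.
type SchedulerSimulation interface {
	// AddCandidateNode injects a node derived from a MachineClass /
	// MachineDeployment template into the simulated state.
	AddCandidateNode(node *corev1.Node) error
	// Schedulable returns the subset of the given pending pods that the real
	// scheduler would bind to some node in the simulated state.
	Schedulable(ctx context.Context, pending []*corev1.Pod) ([]*corev1.Pod, error)
}

// recommendScaleUp naively adds template nodes one at a time and re-runs the
// scheduler simulation until no further pending pod becomes schedulable,
// returning how many nodes were added before progress stopped.
func recommendScaleUp(ctx context.Context, sim SchedulerSimulation, template *corev1.Node, pending []*corev1.Pod) (int, error) {
	nodes := 0
	for len(pending) > 0 {
		candidate := template.DeepCopy()
		candidate.Name = fmt.Sprintf("%s-sim-%d", template.Name, nodes)
		if err := sim.AddCandidateNode(candidate); err != nil {
			return nodes, err
		}
		scheduled, err := sim.Schedulable(ctx, pending)
		if err != nil {
			return nodes, err
		}
		if len(scheduled) == 0 {
			break // more nodes of this template do not help
		}
		nodes++
		pending = remaining(pending, scheduled)
	}
	return nodes, nil
}

// remaining filters out pods the simulation reported as schedulable.
func remaining(pending, scheduled []*corev1.Pod) []*corev1.Pod {
	done := map[string]bool{}
	for _, p := range scheduled {
		done[string(p.UID)] = true
	}
	var rest []*corev1.Pod
	for _, p := range pending {
		if !done[string(p.UID)] {
			rest = append(rest, p)
		}
	}
	return rest
}
```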
Running a second kube-scheduler (can run in-process) that operates on a simulated model is a pretty nice idea. It would also reduce the implementation effort for such a large task. Though, do we have any Gardener customers that run custom schedulers?
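For the in-process part specifically, here is a minimal sketch of embedding the stock kube-scheduler in another Go binary. The kubeconfig path is a placeholder, and pointing the scheduler at a simulated model (e.g. a local test apiserver holding the modelled state) rather than the live cluster is the part that would still need to be built; importing k8s.io/kubernetes as a library also needs the usual module replace gymnastics.

```go
package main

import (
	"os"

	"k8s.io/component-base/cli"
	"k8s.io/kubernetes/cmd/kube-scheduler/app"
)

func main() {
	// Embed the real kube-scheduler command instead of shelling out to it.
	cmd := app.NewSchedulerCommand()
	cmd.SetArgs([]string{
		// Placeholder: wherever the simulated cluster state is served from.
		"--kubeconfig=/path/to/simulated-cluster.kubeconfig",
		"--leader-elect=false",
	})
	os.Exit(cli.Run(cmd))
}
```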
@elankath I don't know, but the Kubernetes or Gardener CA or our own simulation would not match such a custom scheduler either. The reason we introduced kube-scheduler configurability (e.g. bin-packing) was that even large teams didn't want to run control plane components themselves. I don't think we need to consider those cases, and if we do, then we probably shouldn't consider an automated solution but rather provide an API so that they can provision and deprovision nodes themselves. They would then have to build the bridge between their scheduler and our API themselves, but unless somebody asks, I wouldn't even consider that. I have difficulty imagining many/some/anybody running their own kube-scheduler.
Reason / Discussion:
Currently there are some CA-MCM interaction issues that we want to fix. One solution is to change the entire way CA and MCM currently work together.
This issue is to discuss the feasibility of such approaches.
Terms for the discussion (to avoid confusion):
k/CA = kubernetes CA
g/CA = gardener CA (fork of k/CA)
new-CA = new CA code we'll implement, which could be a component or a library

Dimensions of discussion:
Use new-CA as a library inside MCM: the new-CA library only recommends and MCM decides, whereas currently g/CA's recommendation is binding (see the sketch after this list).
Leverage the current k/CA RunOnce() flow, e.g. if a scale-up happened / until it happens, or a scale-down in cool-down.
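A sketch, under assumptions, of the "new-CA as a library" dimension: the library only produces recommendations (modelled on the k/CA RunOnce() loop) and MCM stays the deciding and acting component. All names here are illustrative, not an agreed API.

```go
package newca

import (
	"context"
	"time"
)

// ScaleRecommendation is purely advisory: which MachineDeployment to scale,
// by how much, and why. MCM is free to ignore or delay it.
type ScaleRecommendation struct {
	MachineDeployment string
	Delta             int // positive = scale-up, negative = scale-down
	Reason            string
	ValidUntil        time.Time // MCM may drop stale recommendations
}

// Recommender is the library entry point. One RunOnce call inspects the
// cluster snapshot and returns recommendations without mutating anything,
// mirroring the k/CA RunOnce() flow.
type Recommender interface {
	RunOnce(ctx context.Context, now time.Time) ([]ScaleRecommendation, error)
}
```

On the MCM side, the controller loop would call RunOnce and then apply its own gates (cool-down after a scale-up, disruption budgets, quotas) before scaling the MachineDeployment, which keeps the final decision in MCM rather than in the library.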