forked from kubernetes/kubernetes
authorizer/webhook: make webhook authorizer cluster aware #43
Open. s-urbaniak wants to merge 89 commits into kcp-dev:feature-logical-clusters-1.23 from s-urbaniak:delegating-authorizer.
Conversation
with duplicated k8s code:
- cmd/kube-apiserver/app/* -> pkg/genericcontrolplane/*
- pkg/kubeapiserver/admission/* -> pkg/genericcontrolplane/admission/*
... stripped down from kube-apiserver, based on the following duplicated k8s files:
- pkg/controlplane/instance.go -> pkg/genericcontrolplane/apis/apis.go
- pkg/kubeapiserver/options/authentication.go -> pkg/genericcontrolplane/options.go
- cmd/kube-apiserver/app/options/options.go -> pkg/genericcontrolplane/options/options.go
Bootstrapped files are copied from existing k8s code:
- pkg/api/legacyscheme/scheme.go -> pkg/api/genericcontrolplanescheme/scheme.go
- pkg/apis/core/install/install.go -> pkg/apis/core/install/genericcontrolplane/install.go
- pkg/apis/core/register.go -> pkg/apis/core/register_generic_control_plane.go
- pkg/apis/core/v1/register.go -> pkg/apis/core/v1/register_generic_control_plane.go
- staging/src/k8s.io/api/core/v1/register.go -> staging/src/k8s.io/api/core/v1/register_generic_control_plane.go
- pkg/registry/core/rest/storage_core.go -> pkg/registry/core/rest/genericcontrolplane/storage_core.go
Some of the core API group is required for generic control plane server startup - specifically events and namespaces - and so the core API group must be installed. Instead of loading it as the legacy group, attempt to make it a standard API group and adapt the generic startup code as necessary to enable that.
The following existing k8s file is duplicated to prepare the change: pkg/kubeapiserver/default_storage_factory_builder.go -> pkg/kubeapiserver/legacy_storage_factory_builder.go
…rage A number of hardcoded assumptions exist in the standard resource factory that are specific to the versions in use. Instead, create a separate factory without these assumptions. In the future this might instead become a way to inject a different form of placement logic (identify resources with a unique lineage and map them to a single key that can have multiple readers).
Allows multiple keyspaces to overlap.
This is a generalization of the prototype approach used to retrieve the clusterName during wildcard watches across clusters. For the etcd3 storage functions, we should detect when an operation involves (or results in) an empty clusterName. However, we only emit a warning, in order to avoid breaking normal usage of Kubernetes when starting kube-apiserver. In the future, we should add an abstraction layer that allows several implementations of clusterName setting/retrieval, rather than depending on a single etcd-based implementation. Signed-off-by: David Festal <[email protected]>
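The warn-only behaviour described above can be sketched roughly as follows. The function name, the `strict` switch, and the default behaviour are illustrative assumptions for this sketch, not the fork's actual API:

```go
package main

import (
	"fmt"
	"log"
)

// checkClusterName sketches the behaviour described above: an empty
// clusterName is suspicious for cluster-aware etcd3 storage operations,
// but plain kube-apiserver usage must keep working, so by default we
// only log a warning instead of rejecting the operation.
func checkClusterName(op, clusterName string, strict bool) error {
	if clusterName != "" {
		return nil
	}
	if strict {
		return fmt.Errorf("%s: operation involves an object with an empty clusterName", op)
	}
	log.Printf("warning: %s involves an object with an empty clusterName", op)
	return nil
}

func main() {
	fmt.Println(checkClusterName("create", "admin", false))  // <nil>
	fmt.Println(checkClusterName("create", "", true) != nil) // true; strict mode rejects
}
```

A future abstraction layer would presumably replace the boolean with pluggable policies per storage implementation.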
This may be useful in cases where some legacy scheme resources are not registered, as is the case, for example, in KCP. There have to be special cases for `core` resources, which have an empty group and live at a dedicated API prefix. This doesn't fully work with `kubectl` or the Kubernetes client yet, because those client tools systematically use strategic merge patch for legacy scheme resources, and CRDs don't support strategic merge patch for now. The fix for this limitation will come in a subsequent commit. Signed-off-by: David Festal <[email protected]>
…netes#6) Currently CRDs only allow defining custom table columns based on a single basic JSONPath expression. This is obviously not sufficient to reproduce the various column definitions of legacy scheme objects like deployments, since those definitions are implemented in Go code. So, for example, in KCP, when the `deployments` API resource is brought back from physical clusters in the form of a CRD, the table columns shown by a `kubectl get deployments` command are not the ones typically expected. This PR adds a temporary hack that replaces the table converter of CRDs that bring back legacy-schema resources with the default table converter of the related legacy scheme resource. In the future this should probably be replaced by a new mechanism that allows customizing some behaviors of resources defined by CRDs. Signed-off-by: David Festal <[email protected]>
... based on CRD published openapi v2 definitions
In order to compile from a Go module, the API server requires access to GetOpenAPIDefinitions() for at least a subset of the core code. The minimal control plane would need OpenAPI docs for all inline objects; one way to solve that would be to split the setup based on which objects are registered and to accept definitions at init time. In either case, a downstream vendor would have to reference some of these objects, so exclude them from being ignored temporarily until a longer-term solution can be provided.
...This commit hacks in CRD tenancy by ensuring that logical clusters are taken into account in:
- CRD-related controllers
- APIService-related controllers
- discovery + OpenAPI endpoints
In the current Kubernetes design, these three areas are highly coupled and intertwined, which explains why this commit had to hack the code at various levels: the client-go level, the controllers level, and the HTTP handlers level. While this gives a detailed idea of which code needs to be touched in order to enable CRD tenancy, a clean implementation would first require some refactoring to build the abstraction layers needed to decouple those areas. In the current implementation, we ensure that the clusterName is managed consistently and do not allow any operation that involves (or results in) an object having an empty `clusterName`. Signed-off-by: David Festal <[email protected]>
In some contexts, like the controller-runtime library used by the Operator SDK, all the resources of the client-go scheme are created/updated using the protobuf content type. However, when these resources are in fact added as CRDs, as in the KCP minimal API server scenario, they cannot be created/updated, since protobuf (de)serialization is not supported for CRDs. So in this case we simply convert the protobuf request to a JSON one (using the `client-go` scheme decoder/encoder) before letting the CRD handler serve it. A real, long-term, non-hacky fix for this problem would be as follows: when a request with an unsupported serialization is received, the server should reject it with a 406 and provide a list of supported content types; client-go should then examine whether it can satisfy such a request by encoding the object with a different scheme. This would require a KEP but is in keeping with content negotiation on GET/WATCH in Kube. Signed-off-by: David Festal <[email protected]>
... and potential client problems Signed-off-by: David Festal <[email protected]>
It is necessary to take into account the `ClusterName` (logical cluster) of the current request context and use it to validate the request against the given namespace in the same logical cluster. Before this change, it was not possible to create an object in a given namespace in a non-default logical cluster until a namespace with the same name had been created in the default ('admin') logical cluster (issue kcp-dev/kcp#157). Signed-off-by: David Festal <[email protected]>
The namespace controller must take into account the `ClusterName` (== logical cluster) in all its actions, especially when deleting contained objects during finalization. This also required hacking the RBAC policy initialization a bit so that it explicitly uses the `admin` logical cluster (until we have a decent inheritance mechanism for RBAC between logical clusters). Signed-off-by: David Festal <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Fix an issue where the naming controller was hot looping trying to enqueue other CRDs in the same API group. With the change to cache.MetaNamespaceKeyFunc to now encode the cluster name as part of the string key, we have to make a similar change to the naming controller so that it decodes the cluster name from the key and reinserts it when enqueuing new items. Signed-off-by: Andy Goldstein <[email protected]>
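The cluster-aware key scheme this fix relies on might look roughly like the following. The "|" separator and the helper names are illustrative assumptions, not necessarily what the fork's modified cache.MetaNamespaceKeyFunc actually emits:

```go
package main

import (
	"fmt"
	"strings"
)

// toClusterAwareKey encodes the logical cluster name into a cache key
// alongside the usual namespace/name pair.
func toClusterAwareKey(clusterName, namespace, name string) string {
	key := name
	if namespace != "" {
		key = namespace + "/" + name
	}
	if clusterName != "" {
		key = clusterName + "|" + key
	}
	return key
}

// splitClusterAwareKey reverses the encoding, so that a controller can
// decode the cluster name from a key and reinsert it when enqueuing
// sibling objects, as the naming-controller fix above requires.
func splitClusterAwareKey(key string) (clusterName, rest string) {
	if i := strings.Index(key, "|"); i >= 0 {
		return key[:i], key[i+1:]
	}
	return "", key
}

func main() {
	// CRDs are cluster-scoped, so the namespace component is empty.
	key := toClusterAwareKey("admin", "", "widgets.example.com")
	cluster, rest := splitClusterAwareKey(key)
	fmt.Println(key)           // admin|widgets.example.com
	fmt.Println(cluster, rest) // admin widgets.example.com
}
```

Forgetting the decode step is exactly what caused the hot loop: keys re-enqueued without the cluster prefix never match the stored objects.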
This hack fixes issue kcp-dev/kcp#183: one cannot watch with wildcards (across logical clusters) if the CRD of the related API resource hasn't been added to the admin logical cluster first. The fix in this HACK is limited, since the request will fail if two logical clusters contain CRDs for the same GVK with non-equal specs (especially non-equal schemas). Signed-off-by: David Festal <[email protected]>
Transparent sharding needs to occur after a client has passed auth{n,z} for the primary replica they contact. This means that kcp needs to inject a handler in between those chains and the gorestful delegates. Right now the simplest way to do that is to just open up the option. Signed-off-by: Steve Kuznetsov <[email protected]>
Signed-off-by: Steve Kuznetsov <[email protected]>
Signed-off-by: Steve Kuznetsov <[email protected]>
Signed-off-by: Steve Kuznetsov <[email protected]>
Signed-off-by: Steve Kuznetsov <[email protected]>
As part of the updated Lister interface methods, fix the hand-written conversionLister to conform to the updated interface. Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
s.Logs.Apply no longer exists; it is now ValidateAndApply. Upstream calls this as early as possible in the Cobra RunE method, so this is a deviation (it's a minimal change to keep compiling). Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Undo most of our kube-aggregator hacks, now that we're no longer using it for genericcontrolplane. Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Steve Kuznetsov <[email protected]> (cherry picked from commit 31f5bdc)
(cherry picked from commit 101b604)
This reverts commit f374c6c.
(cherry picked from commit 515bb63)
(cherry picked from commit 9b29aab) Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: Andy Goldstein <[email protected]>
Signed-off-by: David Festal <[email protected]> (cherry picked from commit d7e9ea1)
Signed-off-by: David Festal <[email protected]> (cherry picked from commit 29fcab6)
Rebase on top of v1.23.3
…apiserver-config-23 1.23: genericcontrolplane: split generic config construction from kube-apiserver and apiextensions
s-urbaniak force-pushed the delegating-authorizer branch from b7a9ce9 to 64d2e2d on February 17, 2022 15:50
ncdc reviewed on Feb 17, 2022:
```diff
@@ -67,11 +67,12 @@ type WebhookAuthorizer struct {
 	retryBackoff    wait.Backoff
 	decisionOnError authorizer.Decision
 	metrics         AuthorizerMetrics
+	cluster         string
```
This isn't used anywhere?
side note: discussed this with @sttts as I am not convinced this is the right path. This is currently used upstream in
stevekuznetsov force-pushed the feature-logical-clusters-1.23 branch from 005ea13 to d351993 on July 13, 2022 16:31
/cc @ncdc @sttts