
Incomplete helm chart #129

Open
sgaist opened this issue Sep 27, 2023 · 8 comments · May be fixed by #139
Labels
kind/bug Bug lifecycle/stale Nobody worked on this for 6 months (will further age)

Comments

@sgaist
Contributor

sgaist commented Sep 27, 2023

What happened:

The webhook fails to operate when using the helm chart to deploy it.

What you expected to happen:

To have a fully functional webhook.

How to reproduce it (as minimally and precisely as possible):

helm upgrade --install --namespace oidc-webhook --create-namespace oidc-webhook ./oidc-webhook-authenticator

Anything else we need to know:

Based on the helm chart and the content of the config/samples folder, it seems that the chart is incomplete with regard to the roles and bindings it creates. The role and binding to create SubjectAccessReviews are missing, as well as a role allowing access to the /validate-token non-resource URL.
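For illustration, the missing piece for the non-resource URL could look roughly like this (a sketch only; the ClusterRole name is a placeholder, not something taken from the chart, and a binding to the calling user or service account would still be needed):

# Sketch of a ClusterRole covering the /validate-token non-resource URL.
# The name is a placeholder; bind it to whichever identity calls the webhook.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oidc-webhook-authenticator:validate-token
rules:
- nonResourceURLs:
  - /validate-token
  verbs:
  - get
  - post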

Environment:
macOS with Docker Desktop cluster
Linux with kubeadm based cluster

@sgaist sgaist added the kind/bug Bug label Sep 27, 2023
@dimityrmirchev
Member

I am not sure how to reproduce the missing permissions w.r.t. the SubjectAccessReview. There is a binding to the auth-delegator role, which should provide the permissions to create TokenReviews and SubjectAccessReviews. And in config/samples there is this role.

Authorization for /validate-token can be skipped, which might be the case for some setups. It is true that there is currently no such ClusterRole that callers of OWA can easily bind to. Maybe such a role can be added to the charts 🤔
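For context, the binding I mean follows the usual delegated-auth pattern, roughly like this (the subject's name and namespace here are assumptions, not the chart's actual values):

# Rough shape of the auth-delegator binding mentioned above;
# the subject name/namespace are assumptions, not the chart's actual values.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-webhook-authenticator:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: oidc-webhook-authenticator
  namespace: oidc-webhook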

@sgaist
Contributor Author

sgaist commented Oct 9, 2023

Sorry for the late reply. I retried from scratch on my side and the auth-delegator binding is indeed working properly. I had several things going on in parallel at the start, switching between the manual deployment and the Helm chart while trying to get the webhook to work. Sorry for the noise, and thanks for making me aware of this binding.

As for the missing /validate-token access role, I can provide a PR for it.

I have an additional question related to the webhook kubeconfig part: what is your recommended way to manage the client certificate for the hook? One can make use of cert-manager since it is already supported by the chart, so I was wondering whether a client auth certificate should optionally be created as part of the deployment, or maybe be explained in the README?

@dimityrmirchev
Member

As for the missing /validate-token access role, I can provide a PR for it.

Sure, I guess this will be a nice addition.

I was wondering what is your recommended way to manage the client certificate for the hook.

IMO it strongly depends on the setup. Ideally the OWA is deployed in the same cluster it works for and can use its own service account in order to also authenticate and authorize itself against that cluster. In the charts we have support for token projection:

{{- if .Values.serviceAccountTokenVolumeProjection.enabled }}

See also https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#serviceaccount-token-volume-projection
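For reference, such a projection renders to roughly the following in the pod spec (the volume name, audience, expiry and path are illustrative values, not necessarily the chart's defaults):

# Illustrative projected service account token volume; values are examples only.
volumes:
- name: token-projection
  projected:
    sources:
    - serviceAccountToken:
        audience: kube-apiserver
        expirationSeconds: 3600
        path: token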

If that is not the case and OWA is deployed outside the cluster, then different auth options are possible: basically everything that can be used to authenticate against a kube-apiserver. So I do not think that there is one answer to this question.

One can make use of cert-manager as it's already supported by the chart and thus I was wondering if a client auth certificate should be optionally created as part of the deployment or maybe be explained in the README

I think this is only valid for the case in which OWA is deployed in the cluster it works for? If that is the case, then I would recommend using the projected service account token volume plus the respective kubeconfig (I found a comment from some time ago that includes a similar kubeconfig). My knowledge of cert-manager is limited, so this is my guess.
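Roughly, such a kubeconfig could point the user entry at the projected token file, along these lines (the file paths and names are assumptions, not taken from the chart):

# Sketch of a kubeconfig consuming the projected token; paths and names are assumptions.
apiVersion: v1
kind: Config
clusters:
- name: target
  cluster:
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    server: https://kubernetes.default.svc
users:
- name: projected-sa
  user:
    tokenFile: /var/run/secrets/tokens/token
contexts:
- context:
    cluster: target
    user: projected-sa
  name: in-cluster
current-context: in-cluster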

@sgaist
Contributor Author

sgaist commented Oct 13, 2023

Going through all of this, I am still missing something.

If I have understood things correctly, the simplest deployment with the chart uses the following:

helm upgrade --install --namespace oidc-webhook --create-namespace oidc-webhook --set runtime.serviceAccountTokenVolumeProjection.enabled=true

Is that correct?

Using the above triggers an error due to missing PEM data.

If using:
helm upgrade --install --namespace oidc-webhook --create-namespace oidc-webhook --set runtime.serviceAccountTokenVolumeProjection.enabled=true --set global.certManager.enabled=true

the webhook seems to start properly; however, what I currently fail to do is write the webhook configuration file that will be used by the kube-apiserver.

I am running Kubernetes 1.27.2 through Docker Desktop.

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    insecure-skip-tls-verify: true
    server: https://<url_to_webhook>/validate-token
users:
- name: token
  user: "What should come here ? Setting the apiserver crt and key as shown in the Minikube config does not work"
current-context: webhook
contexts:
- context:
    cluster: local
    user: token
  name: webhook

From your comment as well as the values.yaml comments, it seems that the global kubeconfig value should be set because automountServiceAccountToken is true by default; however, the deployment does not error when kubeconfig is empty.

So my next questions are:

  • Shouldn't the deployment fail when automountServiceAccountToken is true and kubeconfig is empty?
  • What should the kubeconfig field contain?
  • What should the webhook configuration file contain?

@dimityrmirchev
Member

Shouldn't the deployment fail when automountServiceAccountToken is true and kubeconfig is empty?

No. If kubeconfig is empty then the in-cluster kubeconfig is used, and automountServiceAccountToken should be true so that the auth token is automounted by Kubernetes.

helm upgrade --install --namespace oidc-webhook --create-namespace oidc-webhook  --set global.certManager.enabled=true

This should work, but then you have to provide the kube-apiserver with a proper kubeconfig. It is not recommended to set insecure-skip-tls-verify to true. It is best that you provide the CA cert that is used to sign the webhook's server certificate. This could be the CA that cert-manager generated, or your own CA if you use your own certificates.

Then the user field should contain a user that is authenticated and allowed to request the /validate-token endpoint. If the kube-apiserver has anonymous authentication enabled (this is on by default, I think) then you can also skip the user field completely and allow anonymous users to request the /validate-token endpoint. This can be done by adding the path to the alwaysAllowPaths.
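Putting those two points together, the webhook kubeconfig handed to the kube-apiserver would look roughly like this (the CA path, the client certificate pair and the server URL are placeholders):

# Sketch of the authentication webhook kubeconfig for the kube-apiserver;
# the CA path, client certificate/key and server URL are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: oidc-webhook
  cluster:
    certificate-authority: /etc/kubernetes/oidc-webhook/ca.crt
    server: https://<url_to_webhook>/validate-token
users:
- name: kube-apiserver
  user:
    client-certificate: /etc/kubernetes/oidc-webhook/client.crt
    client-key: /etc/kubernetes/oidc-webhook/client.key
contexts:
- context:
    cluster: oidc-webhook
    user: kube-apiserver
  name: webhook
current-context: webhook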

@sgaist
Contributor Author

sgaist commented Oct 20, 2023

That's where I think I might be missing something with regard to the service account variant. I tried to deploy the hook using it; however, I did not find the correct way to configure things so that they work properly with it. What would the hook configuration that must be passed to the kube-apiserver look like?

I agree that insecure-skip-tls-verify shall not be used; I just took the example from the repository as a base.
On my systems, I did a certificate-based deployment using the certificate-authority property for proper TLS handling. On that matter, I was wondering whether you would be interested in having the chart support that use case when people want to use cert-manager? I did some minor modifications in that direction and would be happy to share them.

@dimityrmirchev
Member

What would the hook configuration that must be passed to the kube-apiserver look like?

There are a lot of deployment variants and it is difficult for me to determine what exactly is the issue with your setup. As I mentioned,

helm upgrade --install --namespace oidc-webhook --create-namespace oidc-webhook  --set global.certManager.enabled=true

should work, and the only thing left to do is to supply the kube-apiserver with a valid kubeconfig pointing to the oidc-webhook-authenticator. This kubeconfig should include a user that is allowed to call /validate-token against the cluster that OWA is delegating authentication and authorization to.
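For completeness, that kubeconfig is then passed to the kube-apiserver via the standard token webhook flags, for example (the config file path is a placeholder):

# Excerpt from a kube-apiserver static pod command; the config file path is a placeholder.
- --authentication-token-webhook-config-file=/etc/kubernetes/oidc-webhook/kubeconfig.yaml
- --authentication-token-webhook-version=v1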

On that matter, I was wondering whether you would be interested in having the chart support that use case when people want to use cert-manager? I did some minor modifications in that direction and would be happy to share them.

There is basic support for integrating with cert-manager already, but no fine-grained control over the cert-manager related resources. The only knob is certManager.enabled=(true/false). Since structured authentication will be released in K8s sooner or later, I prefer that no great amount of time (both developing and reviewing) is invested in additional features for this project. You can share what is missing with regard to the cert-manager integration, or you can open an issue. If you have a working/modified version of the chart using cert-manager and this works for you, that is also fine.

@sgaist
Contributor Author

sgaist commented Oct 30, 2023

I just realized that what I wrote was misleading, sorry. There's no new code involved except a small change in the chart.

What I meant was that I added the option to create a client certificate as part of the deployment when cert-manager is enabled. The certificate, key and CA can then be retrieved and used for the authentication. I'll prepare a patch with the corresponding documentation.
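For illustration, the change amounts to something like the following Certificate resource when cert-manager is enabled (the names, issuerRef and secretName are placeholders, not the actual chart values):

# Sketch of a cert-manager Certificate for client auth;
# names, issuerRef and secretName are placeholders, not the chart's values.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: oidc-webhook-client
  namespace: oidc-webhook
spec:
  secretName: oidc-webhook-client-tls
  commonName: oidc-webhook-client
  usages:
  - digital signature
  - key encipherment
  - client auth
  issuerRef:
    name: oidc-webhook-issuer
    kind: Issuer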

@sgaist sgaist linked a pull request Nov 10, 2023 that will close this issue
@gardener-robot gardener-robot added the lifecycle/stale Nobody worked on this for 6 months (will further age) label Jul 8, 2024