
kubectl apply -f kanali/hack/helm-rbac.yaml fails #117

Open
artworkad opened this issue Jun 12, 2018 · 3 comments

Comments

@artworkad

Hey guys,

I just wanted to try out Kanali on a fresh Kubernetes cluster:

kubectl apply -f kanali/hack/helm-rbac.yaml

However, I get the following error message:

serviceaccount "tiller" created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io "tiller" created
Error from server (Forbidden): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"cluster-admin","namespace":""},"rules":[{"apiGroups":[""],"resources":[""],"verbs":[""]},{"nonResourceURLs":[""],"verbs":[""]}]}\n"},"namespace":""}}
to:
&{0xc4208c00c0 0xc42034bf80 cluster-admin kanali/hack/helm-rbac.yaml 0xc42009c000 26 false}
for: "kanali/hack/helm-rbac.yaml": clusterroles.rbac.authorization.k8s.io "cluster-admin" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["
"], APIGroups:[""], Verbs:[""]} PolicyRule{NonResourceURLs:[""], Verbs:[""]}] user=&{[email protected] [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/" "/apis" "/apis/" "/healthz" "/swagger-2.0.0.pb-v1" "/swagger.json" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]

@frankgreco
Contributor

@artjomzab

Are you experiencing this on Minikube or another environment?
What version of Kubernetes are you using? (kubectl version)

@artworkad
Author

@frankgreco I am experiencing this on Google Cloud (GKE), Kubernetes version 1.8.10-gke.0.

@frankgreco
Contributor

@artjomzab

After doing a little research, I found this issue.

Because of the way Container Engine checks permissions when you create a Role or ClusterRole, you must first create a RoleBinding that grants you all of the permissions included in the role you want to create.

An example workaround is to create a RoleBinding that gives your Google identity a cluster-admin role before attempting to create additional Role or ClusterRole permissions.

This is a known issue in the Beta release of Role-Based Access Control in Kubernetes and Container Engine version 1.6.

They go on to say that, in order to proceed without error, the cluster-admin role should be bound to the user currently executing kubectl. They provided this example:

$ kubectl create clusterrolebinding your-user-cluster-admin-binding --clusterrole=cluster-admin [email protected]
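
For reference, one quick way to do this on GKE is to pull your identity from gcloud and pass it to kubectl. This is just a sketch, assuming the gcloud CLI is installed and authenticated as the same Google identity that kubectl is using:

# Assumption: gcloud is authenticated as the same Google identity that kubectl uses.
$ ACCOUNT=$(gcloud config get-value account)
$ kubectl create clusterrolebinding your-user-cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user="${ACCOUNT}"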

It is important to note that helm-rbac.yaml grants an excessive amount of permissions, as it was intended for a local/test environment. You will probably want to craft your own RBAC policy for Helm. Here are some recommendations from Helm.
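
For illustration only, here is a rough sketch of what a tighter policy could look like, following Helm's recommendation of restricting Tiller to a single namespace. The tiller-world namespace name is just a placeholder, not something Kanali ships:

# ServiceAccount for Tiller, scoped to a single namespace (placeholder name: tiller-world)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: tiller-world
---
# Role granting Tiller full access only within that namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
# Bind the Role to the tiller ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller-binding
  namespace: tiller-world
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: tiller-world
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io

Tiller would then be installed with something like helm init --service-account tiller --tiller-namespace tiller-world.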
