This repository has been archived by the owner on Oct 27, 2020. It is now read-only.

Implement Quota/Rate Limiting #56

Open
frankgreco opened this issue Sep 4, 2017 · 8 comments

@frankgreco
Contributor

Quota and Rate Limiting features are currently in alpha. In order to make them stable, the following things need to hold true:

  • Maintain the current zero network overhead SLA
  • Always return correct results (never lie)
  • Must be lightweight

I was beginning to go down a path where Kanali would basically implement the Raft algorithm to accomplish this. However, that would be a great amount of effort, and I was worried I'd be reinventing the wheel when I could just use something already there. etcd, the default Kubernetes backend, already uses Raft for its consensus algorithm. Here is the current design approach:

  • Watch an etcd key for traffic. This will maintain the zero network overhead SLA.
  • Use the etcd gRPC client for maximum over-the-wire efficiency
  • Use protobuf for efficient serialization/deserialization
@frankgreco frankgreco added this to the v1.3.0 milestone Sep 4, 2017
@frankgreco frankgreco self-assigned this Sep 4, 2017
@SEJeff

SEJeff commented Sep 8, 2017

So this would require giving kanali direct access to etcd, or spinning up a separate etcd cluster? Currently, k8s stores all secrets unencrypted in etcd by default (they can be encrypted at rest in 1.7 using the encryption provider plugins).

You might mention that it is generally a terrible idea to give anything outside of k8s access to etcd, and to spin up a separate one for this instead, perhaps using the etcd-operator from CoreOS.

@frankgreco
Contributor Author

frankgreco commented Sep 8, 2017

@SEJeff

I should clarify some aspects.

For all k8s resources (secrets, services, apiproxies, etc.), Kanali would continue to watch the apiserver. However, to persist minimal details about a request, etcd is being proposed. An example protobuf message might look like this:

message TrafficPoint {
  required string namespace = 1;
  required string proxyName = 2;
  required string keyName = 3;
}

Concerning giving Kanali access to the same etcd that Kubernetes uses, Kanali would use a separate etcd key. This is a common practice seen amongst other Kubernetes add-ons like Flannel and Calico. Of course, nothing prevents someone from bootstrapping another etcd deployment that is Kanali specific, as Kanali just needs to know the endpoints and any TLS connection details.

All Kanali instances would watch this etcd key using the etcd gRPC v3 client from CoreOS. Each time a Kanali instance receives traffic and writes it to etcd, all other Kanali instances will know about it and can update their in-memory data structures so that rate limiting and quota policies can be properly enforced.
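A minimal sketch of how that write/watch loop might look in Go, assuming the proto2-generated type for the TrafficPoint message above lives in the same package; the key prefix, client setup, and counter layout here are illustrative only, not Kanali's actual implementation:

package traffic

import (
	"context"
	"strconv"
	"sync"
	"time"

	"github.com/coreos/etcd/clientv3"
	"github.com/golang/protobuf/proto"
)

// trafficPrefix is an illustrative key prefix; the real key layout is undecided.
const trafficPrefix = "/kanali/traffic/"

// counters is an illustrative in-memory tally per namespace/proxy/key.
var (
	mu       sync.Mutex
	counters = map[string]int{}
)

// newClient dials the etcd cluster; TLS configuration omitted for brevity.
func newClient(endpoints []string) (*clientv3.Client, error) {
	return clientv3.New(clientv3.Config{
		Endpoints:   endpoints,
		DialTimeout: 5 * time.Second,
	})
}

// record persists one traffic point so every Kanali instance sees it.
func record(ctx context.Context, cli *clientv3.Client, tp *TrafficPoint) error {
	data, err := proto.Marshal(tp)
	if err != nil {
		return err
	}
	// A real implementation needs a collision-safe unique key; a timestamp
	// is used here purely for illustration.
	key := trafficPrefix + strconv.FormatInt(time.Now().UnixNano(), 10)
	_, err = cli.Put(ctx, key, string(data))
	return err
}

// watchTraffic updates the local counters as any instance writes traffic points.
func watchTraffic(ctx context.Context, cli *clientv3.Client) {
	for resp := range cli.Watch(ctx, trafficPrefix, clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			tp := &TrafficPoint{}
			if err := proto.Unmarshal(ev.Kv.Value, tp); err != nil {
				continue // ignore anything that isn't a TrafficPoint
			}
			mu.Lock()
			counters[tp.GetNamespace()+"/"+tp.GetProxyName()+"/"+tp.GetKeyName()]++
			mu.Unlock()
		}
	}
}

Because etcd pushes watch events to every connected client, each instance's counters stay current without polling the store on the request path, which is how the zero network overhead SLA described above would be preserved.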

@SEJeff

SEJeff commented Sep 8, 2017

@frankgreco it is actually discouraged per the kubernetes documentation.

Access to etcd is equivalent to root permission in the cluster so ideally only the API server should have access to it.

Also, Calico is actually in the process of moving to storing all of their data directly in the kubernetes apiserver via Typha. This has been on their roadmap for a while now.

Regarding Flannel, their documentation explicitly says that using the kubernetes apiserver is preferable to deploying a discrete etcd:

Though not required, it's recommended that flannel uses the Kubernetes API as its backing store which avoids the need to deploy a discrete etcd cluster for flannel. This flannel mode is known as the kube subnet manager.

Please consider using CRDs in the kubernetes apiserver instead of etcd directly if feasible. I love the way this project is shaping up, but we use RBAC heavily and would not give an ingress access to all secrets in plaintext (by giving it direct access to etcd).

@frankgreco
Contributor Author

@SEJeff I did not know that Calico was moving to this! Great find!

I see two main options here:

Use a CRD

Pros
  • Storage is abstracted
  • Explicitly enable permissions to resource with RBAC
  • Familiar interface via the Kubernetes API.
Cons
  • The idea of storing lightweight details about a traffic point doesn't really lend itself to a CRD in my opinion. A traffic point is a data point, not a resource. A traffic point also isn't something that should ever be created/updated/deleted by any user. Instead, it's a "behind-the-scenes" metric that only Kanali should create/read. It would be a lot of overhead to create a unique name (and it wouldn't be namespaced).
  • The apiserver, from what I understand, does not provide a gRPC client as of yet. Hence, we'd have to use JSON+REST, which would be slow.
  • Every single request would require an update or creation of a CRD, as well as an emitted message over every HTTP watch stream from every Kanali pod. This is noisy, and an HTTP stream might not be the best option for this.

Use a separate instance of etcd

Pros
  • can use gRPC/protobuf
  • the usage of gRPC should handle all of the traffic noise nicely.
Cons
  • requires a separate etcd deployment, although it should be easy to deploy, as one can just use a manifest similar to the one used to bootstrap the cluster.

I'm leaning towards the etcd option but would love to get your opinion.

@SEJeff

SEJeff commented Sep 8, 2017

Regarding JSON+REST vs protobuf: comms with the apiserver via protobuf landed in kubernetes 1.3. The magic is passing the Content-Type header with the value application/vnd.kubernetes.protobuf instead of application/json.
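For reference, a minimal sketch of turning that on with client-go; the in-cluster config and content-type values are standard, but the wiring is only illustrative and not tied to Kanali's code:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newProtobufClientset builds a clientset that talks protobuf to the apiserver.
func newProtobufClientset() (*kubernetes.Clientset, error) {
	// Load the in-cluster config (service account token + cluster CA).
	config, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	// Ask the apiserver for protobuf instead of JSON on every request.
	config.ContentType = "application/vnd.kubernetes.protobuf"
	config.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	return kubernetes.NewForConfig(config)
}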

Also I'm curious about the Cons for lightweight details about a traffic point not lending itself to a CRD. I'd say Endpoint resources in k8s should never be user-edited (except when using headless services to front non-k8s services). Forgive me for being dense, but I don't see how this is terribly different from endpoints. That said, if every single incoming request needs to update/create a CRD, that is indeed sad. I guess batching updates wouldn't be good enough, would it?

Regarding the complexity of using an external etcd, the etcd-operator is really good, and is what I would suggest you recommend to users if they use kanali + etcd.

Would you consider making the storage mechanism pluggable? Then you could implement both eventually, giving the user more flexibility. Just add decent unit tests so you don't break things!

@frankgreco
Contributor Author

I have been pulling my hair out for a while over why protobuf wouldn't work - if all I was missing was a header..... :)

I agree that the endpoints object does account for a lot of noise from the apiserver; however, there'd be exponentially more noise if there were one for every request. If the requests were batched, then you wouldn't have consensus amongst the different Kanali instances.

I like the idea of having it pluggable. I can make everything conform to an interface and then have multiple implementations of that common interface.
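As a rough sketch of that idea (the name and method set below are hypothetical, not a committed API), each backend would satisfy one small interface, with the TrafficPoint protobuf type from above as the shared payload:

package store

import "context"

// TrafficPoint is assumed here to be the proto2-generated type from the
// message shown earlier, generated into (or imported by) this package.

// TrafficStore is a hypothetical abstraction over the backing store used to
// share traffic points between Kanali instances.
type TrafficStore interface {
	// Record persists a single traffic point so other instances can see it.
	Record(ctx context.Context, tp *TrafficPoint) error
	// Watch streams traffic points written by any instance, so callers can
	// keep their local rate limiting and quota counters up to date.
	Watch(ctx context.Context) (<-chan *TrafficPoint, error)
}

An etcd-backed implementation and any future backends could then be swapped in behind the same interface and unit tested independently.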

@SEJeff

SEJeff commented Sep 8, 2017

Use the source, Luke!^Wfrank. That should work a bit better for you; glad to help where I can. In hindsight, it doesn't make much sense to use a CRD for this, just like you said; my only real suggestion is to document setting up a separate etcd cluster.

If you do indeed make the storage backend pluggable, also consider NATS, which we use at $REAL_JOB to sync up several packet-pushing relay services. It is really solid stuff.

Keep kicking ass.

@derekperkins

+1 for pluggability. Depending on what infrastructure you already have in place, you might use etcd, Redis, or whatever makes sense. On the initial Raft train of thought, it should be relatively trivial to have one of those backends use www.serf.io so that things work out of the box.

@frankgreco frankgreco modified the milestones: v1.3.0, v1.4.0, v3.0.0, v2.1.0 Sep 27, 2017
@frankgreco frankgreco modified the milestones: v2.1.0, v2.0.0 Oct 15, 2017
@frankgreco frankgreco mentioned this issue Oct 19, 2017
@frankgreco frankgreco changed the title from "Graduate Quota/Rate Limiting to Stable" to "Implement Quota/Rate Limiting" Mar 31, 2018
@frankgreco frankgreco removed this from the v2.0.0 milestone Mar 31, 2018