# Mitose

Easy Kubernetes autoscaler controller.

## Install

To install Mitose in your k8s cluster, just run:

```
$ kubectl create -f mitose-app.yaml
```

We recommend using a separate namespace.

## Controllers Configuration

The Mitose controllers are configured by Kubernetes configmaps. Each entry in the configmap represents a deployment to watch. The configuration data format is JSON with these fields:

| Field | Description |
| --- | --- |
| `namespace` | namespace of the deployment |
| `deployment` | deployment name |
| `type` | type of controller |
| `max` | maximum number of replicas |
| `min` | minimum number of replicas |
| `scale_method` | method of autoscaling (by editing the `HPA` or editing the `DEPLOY`) |
| `interval` | controller running interval (e.g. `1m`) |
| `active` | whether this controller is active |

These fields are common to all controller types.

You don't need to restart Mitose when you change a configmap: Mitose rebuilds its controllers on each configmap change.
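As a sketch, a configmap entry can be sanity-checked against the common fields above before being applied. The `validate_common` helper below is hypothetical and not part of Mitose; the field names come from the table above.

```python
# Hypothetical sanity check for the common Mitose config fields.
# Not part of Mitose itself -- just an illustration of the schema above.
import json

REQUIRED = {"namespace", "deployment", "type", "max", "min",
            "scale_method", "interval", "active"}

def validate_common(entry: dict) -> list:
    """Return a list of problems found in one configmap entry."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - entry.keys())]
    if entry.get("scale_method") not in (None, "HPA", "DEPLOY"):
        problems.append("scale_method must be HPA or DEPLOY")
    if "max" in entry and "min" in entry and entry["max"] < entry["min"]:
        problems.append("max must be >= min")
    return problems

entry = json.loads('{"namespace": "target", "deployment": "target",'
                   ' "type": "sqs", "interval": "1m", "scale_method": "DEPLOY",'
                   ' "max": 5, "min": 1, "active": true}')
print(validate_common(entry))  # []
```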

## SQS Queue Size Controller

There is a Mitose controller based on AWS SQS queue size. The specific configuration fields are:

| Field | Description |
| --- | --- |
| `key` | AWS credential key |
| `secret` | AWS credential secret |
| `region` | AWS region |
| `queue_urls` | a list of the complete endpoints of the queues |
| `msgs_per_pod` | the desired number of messages in queue per replica |

### Example

To configure a controller based on SQS queue size, use the following example:

```json
{
  "namespace": "target",
  "deployment": "target",
  "type": "sqs",
  "interval": "1m",
  "scale_method": "DEPLOY",
  "max": 5,
  "min": 1,
  "active": true,
  "key": "XXXX",
  "secret": "XXXX",
  "region": "us-east-1",
  "queue_urls": ["https://sqs.us-east-1.amazonaws.com/XXXXXXX/XXXXXXX"],
  "msgs_per_pod": 2
}
```
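The `msgs_per_pod` field implies a scaling rule along these lines: the desired replica count is roughly the queue size divided by `msgs_per_pod`, clamped to `[min, max]`. The sketch below is an assumption about how such a rule would work, not Mitose's actual code.

```python
# Hedged sketch of the scaling rule implied by msgs_per_pod:
# desired replicas ~= ceil(queue_size / msgs_per_pod), clamped to [min, max].
# This is an illustration, not Mitose's actual implementation.
import math

def desired_replicas(queue_size: int, msgs_per_pod: int,
                     min_replicas: int, max_replicas: int) -> int:
    wanted = math.ceil(queue_size / msgs_per_pod)
    return max(min_replicas, min(max_replicas, wanted))

# With the example config above (msgs_per_pod=2, min=1, max=5):
print(desired_replicas(7, 2, 1, 5))    # 4
print(desired_replicas(0, 2, 1, 5))    # 1 (never below min)
print(desired_replicas(100, 2, 1, 5))  # 5 (never above max)
```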

To configure a controller based on GCP's Pub/Sub:

```json
{
  "namespace": "target",
  "deployment": "target",
  "type": "pubsub",
  "interval": "1m",
  "scale_method": "DEPLOY",
  "max": 5,
  "min": 1,
  "active": true,
  "google_application_credentials": "XXXX",
  "region": "us-east1",
  "subscription_ids": ["mysub"],
  "project": "my-gcp-project",
  "msgs_per_pod": 2
}
```

`google_application_credentials` should be the location of a `credentials.json` file provided by GCP.

To configure a controller based on RabbitMQ queue size:

```json
{
  "namespace": "target",
  "deployment": "target",
  "type": "rabbitmq",
  "interval": "1m",
  "scale_method": "DEPLOY",
  "max": 5,
  "min": 1,
  "active": true,
  "credentials": "XXXX",
  "queue_urls": ["https://my-rabbitmq-domain/api/queues/vhost/queue-name"],
  "msgs_per_pod": 2
}
```

`credentials` should be the `user:password` of RabbitMQ, encoded in base64 format.
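The value can be produced with Python's standard `base64` module (the `guest`/`guest` pair below is just an example, RabbitMQ's well-known default):

```python
# Produce the base64-encoded "user:password" value expected in the
# "credentials" field (the standard HTTP Basic auth encoding).
import base64

user, password = "guest", "guest"  # example credentials only
token = base64.b64encode(f"{user}:{password}".encode()).decode()
print(token)  # Z3Vlc3Q6Z3Vlc3Q=
```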

Save that content as a `target.json` file and create a configmap using the `kubectl create configmap` command, e.g.:

```
$ kubectl create configmap config --from-file=target.json --namespace=mitose
```

## Prometheus metrics handler configuration

To expose Mitose metrics to Prometheus, you need to expose the deployment as a service:

```
$ kubectl expose deployment mitose --type=ClusterIP --port=5000 --namespace mitose
```

and add the annotation `prometheus.io/scrape: "true"` to this service. Mitose will start an HTTP server on the port configured by the environment variable `$PORT`.

If you deployed Mitose using the `mitose-app.yaml` file, you don't need to do this.

## TODO

- Tests
- Admin to CRUD the configs.