This is a combination of three repos: kelseyhightower/consul-on-kubernetes, drud/vault-consul-on-kube and h2ik/consul-vault-kubernetes.
ACLs from the last repo are added to the Consul deployment.
It starts by taking the Consul StatefulSets and upgrading them to 1.1.2, then takes the Vault deployments from drud/vault-consul-on-kube and modifies them to work with the StatefulSet deployments.
Update: I added some manifests to also deploy Vault as a StatefulSet, with automatic initialization and unsealing. This is inspired by kelseyhightower/vault-on-google-kubernetes-engine and sethvargo/vault-on-gke. Thanks to both for this.
This tutorial will walk you through deploying a three (3) node Consul cluster on Kubernetes.
- Three (3) node Consul cluster using a StatefulSet
- Secure communication between Consul members using TLS and encryption keys
This tutorial leverages features available in Kubernetes 1.10.0 and later.
- Kubernetes 1.10.x
The following clients must be installed on the machine used to follow this tutorial:
- cfssl and cfssljson
- kubectl
- the consul CLI
- the vault CLI
- gcloud and gsutil (only needed for the Vault StatefulSet variant with automatic unsealing)
In my setup I use local storage as persistent volumes for Consul.
First we create a storage class local-storage:
kubectl apply -f volumes/storage_class.yaml
Now we create the persistent volumes. Since the local-storage class cannot provision them dynamically, you must create the folders manually on each node.
On each node create the folder structure:
mkdir -p /data/storage/consul
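If your worker nodes are reachable over SSH, a short loop can create the directory everywhere; the node names worker01 to worker03 are only an example taken from the pod listing later in this tutorial and will differ in your cluster:
for node in worker01 worker02 worker03; do
  ssh ${node} 'sudo mkdir -p /data/storage/consul'
done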
Create the PV in kubernetes:
kubectl apply -f volumes/persistant_volume.yaml
RPC communication between each Consul member will be encrypted using TLS. Initialize a Certificate Authority (CA):
cfssl gencert -initca ca/ca-csr.json | cfssljson -bare ca
Create the Consul TLS certificate and private key:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca/ca-config.json \
-profile=default \
ca/consul-csr.json | cfssljson -bare consul
At this point you should have the following files in the current working directory:
ca-key.pem
ca.pem
consul-key.pem
consul.pem
Gossip communication between Consul members will be encrypted using a shared encryption key. Generate and store an encryption key:
GOSSIP_ENCRYPTION_KEY=$(consul keygen)
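As a quick sanity check, the key should decode to 16 bytes on Consul 1.1.x (newer Consul releases generate 32-byte keys):
echo -n "${GOSSIP_ENCRYPTION_KEY}" | base64 --decode | wc -c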
The Consul cluster will be configured using a combination of CLI flags, TLS certificates, and a configuration file, which reference Kubernetes configmaps and secrets.
Store the gossip encryption key and TLS certificates in a Secret:
kubectl create secret generic consul \
--from-literal="gossip-encryption-key=${GOSSIP_ENCRYPTION_KEY}" \
--from-file=ca.pem \
--from-file=consul.pem \
--from-file=consul-key.pem
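You can check that the gossip key and all three certificate files ended up in the secret:
kubectl describe secret consul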
Deprecated: Create tokens for the ACLs with uuidgen and put them in the configs/server.json file.
Since Consul 0.9.1 you can bootstrap the ACL tokens over the ACL API described here.
ACLs are bootstrapped below in the ACL section.
Store the Consul server configuration file in a ConfigMap:
kubectl create configmap consul --from-file=configs/server.json
Create a headless service to expose each Consul member internally to the cluster:
kubectl create -f services/consul.yaml -f services/consul-http.yaml
Deploy a three (3) node Consul cluster using a StatefulSet:
kubectl create -f statefulsets/consul.yaml
Each Consul member will be created one by one. Verify each member is Running before moving to the next step.
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-0 1/1 Running 0 50s
consul-1 1/1 Running 0 29s
consul-2 1/1 Running 0 15s
Since we are using ACLs with Consul we need to set these up.
First create the bootstrap master token; this is a management token:
kubectl exec -it consul-0 -- curl --request PUT http://127.0.0.1:8500/v1/acl/bootstrap
{"ID":"<master token>","AccessorID":"7b9d03ea-510e-7d0b-2be8-45c42937698f","SecretID":"<master token>","Description":"Bootstrap Token (Global Management)","Policies":[{"ID":"00000000-0000-0000-0000-000000000001","Name":"global-management"}],"Local":false,"CreateTime":"2019-02-22T15:31:17.289734141Z","Hash":"oyrov6+GFLjo/KZAfqgxF/X4J/3LX0435DOBy9V22I0=","CreateIndex":11,"ModifyIndex":11}
On the ACL page in the web UI create a new agent token policy. (You can use the master token to access this.)
node_prefix "" {
policy = "write"
}
service_prefix "" {
policy = "read"
}
key_prefix "lock/" {
policy = "write"
}
Or via the ACL API:
kubectl exec -it consul-0 -- curl \
--request PUT \
--header "X-Consul-Token: <master token>" \
--data '{
"Name": "agent-token",
"Description": "Agent Token Policy",
"Rules": "node_prefix \"\" { policy = \"write\"} service_prefix \"\" { policy = \"read\"} key_prefix \"lock/\" { policy = \"write\"}",
"Datacenters": ["dc1"]
}' http://127.0.0.1:8500/v1/acl/policy
{"ID":"45dd10b1-2b7c-273a-c2ec-da4a4ea216e9","Name":"agent-token","Description":"Agent Token Policy","Rules":"node_prefix \"\" { policy = \"write\"} service_prefix \"\" { policy = \"read\"} key_prefix \"lock/\" { policy = \"write\"}","Datacenters":["dc1"],"Hash":"8E2cR3dI75q56akVS8HcARoVNmUcZA8FApULOoYN9tE=","CreateIndex":14,"ModifyIndex":14}
After that you can create the agent token with the newly created policy:
kubectl exec -it consul-0 -- curl \
--request PUT \
--header "X-Consul-Token: <master token>" \
--data '{
"Description": "Agent token",
"Policies": [{"ID": "45dd10b1-2b7c-273a-c2ec-da4a4ea216e9"}],
"Local": true
}' http://127.0.0.1:8500/v1/acl/token
{"AccessorID":"048763c4-ba6c-6f03-3173-b16e9fbfaf92","SecretID":"<agent token>","Description":"Agent token","Policies":[{"ID":"45dd10b1-2b7c-273a-c2ec-da4a4ea216e9","Name":"agent-token"}],"Local":true,"CreateTime":"2019-02-22T15:34:11.427293434Z","Hash":"JBYiLsbos13QGMJgJMwBy5UfGTFUH00BHtMdWceVwdg=","CreateIndex":16,"ModifyIndex":16}
For all three Consul nodes set the agent token via the API.
kubectl exec -it consul-0 -- curl --request PUT --header "X-CONSUL-TOKEN: <master token>" --data '{"Token": "<agent token>"}' http://localhost:8500/v1/agent/token/acl_agent_token
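A small loop applies the same call to all three members:
for i in 0 1 2; do
  kubectl exec consul-$i -- curl -s --request PUT \
    --header "X-Consul-Token: <master token>" \
    --data '{"Token": "<agent token>"}' \
    http://127.0.0.1:8500/v1/agent/token/acl_agent_token
done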
In order to allow operations like consul members to work without a token, we can grant the anonymous token some permissions.
First we create a policy that allows reading node info:
kubectl exec -it consul-0 -- curl \
--request PUT \
--header "X-Consul-Token: <master token>" \
--data '{
"Name": "list-al-nodes",
"Description": "Anonymous node info policy",
"Rules": "node_prefix \"\" { policy = \"read\" }"
}' http://127.0.0.1:8500/v1/acl/policy
{"ID":"7429a177-a322-9e8e-efc0-36e9e78f51f3","Name":"list-al-nodes","Description":"Anonymous node info policy","Rules":"node_prefix \"\" { policy = \"read\" }","Hash":"J7N2FZKEM9xiji5uVAEbAroBD/F3Eq8bgddLgVZCCHI=","CreateIndex":442,"ModifyIndex":442}
Now add this policy to the anonymous token:
kubectl exec -it consul-0 -- curl \
--request PUT \
--header "X-Consul-Token: <master token>" \
--data '{
"Description": "Anonymous Token - Can List Nodes",
"Policies": [{"ID": "7429a177-a322-9e8e-efc0-36e9e78f51f3"}]
}' http://127.0.0.1:8500/v1/acl/token/00000000-0000-0000-0000-000000000002
{"AccessorID":"00000000-0000-0000-0000-000000000002","SecretID":"anonymous","Description":"Anonymous Token - Can List Nodes","Policies":[{"ID":"7429a177-a322-9e8e-efc0-36e9e78f51f3","Name":"list-al-nodes"}],"Local":false,"CreateTime":"2019-02-22T15:29:52.237703424Z","Hash":"yijZb8sb7ra0vt6/B8DFeNFXfwqL69tdgFMb0QOOy/c=","CreateIndex":5,"ModifyIndex":487}
At this point the Consul cluster has been bootstrapped and is ready for operation. To verify things are working correctly, review the logs for one of the cluster members.
kubectl logs consul-0
The consul CLI can also be used to check the health of the cluster. In a new terminal start a port-forward to the consul-0 pod.
kubectl port-forward consul-0 8500:8500
Forwarding from 127.0.0.1:8500 -> 8500
Forwarding from [::1]:8500 -> 8500
Run the consul members command to view the status of each cluster member.
consul members
Node Address Status Type Build Protocol DC Segment
consul-0 10.2.3.151:8301 alive server 1.1.0 2 dc1 <all>
consul-1 10.2.2.199:8301 alive server 1.1.0 2 dc1 <all>
consul-2 10.2.1.125:8301 alive server 1.1.0 2 dc1 <all>
We'll use the Consul web UI to create the Vault ACL token, which avoids all manner of quote-escaping problems. (This can also be done via the ACL API, see above.)
- Port-forward port 8500 of consul-0 to local:
kubectl port-forward consul-0 8500
- Hit http://localhost:8500/ui with a browser.
- Visit the settings page (gear icon) and enter your acl_master_token.
- Click "ACL"
- Add an ACL with name vault-token, type client, rules:
key_prefix "vault/" {
policy = "write"
}
service "vault" {
policy = "write"
}
session_prefix "" {
policy = "write"
}
node_prefix "" {
policy = "write"
}
agent_prefix "" {
policy = "write"
}
- Capture the newly created vault-token and create a secret with it (example key here):
$ kubectl create secret generic vault-consul-key --from-literal=consul-key=9f34ab90-965c-56c7-37e0-362da75bfad9
Get key and cert files for the domain Vault will be exposed on. You can do this any way that works for your deployment, including a self-signed certificate, so long as you have the concatenated full certificate chain in vault-combined.pem and the private key in vault-key.pem:
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca/ca-config.json \
-profile=default \
ca/vault-csr.json | cfssljson -bare vault
$ cat vault.pem ca.pem > vault-combined.pem
$ kubectl create secret tls vaulttls --cert=vault-combined.pem --key=vault-key.pem
$ kubectl apply -f services/vault-services.yaml
You are now ready to deploy the vault instances:
$ kubectl apply -f deployments/vault-1.yaml -f deployments/vault-2.yaml
We must also apply the Agent Token to the consul container in the Vault pods.
kubectl exec -it <vault pod name> --container consul-agent-client -- curl --request PUT --header "X-CONSUL-TOKEN: <master token>" --data '{"Token": "<agent token>"}' http://localhost:8500/v1/agent/token/acl_agent_token
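If your Vault pods share a common label (for example app=vault; adjust this selector to whatever labels your deployments actually use), a loop covers all of them:
for pod in $(kubectl get pods -l app=vault -o jsonpath='{.items[*].metadata.name}'); do
  kubectl exec $pod --container consul-agent-client -- curl -s --request PUT \
    --header "X-Consul-Token: <master token>" \
    --data '{"Token": "<agent token>"}' \
    http://localhost:8500/v1/agent/token/acl_agent_token
done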
It's easiest to access Vault for its initial setup on the pod itself, where HTTP port 9000 is exposed for access without HTTPS. You can decide how many keys and the recovery threshold using arguments to vault operator init:
$ kubectl exec -it <vault-1*> --container vault -- sh
$ vault operator init
or
$ vault operator init -key-shares=1 -key-threshold=1
This provides the key(s) and initial auth token required.
Unseal with
$ vault operator unseal
(You should generally not use the form vault operator unseal <key>
because it will probably leave traces of the key in shell history or elsewhere.)
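With the default of five key shares and a threshold of three, the unseal command has to be repeated three times, each time entering a different unseal key at the prompt:
$ vault operator unseal   # enter unseal key 1 at the prompt
$ vault operator unseal   # enter unseal key 2 at the prompt
$ vault operator unseal   # enter unseal key 3 at the prompt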
and auth with
$ vault auth
Token (will be hidden): <initial_root_token>
Then access <vault-2*> in the exact same way (kubectl exec -it <vault-2*> --container vault -- sh) and unseal it.
It will go into standby mode.
You can follow the above deployment of Vault up to the TLS setup.
The following Instructions are taken from kelseyhightower/vault-on-google-kubernetes-engine with some changes.
Of course you need a Google account to proceed.
In this section you will create a new GCP project and enable the APIs required by this tutorial.
Generate a project ID:
PROJECT_ID="vault-$(($(date +%s%N)/1000000))"
Create a new GCP project:
gcloud projects create ${PROJECT_ID} \
--name "${PROJECT_ID}"
Enable billing on the new project before moving on to the next step.
Enable the GCP APIs required by this tutorial:
Note: Since we don't need all services, I removed some here.
gcloud services enable \
cloudapis.googleapis.com \
cloudkms.googleapis.com \
iam.googleapis.com \
--project ${PROJECT_ID}
COMPUTE_ZONE="us-west1-c"
COMPUTE_REGION="us-west1"
GCS_BUCKET_NAME="${PROJECT_ID}-vault-storage"
KMS_KEY_ID="projects/${PROJECT_ID}/locations/global/keyRings/vault/cryptoKeys/vault-init"
In this section you will create a Cloud KMS keyring and cryptographic key suitable for encrypting and decrypting Vault master keys and root tokens.
Create the vault KMS keyring:
gcloud kms keyrings create vault \
--location global \
--project ${PROJECT_ID}
Create the vault-init encryption key:
gcloud kms keys create vault-init \
--location global \
--keyring vault \
--purpose encryption \
--project ${PROJECT_ID}
Google Cloud Storage is used to hold encrypted Vault master keys and root tokens.
Create a GCS bucket:
gsutil mb -p ${PROJECT_ID} gs://${GCS_BUCKET_NAME}
An IAM service account is used by Vault to access the GCS bucket and KMS encryption key created in the previous sections.
Create the vault service account:
gcloud iam service-accounts create vault-server \
--display-name "vault service account" \
--project ${PROJECT_ID}
Grant access to the vault storage bucket:
gsutil iam ch \
serviceAccount:vault-server@${PROJECT_ID}.iam.gserviceaccount.com:objectAdmin \
gs://${GCS_BUCKET_NAME}
gsutil iam ch \
serviceAccount:vault-server@${PROJECT_ID}.iam.gserviceaccount.com:legacyBucketReader \
gs://${GCS_BUCKET_NAME}
Grant access to the vault-init KMS encryption key:
gcloud kms keys add-iam-policy-binding \
vault-init \
--location global \
--keyring vault \
--member serviceAccount:vault-server@${PROJECT_ID}.iam.gserviceaccount.com \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter \
--project ${PROJECT_ID}
This is needed to authenticate to Google. Since we are not running on Google Cloud here, we must provide these credentials to the vault-init container so that it can access Google Cloud.
gcloud iam service-accounts keys create vault-creds.json --iam-account=vault-server@${PROJECT_ID}.iam.gserviceaccount.com
In this section you will generate the self-signed TLS certificates used to secure communication between Vault clients and servers.
Generate the Vault TLS certificates:
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca/ca-config.json \
-profile=default \
ca/vault-csr.json | cfssljson -bare vault
In this section you will deploy the multi-node Vault cluster using a collection of Kubernetes and application configuration files.
Note: Since I will use a NodePort service, I removed the LoadBalancer part and added node_ip_addr to expose Vault.
Create the vault secret to hold the Vault TLS certificates:
cat vault.pem ca.pem > vault-combined.pem
kubectl create secret generic vault \
--from-file=ca.pem \
--from-file=vault.pem=vault-combined.pem \
--from-file=vault-key.pem
The vault configmap holds the Google Cloud Platform settings required to bootstrap the Vault cluster.
Create the vault configmap:
kubectl create configmap vault \
--from-literal node_ip_addr=10.0.2.51 \
--from-literal gcs_bucket_name=${GCS_BUCKET_NAME} \
--from-literal kms_key_id=${KMS_KEY_ID} \
--from-literal google_application_credentials="/meta/credentials/vault-creds.json"
Create the vault-creds secret (needed for the vault-init container):
kubectl create secret generic vault-creds --from-file=vault-creds.json
In this section you will create the vault service and statefulset used to provision and manage two Vault server instances.
Create the vault service:
kubectl apply -f services/vault-stateful.yaml
Create the vault statefulset:
kubectl apply -f statefulsets/vault.yaml
At this point the multi-node cluster is up and running:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
consul-0 1/1 Running 0 2h 10.2.1.72 worker01
consul-1 1/1 Running 0 2h 10.2.3.213 worker03
consul-2 1/1 Running 0 2h 10.2.2.116 worker02
vault-0 3/3 Running 0 1h 10.2.2.246 worker02
vault-1 3/3 Running 0 1h 10.2.3.110 worker03
In a typical deployment Vault must be initialized and unsealed before it can be used. In our deployment we are using the vault-init container to automate the initialization and unseal steps.
kubectl logs vault-0 -c vault-init
2018/04/25 01:52:11 Starting the vault-init service...
2018/04/25 01:52:21 Vault is not initialized. Initializing and unsealing...
2018/04/25 01:52:28 Encrypting unseal keys and the root token...
2018/04/25 01:52:29 Unseal keys written to gs://vault-1524618541915-vault-storage/unseal-keys.json.enc
2018/04/25 01:52:29 Root token written to gs://vault-1524618541915-vault-storage/root-token.enc
2018/04/25 01:52:29 Initialization complete.
2018/04/25 01:52:30 Unseal complete.
2018/04/25 01:52:30 Next check in 10s
We must also apply the Agent Token to the consul container in the Vault pods.
kubectl exec -it <vault pod name> --container consul-agent-client -- curl --request PUT --header "X-CONSUL-TOKEN: <master token>" --data '{"Token": "<agent token>"}' http://localhost:8500/v1/agent/token/acl_agent_token
Download and decrypt the root token:
export VAULT_TOKEN=$(gsutil cat gs://${GCS_BUCKET_NAME}/root-token.enc | \
base64 --decode | \
gcloud kms decrypt \
--project ${PROJECT_ID} \
--location global \
--keyring vault \
--key vault-init \
--ciphertext-file - \
--plaintext-file -
)
export VAULT_CACERT="ca.pem"
export VAULT_ADDR="https://10.0.2.51:30820"
$ vault status
Key Value
--- -----
Seal Type shamir
Sealed false
Total Shares 5
Threshold 3
Version 0.10.4
Cluster Name vault-cluster-7ead9fe8
Cluster ID 3cc58c0e-dec1-ed48-6b73-43c2b0524ff1
HA Enabled true
HA Cluster https://10.2.2.246:8201
HA Mode standby
Active Node Address https://10.0.2.51:30820
$ vault secrets list
Path Type Accessor Description
---- ---- -------- -----------
cubbyhole/ cubbyhole cubbyhole_afd92c13 per-token private secret storage
identity/ identity identity_538cbfbd identity store
secret/ kv kv_d155aa32 key/value secret storage
sys/ system system_9cb0ea17 system endpoints used for control, policy and debugging