This is a BOSH release for consul.
### Contents
To deploy consul-release, follow the standard steps for deploying software with BOSH.
We assume you have already deployed and targeted a BOSH director. For instructions on how to do that, please see the BOSH documentation.
Find the "BOSH Lite Warden" stemcell you wish to use; bosh.io provides a resource to find and download stemcells. Then run `bosh upload stemcell STEMCELL_URL_OR_PATH_TO_DOWNLOADED_STEMCELL`.
From within the consul-release directory, run `bosh create release --force` to create a development release.
Once you've created a development release, run `bosh upload release` to upload it to the director.
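Taken together, the steps above can be sketched as the following sequence (this assumes the director is already targeted and you are in the repository root):

```shell
# One-time per stemcell version:
bosh upload stemcell PATH_TO_DOWNLOADED_STEMCELL

# Build a development release from the local source, then push it to the director:
bosh create release --force
bosh upload release
```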
We provide a set of scripts and templates to generate a simple deployment manifest. You should use these as a starting point for creating your own manifest, but they should not be considered comprehensive or production-ready.
To generate a manifest automatically you must have spiff installed. Once installed, manifests can be generated using `./scripts/generate_consul_deployment_manifest [STUB LIST]` with the provided stubs:
- `director_uuid_stub`

  The director_uuid_stub provides the UUID of the currently targeted BOSH director:

  ```yaml
  ---
  director_uuid: DIRECTOR_UUID
  ```
- `instance_count_stub`

  The instance count stub provides the ability to override the number of consul instances to deploy. The minimal deployment of consul is shown below:

  ```yaml
  ---
  instance_count_overrides:
    consul_z1:
      instances: 1
    consul_z2:
      instances: 0
  ```

  NOTE: at no time should you deploy only 2 instances of consul.
- `persistent_disk_stub`

  The persistent disk stub allows you to override the size of the persistent disk used by each instance of the consul job. If you wish to use the default settings, provide a stub containing only an empty hash:

  ```yaml
  ---
  persistent_disk_overrides: {}
  ```

  To override the disk sizes, the format is as follows:

  ```yaml
  ---
  persistent_disk_overrides:
    consul_z1: 1234
    consul_z2: 1234
  ```
- `iaas_settings`

  The IaaS settings stub contains IaaS-specific settings, including networks, cloud properties, and compilation properties. Please see the BOSH documentation for setting up networks and subnets on your IaaS of choice. We currently allow for three network configurations on your IaaS: consul1, consul2, and compilation. You must also specify the stemcell to deploy against as well as the version (or `latest`).
We provide default stubs for a BOSH-Lite deployment. Specifically:

- instance_count_stub: `manifest-generation/bosh-lite-stubs/instance-count-overrides.yml`
- persistent_disk_stub: `manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml`
- iaas_settings: `manifest-generation/bosh-lite-stubs/iaas-settings-consul.yml`
[Optional]

- `release_name_stub`

  If you wish to override the name of the release and the deployment (default: consul), you can provide a release_name_stub with the following format:

  ```yaml
  ---
  name_overrides:
    release_name: NAME
    deployment_name: NAME
  ```
Output the result of the above command to a file: `./scripts/generate_consul_deployment_manifest [STUB LIST] > OUTPUT_MANIFEST_PATH`.
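As an illustration, a BOSH-Lite manifest could be generated from the default stubs plus a locally created director stub (the `director.yml` file name and output path here are assumptions, not part of the repository):

```shell
# Create a director_uuid stub; `bosh status --uuid` prints the targeted director's UUID
printf -- "---\ndirector_uuid: %s\n" "$(bosh status --uuid)" > director.yml

./scripts/generate_consul_deployment_manifest \
  director.yml \
  manifest-generation/bosh-lite-stubs/instance-count-overrides.yml \
  manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
  manifest-generation/bosh-lite-stubs/iaas-settings-consul.yml \
  > consul.yml
```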
Run `bosh -d OUTPUT_MANIFEST_PATH deploy`.
Run the `confab` tests by executing the `src/confab/scripts/test` executable.
The acceptance tests deploy a new consul cluster and exercise a variety of features, including scaling the number of nodes, as well as destructive testing to verify resilience.
The following should be installed on the local machine:
- jq
- Consul
- Golang (>= 1.5)
If using homebrew, these can be installed with:

```shell
brew install consul go jq
```
Make sure you've run `bin/add-route`. This will set up routing rules to give the tests access to the consul VMs.
You will want to run your tests from a VM within the same subnet as determined in your iaas-settings stub. This assumes you are using a private subnet within a VPC.
This repository assumes that it is the root of your GOPATH. You can set this up by running:

```shell
source .envrc
```

Or if you have direnv installed:

```shell
direnv allow
```
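For reference, a minimal sketch of what such an `.envrc` typically does (the exact contents of the repository's `.envrc` may differ):

```shell
# Treat the repository root as GOPATH and put its compiled binaries on PATH
export GOPATH=$PWD
export PATH=$GOPATH/bin:$PATH
```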
Run all the tests with:

```shell
CONSATS_CONFIG=[config_file.json] ./scripts/test
```

Run a specific set of tests with:

```shell
CONSATS_CONFIG=[config_file.json] ./scripts/test <some test packages>
```
The `CONSATS_CONFIG` environment variable points to a configuration file which specifies the endpoint of the BOSH director. The value of `CONSATS_CONFIG` must be an absolute path on the filesystem. See below for more information on the contents of this configuration file.
An example config json for BOSH-Lite would look like:

```shell
cat > integration_config.json << EOF
{
  "bosh": {
    "target": "192.168.50.4",
    "username": "admin",
    "password": "admin"
  }
}
EOF
export CONSATS_CONFIG=$PWD/integration_config.json
```
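Since `jq` is already a prerequisite, you can sanity-check the file before running the tests; this is merely a convenience, not part of the test suite:

```shell
# Write the example config, then verify the required bosh.* fields are present
cat > integration_config.json << EOF
{
  "bosh": {
    "target": "192.168.50.4",
    "username": "admin",
    "password": "admin"
  }
}
EOF

# jq -e exits non-zero if any required field is missing or null
jq -e '.bosh.target and .bosh.username and .bosh.password' integration_config.json
```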
The full set of config parameters is explained below:

| Parameter | Required | Description |
| --- | --- | --- |
| `bosh.target` | yes | Public BOSH IP address that will be used to host the test environment |
| `bosh.username` | yes | Username for the BOSH director login |
| `bosh.password` | yes | Password for the BOSH director login |
| `bosh.director_ca_cert` | no | BOSH director CA cert |
| `aws.subnet` | no | Subnet ID for AWS deployments |
| `aws.access_key_id` | no | Key ID for AWS deployments |
| `aws.secret_access_key` | no | Secret Access Key for AWS deployments |
| `aws.default_key_name` | no | Default Key Name for AWS deployments |
| `aws.default_security_groups` | no | Security groups for AWS deployments |
| `aws.region` | no | Region for AWS deployments |
| `registry.host` | no | Host for the BOSH registry |
| `registry.port` | no | Port for the BOSH registry |
| `registry.username` | no | Username for the BOSH registry |
| `registry.password` | no | Password for the BOSH registry |
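Putting the AWS-related parameters together, a config for an AWS-provisioned director might look like the following sketch. All values are placeholders, and the exact value types (for example, whether `default_security_groups` is a list) should be checked against the acceptance-test helpers:

```json
{
  "bosh": {
    "target": "203.0.113.10",
    "username": "admin",
    "password": "PASSWORD"
  },
  "aws": {
    "subnet": "subnet-xxxxxxxx",
    "access_key_id": "AWS_ACCESS_KEY_ID",
    "secret_access_key": "AWS_SECRET_ACCESS_KEY",
    "default_key_name": "bosh",
    "default_security_groups": ["bosh"],
    "region": "us-east-1"
  },
  "registry": {
    "host": "203.0.113.10",
    "port": 25777,
    "username": "admin",
    "password": "PASSWORD"
  }
}
```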
The `acceptance-tests` BOSH errand assumes that the BOSH director has already uploaded the correct versions of the dependent releases.
The required releases are:

- turbulence-release
- consul-release (upload a final release, or run `bosh create release && bosh upload release` to upload a development release)
We provide a set of scripts and templates to generate a simple deployment manifest. This manifest is designed to work on a local BOSH-Lite or an AWS-provisioned BOSH.
To generate a manifest automatically you must have spiff installed.
Once installed, manifests can be generated using `./scripts/generate-consats-manifest {bosh-lite|aws}` with the provided stubs.
NOTE: the manifest generation script will set the deployment for the BOSH CLI.
Run `bosh deploy`.

Run `bosh run errand acceptance-tests`.
It is not recommended to run a 1-node cluster in any "production" environment. Having a 1-node cluster does not ensure any amount of data persistence.
WARNING: Scaling your cluster to or from a 1-node configuration may result in data loss.