In the realm of large enterprises, employing a monolithic Terraform setup becomes impractical. Dividing the infrastructure deployments into separate components not only reduces code complexity but also confines the blast radius of any change, as each component resides in its distinct Terraform state file. However, this segmentation introduces a new challenge: how can dependent configurations be dynamically shared among these deployments? Unlike in a monolithic deployment where resource attributes facilitate information exchange and dependency graph construction, this capability is lost when deployments are separated. This repository serves as a demonstration of effectively sharing configuration across different Terraform deployments.
The scenario we aim to address involves a standard hub-and-spoke architecture. In this setup, the hub represents a centralized virtual network housing shared services like firewalls, Azure Bastion jumpboxes, and gateways to on-premises infrastructure. Conversely, the spokes are isolated virtual networks peered back to the hub, typically hosting application workloads. This architecture, a common pattern in Azure, is extensively documented in the Azure Architecture Center. While our specific case involves a hub and two spokes, the solution should seamlessly scale to accommodate any number of spokes.
Note:
In the diagram below, the Identity virtual network functions as a spoke, peered to the hub. Though it typically wouldn't host workloads, it's utilized for other shared services such as Domain Controllers and DNS.
From an Infrastructure as Code (IaC) perspective, our objective is to deploy the hub and spoke architecture modularly. Each hub and spoke should be deployed separately using Terraform, resulting in three distinct state files.
Our aim is to dynamically share the hub virtual network's resource ID with the spoke deployments, enabling them to peer with the hub without resorting to hardcoding the hub virtual network resource ID in the spoke deployments.
As depicted, our solution leverages an Azure first-party service called App Configuration Store to serve as our central lightweight configuration database. This store supports both read and write operations and integrates seamlessly with Terraform via the `azurerm` provider. The hub virtual network's resource ID will reside in this configuration store, which the spoke deployments will read to establish peering with the hub.

We opt for Azure App Configuration Store as our central configuration database due to its lightweight nature, minimal maintenance requirements, and support for key-value pairs accessible from anywhere. Additionally, it boasts features like availability zones, geo-replication, private endpoints, and, crucially, its integration with Terraform via the `azurerm` provider.
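To make the pattern concrete, here is a minimal sketch of writing and reading a key-value with the `azurerm` provider's native App Configuration resources. The key name matches the one used later, but the surrounding resource names are illustrative assumptions; the demo wraps this logic in a reusable module.

```hcl
# Hub deployment: write the hub virtual network resource ID to the store.
resource "azurerm_app_configuration_key" "hub_vnet_id" {
  configuration_store_id = azurerm_app_configuration.this.id
  key                    = "hub_vnet_id"
  value                  = azurerm_virtual_network.this.id
}

# Spoke deployment: read the same key back for use in peering.
data "azurerm_app_configuration_key" "hub_vnet_id" {
  configuration_store_id = var.app_configuration_store_id
  key                    = "hub_vnet_id"
}
```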
Note:
A similar setup could be achieved using Azure Storage Table or equivalent services.
While the `terraform_remote_state` data source enables dynamic sharing of configuration between deployments by exposing root-level outputs, it requires full access to the state snapshot, which poses a security risk because state files can contain sensitive information.
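For contrast, the remote-state approach looks like the sketch below (the backend settings are placeholders). Note that the reading deployment needs access to the entire hub state file, not just the single output it consumes.

```hcl
# Spoke deployment reading the hub's root-level outputs directly.
# Backend configuration values below are placeholders.
data "terraform_remote_state" "hub" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "hub.tfstate"
  }
}

# Works, but grants this deployment visibility of the full hub state.
locals {
  hub_vnet_id = data.terraform_remote_state.hub.outputs.hub_vnet_id
}
```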
As previously mentioned, dividing the infrastructure deployments into separate components reduces code complexity and mitigates the impact of changes by confining components to separate Terraform state files. To follow along, you will need:
- Azure CLI
- Terraform
- An Azure subscription where you have Owner permissions.
Though it is recommended to go through the manual steps to understand the process and the code, you can also run `. '.\scripts\Deploy.ps1'` in the root directory to deploy the full multi-deployment infrastructure.
In this step we will deploy the prerequisites for our hub and spoke architecture. This includes deploying the Azure App Configuration Store and granting the deploying principal the RBAC role "App Configuration Data Owner", which is used to perform data-plane operations on the App Configuration Store, such as reading and writing configuration key-values.
| Name | Type |
|------|------|
| `azurerm_app_configuration.this` | resource |
| `azurerm_resource_group.this` | resource |
| `azurerm_role_assignment.this` | resource |
| `random_bytes.this` | resource |
| `azurerm_client_config.current` | data source |
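Condensed, the pre-deployment configuration amounts to roughly the following sketch. Resource names, the region, and the random suffix length are illustrative assumptions; see the code in pre-deployment for the real values.

```hcl
data "azurerm_client_config" "current" {}

# Random suffix to keep the store name globally unique (length assumed).
resource "random_bytes" "this" {
  length = 4
}

# Resource group name and location are assumptions.
resource "azurerm_resource_group" "this" {
  name     = "rg-shared-configuration"
  location = "uksouth"
}

resource "azurerm_app_configuration" "this" {
  name                = "appcs-${random_bytes.this.hex}"
  resource_group_name = azurerm_resource_group.this.name
  location            = azurerm_resource_group.this.location
}

# Grant the deploying principal data-plane access to key-values.
resource "azurerm_role_assignment" "this" {
  scope                = azurerm_app_configuration.this.id
  role_definition_name = "App Configuration Data Owner"
  principal_id         = data.azurerm_client_config.current.object_id
}
```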
- Clone the repository:
git clone https://github.com/luke-taylor/terraform-shared-configuration
- Change directory to the pre-deployment folder:
cd terraform-shared-configuration/pre-deployment
- Authenticate to Azure using the Azure CLI:
az login
- Initialize the Terraform configuration:
terraform init
- Apply the Terraform configuration:
terraform apply -auto-approve
- Set an environment variable for `app_configuration_store_id` for the next deployments:
$env:APP_CONFIGURATION_STORE_ID = (terraform output app_configuration_store_id | ConvertFrom-Json)
- Change directory back to the root:
cd ..
The hub deployment will create the hub virtual network and some optional resources, such as Azure Bastion and Azure Firewall, which default to disabled unless specified otherwise. The virtual network resource ID will be written to the Azure App Configuration Store through the `terraform-azurerm-app-configuration-read-write` module, which handles the JSON encoding and decoding of the values for us.
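The write side of that module call might look roughly like the sketch below; the `key` and `value` input names are assumptions about the module's interface, so check its variables for the real ones.

```hcl
# Hub deployment: publish the hub virtual network resource ID under
# the "hub_vnet_id" key. Input names are illustrative assumptions.
module "write_data" {
  source                     = "./../../modules/terraform-azurerm-app-configuration-read-write"
  app_configuration_store_id = var.app_configuration_store_id

  key   = "hub_vnet_id"
  value = azurerm_virtual_network.this.id
}
```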
| Name | Source | Version |
|------|--------|---------|
| `write_data` | ./../../modules/terraform-azurerm-app-configuration-read-write | n/a |

| Name | Type |
|------|------|
| `azurerm_bastion_host.this` | resource |
| `azurerm_firewall.this` | resource |
| `azurerm_public_ip.bastion` | resource |
| `azurerm_public_ip.firewall` | resource |
| `azurerm_resource_group.this` | resource |
| `azurerm_virtual_network.this` | resource |
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| `app_configuration_store_id` | The resource ID of the configuration store. | string | n/a | yes |
| `bastion_enabled` | Whether to create the bastion. | bool | false | no |
| `firewall_enabled` | Whether to create the firewall. | bool | false | no |
- Change directory to the hub deployment folder:
cd deployments/hub
- Initialize the Terraform configuration:
terraform init
- Apply the Terraform configuration:
terraform apply -auto-approve
Note:
Be sure to set the `app_configuration_store_id` input variable to the value output by the pre-deployment step.
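For example, reusing the environment variable set during the pre-deployment step (alternatively, name that variable `TF_VAR_app_configuration_store_id` and Terraform will pick it up automatically):

```powershell
terraform apply -auto-approve -var "app_configuration_store_id=$env:APP_CONFIGURATION_STORE_ID"
```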
- Change directory back to the root:
cd ..\..
This deployment will create the spoke virtual network for the Identity resources and peer it with the hub virtual network, using the `terraform-azurerm-app-configuration-read-write` module to read the hub virtual network resource ID simply by specifying the key (`hub_vnet_id`) in the module inputs. We will also create a private DNS zone and link it back to the hub virtual network.
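The read side is symmetric. Again, the module input names and the `value` output are assumptions about its interface; the peering resource shows how the shared ID is consumed.

```hcl
# Identity spoke: look up the hub virtual network ID by its key.
# The module's input/output names are illustrative assumptions.
module "hub_virtual_network_id" {
  source                     = "./../../modules/terraform-azurerm-app-configuration-read-write"
  app_configuration_store_id = var.app_configuration_store_id
  key                        = "hub_vnet_id"
}

# Peer the spoke back to the hub using the shared value.
resource "azurerm_virtual_network_peering" "identity_to_hub" {
  name                      = "identity-to-hub"
  resource_group_name       = azurerm_resource_group.this.name
  virtual_network_name      = azurerm_virtual_network.this.name
  remote_virtual_network_id = module.hub_virtual_network_id.value
}
```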
| Name | Source | Version |
|------|--------|---------|
| `hub_virtual_network_id` | ./../../modules/terraform-azurerm-app-configuration-read-write | n/a |

| Name | Type |
|------|------|
| `azurerm_private_dns_zone.this` | resource |
| `azurerm_private_dns_zone_virtual_network_link.link_to_hub` | resource |
| `azurerm_private_dns_zone_virtual_network_link.link_to_identity` | resource |
| `azurerm_resource_group.this` | resource |
| `azurerm_virtual_network.this` | resource |
| `azurerm_virtual_network_peering.hub_to_identity` | resource |
| `azurerm_virtual_network_peering.identity_to_hub` | resource |
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| `app_configuration_store_id` | The resource ID of the configuration store. | string | n/a | yes |
- Change directory to the identity deployment folder:
cd deployments/identity
- Initialize the Terraform configuration:
terraform init
- Apply the Terraform configuration:
terraform apply -auto-approve
Note:
Be sure to set the `app_configuration_store_id` input variable to the value output by the pre-deployment step, as shown in the hub deployment above.
- Change directory back to the root:
cd ..\..
Finally, we will deploy the spoke virtual network for the Landing Zone resources and peer it with the hub virtual network, again using the `terraform-azurerm-app-configuration-read-write` module to read the hub virtual network resource ID by specifying the key (`hub_vnet_id`) in the module inputs.
| Name | Source | Version |
|------|--------|---------|
| `hub_virtual_network_id` | ./../../modules/terraform-azurerm-app-configuration-read-write | n/a |

| Name | Type |
|------|------|
| `azurerm_resource_group.this` | resource |
| `azurerm_virtual_network.this` | resource |
| `azurerm_virtual_network_peering.hub_to_landing_zone` | resource |
| `azurerm_virtual_network_peering.landing_zone_to_hub` | resource |
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| `app_configuration_store_id` | The resource ID of the configuration store. | string | n/a | yes |
- Change directory to the landing zone deployment folder:
cd deployments/landing_zone
- Initialize the Terraform configuration:
terraform init
- Apply the Terraform configuration:
terraform apply -auto-approve
Note:
Be sure to set the `app_configuration_store_id` input variable to the value output by the pre-deployment step, as shown in the hub deployment above.
- Change directory back to the root:
cd ..\..
Observe the deployed resources in the Azure Portal.
To remove all resources, run the following destroy script in the root directory:
. '.\scripts\Destroy.ps1'