### Terraform Version

```
Terraform v1.13.1
```
### Terraform Configuration Files
`tests/setup/main.tf`

```hcl
terraform {
  required_version = ">= 0.13"
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3"
    }
  }
}

resource "random_pet" "name" {
  length = 2
  prefix = "tftest-delete-me"
}

output "name" {
  value = random_pet.name.id
}

data "aws_vpc" "this" {
  id = "vpc-004648e92e614c452"
}

output "vpc_id" {
  value = data.aws_vpc.this.id
}

resource "aws_subnet" "this" {
  # Hardcoding the CIDR blocks here means that only one instance of this test
  # can be run at a time. I don't see this as a problem as the test will be run
  # infrequently.
  for_each = {
    "eu-west-1b" : {
      cidr_block = "172.30.132.0/28"
    },
    "eu-west-1c" : {
      cidr_block = "172.30.133.0/28"
    }
  }

  vpc_id            = data.aws_vpc.this.id
  cidr_block        = each.value.cidr_block
  availability_zone = each.key

  tags = {
    Name = "${random_pet.name.id}-${each.key}"
  }
}

resource "aws_db_subnet_group" "this" {
  name       = random_pet.name.id
  subnet_ids = [for sn in aws_subnet.this : sn.id]
}

output "subnet_group" {
  value = aws_db_subnet_group.this
}

resource "random_password" "this" {
  length  = 16
  special = false
}

output "password" {
  value     = random_password.this.result
  sensitive = true
}
```
`tests/create_rds.tftest.hcl`

```hcl
test {
  parallel = true
}

run "setup_tests" {
  # This will create a DB subnet group that can be passed db_subnet_group_name
  # input of the module under test
  module {
    source = "./tests/setup"
  }
}

run "rds_without_dns_records" {
  command   = apply
  state_key = "rds_without_dns_records"

  variables {
    environment                 = "${run.setup_tests.name}0"
    name                        = "sample"
    database_name               = "sample"
    instance_class              = "db.t4g.small"
    engine                      = "postgres"
    engine_version              = "15"
    allow_major_version_upgrade = true
    port                        = "5432"
    storage_type                = "gp3"
    iops                        = 12000
    allocated_storage           = 400
    apply_immediately           = true
    max_allocated_storage       = 500
    enable_replica              = true
    replica_count               = 0
    username                    = "dboperator"
    password                    = run.setup_tests.password
    ca_cert_identifier          = "rds-ca-rsa2048-g1"
    backup_retention_period     = "1"
    create_db_option_group      = false
    option_group_name           = "default:postgres-15"
    create_dns_records          = false
    vpc_id                      = run.setup_tests.vpc_id
    security_groups             = ["sg-080ef5d2e7abe84bc"]
    db_subnet_group_name        = run.setup_tests.subnet_group.name
    skip_final_snapshot         = true
  }
}

# If master db instance wants to create parameters with parameter group, but read replica does not,
# read replica parameter group should not be updated and totally ignored.
run "rds_dont_add_read_replica_parameter_groups" {
  command   = apply
  state_key = "rds_dont_add_read_replica_parameter_groups"

  variables {
    environment                 = "${run.setup_tests.name}1"
    name                        = "sample"
    database_name               = "sample"
    instance_class              = "db.t4g.small"
    engine                      = "postgres"
    engine_version              = "15"
    allow_major_version_upgrade = true
    port                        = "5432"
    storage_type                = "gp3"
    iops                        = 12000
    allocated_storage           = 400
    apply_immediately           = true
    max_allocated_storage       = 500
    enable_replica              = true
    replica_count               = 1
    username                    = "dboperator"
    password                    = run.setup_tests.password
    ca_cert_identifier          = "rds-ca-rsa2048-g1"
    backup_retention_period     = "1"
    create_db_option_group      = true
    parameter_group_name        = "rds-param-group"
    parameters                  = []
    create_dns_records          = false
    vpc_id                      = run.setup_tests.vpc_id
    security_groups             = ["sg-080ef5d2e7abe84bc"]
    db_subnet_group_name        = run.setup_tests.subnet_group.name
    skip_final_snapshot         = true
  }

  assert {
    condition     = length(module.rds_replica) == 1
    error_message = "Invalid parameter group name being set"
  }

  assert {
    condition     = module.rds_replica[0].db_parameter_group_id == null
    error_message = "Invalid replica parameter group name being set"
  }

  assert {
    condition     = strcontains(module.rds_instance.db_parameter_group_id, "rds-param-group")
    error_message = "Invalid master parameter group name being set"
  }
}

# other run blocks appear here, for brevity I've only shown two.
```
### Debug Output
...debug output, or link to a gist...
### Expected Behavior
We have a Terraform module that we use to create AWS RDS instances. It's basically a thin wrapper around https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest which we use to put guardrails around our use of RDS.

We use `terraform test` to test the module and, following the successful resolution of #37156 (comment), I've been running these tests in parallel using Terraform 1.13. My test file `create_rds.tftest.hcl` contains 6 run blocks, all of which cause an RDS instance to get created.
Notice that in `tests/setup/main.tf` we create an `aws_db_subnet_group`:

```hcl
resource "aws_db_subnet_group" "this" {
  name       = random_pet.name.id
  subnet_ids = [for sn in aws_subnet.this : sn.id]
}
```
The setup module is instantiated in `create_rds.tftest.hcl`:

```hcl
run "setup_tests" {
  # This will create a DB subnet group that can be passed db_subnet_group_name
  # input of the module under test
  module {
    source = "./tests/setup"
  }
}
```
and then used in each of our run blocks, e.g.:
```hcl
run "rds_without_dns_records" {
  command   = apply
  state_key = "rds_without_dns_records"

  variables {
    # removed other vars for brevity
    # ...
    db_subnet_group_name = run.setup_tests.subnet_group.name
  }
}
```
My expectation is that all the tests succeed, but that is not happening.
### Actual Behavior
The tests fail during the teardown phase with the following error:
```
│ Error: deleting RDS Subnet Group (tftest-delete-me-normal-haddock): operation error RDS: DeleteDBSubnetGroup, https response error StatusCode: 400,
│ RequestID: ef74d1c3-b0df-436b-8c83-22b57b016839, InvalidDBSubnetGroupStateFault:
│ Cannot delete the subnet group 'tftest-delete-me-normal-haddock' because at least one database
│ instance: sample-db000-tftest-delete-me-normal-haddock2 is still using it.
```
Note that the DB subnet group referred to in the error is `aws_db_subnet_group.this`, created by `tests/setup/main.tf`.
I assume that once the resources created by one of the run blocks have been torn down, Terraform determines that it can safely tear down the resources created by the setup module as well. That determination is wrong, however, because the resources created by the other five run blocks still exist and are still using that DB subnet group. Hence the error.
My question is: is this expected behaviour? Should I be creating a separate DB subnet group for each run block, or should Terraform determine that the shared resource is still in use and not attempt to remove it?
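For what it's worth, the per-run workaround I have in mind would look roughly like the sketch below. The `suffix` variable is hypothetical (it is not part of the setup module shown above), and the hardcoded subnet CIDRs would need similar parameterisation, since two instances of the setup module cannot both create the same subnets:

```hcl
# tests/setup/main.tf (sketch only, not the actual module): a hypothetical
# "suffix" input so that each test run gets its own uniquely named
# DB subnet group.
variable "suffix" {
  type    = string
  default = ""
}

resource "aws_db_subnet_group" "this" {
  name       = "${random_pet.name.id}${var.suffix}"
  subnet_ids = [for sn in aws_subnet.this : sn.id]
}
```

Each run block would then need its own `setup_*` companion run passing a distinct `suffix`, at the cost of duplicating the setup resources six times, which is why I'd prefer to know whether the current teardown behaviour is intended.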
### Steps to Reproduce
- Create a module which is a thin wrapper around https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/6.1.1
- Create a `tests` folder in that module and add the files `tests/setup/main.tf` and `tests/create_rds.tftest.hcl` which I provided above.
- Run `terraform test` on the module. Observe all the RDS instances getting created. I am expecting that upon teardown the error explained above will occur.
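Concretely, the steps above amount to something like the following (the module path is illustrative, and `-verbose` is optional; it just makes the run output easier to inspect):

```shell
# From a checkout of the wrapper module (path is hypothetical)
cd rds-wrapper-module
terraform init
terraform test -verbose
```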
### Additional Context
No response
### References
No response
### Generative AI / LLM assisted development?
No response