3 changes: 2 additions & 1 deletion .github/workflows/acceptance-tests-runner.yml
@@ -427,7 +427,8 @@ jobs:
env:
MONGODB_ATLAS_LAST_VERSION: ${{ needs.get-provider-version.outputs.provider_version }}
HTTP_MOCKER_CAPTURE: 'true'
ACCTEST_REGEX_RUN: ${{ inputs.reduced_tests == true && '^TestAccMockable' || env.ACCTEST_REGEX_RUN }}
# TEMPORARY: Don't merge, will be reverted in the last commit before merging
ACCTEST_REGEX_RUN: '^TestAccAdvancedCluster_effective'
Comment on lines +430 to +431

Copilot AI Nov 25, 2025


Temporary test filter creates a risk if merged. The hardcoded regex replaces conditional logic and would cause only effective fields tests to run in all scenarios.

Suggested change
# TEMPORARY: Don't merge, will be reverted in the last commit before merging
ACCTEST_REGEX_RUN: '^TestAccAdvancedCluster_effective'
# Run all advanced cluster acceptance tests
ACCTEST_REGEX_RUN: ''

ACCTEST_PACKAGES: ./internal/service/advancedcluster
run: make testacc

2 changes: 1 addition & 1 deletion .github/workflows/code-health.yml
@@ -88,7 +88,7 @@ jobs:
cat doc.repo.patch
exit 1
call-acceptance-tests-workflow:
needs: [build, lint, shellcheck, unit-test, generate-doc-check]
# needs: [build, lint, shellcheck, unit-test, generate-doc-check] # TEMPORARY: Don't merge, will be reverted in the last commit before merging

Copilot AI Nov 25, 2025


Commented-out job dependency creates a risk if this temporary change is merged. Consider using a feature flag or environment variable check instead of commenting out the dependency.

Suggested change
# needs: [build, lint, shellcheck, unit-test, generate-doc-check] # TEMPORARY: Don't merge, will be reverted in the last commit before merging
needs: [build, lint, shellcheck, unit-test, generate-doc-check]

secrets: inherit
uses: ./.github/workflows/acceptance-tests.yml
with:
2 changes: 1 addition & 1 deletion docs/data-sources/advanced_cluster.md
@@ -176,7 +176,7 @@ data "mongodbatlas_advanced_cluster" "this" {

* `project_id` - (Required) The unique ID for the project to create the cluster.
* `name` - (Required) Name of the cluster as it appears in Atlas. Once the cluster is created, its name cannot be changed.
* `use_effective_fields` - (Optional) Controls how hardware specification fields are returned in the response. When set to true, the non-effective specs (`electable_specs`, `read_only_specs`, `analytics_specs`) fields return the hardware specifications that the client provided. When set to false (default), the non-effective specs fields show the **current** hardware specifications. Cluster auto-scaling is the primary cause for differences between initial and current hardware specifications. **Note:** Effective specs (`effective_electable_specs`, `effective_read_only_specs`, `effective_analytics_specs`) are always returned regardless of the flag value and always report the **current** hardware specifications. See the resource documentation for [Auto-Scaling with Effective Fields](../resources/advanced_cluster.md#auto-scaling-with-effective-fields) for more details.
* `use_effective_fields` - (Optional) Controls how hardware specification fields are returned in the response. When set to true, the non-effective specs (`electable_specs`, `read_only_specs`, `analytics_specs`) fields return the hardware specifications that the client provided. When set to false (default), the non-effective specs fields show the **current** hardware specifications. Cluster auto-scaling is the primary cause for differences between initial and current hardware specifications. It does not apply to tenant or flex clusters. **Note:** Effective specs (`effective_electable_specs`, `effective_read_only_specs`, `effective_analytics_specs`) are always returned regardless of the flag value and always report the **current** hardware specifications. See the resource documentation for [Auto-Scaling with Effective Fields](../resources/advanced_cluster.md#auto-scaling-with-effective-fields) for more details.
Collaborator


When we say "does not apply": are these fields not returned for M0/Flex clusters? Are they null, or returned with pre-selected values?


## Attributes Reference

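A minimal sketch of the flag in use on this data source, assuming the behavior described in the doc line above; the project ID and cluster name are placeholders, and the output path for the effective specs is an assumption about where they surface in the schema, not something taken from this PR:

```terraform
# Sketch only: opt in to effective-fields behavior on the single-cluster data source.
data "mongodbatlas_advanced_cluster" "example" {
  project_id           = "664619d870c247237f4b86a6" # placeholder project ID
  name                 = "cluster-example"          # placeholder cluster name
  use_effective_fields = true                       # non-effective specs echo the client-provided values
}

# Effective specs always report the current (possibly auto-scaled) hardware,
# regardless of the flag; the exact path below is an assumed schema location.
output "current_electable_specs" {
  value = data.mongodbatlas_advanced_cluster.example.replication_specs[0].region_configs[0].effective_electable_specs
}
```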
2 changes: 1 addition & 1 deletion docs/data-sources/advanced_clusters.md
@@ -204,7 +204,7 @@ data "mongodbatlas_advanced_clusters" "this" {
## Argument Reference

* `project_id` - (Required) The unique ID for the project to get the clusters.
* `use_effective_fields` - (Optional) Controls how hardware specification fields are returned in the response. When set to true, the non-effective specs (`electable_specs`, `read_only_specs`, `analytics_specs`) fields return the hardware specifications that the client provided. When set to false (default), the non-effective specs fields show the **current** hardware specifications. Cluster auto-scaling is the primary cause for differences between initial and current hardware specifications. **Note:** Effective specs (`effective_electable_specs`, `effective_read_only_specs`, `effective_analytics_specs`) are always returned regardless of the flag value and always report the **current** hardware specifications. See the resource documentation for [Auto-Scaling with Effective Fields](../resources/advanced_cluster.md#auto-scaling-with-effective-fields) for more details.
* `use_effective_fields` - (Optional) Controls how hardware specification fields are returned in the response. When set to true, the non-effective specs (`electable_specs`, `read_only_specs`, `analytics_specs`) fields return the hardware specifications that the client provided. When set to false (default), the non-effective specs fields show the **current** hardware specifications. Cluster auto-scaling is the primary cause for differences between initial and current hardware specifications. It does not apply to tenant or flex clusters. **Note:** Effective specs (`effective_electable_specs`, `effective_read_only_specs`, `effective_analytics_specs`) are always returned regardless of the flag value and always report the **current** hardware specifications. See the resource documentation for [Auto-Scaling with Effective Fields](../resources/advanced_cluster.md#auto-scaling-with-effective-fields) for more details.
Collaborator


same as above


## Attributes Reference

4 changes: 2 additions & 2 deletions docs/resources/advanced_cluster.md
@@ -575,7 +575,7 @@ Refer to the following for full privatelink endpoint connection string examples:
* `redact_client_log_data` - (Optional) Flag that enables or disables log redaction, see the [manual](https://www.mongodb.com/docs/manual/administration/monitoring/#log-redaction) for more information. Use this in conjunction with Encryption at Rest and TLS/SSL (Transport Encryption) to assist compliance with regulatory requirements. **Note**: Changing this setting on a cluster will trigger a rolling restart as soon as the cluster is updated.
* `config_server_management_mode` - (Optional) Config Server Management Mode for creating or updating a sharded cluster. Valid values are `ATLAS_MANAGED` (default) and `FIXED_TO_DEDICATED`. When configured as `ATLAS_MANAGED`, Atlas may automatically switch the cluster's config server type for optimal performance and savings. When configured as `FIXED_TO_DEDICATED`, the cluster will always use a dedicated config server. To learn more, see the [Sharded Cluster Config Servers documentation](https://dochub.mongodb.org/docs/manual/core/sharded-cluster-config-servers/).
- `delete_on_create_timeout`- (Optional) Indicates whether to delete the resource being created if a timeout is reached when waiting for completion. When set to `true` and timeout occurs, it triggers the deletion and returns immediately without waiting for deletion to complete. When set to `false`, the timeout will not trigger resource deletion. If you suspect a transient error when the value is `true`, wait before retrying to allow resource deletion to finish. Default is `true`.
* `use_effective_fields` - (Optional) Controls how hardware specification fields are returned in the response. When set to true, the non-effective specs (`electable_specs`, `read_only_specs`, `analytics_specs`) fields return the hardware specifications that the client provided. When set to false (default), the non-effective specs fields show the **current** hardware specifications. Cluster auto-scaling is the primary cause for differences between initial and current hardware specifications. This opt-in feature enhances auto-scaling workflows by eliminating the need for `lifecycle.ignore_changes` blocks and preventing plan drift from Atlas-managed changes. This attribute will be deprecated in provider version 2.x and removed in 3.x when the new behavior becomes default. See [Auto-Scaling with Effective Fields](#auto-scaling-with-effective-fields) for more details.
* `use_effective_fields` - (Optional) Controls how hardware specification fields are returned in the response. When set to true, the non-effective specs (`electable_specs`, `read_only_specs`, `analytics_specs`) fields return the hardware specifications that the client provided. When set to false (default), the non-effective specs fields show the **current** hardware specifications. Cluster auto-scaling is the primary cause for differences between initial and current hardware specifications. This opt-in feature enhances auto-scaling workflows by eliminating the need for `lifecycle.ignore_changes` blocks and preventing plan drift from Atlas-managed changes. It does not apply to tenant or flex clusters. This attribute will be deprecated in provider version 2.x and removed in 3.x when the new behavior becomes default. See [Auto-Scaling with Effective Fields](#auto-scaling-with-effective-fields) for more details.
Collaborator


same as above


### bi_connector_config

@@ -779,7 +779,7 @@ replication_specs = [

When auto-scaling is enabled, there are two approaches to manage your cluster configuration with Terraform:

**Option 1 (Recommended):** Use `use_effective_fields = true` to enable the new effective fields behavior. With this option, Atlas-managed auto-scaling changes won't cause plan drift, eliminating the need for `lifecycle` ignore customizations. You can read scaled values using the `effective_electable_specs`, `effective_analytics_specs`, and `effective_read_only_specs` attributes in the `mongodbatlas_advanced_cluster` data source. See [Auto-Scaling with Effective Fields](#auto-scaling-with-effective-fields) for details.
**Option 1 (Recommended):** Use `use_effective_fields = true` to enable the new effective fields behavior. With this option, Atlas-managed auto-scaling changes won't cause plan drift, eliminating the need for `lifecycle` ignore customizations. Auto-scaling features independently ignore specific Terraform-configured fields: compute auto-scaling ignores only `instance_size`, disk auto-scaling ignores only `disk_size_gb` and `disk_iops`, while both together ignore all three fields. You can read the actual scaled values using the `effective_electable_specs` and `effective_read_only_specs` attributes in the `mongodbatlas_advanced_cluster` data source. See [Auto-Scaling with Effective Fields](#auto-scaling-with-effective-fields) for details.

**Option 2:** If not using `use_effective_fields`, use a lifecycle ignore customization to prevent unintended changes. To explicitly change `disk_size_gb` or `instance_size` values, comment out the `lifecycle` block and run `terraform apply`. Please be sure to uncomment the `lifecycle` block once done to prevent any accidental changes.

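A short sketch of Option 1 described in this hunk — auto-scaling together with `use_effective_fields = true`, so Atlas-driven scaling does not surface as plan drift. The project ID, names, and instance sizes are placeholders; the schema shape follows the `replication_specs = [...]` form shown above:

```terraform
resource "mongodbatlas_advanced_cluster" "example" {
  project_id           = "664619d870c247237f4b86a6" # placeholder project ID
  name                 = "auto-scaling-example"
  cluster_type         = "REPLICASET"
  use_effective_fields = true # opt in: Atlas auto-scaling no longer causes plan drift

  replication_specs = [{
    region_configs = [{
      provider_name = "AWS"
      region_name   = "US_EAST_1"
      priority      = 7
      electable_specs = {
        instance_size = "M10" # configured size; compute auto-scaling may grow it up to the max below
        node_count    = 3
      }
      auto_scaling = {
        compute_enabled            = true
        compute_scale_down_enabled = true
        compute_min_instance_size  = "M10"
        compute_max_instance_size  = "M30"
        disk_gb_enabled            = true
      }
    }]
  }]
}
```

The scaled values can then be read from the `effective_electable_specs` attribute of the `mongodbatlas_advanced_cluster` data source, as in the data-source sketch earlier, without any `lifecycle.ignore_changes` block.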
6 changes: 5 additions & 1 deletion internal/service/advancedcluster/common.go
@@ -85,10 +85,14 @@ func GenerateFCVPinningWarningForRead(fcvPresentInState bool, apiRespFCVExpirati
return nil
}

func IsFlex(replicationSpecs *[]admin.ReplicationSpec20240805) bool {
func isFlex(replicationSpecs *[]admin.ReplicationSpec20240805) bool {
return getProviderName(replicationSpecs) == flexcluster.FlexClusterType
}

func isTenant(replicationSpecs *[]admin.ReplicationSpec20240805) bool {
return getProviderName(replicationSpecs) == constant.TENANT
}

func getProviderName(replicationSpecs *[]admin.ReplicationSpec20240805) string {
regionConfig := getRegionConfig(replicationSpecs)
if regionConfig == nil {
42 changes: 3 additions & 39 deletions internal/service/advancedcluster/common_admin_sdk.go
@@ -16,42 +16,6 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/flexcluster"
)

func CreateCluster(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, req *admin.ClusterDescription20240805, waitParams *ClusterWaitParams) *admin.ClusterDescription20240805 {
Member Author

@lantoli lantoli Nov 24, 2025


moved to resource.go as it's only used there now

var (
pauseAfter = req.GetPaused()
clusterResp *admin.ClusterDescription20240805
)
if pauseAfter {
req.Paused = nil
}
clusterResp = createClusterLatest(ctx, diags, client, req, waitParams)
if diags.HasError() {
return nil
}
if pauseAfter {
clusterResp = updateCluster(ctx, diags, client, &pauseRequest, waitParams, operationPauseAfterCreate)
}
return clusterResp
}

func createClusterLatest(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, req *admin.ClusterDescription20240805, waitParams *ClusterWaitParams) *admin.ClusterDescription20240805 {
_, _, err := client.AtlasV2.ClustersApi.CreateCluster(ctx, waitParams.ProjectID, req).Execute()
if err != nil {
addErrorDiag(diags, operationCreate, defaultAPIErrorDetails(waitParams.ClusterName, err))
return nil
}
return AwaitChanges(ctx, client, waitParams, operationCreate, diags)
}

func updateCluster(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, req *admin.ClusterDescription20240805, waitParams *ClusterWaitParams, operationName string) *admin.ClusterDescription20240805 {
_, _, err := client.AtlasV2.ClustersApi.UpdateCluster(ctx, waitParams.ProjectID, waitParams.ClusterName, req).Execute()
if err != nil {
addErrorDiag(diags, operationName, defaultAPIErrorDetails(waitParams.ClusterName, err))
return nil
}
return AwaitChanges(ctx, client, waitParams, operationName, diags)
}

// ProcessArgs.ClusterAdvancedConfig is managed through create/updateCluster APIs instead of /processArgs APIs but since corresponding TF attributes
// belong in the advanced_configuration attribute we still need to check for any changes
func UpdateAdvancedConfiguration(ctx context.Context, diags *diag.Diagnostics, client *config.MongoDBClient, p *ProcessArgs, waitParams *ClusterWaitParams) (latest *admin.ClusterDescriptionProcessArgs20240805, changed bool) {
Expand Down Expand Up @@ -141,7 +105,7 @@ func DeleteCluster(ctx context.Context, diags *diag.Diagnostics, client *config.
return
}
}
AwaitChanges(ctx, client, waitParams, operationDelete, diags)
_ = AwaitChanges(ctx, client, waitParams, operationDelete, diags)
}

func DeleteClusterNoWait(client *config.MongoDBClient, projectID, clusterName string, isFlex bool) func(ctx context.Context) error {
@@ -160,9 +160,9 @@ func DeleteClusterNoWait(client *config.MongoDBClient, projectID, clusterName st
}
}

func GetClusterDetails(ctx context.Context, diags *diag.Diagnostics, projectID, clusterName string, client *config.MongoDBClient, fcvPresentInState bool) (clusterDesc *admin.ClusterDescription20240805, flexClusterResp *admin.FlexClusterDescription20241113) {
func GetClusterDetails(ctx context.Context, diags *diag.Diagnostics, projectID, clusterName string, client *config.MongoDBClient, fcvPresentInState, useEffectiveFields bool) (clusterDesc *admin.ClusterDescription20240805, flexClusterResp *admin.FlexClusterDescription20241113) {
isFlex := false
clusterDesc, resp, err := client.AtlasV2.ClustersApi.GetCluster(ctx, projectID, clusterName).Execute()
clusterDesc, resp, err := client.AtlasV2.ClustersApi.GetCluster(ctx, projectID, clusterName).UseEffectiveInstanceFields(useEffectiveFields).Execute()
if err != nil {
if validate.StatusNotFound(resp) || admin.IsErrorCode(err, ErrorCodeClusterNotFound) {
return nil, nil
21 changes: 11 additions & 10 deletions internal/service/advancedcluster/common_await_changes.go
@@ -27,10 +27,11 @@ var (
)

type ClusterWaitParams struct {
ProjectID string
ClusterName string
Timeout time.Duration
IsDelete bool
ProjectID string
ClusterName string
Timeout time.Duration
IsDelete bool
UseEffectiveFields bool
}

func AwaitChangesUpgrade(ctx context.Context, client *config.MongoDBClient, waitParams *ClusterWaitParams, errorLocator string, diags *diag.Diagnostics) *admin.ClusterDescription20240805 {
Expand All @@ -56,7 +57,7 @@ func AwaitChanges(ctx context.Context, client *config.MongoDBClient, waitParams
extraPending = append(extraPending, retrystrategy.RetryStrategyIdleState)
}
clusterName := waitParams.ClusterName
stateConf := createStateChangeConfig(ctx, api, waitParams.ProjectID, clusterName, targetState, waitParams.Timeout, extraPending...)
stateConf := createStateChangeConfig(ctx, api, waitParams, targetState, extraPending...)
clusterAny, err := stateConf.WaitForStateContext(ctx)
if err != nil {
if admin.IsErrorCode(err, ErrorCodeClusterNotFound) && isDelete {
Expand All @@ -76,7 +77,7 @@ func AwaitChanges(ctx context.Context, client *config.MongoDBClient, waitParams
return cluster
}

func createStateChangeConfig(ctx context.Context, api admin.ClustersApi, projectID, name, targetState string, timeout time.Duration, extraPending ...string) retry.StateChangeConf {
func createStateChangeConfig(ctx context.Context, api admin.ClustersApi, waitParams *ClusterWaitParams, targetState string, extraPending ...string) retry.StateChangeConf {
return retry.StateChangeConf{
Pending: slices.Concat([]string{
retrystrategy.RetryStrategyCreatingState,
Expand All @@ -87,17 +88,17 @@ func createStateChangeConfig(ctx context.Context, api admin.ClustersApi, project
retrystrategy.RetryStrategyDeletingState,
}, extraPending),
Target: []string{targetState},
Refresh: ResourceRefreshFunc(ctx, name, projectID, api),
Timeout: timeout,
Refresh: ResourceRefreshFunc(ctx, waitParams, api),
Timeout: waitParams.Timeout,
MinTimeout: RetryMinTimeout,
Delay: RetryDelay,
PollInterval: RetryPollInterval,
}
}

func ResourceRefreshFunc(ctx context.Context, name, projectID string, api admin.ClustersApi) retry.StateRefreshFunc {
func ResourceRefreshFunc(ctx context.Context, waitParams *ClusterWaitParams, api admin.ClustersApi) retry.StateRefreshFunc {
return func() (any, string, error) {
cluster, resp, err := api.GetCluster(ctx, projectID, name).Execute()
cluster, resp, err := api.GetCluster(ctx, waitParams.ProjectID, waitParams.ClusterName).UseEffectiveInstanceFields(waitParams.UseEffectiveFields).Execute()
if err != nil && strings.Contains(err.Error(), "reset by peer") {
return nil, retrystrategy.RetryStrategyRepeatingState, nil
}
2 changes: 1 addition & 1 deletion internal/service/advancedcluster/data_source.go
@@ -45,7 +45,7 @@ func (d *ds) Read(ctx context.Context, req datasource.ReadRequest, resp *datasou
func (d *ds) readCluster(ctx context.Context, diags *diag.Diagnostics, modelDS *TFModelDS) *TFModelDS {
clusterName := modelDS.Name.ValueString()
projectID := modelDS.ProjectID.ValueString()
clusterResp, flexClusterResp := GetClusterDetails(ctx, diags, projectID, clusterName, d.Client, false)
clusterResp, flexClusterResp := GetClusterDetails(ctx, diags, projectID, clusterName, d.Client, false, modelDS.UseEffectiveFields.ValueBool())
if diags.HasError() {
return nil
}