Unable to use Terraform modules stored in a private S3 bucket #3294

Open

carlosjgp opened this issue Jul 25, 2024 · 6 comments

Labels: bug (Something isn't working)

@carlosjgp

Describe the bug

Storing Terraform modules in a private S3 bucket is not supported by Terragrunt

This works just fine with Terraform

Steps To Reproduce

Upload a Terraform module to a private S3 bucket that can only be accessed with AWS credentials

terraform {
  source = "s3::https://s3-<REGION>.amazonaws.com/<BUCKET>/<MODULE>.zip"
}

Expected behavior

Similar behaviour to Terraform, where this works just fine

Maybe make use of the iam_role attribute to download the modules from S3 (see the sketch below the docs quote)
https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#iam_role

IAM role that Terragrunt should assume prior to invoking Terraform.
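
For illustration, a minimal terragrunt.hcl sketch of what that could look like (the region, bucket, module and role ARN are placeholders; whether iam_role is honoured for the module download is exactly what this issue asks for):

# Hypothetical sketch: the hope is that the role assumed via iam_role would also
# be used when fetching the module archive from the private bucket
iam_role = "arn:aws:iam::<ACCOUNT_ID>:role/terragrunt"

terraform {
  source = "s3::https://s3-<REGION>.amazonaws.com/<BUCKET>/<MODULE>.zip"
}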

Nice to haves

  • Terminal output
  • Screenshots

Versions

  • Terragrunt version: v0.63.5
  • OpenTofu/Terraform version: v1.9.2
  • Environment details (Ubuntu 20.04, Windows 10, etc.): 22.04.1-Ubuntu

Additional context

More problems with go-getter

@carlosjgp carlosjgp added the bug Something isn't working label Jul 25, 2024
@denis256
Copy link
Member

I think an approach worth investigating is updating the getter to inject session details, or replacing the existing getter with one that can use AWS credentials

References:
https://github.com/gruntwork-io/terragrunt/blob/master/cli/commands/terraform/download_source.go#L191

@carlosjgp
Author

To be honest, if I could understand which credential chain is being used, I could solve this problem myself, but it seems to be a super obscure thing at the moment

It works fine with Terraform but not with Terragrunt. This could have been a blocker to adopting Terragrunt if we weren't already using it heavily... Creating a publicly accessible bucket is not an option, and simple user/password auth is a heavy step down in security from IAM

It's also awful to debug 😓
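
For reference, upstream go-getter documents S3 authentication via query-string parameters on the source URL, so, assuming Terragrunt's vendored go-getter behaves like upstream, a sketch like the one below should at least make the credential source explicit (values are placeholders, and putting keys in configuration is clearly not a real fix):

terraform {
  # aws_access_key_id / aws_access_key_secret are the query parameters documented
  # by upstream go-getter; they take priority over the default credential chain
  source = "s3::https://s3-<REGION>.amazonaws.com/<BUCKET>/<MODULE>.zip?aws_access_key_id=<KEY_ID>&aws_access_key_secret=<SECRET>"
}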


This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for raising this issue.

@github-actions github-actions bot added the stale Stale label Feb 14, 2025
@carlosjgp
Author

If anyone comes across this... the only workaround I found is to add a configuration to your ~/.aws/config where the default profile has access to the S3 bucket that hosts your modules (tested, it works)
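
A minimal sketch of what that ~/.aws/config could look like (the SSO start URL, account ID, role name and region below are made-up values for illustration):

# ~/.aws/config — default profile with access to the modules bucket (values are assumptions)
[default]
sso_start_url  = https://my-org.awsapps.com/start
sso_region     = eu-west-1
sso_account_id = 111111111111
sso_role_name  = terragrunt
region         = eu-west-1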

But this is not the ideal scenario!
I want to be able to control the credentials for this process better; this solution forces me to change every other profile I have, as well as my entry point for privilege chaining locally

@yhakbar
Collaborator

yhakbar commented Mar 24, 2025

Hey @carlosjgp ,

Have you tried using the auth provider command?

@yhakbar yhakbar removed the stale Stale label Mar 24, 2025
@carlosjgp
Author

Hey @carlosjgp ,

Have you tried using the auth provider command?

I haven't explored this option, but it looks like it would override my setup for the AWS credentials used to run Terraform itself

#-----------------------------------------------------------------------------------------------------------------------
# Assumable AWS role for terraform/terragrunt
#-----------------------------------------------------------------------------------------------------------------------
# When using terraform/terragrunt with an AWS SSO role to assume a role in another account set the AWS_SDK_LOAD_CONFIG env variable in zshrc or equivalent
# The Go AWS SDK does not process the AWS SSO 'config' file by default
# export AWS_SDK_LOAD_CONFIG=1
# https://github.com/benkehoe/aws-sso-util
# if this is not set, credential chain errors will appear
# eg Error: NoCredentialProviders: no valid providers in chain. Deprecated.
#   For verbose messaging see aws.Config.CredentialsChainVerboseErrors
iam_role = "arn:aws:iam::${local.aws_account_id}:role/terragrunt"

I'm only interested in the "modules get" step, which runs in a different thread, so I can't specify which AWS profile it uses

It's not a problem in the CI/CD pipeline because there is only one profile, "default", but locally I would like to have a different "default" profile for... well, convenience

To illustrate this

On CI

CI executor --> Assume role --> Sets AWS profile "default" (can fetch modules from S3 and assume all "terragrunt" roles in all accounts)

It works

Locally

aws sso login --profile terragrunt (can assume terragrunt in all accounts and download from S3)

It doesn't work

Rename profile "terragrunt" to "default"

aws sso login

It works

I hope this helps
