sourcefuse/terraform-aws-arc-eks-saas

SourceFuse ARC EKS SAAS Reference Architecture

With ARC SaaS, we’re introducing a pioneering SaaS factory model based on control plane microservices and IaC modules that promises to revolutionize your SaaS journey.

Overview

SourceFuse Reference Architecture to implement a sample EKS Multi-Tenant SaaS Solution. This solution uses AWS CodePipeline to deploy all the control plane infrastructure components (Networking, Compute, Database, Monitoring & Logging, and Security) along with the control plane application via a Helm chart. It also sets up tenant CodeBuild projects, which are responsible for onboarding new silo and pooled tenants. Each tenant has its own infrastructure and application Helm chart, managed using GitOps tools such as Argo CD and Argo Workflows. The solution also enforces strict IAM policies and Kubernetes authorization policies for tenants to prevent cross-namespace access.

For more details, go through the EKS SaaS architecture documentation.

Requirements

  1. AWS Account
  2. Terraform CLI
  3. AWS CLI

Prerequisites

⚠️ Please ensure you are logged into AWS Account as an IAM user with Administrator privileges, not the root account user.

  1. If you don't have a registered domain in Route53, register one in Route53. (If your domain is registered with a third-party registrar, create a hosted zone in Route53 for your domain.)
  2. Generate a public certificate for the domain using AWS ACM. (Make sure to include both the root domain and the wildcard domain as fully qualified domain names when requesting the certificate, e.g. if the domain name is xyz.com, use both xyz.com and *.xyz.com in ACM.)
  3. The SES account should be set up in production mode and the domain should be verified. Generate SMTP credentials and store them in SSM Parameter Store as SecureString parameters. (Use the parameter names /{namespace}/ses_access_key and /{namespace}/ses_secret_access_key, where namespace is the project name.)
  4. Generate HTTPS Git credentials for your IAM user and store them in SSM Parameter Store as SecureString parameters. (Use the parameter names /{namespace}/https_connection_user and /{namespace}/https_connection_password, where namespace is the project name.)
  5. Create a CodePipeline connection to GitHub with your GitHub account and repository.
  6. If you want to use Client VPN to access the OpenSearch dashboard, enable it using the variable defined in the .tfvars file of the client-vpn folder. [Follow the doc to connect with the VPN.]
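As a sketch of steps 3 and 4, the SecureString parameters can be created with the AWS CLI. The namespace value arc-saas below is a hypothetical placeholder, and the commands are echoed rather than executed so you can review them before running them:

```shell
# Hypothetical project namespace; replace with your own project name.
NAMESPACE="arc-saas"

# Parameter names expected by this solution (see prerequisites 3 and 4).
NAMES=""
for p in ses_access_key ses_secret_access_key \
         https_connection_user https_connection_password; do
  NAME="/${NAMESPACE}/${p}"
  NAMES="$NAMES $NAME"
  # Echo the command for review; remove `echo` to actually create the parameter.
  echo aws ssm put-parameter \
    --name "$NAME" \
    --type SecureString \
    --value "<secret-value-here>"
done
```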

Setting up the environment

  • First, clone/fork the GitHub repository.
  • Based on your requirements, change the terraform.tfvars file in all the Terraform folders.
  • Update the variables namespace, environment, region, and domain_name in the scripts/replace-variable.sh file.
  • Execute the script using the command ./scripts/replace-variable.sh
  • Update the CodePipeline connection name (created in the prerequisites section), the GitHub repository name, and other required variables in the terraform.tfvars file of the terraform/core-infra-pipeline folder.
  • If the AWSServiceRoleForAmazonOpenSearchService role is already created in your AWS account, set the create_iam_service_linked_role variable to false in the tfvars file of terraform/opensearch; otherwise set it to true.
  • Update the ACM ARN (created in the prerequisites section) in the terraform.tfvars file of the terraform/istio folder.
  • Go through all the variables declared in the tfvars files and update them according to your requirements.
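As a minimal sketch of the kind of values these tfvars files expect (the actual variable set differs per folder, and all values below are hypothetical placeholders):

```hcl
# terraform.tfvars (illustrative fragment; variable names vary per folder)
namespace   = "arc-saas"   # hypothetical project name
environment = "dev"        # hypothetical environment
region      = "us-east-1"  # hypothetical AWS region
domain_name = "xyz.com"    # domain from the prerequisites section
```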

Once the variables are updated, we will set up the Terraform CodePipeline, which will deploy all control plane infrastructure components along with the control plane Helm chart. There are multiple options to do that -

  1. Using GitHub Actions ::

NOTE: We are using self-hosted GitHub runners to execute workflow actions. Please follow this document to set up runners.

  • First, create an IAM role for GitHub workflow actions and update the role name and other required variables (environment, etc.) in the workflow YAML files defined under the .github directory.
  • Add AWS_ACCOUNT_ID as a GitHub repository secret.
  • Execute the apply-bootstrap.yaml and apply-pipeline.yaml workflows by updating the GitHub events in these files. Currently these workflows are executed when a pull request is merged to the main branch, so change the triggers of these workflow files to suit your needs.
  • Push the code to your GitHub repository.

NOTE: If you want to run the other workflows, which run Terraform plans, make sure to update the workflow files. Terraform bootstrap is a one-time activity, so once the bootstrap workflow has executed, please disable it from running again.
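As an illustrative sketch (the actual trigger blocks in this repo's workflow files may differ), switching a workflow from running on merges to main to a manual trigger looks like this:

```yaml
# .github/workflows/apply-bootstrap.yaml (illustrative trigger change)
on:
  # Original: run when a pull request to main is closed (merged)
  # pull_request:
  #   branches: [main]
  #   types: [closed]
  # Changed: run only when triggered manually from the Actions tab
  workflow_dispatch:
```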

  2. Using Local ::

AWS CLI version 2 and Terraform CLI version 1.7 must be installed on your machine. If not installed, follow the documentation to install the AWS CLI and Terraform CLI.

  • Configure your terminal with AWS credentials.

  • Go to the terraform/bootstrap folder and run the following commands to deploy it -

    terraform init
    terraform plan 
    terraform apply
    
  • After that, go to terraform/core-infra-pipeline and update the bucket name, DynamoDB table name (created in the above step), and region in config.hcl.

NOTE: Update the config.hcl file based on the outputs of the bootstrap step.
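A sketch of what config.hcl might contain, assuming the standard Terraform S3 backend settings; the bucket name, table name, and region below are hypothetical placeholders for the resources created by the bootstrap step:

```hcl
# config.hcl (illustrative; use the names output by terraform/bootstrap)
bucket         = "arc-saas-terraform-state"  # hypothetical state bucket
region         = "us-east-1"                 # hypothetical region
dynamodb_table = "arc-saas-terraform-locks"  # hypothetical lock table
encrypt        = true
```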

  • Push the code to your github repository.

  • Run the following commands to create the Terraform CodePipeline -

    terraform init --backend-config=config.hcl
    terraform plan
    terraform apply 
    

NOTE: All Terraform module README files are present in respective folder.

Once the CodePipeline is created, monitor it; when it has executed successfully, create the following records in the Route53 hosted zone of the domain, using the load balancer DNS address.

Record Entry                 Type    Description
{domain-name}                A       Control plane application URL
argocd.{domain-name}         CNAME   ArgoCD URL
argo-workflow.{domain-name}  CNAME   Argo Workflow URL
grafana.{domain-name}        CNAME   Grafana dashboard URL
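As a sketch, the CNAME records above can also be created from the CLI by building a change batch for aws route53 change-resource-record-sets. The domain, hosted zone ID, and load balancer DNS name below are hypothetical placeholders, and the final command is echoed for review rather than executed:

```shell
# Hypothetical values; replace with your domain, hosted zone ID, and LB DNS name.
DOMAIN="xyz.com"
HOSTED_ZONE_ID="Z0000000000000"
LB_DNS="my-load-balancer.elb.us-east-1.amazonaws.com"

# Build a change batch with one CNAME per subdomain from the table above.
BATCH='{"Changes":['
for sub in argocd argo-workflow grafana; do
  BATCH="${BATCH}{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"${sub}.${DOMAIN}\",\"Type\":\"CNAME\",\"TTL\":300,\"ResourceRecords\":[{\"Value\":\"${LB_DNS}\"}]}},"
done
BATCH="${BATCH%,}]}"

# Echo the command for review; remove `echo` to apply the change batch.
echo aws route53 change-resource-record-sets \
  --hosted-zone-id "$HOSTED_ZONE_ID" \
  --change-batch "$BATCH"
```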

NOTE: All authentication passwords are saved in SSM Parameter Store. In Grafana, please add the Athena, CloudWatch, and Prometheus data sources and import the dashboards using the JSON mentioned in the billing and observability folders.

After creating the records in Route53, you can access the control plane application using the {domain-name} URL (e.g. if your domain name is xyz.com, the control plane will be accessible on xyz.com). Tenant onboarding can be done using the URL {domain-name}/tenant/signup. Once a tenant is onboarded successfully, you can access that tenant's application plane at {tenant-key}.{domain-name}.

Authors

This project is authored by the following people:

  • SourceFuse ARC Team

License

Distributed under the MIT License. See LICENSE for more information.