streaming/spark-streaming/terraform/README.md (+13 -11)
```diff
@@ -1,5 +1,5 @@
 # Spark on K8s Operator with EKS
-Checkout the [documentation website](https://awslabs.github.io/data-on-eks/docs/blueprints/data-analytics/spark-operator-yunikorn) to deploy this pattern and run sample tests.
+Checkout the [documentation website](https://awslabs.github.io/data-on-eks/docs/blueprints/streaming-platforms/spark-streaming) to deploy this pattern and run sample tests.

 <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
 ## Requirements
```
```diff
@@ -19,20 +19,20 @@ Checkout the [documentation website](https://awslabs.github.io/data-on-eks/docs/
 | [aws_availability_zones.available](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/availability_zones) | data source |
 | [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
 | [aws_ecrpublic_authorization_token.token](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecrpublic_authorization_token) | data source |
 | [aws_iam_policy_document.grafana](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
-| [aws_iam_policy_document.spark_operator](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
 | [aws_partition.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) | data source |
 | [aws_region.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/region) | data source |
 | [aws_secretsmanager_secret_version.admin_password_version](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret_version) | data source |
```
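
For orientation, the sketch below shows how two of the data sources listed above are commonly wired up in blueprints of this kind: `aws_availability_zones` to pick the Availability Zones for the subnet layout, and `aws_ecrpublic_authorization_token` to authenticate Helm chart pulls from ECR Public. This is an illustrative sketch, not the blueprint's actual code; the `aws.ecr` provider alias and the chart/repository names are assumptions.

```hcl
# Illustrative sketch only -- not the blueprint's actual code.

# Pick the Region's first two AZs for the public/private subnet pairs.
data "aws_availability_zones" "available" {}

locals {
  azs = slice(data.aws_availability_zones.available.names, 0, 2)
}

# ECR Public issues auth tokens only in us-east-1; "aws.ecr" is an assumed provider alias.
data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.ecr
}

# The token can then authenticate a Helm chart pull from ECR Public (names are illustrative).
resource "helm_release" "example_addon" {
  name                = "example-addon"
  repository          = "oci://public.ecr.aws/example-org"
  chart               = "example-addon"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
}
```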
```diff
@@ -70,7 +68,7 @@ Checkout the [documentation website](https://awslabs.github.io/data-on-eks/docs/
 | <a name="input_eks_data_plane_subnet_secondary_cidr"></a> [eks\_data\_plane\_subnet\_secondary\_cidr](#input\_eks\_data\_plane\_subnet\_secondary\_cidr) | Secondary CIDR blocks. 32766 IPs per Subnet per Subnet/AZ for EKS Node and Pods | `list(string)` | <pre>[<br> "100.64.0.0/17",<br> "100.64.128.0/17"<br>]</pre> | no |
 | <a name="input_enable_amazon_prometheus"></a> [enable\_amazon\_prometheus](#input\_enable\_amazon\_prometheus) | Enable AWS Managed Prometheus service | `bool` | `true` | no |
 | <a name="input_enable_vpc_endpoints"></a> [enable\_vpc\_endpoints](#input\_enable\_vpc\_endpoints) | Enable VPC Endpoints | `bool` | `false` | no |
-| <a name="input_enable_yunikorn"></a> [enable\_yunikorn](#input\_enable\_yunikorn) | Enable Apache YuniKorn Scheduler | `bool` | `true` | no |
+| <a name="input_enable_yunikorn"></a> [enable\_yunikorn](#input\_enable\_yunikorn) | Enable Apache YuniKorn Scheduler | `bool` | `false` | no |
 | <a name="input_name"></a> [name](#input\_name) | Name of the VPC and EKS Cluster | `string` | `"spark-operator-doeks"` | no |
 | <a name="input_private_subnets"></a> [private\_subnets](#input\_private\_subnets) | Private Subnets CIDRs. 254 IPs per Subnet/AZ for Private NAT + NLB + Airflow + EC2 Jumphost etc. | `list(string)` | <pre>[<br> "10.1.1.0/24",<br> "10.1.2.0/24"<br>]</pre> | no |
 | <a name="input_public_subnets"></a> [public\_subnets](#input\_public\_subnets) | Public Subnets CIDRs. 62 IPs per Subnet/AZ | `list(string)` | <pre>[<br> "10.1.0.0/26",<br> "10.1.0.64/26"<br>]</pre> | no |
```
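
These inputs are plain Terraform variables, so their defaults can be overridden from a `terraform.tfvars` file or `-var` flags. A minimal, illustrative tfvars follows (values are examples, not recommendations); note that with this change `enable_yunikorn` defaults to `false`, so the YuniKorn scheduler now has to be enabled explicitly.

```hcl
# terraform.tfvars -- illustrative overrides; variable names come from the Inputs table above.

name                     = "spark-streaming-demo" # example value; default is "spark-operator-doeks"
enable_amazon_prometheus = true                   # default
enable_vpc_endpoints     = false                  # default

# enable_yunikorn now defaults to false; set it to true to opt back in to YuniKorn.
enable_yunikorn = true

# Secondary CIDRs for the EKS data plane (the defaults, shown here for completeness).
eks_data_plane_subnet_secondary_cidr = ["100.64.0.0/17", "100.64.128.0/17"]
```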
```diff
@@ -82,10 +80,14 @@ Checkout the [documentation website](https://awslabs.github.io/data-on-eks/docs/

 | Name | Description |
 |------|-------------|
+| <a name="output_bootstrap_brokers"></a> [bootstrap\_brokers](#output\_bootstrap\_brokers) | Bootstrap brokers for the MSK cluster |
 | <a name="output_cluster_arn"></a> [cluster\_arn](#output\_cluster\_arn) | The Amazon Resource Name (ARN) of the cluster |
 | <a name="output_cluster_name"></a> [cluster\_name](#output\_cluster\_name) | The Amazon Resource Name (ARN) of the cluster |
 | <a name="output_configure_kubectl"></a> [configure\_kubectl](#output\_configure\_kubectl) | Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig |
+| <a name="output_consumer_iam_role_arn"></a> [consumer\_iam\_role\_arn](#output\_consumer\_iam\_role\_arn) | IAM role ARN for the consumer |
 | <a name="output_grafana_secret_name"></a> [grafana\_secret\_name](#output\_grafana\_secret\_name) | Grafana password secret name |
+| <a name="output_producer_iam_role_arn"></a> [producer\_iam\_role\_arn](#output\_producer\_iam\_role\_arn) | IAM role ARN for the producer |
+| <a name="output_s3_bucket_id_iceberg_bucket"></a> [s3\_bucket\_id\_iceberg\_bucket](#output\_s3\_bucket\_id\_iceberg\_bucket) | Spark History server logs S3 bucket ID |
 | <a name="output_s3_bucket_id_spark_history_server"></a> [s3\_bucket\_id\_spark\_history\_server](#output\_s3\_bucket\_id\_spark\_history\_server) | Spark History server logs S3 bucket ID |
 | <a name="output_s3_bucket_region_spark_history_server"></a> [s3\_bucket\_region\_spark\_history\_server](#output\_s3\_bucket\_region\_spark\_history\_server) | Spark History server logs S3 bucket ID |
 | <a name="output_subnet_ids_starting_with_100"></a> [subnet\_ids\_starting\_with\_100](#output\_subnet\_ids\_starting\_with\_100) | Secondary CIDR Private Subnet IDs for EKS Data Plane |
```
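
The new MSK and IAM related outputs (`bootstrap_brokers`, `producer_iam_role_arn`, `consumer_iam_role_arn`, `s3_bucket_id_iceberg_bucket`) can be consumed from a separate Terraform configuration, for example one that drives producer/consumer test jobs. The sketch below is a minimal example assuming a local state backend; the state path and local names are illustrative, only the output names come from the table above.

```hcl
# Illustrative only: read this blueprint's outputs from a separate Terraform configuration.
data "terraform_remote_state" "spark_streaming" {
  backend = "local"
  config = {
    path = "../terraform/terraform.tfstate" # assumed location of this blueprint's state
  }
}

locals {
  # Output names match the Outputs table above.
  msk_bootstrap_brokers = data.terraform_remote_state.spark_streaming.outputs.bootstrap_brokers
  producer_role_arn     = data.terraform_remote_state.spark_streaming.outputs.producer_iam_role_arn
  consumer_role_arn     = data.terraform_remote_state.spark_streaming.outputs.consumer_iam_role_arn
  iceberg_bucket        = data.terraform_remote_state.spark_streaming.outputs.s3_bucket_id_iceberg_bucket
}
```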