chore(deps): Bump axios, @docusaurus/core and @docusaurus/preset-classic in /website (#383)
askulkarni2 authored Dec 14, 2023
2 parents 8b8d02f + 600b14d commit 9244219
Showing 12 changed files with 10,660 additions and 6,612 deletions.
2 changes: 1 addition & 1 deletion ai-ml/mlflow/README.md
@@ -1,6 +1,6 @@
# MLflow on EKS

Docs comming soon ...
Docs coming soon ...

## Requirements

2 changes: 1 addition & 1 deletion ai-ml/mlflow/helm-values/aws-for-fluentbit-values.yaml
@@ -49,7 +49,7 @@ filter:
Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
# CATION: Donot use `cloudwatch` plugin. This Golang Plugin is not recommnded by AWS anymore instead use C plugin(`cloudWatchLogs`) for better performance.
# CATION: Do not use `cloudwatch` plugin. This Golang Plugin is not recommended by AWS anymore instead use C plugin(`cloudWatchLogs`) for better performance.
# cloudWatch:
# enabled: false

@@ -4,7 +4,7 @@ kind: Provisioner
metadata:
name: default
spec:
# Wich AWS Node Template to pick
# Which AWS Node Template to pick
providerRef:
name: default

6 changes: 3 additions & 3 deletions ai-ml/mlflow/mlflow-core.tf
@@ -199,9 +199,9 @@ module "mlflow_irsa" {
tags = local.tags
}

#---------------------------------------------------------------
# IAM policy for MLflow for accesing S3 artifacts and RDS Postgres backend
#---------------------------------------------------------------
#--------------------------------------------------------------------------
# IAM policy for MLflow for accessing S3 artifacts and RDS Postgres backend
#--------------------------------------------------------------------------
resource "aws_iam_policy" "mlflow" {
count = var.enable_mlflow_tracking ? 1 : 0

2 changes: 1 addition & 1 deletion website/docs/blueprints/ai-ml/emr-spark-rapids.md
@@ -174,7 +174,7 @@ This dataset is sourced from [Fannie Mae’s Single-Family Loan Performance Data
4. Click on `Download Data` and choose `Single-Family Loan Performance Data`
5. You will find a tabular list of `Acquisition and Performance` files sorted based on year and quarter. Click on the file to download. You can download three years (2020, 2021 and 2022 - 4 files for each year, one for each quarter) worth of data that will be used in our example job. e.g.,: 2017Q1.zip
6. Unzip the download file to extract the csv file to your local machine. e.g.,: 2017Q1.csv
7. Copy only the CSV files to an S3 bucket under ${S3_BUCKET}/${EMR_VIRTUAL_CLUSTER_ID}/spark-rapids-emr/input/fannie-mae-single-family-loan-performance/. The example below uses three years of data (one file for each quarter, 12 files in total). Note: `${S3_BUCKET}` and `${EMR_VIRTUAL_CLUSTER_ID}` values can be extracted from Terraform outputs.
7. Copy only the CSV files to an S3 bucket under `${S3_BUCKET}/${EMR_VIRTUAL_CLUSTER_ID}/spark-rapids-emr/input/fannie-mae-single-family-loan-performance/`. The example below uses three years of data (one file for each quarter, 12 files in total). Note: `${S3_BUCKET}` and `${EMR_VIRTUAL_CLUSTER_ID}` values can be extracted from Terraform outputs.
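
A minimal upload sketch for step 7 (the local file name `2020Q1.csv` is only a placeholder; repeat the copy for each CSV you extracted):

```bash
# S3_BUCKET and EMR_VIRTUAL_CLUSTER_ID come from the Terraform outputs mentioned above.
aws s3 cp 2020Q1.csv \
  "s3://${S3_BUCKET}/${EMR_VIRTUAL_CLUSTER_ID}/spark-rapids-emr/input/fannie-mae-single-family-loan-performance/"
```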

```
aws s3 ls s3://emr-spark-rapids-<aws-account-id>-us-west-2/949wt7zuphox1beiv0i30v65i/spark-rapids-emr/input/fannie-mae-single-family-loan-performance/
```
14 changes: 7 additions & 7 deletions website/docs/blueprints/amazon-emr-on-eks/emr-eks-studio.md
@@ -113,13 +113,13 @@ To submit a job we will use Below you use `start-job-run` command with AWS CLI.

Before you run the command below, make sure to update the following parameters with the ones created by your own deployment.

- <CLUSTER-ID> – The EMR virtual cluster ID, which you get from the AWS CDK output
- <SPARK-JOB-NAME> – The name of your Spark job
- <ROLE-ARN> – The execution role you created, which you get from the AWS CDK output
- <S3URI-CRITICAL-DRIVER> – The Amazon S3 URI of the driver pod template, which you get from the AWS CDK output
- <S3URI-CRITICAL-EXECUTOR> – The Amazon S3 URI of the executor pod template, which you get from the AWS CDK output
- <Log_Group_Name> – Your CloudWatch log group name
- <Log_Stream_Prefix> – Your CloudWatch log stream prefix
- \<CLUSTER-ID\> – The EMR virtual cluster ID, which you get from the AWS CDK output
- \<SPARK-JOB-NAME\> – The name of your Spark job
- \<ROLE-ARN\> – The execution role you created, which you get from the AWS CDK output
- \<S3URI-CRITICAL-DRIVER\> – The Amazon S3 URI of the driver pod template, which you get from the AWS CDK output
- \<S3URI-CRITICAL-EXECUTOR\> – The Amazon S3 URI of the executor pod template, which you get from the AWS CDK output
- \<Log_Group_Name\> – Your CloudWatch log group name
- \<Log_Stream_Prefix\> – Your CloudWatch log stream prefix
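
As a rough illustration of where these placeholders go, here is a minimal sketch of the `start-job-run` call (the release label and entry point below are illustrative assumptions, not values from this deployment; the complete command, including pod templates and monitoring configuration, is in the collapsed section below):

```bash
aws emr-containers start-job-run \
  --virtual-cluster-id <CLUSTER-ID> \
  --name <SPARK-JOB-NAME> \
  --execution-role-arn <ROLE-ARN> \
  --release-label emr-6.9.0-latest \
  --job-driver '{
    "sparkSubmitJobDriver": {
      "entryPoint": "s3://<your-bucket>/scripts/critical-job.py",
      "sparkSubmitParameters": "--conf spark.kubernetes.driver.podTemplateFile=<S3URI-CRITICAL-DRIVER> --conf spark.kubernetes.executor.podTemplateFile=<S3URI-CRITICAL-EXECUTOR>"
    }
  }' \
  --configuration-overrides '{
    "monitoringConfiguration": {
      "cloudWatchMonitoringConfiguration": {
        "logGroupName": "<Log_Group_Name>",
        "logStreamNamePrefix": "<Log_Stream_Prefix>"
      }
    }
  }'
```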

<details>
<summary>AWS CLI for start-job-run command</summary>
@@ -16,7 +16,7 @@ let's navigate to one example folder under spark-k8s-operator and run the shell
```bash
cd data-on-eks/analytics/terraform/spark-k8s-operator/examples/cluster-autoscaler/nvme-ephemeral-storage

# replace <S3_BUCKET> with your S3 bucket and <REGION> with your region, then run
# replace \<S3_BUCKET\> with your S3 bucket and \<REGION\> with your region, then run
./taxi-trip-execute.sh
```

@@ -30,7 +30,7 @@ When you submit a Spark application, Spark context is created which ideally give

When your application is done with the processing, the Spark context is terminated and so is the Web UI, so you cannot go back and monitor an application that has already finished.

To try Spark web UI, let's update <S3_BUCKET> with your bucket name and <JOB_NAME> with "nvme-taxi-trip" in nvme-ephemeral-storage.yaml
To try Spark web UI, let's update \<S3_BUCKET\> with your bucket name and \<JOB_NAME\> with "nvme-taxi-trip" in nvme-ephemeral-storage.yaml
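
If the placeholders appear literally in the manifest, a quick way to fill them in is a sed substitution (a sketch only; `my-bucket` is a hypothetical name, and on macOS use `sed -i ''`):

```bash
# Replace the two placeholders in place before applying the manifest.
sed -i "s|<S3_BUCKET>|my-bucket|g; s|<JOB_NAME>|nvme-taxi-trip|g" nvme-ephemeral-storage.yaml
```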

```bash
kubectl apply -f nvme-ephemeral-storage.yaml
```
@@ -51,11 +51,13 @@ As mentioned above, spark web UI will be terminated once the spark job is done.

In this example, we installed Spark history Server to read logs from S3 bucket. In your spark application yaml file, make sure you have the following setting:

```yaml
sparkConf:
"spark.hadoop.fs.s3a.aws.credentials.provider": "com.amazonaws.auth.InstanceProfileCredentialsProvider"
"spark.hadoop.fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem"
"spark.eventLog.enabled": "true"
"spark.eventLog.dir": "s3a://<your bucket>/logs/"
```
Run port forward command to expose spark-history-server service.
```bash
5 changes: 4 additions & 1 deletion website/docs/blueprints/streaming-platforms/flink.md
@@ -219,6 +219,7 @@ chmod +x install.sh

Verify the cluster status

```bash
➜ kubectl get nodes -A
NAME STATUS ROLES AGE VERSION
ip-10-1-160-150.us-west-2.compute.internal Ready <none> 24h v1.24.11-eks-a59e1f0
@@ -234,9 +235,11 @@ Verify the cluster status
cert-manager-77fc7548dc-dzdms 1/1 Running 0 24h
cert-manager-cainjector-8869b7ff7-4w754 1/1 Running 0 24h
cert-manager-webhook-586ddf8589-g6s87 1/1 Running 0 24h
```

To list all the resources created for Flink team to run Flink jobs using this namespace

```bash
➜ ~ kubectl get all,role,rolebinding,serviceaccount --namespace flink-team-a-ns
NAME CREATED AT
role.rbac.authorization.k8s.io/flink-team-a-role 2023-04-06T13:17:05Z
@@ -247,7 +250,7 @@ To list all the resources created for Flink team to run Flink jobs using this na
NAME SECRETS AGE
serviceaccount/default 0 22h
serviceaccount/flink-team-a-sa 0 22h

```

</CollapsibleContent>

4 changes: 2 additions & 2 deletions website/docs/gen-ai/inference/Llama2.md
@@ -185,7 +185,7 @@ llama2-ingress nginx * k8s-ingressn-ingressn-randomid-randomid.elb.us-

Now, you can access the Ray Dashboard from the Load balancer URL below.

http://<NLB_DNS_NAME>/dashboard/#/serve
http://\<NLB_DNS_NAME\>/dashboard/#/serve

If you don't have access to a public Load Balancer, you can use port-forwarding and browse the Ray Dashboard using localhost with the following command:

@@ -206,7 +206,7 @@ Once you see the status of the model deployment is in `running` state then you c

You can use the following URL with a query added at the end of the URL.

http://<NLB_DNS_NAME>/serve/infer?sentence=what is data parallelism and tensor parallelisma and the differences
http://\<NLB_DNS_NAME\>/serve/infer?sentence=what is data parallelism and tensor parallelisma and the differences
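
Besides the browser, the same query can be sent from a terminal (a sketch; replace the NLB DNS name with the address reported by your ingress, and URL-encode the sentence):

```bash
curl "http://<NLB_DNS_NAME>/serve/infer?sentence=what%20is%20data%20parallelism%20and%20tensor%20parallelism"
```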

You will see an output like this in your browser:

4 changes: 2 additions & 2 deletions website/docusaurus.config.js
@@ -1,8 +1,8 @@
// @ts-check
// Note: type annotations allow type checking and IDEs autocompletion

const lightCodeTheme = require('prism-react-renderer/themes/github');
const darkCodeTheme = require('prism-react-renderer/themes/dracula');
const lightCodeTheme = require('prism-react-renderer').themes.github;
const darkCodeTheme = require('prism-react-renderer').themes.dracula;

/** @type {{onBrokenLinks: string, organizationName: string, plugins: string[], title: string, url: string, onBrokenMarkdownLinks: string, i18n: {defaultLocale: string, locales: string[]}, trailingSlash: boolean, baseUrl: string, presets: [string,Options][], githubHost: string, tagline: string, themeConfig: ThemeConfig & UserThemeConfig & AlgoliaThemeConfig, projectName: string}} */
const config = {