From f90961286eba308da6a1a2fc6e1cfdb8688640e4 Mon Sep 17 00:00:00 2001
From: Asia <2736300+humpydonkey@users.noreply.github.com>
Date: Thu, 13 Jun 2024 11:47:11 -0700
Subject: [PATCH] Update README.md

Upload a screenshot for AzureVisionAgent in README.md
---
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 38a2ae17..222b9f10 100644
--- a/README.md
+++ b/README.md
@@ -187,19 +187,20 @@ If you want to use Azure OpenAI models, you need to have two OpenAI model deploy
 
 1. OpenAI GPT-4o model
 2. OpenAI text embedding model
 
+Screenshot 2024-06-12 at 5 54 48 PM
 Then you can set the following environment variables:
 
 ```bash
 export AZURE_OPENAI_API_KEY="your-api-key"
 export AZURE_OPENAI_ENDPOINT="your-endpoint"
-# The deployment name of your OpenAI chat model
+# The deployment name of your Azure OpenAI chat model
 export AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME="your_gpt4o_model_deployment_name"
-# The deployment name of your OpenAI text embedding model
+# The deployment name of your Azure OpenAI text embedding model
 export AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME="your_embedding_model_deployment_name"
 ```
 
-> NOTE: make sure your Azure model deployment have enough quota (token per minute) to support it.
+> NOTE: make sure your Azure model deployments have enough quota (tokens per minute) to support them. The default value of 8,000 TPM is not enough.
 
 You can then run Vision Agent using the Azure OpenAI models:
 
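The hunk ends on the README's context line "You can then run Vision Agent using the Azure OpenAI models:" without showing the code that follows. As a rough illustration of that step, here is a minimal sketch, assuming the package exposes the `AzureVisionAgent` class named in this commit under `vision_agent.agent`, that it picks up the `AZURE_OPENAI_*` variables exported above from the environment, and that it is invoked as a callable; the import path, constructor arguments, and `media=` keyword are assumptions, not the library's confirmed API.

```python
# Minimal sketch, not the official README example.
# Assumptions: AzureVisionAgent (named in this commit) lives in vision_agent.agent,
# takes no required constructor arguments, reads the AZURE_OPENAI_* environment
# variables exported above, and is callable with a prompt plus a `media` keyword.
import os

from vision_agent.agent import AzureVisionAgent  # import path is an assumption

REQUIRED_VARS = (
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME",
    "AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME",
)

# Fail early with a clear message if any Azure OpenAI setting is missing.
missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")

agent = AzureVisionAgent()
# Hypothetical prompt and keyword; the real call signature may differ.
result = agent("How many cars are in this image?", media="cars.jpg")
print(result)
```

Checking the four variables up front mirrors the configuration the hunk documents, so a missing deployment name fails fast instead of surfacing later as an opaque API error.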