From a321bc6e65de3a854c416fc49ec37bfd56a66abc Mon Sep 17 00:00:00 2001
From: Dillon Laird
Date: Thu, 13 Jun 2024 14:38:34 -0700
Subject: [PATCH] fix readme (#134)

---
 README.md     |  1 -
 docs/index.md | 16 ++++++++++++++--
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 222b9f10..4e0eefba 100644
--- a/README.md
+++ b/README.md
@@ -206,6 +206,5 @@ You can then run Vision Agent using the Azure OpenAI models:
 ```python
 import vision_agent as va
-import vision_agent.tools as T
 
 agent = va.agent.AzureVisionAgent()
 ```
diff --git a/docs/index.md b/docs/index.md
index 2b34b5b7..34875b57 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -174,17 +174,29 @@ ensure the documentation is in the same format above with description, `Paramete
 `Returns:`, and `Example\n-------`. You can find an example use case [here](examples/custom_tools/).
 
 ### Azure Setup
-If you want to use Azure OpenAI models, you can set the environment variable:
+If you want to use Azure OpenAI models, you need to have two OpenAI model deployments:
+
+1. OpenAI GPT-4o model
+2. OpenAI text embedding model
+
+Screenshot 2024-06-12 at 5 54 48 PM
+
+Then you can set the following environment variables:
 ```bash
 export AZURE_OPENAI_API_KEY="your-api-key"
 export AZURE_OPENAI_ENDPOINT="your-endpoint"
+# The deployment name of your Azure OpenAI chat model
+export AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME="your_gpt4o_model_deployment_name"
+# The deployment name of your Azure OpenAI text embedding model
+export AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME="your_embedding_model_deployment_name"
 ```
 
+> NOTE: make sure your Azure model deployments have enough quota (tokens per minute) to support them. The default value of 8000 TPM is not enough.
+
 You can then run Vision Agent using the Azure OpenAI models:
 
 ```python
 import vision_agent as va
-import vision_agent.tools as T
 
 agent = va.agent.AzureVisionAgent()
 ```
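The four environment variables added in the docs hunk above can be sanity-checked before constructing `AzureVisionAgent`. A minimal sketch, assuming only the variable names introduced by this patch; the helper name `missing_azure_vars` is hypothetical and not part of vision-agent:

```python
# Sketch: verify the Azure OpenAI environment variables from the patch
# are set before creating va.agent.AzureVisionAgent().
# `missing_azure_vars` is a hypothetical helper, not a vision-agent API.
import os

REQUIRED_VARS = [
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_CHAT_MODEL_DEPLOYMENT_NAME",
    "AZURE_OPENAI_EMBEDDING_MODEL_DEPLOYMENT_NAME",
]


def missing_azure_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

A caller could raise early if `missing_azure_vars()` returns a non-empty list, which surfaces a misconfigured deployment name before the first model call fails.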