diff --git a/README.md b/README.md
index b5bb5401..fda69986 100644
--- a/README.md
+++ b/README.md
@@ -233,6 +233,7 @@ tools. You can use it just like you would use `VisionAgentCoder`:
 >>> agent = va.agent.OllamaVisionAgentCoder()
 >>> agent("Count the apples in the image", media="apples.jpg")
 ```
+> WARNING: VisionAgent doesn't work well unless the underlying LMM is sufficiently powerful. Do not expect good results or even working code with smaller models like Llama 3.1 8B.
 
 ### Azure OpenAI
 We also provide a `AzureVisionAgentCoder` that uses Azure OpenAI models. To get started
diff --git a/docs/index.md b/docs/index.md
index 569231de..fc5ddde1 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -241,7 +241,7 @@ follow the Azure Setup section below. You can use it just like you would use `VisionAgentCoder`:
 >>> agent = va.agent.AzureVisionAgentCoder()
 >>> agent("Count the apples in the image", media="apples.jpg")
 ```
-
+> WARNING: VisionAgent doesn't work well unless the underlying LMM is sufficiently powerful. Do not expect good results or even working code with smaller models like Llama 3.1 8B.
 ### Azure Setup
 
 If you want to use Azure OpenAI models, you need to have two OpenAI model deployments: