From 2fcfa6b203499e644211c5f872f8c13a17a17022 Mon Sep 17 00:00:00 2001
From: Dillon Laird
Date: Sat, 5 Oct 2024 13:40:07 -0700
Subject: [PATCH] update docs

---
 README.md     | 306 +++++++++++++++++++++++++++++++++----------------
 docs/index.md | 307 ++++++++++++++++++++++++++++++++++----------------
 2 files changed, 424 insertions(+), 189 deletions(-)

diff --git a/README.md b/README.md
index 1529e354..29292d65 100644
--- a/README.md
+++ b/README.md
@@ -15,19 +15,23 @@ accomplish the task you want. Vision Agent aims to provide an in-seconds experie
allowing users to describe their problem in text and have the agent framework generate
code to solve the task for them. Check out our discord for updates and roadmaps!

+## Table of Contents
+- [🚀Quick Start](#quick-start)
+- [📚Documentation](#documentation)
+- [🔍🤖Vision Agent](#vision-agent-basic-usage)
+- [🛠️Tools](#tools)
+- [🤖LMMs](#lmms)
+- [💻🤖Vision Agent Coder](#vision-agent-coder)
+- [🏗️Additional Backends](#additional-backends)

-## Web Application
+## Quick Start
+### Web Application
+The fastest way to test out Vision Agent is to use our web application. You can find it
+[here](https://va.landing.ai/).

-Try Vision Agent live on (note this may not be running the most up-to-date version) [va.landing.ai](https://va.landing.ai/)

-## Documentation
-
-[Vision Agent Library Docs](https://landing-ai.github.io/vision-agent/)
-
-
-## Getting Started
### Installation
-To get started, you can install the library using pip:
+To get started with the python library, you can install it using pip:

```bash
pip install vision-agent
@@ -41,17 +45,93 @@ export ANTHROPIC_API_KEY="your-api-key"
export OPENAI_API_KEY="your-api-key"
```

-### Vision Agent
-There are two agents that you can use. `VisionAgent` is a conversational agent that has
-access to tools that allow it to write an navigate python code and file systems. It can
-converse with the user in natural language. `VisionAgentCoder` is an agent specifically
-for writing code for vision tasks, such as counting people in an image. However, it
-cannot chat with you and can only respond with code. `VisionAgent` can call
-`VisionAgentCoder` to write vision code.
+### Basic Usage
+To get started, you can simply import `VisionAgent` and start chatting with it:
+```python
+>>> from vision_agent.agent import VisionAgent
+>>> agent = VisionAgent()
+>>> resp = agent("Hello")
+>>> print(resp)
+[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "{'thoughts': 'The user has greeted me. I will respond with a greeting and ask how I can assist them.', 'response': 'Hello! How can I assist you today?', 'let_user_respond': True}"}]
+>>> resp.append({"role": "user", "content": "Can you count the number of people in this image?", "media": ["people.jpg"]})
+>>> resp = agent(resp)
+```
+
+The chat messages are similar to `OpenAI`'s format with `role` and `content` keys, but
+in addition to those you can add `media`, which is a list of media files that can either
+be images or video files.
+
+## Documentation
+
+[Vision Agent Library Docs](https://landing-ai.github.io/vision-agent/)
+
+## Vision Agent Basic Usage
+### Chatting and Message Formats
+`VisionAgent` is an agent that can chat with you and call other tools or agents to
+write vision code for you. You can interact with it like you would ChatGPT or any other
+chatbot. The agent uses Claude-3.5 as its LMM and OpenAI embeddings for searching
+for tools.
+
+The message format is:
+```json
+{
+    "role": "user",
+    "content": "Hello",
+    "media": ["image.jpg"]
+}
+```
+Where `role` can be `user`, `assistant`, or `observation` if the agent has executed a
+function and needs to observe the output. `content` is always the text message, and
+`media` is a list of media files, either images or videos, that you want the agent
+to examine.
+
+When the agent responds, inside its `content` you will find the following data structure:
+```json
+{
+    "thoughts": "The user has greeted me. I will respond with a greeting and ask how I can assist them.",
+    "response": "Hello! How can I assist you today?",
+    "let_user_respond": true
+}
+```
+
+`thoughts` are the thoughts the agent had when processing the message, `response` is the
+response it generated, which could contain a Python execution, and `let_user_respond` is
+a boolean that tells the agent whether it should wait for the user to respond before
+continuing. For example, it may want to execute code and look at the output before
+letting the user respond.
-#### Basic Usage
-To run the streamlit app locally to chat with `VisionAgent`, you can run the following
-command:
+### Chatting and Artifacts
+If you run `chat_with_code` you will also notice an `Artifact` object. `Artifact`s
+are a way to sync files between local and remote environments. The agent will read and
+write to the artifact object, which is just a pickle object, when it wants to save or
+load files.
+
+```python
+import vision_agent as va
+from vision_agent.tools.meta_tools import Artifact
+
+artifacts = Artifact("artifact.pkl")
+# you can store text files such as code or images in the artifact
+with open("code.py", "r") as f:
+    artifacts["code.py"] = f.read()
+with open("image.png", "rb") as f:
+    artifacts["image.png"] = f.read()
+
+agent = va.agent.VisionAgent()
+response, artifacts = agent.chat_with_code(
+    [
+        {
+            "role": "user",
+            "content": "Can you write code to count the number of people in image.png",
+        }
+    ],
+    artifacts=artifacts,
+)
+```
+
+### Running the Streamlit App
+To test things out quickly, it is sometimes easier to run the Streamlit app locally. To
+chat with `VisionAgent` through the app, run the following commands:

```bash
pip install -r examples/chat/requirements.txt
@@ -59,25 +139,117 @@ export WORKSPACE=/path/to/your/workspace
export ZMQ_PORT=5555
streamlit run examples/chat/app.py
```
-You can find more details about the streamlit app [here](examples/chat/).
+You can find more details about the streamlit app [here](examples/chat/). There are
+still some concurrency issues with the streamlit app, so if you find it doing weird
+things, clear your workspace and restart the app.
+
+## Tools
+There are a variety of tools for the model or the user to use. Some are executed locally
+while others are hosted for you. You can easily access them yourself; for example, if
+you want to run `owl_v2_image` and visualize the output, you can run:

-#### Basic Programmatic Usage
```python
->>> from vision_agent.agent import VisionAgent
->>> agent = VisionAgent()
->>> resp = agent("Hello")
->>> print(resp)
-[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "{'thoughts': 'The user has greeted me. I will respond with a greeting and ask how I can assist them.', 'response': 'Hello! How can I assist you today?', 'let_user_respond': True}"}]
->>> resp.append({"role": "user", "content": "Can you count the number of people in this image?", "media": ["people.jpg"]})
->>> resp = agent(resp)
+import vision_agent.tools as T
+import matplotlib.pyplot as plt
+
+image = T.load_image("dogs.jpg")
+dets = T.owl_v2_image("dogs", image)
+# visualize the owl_v2_image bounding boxes on the image
+viz = T.overlay_bounding_boxes(image, dets)
+
+# plot the image in matplotlib or save it
+plt.imshow(viz)
+plt.show()
+T.save_image(viz, "viz.png")
```

-`VisionAgent` currently utilizes Claude-3.5 as it's default LMM and uses OpenAI for
-embeddings for tool searching.
+Or, if you want to run on video data, for example to track sharks and people at 10 FPS:

-### Vision Agent Coder
-#### Basic Usage
-You can interact with the agent as you would with any LLM or LMM model:
+```python
+frames_and_ts = T.extract_frames_and_timestamps("sharks.mp4", fps=10)
+# extract only the frames from frames and timestamps
+frames = [f["frame"] for f in frames_and_ts]
+# track the sharks and people in the frames, returns segmentation masks
+track = T.florence2_sam2_video_tracking("shark, person", frames)
+# plot the segmentation masks on the frames
+viz = T.overlay_segmentation_masks(frames, track)
+T.save_video(viz, "viz.mp4")
+```
+
+You can find all available tools in `vision_agent/tools/tools.py`; however, `VisionAgent`
+only utilizes a subset of tools that have been tested and provide the best performance.
+Those can be found in the same file under the `FUNCTION_TOOLS` variable inside
+`tools.py`.
+
+#### Custom Tools
+If you can't find the tool you are looking for you can also add custom tools to the
+agent:
+
+```python
+import vision_agent as va
+import numpy as np
+
+@va.tools.register_tool(imports=["import numpy as np"])
+def custom_tool(image_path: str) -> np.ndarray:
+    """My custom tool documentation.
+
+    Parameters:
+        image_path (str): The path to the image.
+
+    Returns:
+        np.ndarray: The result of the tool.
+
+    Example
+    -------
+    >>> custom_tool("image.jpg")
+    """
+
+    return np.zeros((10, 10))
+```
+
+You need to ensure you call `@va.tools.register_tool` with any imports it uses. Global
+variables will not be captured by `register_tool` so you need to include them in the
+function. Make sure the documentation is in the same format as above with a description,
+`Parameters:`, `Returns:`, and `Example\n-------`. The `VisionAgent` will use your
+documentation when trying to determine when to use your tool. You can find an example
+use case [here](examples/custom_tools/) for adding a custom tool. Note you may need to
+play around with the prompt to ensure the model picks the tool when you want it to.
+
+Can't find the tool you need and want us to host it? Check out our
+[vision-agent-tools](https://github.com/landing-ai/vision-agent-tools) repository where
+we add the source code for all the tools used in `VisionAgent`.
+
+## LMMs
+All of our agents are built on top of LMMs, or Large Multimodal Models. We provide a thin
+abstraction layer on top of the underlying provider APIs to make it easier to handle
+media.
+
+```python
+from vision_agent.lmm import AnthropicLMM
+
+lmm = AnthropicLMM()
+response = lmm("Describe this image", media=["apple.jpg"])
+# Returns: "This is an image of an apple."
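+
+# The media argument takes a list, so you can also pass several files at once.
+# (A sketch: these file names are placeholders rather than files shipped with
+# the library.)
+response = lmm("Compare these two images", media=["apple.jpg", "banana.jpg"])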
+```
+
+Or you can use the `OpenAI` chat format and pass it other media, like videos:
+
+```python
+response = lmm(
+    [
+        {
+            "role": "user",
+            "content": "What's going on in this video?",
+            "media": ["video.mp4"]
+        }
+    ]
+)
+```
+
+## Vision Agent Coder
+Under the hood, `VisionAgent` uses `VisionAgentCoder` to generate code to solve
+vision tasks. You can use `VisionAgentCoder` directly to generate code if you want:

```python
>>> from vision_agent.agent import VisionAgentCoder
>>> agent = VisionAgentCoder()
>>> code = agent("What percentage of the area of the jar is filled with coffee beans?", media="jar.jpg")
```

Which produces the following code:

```python
-from vision_agent.tools import load_image, grounding_sam
+from vision_agent.tools import load_image, florence2_sam2_image

def calculate_filled_percentage(image_path: str) -> float:
    # Step 1: Load the image
    image = load_image(image_path)

    # Step 2: Segment the jar
-    jar_segments = grounding_sam(prompt="jar", image=image)
+    jar_segments = florence2_sam2_image("jar", image)

    # Step 3: Segment the coffee beans
-    coffee_beans_segments = grounding_sam(prompt="coffee beans", image=image)
+    coffee_beans_segments = florence2_sam2_image("coffee beans", image)

    # Step 4: Calculate the area of the segmented jar
    jar_area = 0
@@ -125,7 +297,7 @@ mode by passing in the verbose argument:
>>> agent = VisionAgentCoder(verbosity=2)
```

-#### Detailed Usage
+### Detailed Usage
You can also have it return more information by calling `chat_with_workflow`. The format
of the input is a list of dictionaries with the keys `role`, `content`, and `media`:

@@ -145,7 +317,7 @@ of the input is a list of dictionaries with the keys `role`, `content`, and `media`:
With this you can examine more detailed information such as the testing code, testing
results, plan or working memory it used to complete the task.

-#### Multi-turn conversations
+### Multi-turn conversations
You can have multi-turn conversations with vision-agent as well, giving it feedback on
the code and having it update. You just need to add the code as a response from the
assistant:

@@ -171,60 +343,6 @@ conv.append(
result = agent.chat_with_workflow(conv)
```

-### Tools
-There are a variety of tools for the model or the user to use. Some are executed locally
-while others are hosted for you. You can easily access them yourself, for example if
-you want to run `owl_v2_image` and visualize the output you can run:
-
-```python
-import vision_agent.tools as T
-import matplotlib.pyplot as plt
-
-image = T.load_image("dogs.jpg")
-dets = T.owl_v2_image("dogs", image)
-viz = T.overlay_bounding_boxes(image, dets)
-plt.imshow(viz)
-plt.show()
-```
-
-You can find all available tools in `vision_agent/tools/tools.py`, however,
-`VisionAgentCoder` only utilizes a subset of tools that have been tested and provide
-the best performance. Those can be found in the same file under the `TOOLS` variable.
-
-If you can't find the tool you are looking for you can also add custom tools to the
-agent:
-
-```python
-import vision_agent as va
-import numpy as np
-
-@va.tools.register_tool(imports=["import numpy as np"])
-def custom_tool(image_path: str) -> str:
-    """My custom tool documentation.
-
-    Parameters:
-        image_path (str): The path to the image.
-
-    Returns:
-        str: The result of the tool.
-
-    Example
-    -------
-    >>> custom_tool("image.jpg")
-    """
-
-    return np.zeros((10, 10))
-```
-
-You need to ensure you call `@va.tools.register_tool` with any imports it uses. Global
-variables will not be captured by `register_tool` so you need to include them in the
-function. 
Make sure the documentation is in the same format above with description, -`Parameters:`, `Returns:`, and `Example\n-------`. You can find an example use case -[here](examples/custom_tools/) as this is what the agent uses to pick and use the tool. - -Can't find the tool you need and want add it to `VisionAgent`? Check out our -[vision-agent-tools](https://github.com/landing-ai/vision-agent-tools) repository where -we add the source code for all the tools used in `VisionAgent`. ## Additional Backends ### Anthropic @@ -329,9 +447,9 @@ agent = va.agent.AzureVisionAgentCoder() ****************************************************************************************************************************** -### Q&A +## Q&A -#### How to get started with OpenAI API credits +### How to get started with OpenAI API credits 1. Visit the [OpenAI API platform](https://beta.openai.com/signup/) to sign up for an API key. 2. Follow the instructions to purchase and manage your API credits. diff --git a/docs/index.md b/docs/index.md index a83e343e..ee04f3d6 100644 --- a/docs/index.md +++ b/docs/index.md @@ -3,7 +3,6 @@ ![ci_status](https://github.com/landing-ai/vision-agent/actions/workflows/ci_cd.yml/badge.svg) [![PyPI version](https://badge.fury.io/py/vision-agent.svg)](https://badge.fury.io/py/vision-agent) ![version](https://img.shields.io/pypi/pyversions/vision-agent) - Vision Agent is a library that helps you utilize agent frameworks to generate code to solve your vision task. Many current vision problems can easily take hours or days to @@ -12,19 +11,23 @@ accomplish the task you want. Vision Agent aims to provide an in-seconds experie allowing users to describe their problem in text and have the agent framework generate code to solve the task for them. Check out our discord for updates and roadmaps! +## Table of Contents +- [🚀Quick Start](#quick-start) +- [📚Documentation](#documentation) +- [🔍🤖Vision Agent](#vision-agent-basic-usage) +- [🛠️Tools](#tools) +- [🤖LMMs](#lmms) +- [💻🤖Vision Agent Coder](#vision-agent-coder) +- [🏗️Additional Backends](#additional-backends) -## Web Application +## Quick Start +### Web Application +The fastest way to test out Vision Agent is to use our web application. You can find it +[here](https://va.landing.ai/). -Try Vision Agent live on (note this may not be running the most up-to-date version) [va.landing.ai](https://va.landing.ai/) -## Documentation - -[Vision Agent Library Docs](https://landing-ai.github.io/vision-agent/) - - -## Getting Started ### Installation -To get started, you can install the library using pip: +To get started with the python library, you can install it using pip: ```bash pip install vision-agent @@ -38,17 +41,93 @@ export ANTHROPIC_API_KEY="your-api-key" export OPENAI_API_KEY="your-api-key" ``` -### Vision Agent -There are two agents that you can use. `VisionAgent` is a conversational agent that has -access to tools that allow it to write an navigate python code and file systems. It can -converse with the user in natural language. `VisionAgentCoder` is an agent specifically -for writing code for vision tasks, such as counting people in an image. However, it -cannot chat with you and can only respond with code. `VisionAgent` can call -`VisionAgentCoder` to write vision code. 
+### Basic Usage
+To get started, you can simply import `VisionAgent` and start chatting with it:
+```python
+>>> from vision_agent.agent import VisionAgent
+>>> agent = VisionAgent()
+>>> resp = agent("Hello")
+>>> print(resp)
+[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "{'thoughts': 'The user has greeted me. I will respond with a greeting and ask how I can assist them.', 'response': 'Hello! How can I assist you today?', 'let_user_respond': True}"}]
+>>> resp.append({"role": "user", "content": "Can you count the number of people in this image?", "media": ["people.jpg"]})
+>>> resp = agent(resp)
+```
+
+The chat messages are similar to `OpenAI`'s format with `role` and `content` keys, but
+in addition to those you can add `media`, which is a list of media files that can either
+be images or video files.
+
+## Documentation
+
+[Vision Agent Library Docs](https://landing-ai.github.io/vision-agent/)
+
+## Vision Agent Basic Usage
+### Chatting and Message Formats
+`VisionAgent` is an agent that can chat with you and call other tools or agents to
+write vision code for you. You can interact with it like you would ChatGPT or any other
+chatbot. The agent uses Claude-3.5 as its LMM and OpenAI embeddings for searching
+for tools.
+
+The message format is:
+```json
+{
+    "role": "user",
+    "content": "Hello",
+    "media": ["image.jpg"]
+}
+```
+Where `role` can be `user`, `assistant`, or `observation` if the agent has executed a
+function and needs to observe the output. `content` is always the text message, and
+`media` is a list of media files, either images or videos, that you want the agent
+to examine.
+
+When the agent responds, inside its `content` you will find the following data structure:
+```json
+{
+    "thoughts": "The user has greeted me. I will respond with a greeting and ask how I can assist them.",
+    "response": "Hello! How can I assist you today?",
+    "let_user_respond": true
+}
+```
+
+`thoughts` are the thoughts the agent had when processing the message, `response` is the
+response it generated, which could contain a Python execution, and `let_user_respond` is
+a boolean that tells the agent whether it should wait for the user to respond before
+continuing. For example, it may want to execute code and look at the output before
+letting the user respond.
-#### Basic Usage
-To run the streamlit app locally to chat with `VisionAgent`, you can run the following
-command:
+### Chatting and Artifacts
+If you run `chat_with_code` you will also notice an `Artifact` object. `Artifact`s
+are a way to sync files between local and remote environments. The agent will read and
+write to the artifact object, which is just a pickle object, when it wants to save or
+load files.
+
+```python
+import vision_agent as va
+from vision_agent.tools.meta_tools import Artifact
+
+artifacts = Artifact("artifact.pkl")
+# you can store text files such as code or images in the artifact
+with open("code.py", "r") as f:
+    artifacts["code.py"] = f.read()
+with open("image.png", "rb") as f:
+    artifacts["image.png"] = f.read()
+
+agent = va.agent.VisionAgent()
+response, artifacts = agent.chat_with_code(
+    [
+        {
+            "role": "user",
+            "content": "Can you write code to count the number of people in image.png",
+        }
+    ],
+    artifacts=artifacts,
+)
+```
+
+### Running the Streamlit App
+To test things out quickly, it is sometimes easier to run the Streamlit app locally. To
+chat with `VisionAgent` through the app, run the following commands:

```bash
pip install -r examples/chat/requirements.txt
@@ -56,25 +135,117 @@ export WORKSPACE=/path/to/your/workspace
export ZMQ_PORT=5555
streamlit run examples/chat/app.py
```
-You can find more details about the streamlit app [here](examples/chat/).
+You can find more details about the streamlit app [here](examples/chat/). There are
+still some concurrency issues with the streamlit app, so if you find it doing weird
+things, clear your workspace and restart the app.
+
+## Tools
+There are a variety of tools for the model or the user to use. Some are executed locally
+while others are hosted for you. You can easily access them yourself; for example, if
+you want to run `owl_v2_image` and visualize the output, you can run:

-#### Basic Programmatic Usage
```python
->>> from vision_agent.agent import VisionAgent
->>> agent = VisionAgent()
->>> resp = agent("Hello")
->>> print(resp)
-[{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "{'thoughts': 'The user has greeted me. I will respond with a greeting and ask how I can assist them.', 'response': 'Hello! How can I assist you today?', 'let_user_respond': True}"}]
->>> resp.append({"role": "user", "content": "Can you count the number of people in this image?", "media": ["people.jpg"]})
->>> resp = agent(resp)
+import vision_agent.tools as T
+import matplotlib.pyplot as plt
+
+image = T.load_image("dogs.jpg")
+dets = T.owl_v2_image("dogs", image)
+# visualize the owl_v2_image bounding boxes on the image
+viz = T.overlay_bounding_boxes(image, dets)
+
+# plot the image in matplotlib or save it
+plt.imshow(viz)
+plt.show()
+T.save_image(viz, "viz.png")
```

-`VisionAgent` currently utilizes Claude-3.5 as it's default LMM and uses OpenAI for
-embeddings for tool searching.
+Or, if you want to run on video data, for example to track sharks and people at 10 FPS:

-### Vision Agent Coder
-#### Basic Usage
-You can interact with the agent as you would with any LLM or LMM model:
+```python
+frames_and_ts = T.extract_frames_and_timestamps("sharks.mp4", fps=10)
+# extract only the frames from frames and timestamps
+frames = [f["frame"] for f in frames_and_ts]
+# track the sharks and people in the frames, returns segmentation masks
+track = T.florence2_sam2_video_tracking("shark, person", frames)
+# plot the segmentation masks on the frames
+viz = T.overlay_segmentation_masks(frames, track)
+T.save_video(viz, "viz.mp4")
+```
+
+You can find all available tools in `vision_agent/tools/tools.py`; however, `VisionAgent`
+only utilizes a subset of tools that have been tested and provide the best performance.
+Those can be found in the same file under the `FUNCTION_TOOLS` variable inside
+`tools.py`.
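+
+If you want to check at runtime which curated tools your installed version exposes, a
+small sketch like the one below can help. It assumes `FUNCTION_TOOLS` is a plain list of
+callables importable from `vision_agent/tools/tools.py` as described above; the exact
+layout may differ between releases.
+
+```python
+from vision_agent.tools.tools import FUNCTION_TOOLS
+
+# Assumes FUNCTION_TOOLS is a list of plain Python functions; print each
+# curated tool's name and the first line of its docstring.
+for tool in FUNCTION_TOOLS:
+    doc_lines = (tool.__doc__ or "").strip().splitlines()
+    summary = doc_lines[0] if doc_lines else "no documentation"
+    print(f"{tool.__name__}: {summary}")
+```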
+
+#### Custom Tools
+If you can't find the tool you are looking for you can also add custom tools to the
+agent:
+
+```python
+import vision_agent as va
+import numpy as np
+
+@va.tools.register_tool(imports=["import numpy as np"])
+def custom_tool(image_path: str) -> np.ndarray:
+    """My custom tool documentation.
+
+    Parameters:
+        image_path (str): The path to the image.
+
+    Returns:
+        np.ndarray: The result of the tool.
+
+    Example
+    -------
+    >>> custom_tool("image.jpg")
+    """
+
+    return np.zeros((10, 10))
+```
+
+You need to ensure you call `@va.tools.register_tool` with any imports it uses. Global
+variables will not be captured by `register_tool` so you need to include them in the
+function. Make sure the documentation is in the same format as above with a description,
+`Parameters:`, `Returns:`, and `Example\n-------`. The `VisionAgent` will use your
+documentation when trying to determine when to use your tool. You can find an example
+use case [here](examples/custom_tools/) for adding a custom tool. Note you may need to
+play around with the prompt to ensure the model picks the tool when you want it to.
+
+Can't find the tool you need and want us to host it? Check out our
+[vision-agent-tools](https://github.com/landing-ai/vision-agent-tools) repository where
+we add the source code for all the tools used in `VisionAgent`.
+
+## LMMs
+All of our agents are built on top of LMMs, or Large Multimodal Models. We provide a thin
+abstraction layer on top of the underlying provider APIs to make it easier to handle
+media.
+
+```python
+from vision_agent.lmm import AnthropicLMM
+
+lmm = AnthropicLMM()
+response = lmm("Describe this image", media=["apple.jpg"])
+# Returns: "This is an image of an apple."
+```
+
+Or you can use the `OpenAI` chat format and pass it other media, like videos:
+
+```python
+response = lmm(
+    [
+        {
+            "role": "user",
+            "content": "What's going on in this video?",
+            "media": ["video.mp4"]
+        }
+    ]
+)
+```
+
+## Vision Agent Coder
+Under the hood, `VisionAgent` uses `VisionAgentCoder` to generate code to solve
+vision tasks. You can use `VisionAgentCoder` directly to generate code if you want:

```python
>>> from vision_agent.agent import VisionAgentCoder
>>> agent = VisionAgentCoder()
>>> code = agent("What percentage of the area of the jar is filled with coffee beans?", media="jar.jpg")
```

Which produces the following code:

```python
-from vision_agent.tools import load_image, grounding_sam
+from vision_agent.tools import load_image, florence2_sam2_image

def calculate_filled_percentage(image_path: str) -> float:
    # Step 1: Load the image
    image = load_image(image_path)

    # Step 2: Segment the jar
-    jar_segments = grounding_sam(prompt="jar", image=image)
+    jar_segments = florence2_sam2_image("jar", image)

    # Step 3: Segment the coffee beans
-    coffee_beans_segments = grounding_sam(prompt="coffee beans", image=image)
+    coffee_beans_segments = florence2_sam2_image("coffee beans", image)

    # Step 4: Calculate the area of the segmented jar
    jar_area = 0
@@ -122,7 +293,7 @@ mode by passing in the verbose argument:
>>> agent = VisionAgentCoder(verbosity=2)
```

-#### Detailed Usage
+### Detailed Usage
You can also have it return more information by calling `chat_with_workflow`. The format
of the input is a list of dictionaries with the keys `role`, `content`, and `media`:

@@ -142,7 +313,7 @@ of the input is a list of dictionaries with the keys `role`, `content`, and `media`:
With this you can examine more detailed information such as the testing code, testing
results, plan or working memory it used to complete the task.
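+
+As a rough sketch of what inspecting that output can look like (the key names below are
+assumptions and may differ between versions, so check the return value of your installed
+release):
+
+```python
+from vision_agent.agent import VisionAgentCoder
+
+agent = VisionAgentCoder(verbosity=2)
+results = agent.chat_with_workflow(
+    [
+        {
+            "role": "user",
+            "content": "Can you count the number of people in this image?",
+            "media": ["people.jpg"],
+        }
+    ]
+)
+# "code" and "test" are assumed key names; adjust to what your version returns.
+print(results["code"])
+print(results["test"])
+```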
-#### Multi-turn conversations +### Multi-turn conversations You can have multi-turn conversations with vision-agent as well, giving it feedback on the code and having it update. You just need to add the code as a response from the assistant: @@ -168,60 +339,6 @@ conv.append( result = agent.chat_with_workflow(conv) ``` -### Tools -There are a variety of tools for the model or the user to use. Some are executed locally -while others are hosted for you. You can easily access them yourself, for example if -you want to run `owl_v2_image` and visualize the output you can run: - -```python -import vision_agent.tools as T -import matplotlib.pyplot as plt - -image = T.load_image("dogs.jpg") -dets = T.owl_v2_image("dogs", image) -viz = T.overlay_bounding_boxes(image, dets) -plt.imshow(viz) -plt.show() -``` - -You can find all available tools in `vision_agent/tools/tools.py`, however, -`VisionAgentCoder` only utilizes a subset of tools that have been tested and provide -the best performance. Those can be found in the same file under the `TOOLS` variable. - -If you can't find the tool you are looking for you can also add custom tools to the -agent: - -```python -import vision_agent as va -import numpy as np - -@va.tools.register_tool(imports=["import numpy as np"]) -def custom_tool(image_path: str) -> str: - """My custom tool documentation. - - Parameters: - image_path (str): The path to the image. - - Returns: - str: The result of the tool. - - Example - ------- - >>> custom_tool("image.jpg") - """ - - return np.zeros((10, 10)) -``` - -You need to ensure you call `@va.tools.register_tool` with any imports it uses. Global -variables will not be captured by `register_tool` so you need to include them in the -function. Make sure the documentation is in the same format above with description, -`Parameters:`, `Returns:`, and `Example\n-------`. You can find an example use case -[here](examples/custom_tools/) as this is what the agent uses to pick and use the tool. - -Can't find the tool you need and want add it to `VisionAgent`? Check out our -[vision-agent-tools](https://github.com/landing-ai/vision-agent-tools) repository where -we add the source code for all the tools used in `VisionAgent`. ## Additional Backends ### Anthropic @@ -326,9 +443,9 @@ agent = va.agent.AzureVisionAgentCoder() ****************************************************************************************************************************** -### Q&A +## Q&A -#### How to get started with OpenAI API credits +### How to get started with OpenAI API credits 1. Visit the [OpenAI API platform](https://beta.openai.com/signup/) to sign up for an API key. 2. Follow the instructions to purchase and manage your API credits.