
Dria Agent

tiny-agent-α is a tiny model for building tool-calling agents on edge devices.

It is fast and performs very well for its size.

Demo:

tinyagent_demo2_lo.mp4

Features

Tiny-Agent-α is an extension of Dria-Agent-a, trained on top of the Qwen2.5-Coder series for use on edge devices. These models are carefully fine-tuned with quantization-aware training to minimize performance degradation after quantization. The smallest model is 0.5B with 4-bit quantization (398 MB on disk), and the largest is 3B with 4-bit quantization.

It's good at:

  • One-shot Parallel Multiple Function Calls

  • Free-form Reasoning and Actions

  • On-the-fly Complex Solution Generation

Demo:

tinyagent_demo1_lo.mp4

Edge Device Optimized:

  • Supports mlx, ollama, and transformers (Hugging Face) backends.
  • Includes built-in support for macOS, Gmail, search, and more.
  • Uses similarity search to efficiently select relevant tools.

tiny-agent-a-0.5b scores a whopping 72 on the DPAB benchmark and runs at 183.49 tokens/s on an M1 MacBook Pro. Yet it is only 530 MB!
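The similarity-based tool selection mentioned above can be sketched in pure Python. This is a toy illustration, not the package's actual implementation: embed each tool's docstring, embed the query, and keep the top-k closest tools by cosine similarity (real systems would use dense embeddings rather than bag-of-words counts).

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_tools(query: str, tools: dict, k: int = 2) -> list:
    """Rank tools by docstring similarity to the query; keep the best k."""
    q = embed(query)
    ranked = sorted(tools, key=lambda name: cosine(q, embed(tools[name])), reverse=True)
    return ranked[:k]

# Hypothetical tool docstrings for illustration.
tools = {
    "check_availability": "check if a calendar time slot is available",
    "send_email": "send an email message to a recipient",
    "web_search": "search the web for a query",
}
print(select_tools("is my calendar free at noon", tools, k=2))
```

This is why a small top-k (the package defaults to 2; see `num_tools` below) keeps the prompt short even when thousands of tools are registered.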

Installation

To install the package run:

pip install dria_agent                  # Best for CPU inference; uses ollama
pip install 'dria_agent[mlx]'           # MLX backend for macOS
pip install 'dria_agent[huggingface]'   # HuggingFace/transformers backend for GPU
pip install 'dria_agent[mlx, tools]'    # Factory tools, with the backend of your choice

Quick Start

CLI Mode

You can run the agent with pre-defined tools using the CLI; the agent will use all of the tools in the library. For CLI use, install the tools extra together with the backend of your choice:

pip install 'dria_agent[ollama, tools]'

And then, run:

dria_agent --chat  # chat mode
dria_agent "Please solve 5x^2 + 8x + 9 = 0 and 4x^2 + 11x - 3 = 0"  # single query

For help, run dria_agent --help:

dria_agent [-h] [--chat] [--backend {mlx,ollama,huggingface}]
                  [--agent_mode {ultra_light,fast,balanced,performant}]
                  [query ...]

Using your own tools

Write your functions in pure Python and decorate them with @tool to expose them to the agent.

from dria_agent import tool

@tool
def check_availability(day: str, start_time: str, end_time: str) -> bool:
    """
    Checks if a given time slot is available.

    :param day: The date in "YYYY-MM-DD" format.
    :param start_time: The start time of the desired slot (HH:MM format, 24-hour).
    :param end_time: The end time of the desired slot (HH:MM format, 24-hour).
    :return: True if the slot is available, otherwise False.
    """
    # Mock implementation
    if start_time == "12:00" and end_time == "13:00":
        return False
    return True
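Under the hood, a decorator like @tool typically just records the function's name, signature, and docstring so the agent can render a tool schema into the prompt. A minimal sketch follows; the registry and decorator here are illustrative only, not dria_agent's actual implementation.

```python
import inspect

# Hypothetical registry the agent could render into its prompt.
REGISTRY = {}

def tool(fn):
    """Record fn's signature and docstring, then return fn unchanged."""
    REGISTRY[fn.__name__] = {
        "signature": str(inspect.signature(fn)),
        "doc": inspect.getdoc(fn),
    }
    return fn

@tool
def check_availability(day: str, start_time: str, end_time: str) -> bool:
    """Checks if a given time slot is available."""
    # Mock implementation, as above.
    return not (start_time == "12:00" and end_time == "13:00")

print(REGISTRY["check_availability"]["signature"])
```

Because the decorator returns the original function, decorated tools remain ordinary Python callables you can also invoke directly in your own code.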

Create an agent:

from dria_agent import ToolCallingAgent

agent = ToolCallingAgent(
    tools=[check_availability]
)

Use agent.run(query) to execute tasks with tools.

execution = agent.run("Check my calendar for tomorrow noon", print_results=True)

Run Modes

The agent has four modes to choose from, depending on your needs:

  • Ultra Light: Fastest inference, uses the least amount of memory.
  • Fast: Faster inference, uses more memory.
  • Balanced: Balanced between speed and memory.
  • Performant: Best performance, uses the most memory.

To initialize the agent with a specific mode:

agent = ToolCallingAgent(tools=[my_tool], backend="ollama", mode="ultra_light")

agent.run()

  • query (str): The user query to process.
  • dry_run (bool, default=False): If True, only performs inference; no tool execution.
  • show_completion (bool, default=True): Displays the model's raw output before tool execution.
  • num_tools (int, default=2): Selects the best K tools for inference using similarity search, which allows handling thousands of tools efficiently. Models perform best with at most 4-5 tools.
  • print_results (bool, default=True): Prints execution results.

agent.run_feedback()

Same as run, but if execution fails, it feeds the errors back to the model until execution succeeds.
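The feedback loop behind run_feedback can be sketched as: execute the generated code, and if it raises, append the error to the history the model sees and retry. This is a simplified illustration, with a stub standing in for the model:

```python
def run_with_feedback(generate, max_retries=3):
    """Retry generation, feeding each execution error back to the model."""
    history = []
    for attempt in range(max_retries):
        code = generate(history)
        try:
            env = {}
            exec(code, env)  # execute the model's generated code
            return env.get("result")
        except Exception as exc:
            # Feed the failure back so the next completion can correct it.
            history.append(f"Attempt {attempt + 1} failed: {exc}")
    raise RuntimeError("no successful execution after retries")

# Stub model: the first completion is buggy, the retry is fixed.
def fake_model(history):
    return "result = 1/0" if not history else "result = 42"

print(run_with_feedback(fake_model))  # → 42
```

Capping the number of retries (max_retries above) keeps a persistently failing generation from looping forever.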

Tool Library

See the tools library for the implemented tools.

Models

A fast and powerful tool calling model designed to run on edge devices.

Model | Description | Ollama Tag | Size
Tiny-Agent-a-3B (8bit) | High performance and reasoning | driaforall/tiny-agent-a:3B-q8_0 | 3.3 GB
Tiny-Agent-a-3B (4bit) | Trades some 3B quality for memory | driaforall/tiny-agent-a:3B-q4_K_M | 1.9 GB
Tiny-Agent-a-1.5B (8bit) | Balanced performance and speed | driaforall/tiny-agent-a:1.5B-q8_0 | 1.6 GB
Tiny-Agent-a-1.5B (4bit) | Faster CPU inference, performance tradeoff | driaforall/tiny-agent-a:1.5B-q4_K_M | 986 MB
Tiny-Agent-a-0.5B (8bit) | Ultra-light | driaforall/tiny-agent-a:0.5B-q8_0 | 531 MB

Evaluation & Performance

We evaluate the model on the Dria-Pythonic-Agent-Benchmark (DPAB), a benchmark we curated with synthetic data generation, model-based validation, filtering, and manual selection to evaluate LLMs on their Pythonic function-calling ability across multiple scenarios and tasks. See the blog for more information.

Below are the DPAB results:

Current benchmark results for various models (strict):

Model Name | Pythonic | JSON
Closed Models
Claude 3.5 Sonnet | 87 | 45
gpt-4o-2024-11-20 | 60 | 30
Open Models, > 100B Parameters
DeepSeek V3 (685B) | 63 | 33
MiniMax-01 | 62 | 40
Llama-3.1-405B-Instruct | 60 | 38
Open Models, > 30B Parameters
Qwen-2.5-Coder-32b-Instruct | 68 | 32
Qwen-2.5-72b-instruct | 65 | 39
Llama-3.3-70b-Instruct | 59 | 40
QwQ-32b-Preview | 47 | 21
Open Models, < 20B Parameters
Phi-4 (14B) | 55 | 35
Qwen2.5-Coder-7B-Instruct | 44 | 39
Qwen-2.5-7B-Instruct | 47 | 34
Tiny-Agent-a-3B | 72 | 34
Qwen2.5-Coder-3B-Instruct | 26 | 37
Tiny-Agent-a-1.5B | 73 | 30

Citation

@misc{Dria-Agent-a,
      title={Dria-Agent-a},
      author={andthattoo and Atakan Tekparmak},
      url={https://huggingface.co/blog/andthattoo/dria-agent-a}
}
