AshSwarm is an Elixir-based project that explores how to use the Ash Framework alongside concurrency patterns (Reactor, Oban jobs, etc.) and AI-driven logic to create domain-specific "swarms." These swarms coordinate multiple steps and tasks—particularly LLM (large language model) interactions—without relying on complex, black-box "agent" frameworks.
- Background and Motivation
- Swarm Concept
- Key Components
- Examples and Livebooks
- Why Elixir (and Ash)?
- Roadmap / Future Directions
- How to Run Locally
- Livebook Guide
- LLM Provider Configuration
- License
- Development Setup
"AshSwarm" is named for the idea that we can have many small tasks (or calls to a language model) coordinating to solve real problems in a well-defined, incremental way—rather than using large, monolithic 'agent' loops.
- 25+ years of dev experience: This project reflects lessons learned in both Python and JavaScript, but applies them in Elixir, which provides excellent concurrency and reliability.
- Ash Ecosystem: Ash already includes advanced abstractions for resource definition, data layer interactions, and more. The goal is to complement Ash with domain reasoning and step-based "reactors" that can invoke AI at specific points.
- Bridging Python and Elixir: Many AI/LLM techniques come from the Python community. AshSwarm adopts those ideas (like chain-of-thought prompting or DSL modeling) and adapts them to Elixir's unique strengths.
"Swarm" here doesn't refer to a brand-new framework, but rather a practice of:
- Defining tasks or steps explicitly (e.g., "fetch this data," "run a GPT prompt," "validate the result").
- Invoking LLM reasoning at well-defined boundaries rather than letting an "agent" loop roam freely.
- Leveraging concurrency via Elixir processes, Reactors, or workflows so multiple "mini-questions" or LLM calls can happen in parallel.
This approach is especially helpful if you want to keep your AI usage well-structured, testable, and cost-effective (for instance, hitting a cheaper model multiple times rather than once on a very expensive model).
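The "many mini-questions in parallel" idea maps directly onto Elixir's `Task` module. A minimal sketch of the pattern — `ask_fn` here is a plain stub standing in for a real LLM call, purely for illustration:

```elixir
defmodule MiniSwarm do
  # Fan several "mini-questions" out to concurrent tasks and collect
  # the answers in order. In a real swarm, ask_fn would wrap an LLM call.
  def ask_all(questions, ask_fn, timeout \\ 5_000) do
    questions
    |> Task.async_stream(ask_fn, max_concurrency: 4, timeout: timeout)
    |> Enum.map(fn {:ok, answer} -> answer end)
  end
end

# Usage: three cheap "calls" run concurrently instead of one agent loop.
stub = fn q -> "answer to: " <> q end
MiniSwarm.ask_all(["q1", "q2", "q3"], stub)
# => ["answer to: q1", "answer to: q2", "answer to: q3"]
```

`Task.async_stream/3` preserves input order by default, so results line up with the questions even though the work runs concurrently.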
A good chunk of the transcript discusses the idea of using "DSL models" to transform YAML/JSON data into typed structures or internal Elixir modules:
- `DSLModel`-style approach: The Python concept of something like "Pydantic" or "SQLModel" is adapted here, so you can call `from_yaml` or `from_prompt` to generate Elixir structures.
- `to_from_dsl.ex`: Provides callbacks for reading/writing domain data to or from DSL files.
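As a rough sketch of what a `from_yaml` entry point could look like — assuming the `yaml_elixir` package and a hypothetical `DomainModel` struct, since the actual `to_from_dsl.ex` callbacks may differ:

```elixir
defmodule DomainModel do
  # Hypothetical struct for illustration; field names are assumptions.
  defstruct [:name, :attributes]

  # Parse a YAML string into the struct. YamlElixir comes from the
  # `yaml_elixir` hex package.
  def from_yaml(yaml) do
    with {:ok, map} <- YamlElixir.read_from_string(yaml) do
      {:ok, %__MODULE__{name: map["name"], attributes: map["attributes"] || []}}
    end
  end
end

{:ok, model} =
  DomainModel.from_yaml("""
  name: ticket
  attributes:
    - subject
    - status
  """)
```

A `from_prompt` variant would follow the same shape, with an LLM call producing the YAML (or a map) before the struct is built.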
AshSwarm includes a "domain reasoning" feature that can represent resources, relationships, or even entire ontologies in structured YAML, then store them in Ecto schemas. The transcript references:
- `domain_reasoning.ex` & `ecto_schema/domain_reasoning.ex`: Where the logic for domain-based transformations lives.
- AI Validation: Possibly calling an LLM to decide whether a given domain model or relationship is "valid" or "helpful," then iterating until it passes.
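The "iterate until it passes" loop can be expressed without any framework at all. A minimal sketch with plain functions standing in for the LLM-backed validator and reviser:

```elixir
defmodule ValidationLoop do
  # Repeatedly revise a proposal until the validator accepts it or
  # attempts run out. In practice `validate` and `revise` would wrap
  # LLM calls; here they are ordinary functions.
  def refine(proposal, validate, revise, attempts \\ 3)
  def refine(proposal, _validate, _revise, 0), do: {:error, :gave_up, proposal}

  def refine(proposal, validate, revise, attempts) do
    case validate.(proposal) do
      :valid -> {:ok, proposal}
      {:invalid, feedback} -> refine(revise.(proposal, feedback), validate, revise, attempts - 1)
    end
  end
end

# Toy usage: the proposal is "valid" once it mentions an id.
validate = fn p -> if String.contains?(p, "id"), do: :valid, else: {:invalid, "add an id"} end
revise = fn p, feedback -> p <> " (" <> feedback <> ")" end
ValidationLoop.refine("ticket resource", validate, revise)
# => {:ok, "ticket resource (add an id)"}
```

Bounding the loop with an attempt count keeps costs predictable — a failed refinement returns `{:error, :gave_up, proposal}` instead of looping forever.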
The transcript shows examples of:
- `reactors/qa_saga.ex`: A Reactor that coordinates question-and-answer tasks with an LLM.
- `qa_saga_job.ex` / `Oban` usage: A periodic job (for instance, every minute) that calls the QA Reactor or attempts a domain reasoning step.
- Parallel or iterative flows: Instead of a single "agent," you define clear steps or "Reactors" triggered by Oban jobs.
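A periodic Oban job kicking off a Reactor might look roughly like the following — the module names, queue, and reactor input are assumptions for illustration, not the project's actual `qa_saga_job.ex`:

```elixir
defmodule AshSwarm.QASagaJob do
  # Hypothetical sketch of an Oban worker that runs the QA Reactor.
  use Oban.Worker, queue: :default, max_attempts: 3

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"question" => question}}) do
    # Reactor.run/2 executes the saga's steps, which may include LLM calls.
    case Reactor.run(AshSwarm.Reactors.QASaga, %{question: question}) do
      {:ok, _answer} -> :ok
      {:error, reason} -> {:error, reason}
    end
  end
end

# Enqueue once (or schedule via Oban's cron plugin for "every minute"):
# %{question: "What is Ash?"} |> AshSwarm.QASagaJob.new() |> Oban.insert()
```

Because Oban persists jobs in Postgres, retries and scheduling come for free — the reactor itself stays a pure description of the steps.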
The project uses (or plans to use) an `Instructor`-style approach, letting you specify small AI tasks ("Is this a good idea?" "Should this resource be related to X?") and then repeating or adjusting them until they produce acceptable results.
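With the `instructor_ex` library, such a small structured task could be phrased roughly as follows — the schema, model name, and prompt are illustrative assumptions:

```elixir
defmodule IdeaJudgment do
  # Hypothetical response schema for a small "is this a good idea?" task.
  use Ecto.Schema
  use Instructor.Validator

  @primary_key false
  embedded_schema do
    field(:is_good_idea, :boolean)
    field(:reason, :string)
  end
end

# One small, structured LLM call; repeat or adjust until acceptable.
{:ok, judgment} =
  Instructor.chat_completion(
    model: "llama-3.1-8b-instant",
    response_model: IdeaJudgment,
    messages: [
      %{role: "user", content: "Should a Ticket resource be related to a User?"}
    ]
  )
```

The win over free-form prompting is that the answer arrives as a typed struct, so downstream steps can pattern-match on `judgment.is_good_idea` instead of parsing prose.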
A highlighted example is the "StreamingOrderBot," which uses OpenAI or another LLM in a streaming mode to simulate building a pizza order (or any text-based session) step by step.
- OpenAI / Groq Key: You add your LLM API key to Livebook secrets so you can stream answers in real time.
There's also a "Domain Reasoning" Livebook illustrating how to:
- Load YAML describing resources, attributes, or relationships.
- Convert them into Ash-like data structures.
- Add "reasoning steps" that can be validated or extended by LLM calls.
AshSwarm includes an AI-powered adaptive code evolution system that can analyze, optimize, and evaluate Elixir code. The system uses language models to identify optimization opportunities, generate optimized implementations, and evaluate the results.
Two scripts are provided to demonstrate the adaptive code evolution capabilities:
- Demo Script: Run a simple demonstration of the adaptive code evolution system:

  ```bash
  GROQ_API_KEY=your_api_key_here mix run demo_adaptive_code_evolution.exs
  ```

- Stress Test: Run a comprehensive stress test that processes multiple complex modules sequentially:

  ```bash
  GROQ_API_KEY=your_api_key_here mix run stress_test_adaptive_code_evolution.exs
  ```
The stress test includes:
- Processing of 5 different module types with varying complexity
- Rate limit handling with automatic retries
- Detailed performance metrics and success ratings
Note: Both scripts require a valid Groq API key set as the `GROQ_API_KEY` environment variable.
- Concurrency & Let-It-Crash: Elixir's concurrency model (processes, supervision trees) naturally maps to "many small tasks."
- Ash's Abstractions: Ash reduces the burden of building resources, schemas, or APIs by hand—so you can focus on domain logic.
- Reactor: Perfect for step-based sagas that might need to call GPT or other AI functionalities at each stage.
- Oban: Schedule or retry tasks with minimal overhead.
- Adaptive Code Evolution: Implemented - AI-powered system to analyze, optimize, and evaluate Elixir code using language models.
- LLM Validation: Expand "Instructor" code to judge or rank domain changes across multiple "AI experts" (e.g., database vs. ontology vs. Ash).
- Swarm Intelligence: Let multiple cheap model calls vote or combine answers, rather than paying for a single expensive LLM pass.
- Service Colonies & MAPE-K: Incorporate concepts from the service colonies paper and "monitor-analyze-plan-execute" loops for adaptive systems.
- Deeper Python Interop: Possibly share domain data between Python and Elixir purely via DSL files or minimal bridging.
- Clone the Repo

  ```bash
  git clone https://github.com/you/ash_swarm.git
  cd ash_swarm
  ```

- Install Dependencies

  - Ensure you have Elixir and Erlang installed (e.g., via `asdf`).
  - Install the project deps:

    ```bash
    mix deps.get
    ```

- Database (If Needed)

  - If using `Oban` or Ecto-based domain logic, configure and migrate your Postgres DB:

    ```bash
    mix ecto.create
    mix ecto.migrate
    ```

- Run Phoenix / Oban

  ```bash
  mix phx.server
  ```

  Then visit `localhost:4000` if a web interface is enabled.

- Check Livebooks

  - Launch `livebook server` (or your local environment).
  - Open the `live_books/` folder to see examples like `StreamingOrderBot.livemd` or `domain_reasoning.livemd`.
  - Don't forget to set your LLM API key as a Livebook secret if you plan to test streaming.
For detailed instructions on setting up and using Livebook with AshSwarm, please refer to our Livebook Guide. The guide covers:
- Installing Livebook through Mix or Homebrew
- Starting the Livebook server with various configurations
- Authentication methods (token and password)
- Importing AshSwarm notebooks
- Setting up the runtime to connect to your project
- Working with the example notebooks
- Troubleshooting common issues
Livebook is an essential tool for exploring AshSwarm's capabilities through interactive notebooks. Our provided notebooks demonstrate key concepts like domain reasoning, reactors, and streaming LLM interactions.
AshSwarm uses environment variables to configure LLM providers for the Instructor library. You can customize these settings based on your preferred provider:
- `ASH_SWARM_DEFAULT_INSTRUCTOR_ADAPTER`: Sets the default adapter for Instructor (defaults to `Instructor.Adapters.Groq`)
- `ASH_SWARM_DEFAULT_MODEL`: Specifies the default model to use (defaults to `gpt-4o`)
```bash
GROQ_API_URL=https://api.groq.com/openai # Default URL for Groq
GROQ_API_KEY=your_groq_api_key
OPENAI_API_URL=https://api.openai.com/v1 # Optional custom URL
OPENAI_API_KEY=your_openai_api_key
GEMINI_API_KEY=your_gemini_api_key
```
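Internally, variables like these would typically be read at boot in `config/runtime.exs`. A sketch only — the exact `:instructor` config keys are assumptions and may differ from the project's actual configuration:

```elixir
# config/runtime.exs (sketch; config keys are assumptions)
import Config

config :instructor,
  adapter:
    "ASH_SWARM_DEFAULT_INSTRUCTOR_ADAPTER"
    |> System.get_env("Elixir.Instructor.Adapters.Groq")
    |> String.to_atom(),
  groq: [
    api_url: System.get_env("GROQ_API_URL", "https://api.groq.com/openai"),
    api_key: System.get_env("GROQ_API_KEY")
  ]
```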
Please see the LICENSE file in this repository for licensing details.
- LLM Cost: The approach encourages multiple small queries instead of large, expensive queries, but you must still track usage if running frequent jobs (e.g., an Oban cron job every minute).
- API Keys: The transcript warns that you could accidentally expose your keys. Use Livebook secrets, environment variables, or Docker secrets to keep credentials private.
In summary, AshSwarm is an ongoing exploration of how to harness Elixir's concurrency and Ash's resource DSL to build explicit, domain-driven AI flows, rather than rely on a single monolithic "agent." By combining short, well-defined tasks with LLM calls, we can achieve flexible domain reasoning while staying cost-effective and maintainable.
The project uses PostgreSQL. To set up your local development environment:
- Copy the example configuration file:

  ```bash
  cp config/dev.exs.example config/dev.exs
  ```

- Edit the `config/dev.exs` file with your database credentials, or set the following environment variables:

  - `POSTGRES_USER` - PostgreSQL username (default: "postgres")
  - `POSTGRES_PASSWORD` - PostgreSQL password (default: "postgres")
  - `POSTGRES_HOST` - PostgreSQL host (default: "localhost")
  - `POSTGRES_DB` - PostgreSQL database name (default: "ash_swarm_dev")
Use the provided startup script to run both Phoenix and Livebook servers:
```bash
./start_ash_swarm.sh
```
This will start:
- Phoenix server at http://localhost:4000
- Livebook server at http://localhost:8092 (password: livebooksecretpassword)
Available Livebooks:
- streaming_orderbot
- reactor_practice
- ash_domain_reasoning
To stop the servers:
```bash
pkill -f phx.server && pkill -f livebook
```
- AI-powered code analysis
- Adaptive code evolution strategies
- Experiment evaluation with language models
```bash
# Clone the repository
git clone https://github.com/seanchatmangpt/ash_swarm.git
cd ash_swarm

# Install dependencies
mix deps.get

# Compile the project
mix compile
```
The test suite is designed to run without making real API calls to external LLM services.
```bash
# Run all tests
mix test

# Run a specific test file
mix test test/ash_swarm/foundations/ai_adaptive_evolution_example_test.exs
```
- Tests use mock implementations of the AI services to avoid making real API calls.
- If you want to run tests with real API calls (not recommended), you need to:
  - Set the appropriate environment variables (`GROQ_API_KEY`, `OPENAI_API_KEY`, etc.)
  - Modify the test setup to bypass the mocking
```bash
# Start the Livebook server with the project loaded
./start_livebook_final.sh
```
This will start a Livebook server on port 8082 with the AshSwarm project loaded and ready to use.
- `lib/ash_swarm/foundations`: Core concepts and implementations
  - `ai_code_analysis.ex`: Code analysis using language models
  - `ai_adaptation_strategies.ex`: Strategies for adapting code
  - `ai_experiment_evaluation.ex`: Evaluation of code adaptations
  - `adaptive_code_evolution.ex`: Core pattern implementation
- `lib/ash_swarm/examples`: Example implementations
- `test/ash_swarm`: Test suite
The test suite was initially making real API calls to language model services (Groq, OpenAI), which led to:
- Rate limiting errors
- Dependency on external services
- Slow and unreliable tests
This has been fixed by implementing mock versions of all AI services, allowing the tests to run without making any real API calls.
For actual use (not tests), you'll need to set environment variables for the LLM services:
```bash
# Groq API
export GROQ_API_KEY="your-key-here"

# OpenAI API
export OPENAI_API_KEY="your-key-here"

# Gemini API
export GEMINI_API_KEY="your-key-here"
```
- Fork the repository
- Create your feature branch (`git checkout -b feature/my-new-feature`)
- Commit your changes (`git commit -am 'Add some feature'`)
- Push to the branch (`git push origin feature/my-new-feature`)
- Create a new Pull Request