A sophisticated bash-based AI chat interface supporting multiple LLM providers with agents, streaming, and parallel processing.
AI-Gents provides a powerful terminal interface for interacting with various LLM providers (OpenAI, Anthropic, OpenRouter, Ollama, LMStudio, DeepSeek, Moonshot). Features include:
- Multiple LLM Providers - Plugin-based architecture supporting 7+ providers
- Agent System - Create and manage custom AI agents with YAML configuration
- Streaming Support - Real-time responses with 60fps optimization
- Command Execution - Execute bash commands within prompts (`#!/command;`)
- Task System - Predefined agent tasks for common operations
- Connection Pooling - HTTP keep-alive for improved performance
- Security - Command blacklist system (user-configurable)
- Parallel Processing - Multi-model racing and concurrent operations
```bash
# Install AI-Gents
./utils/install

# Ask a question
ai ask "What is the capital of France?"

# Start a chat
ai chat "Hello, let's discuss bash scripting"

# Create an agent
ai agent create MyAgent \
  --prompt "You are a helpful coding assistant" \
  --provider openai \
  --model gpt-4o-mini

# Chat with your agent
ai agent chat MyAgent "How do I write a bash function?"
```

AI-Gents uses a modular architecture with:
- parseArger-generated CLI - Standardized argument parsing
- Provider Plugin System - Dynamic loading of LLM providers
- Modular Libraries - Core, validation, security, API, and parallel processing
- BATS Test Suite - Comprehensive testing framework
```
ai-gents/
├── ai                      # Main entry point
├── src/bash/
│   ├── ask                 # Single query command
│   ├── chat                # Interactive chat command
│   ├── agent               # Agent command dispatcher
│   ├── race                # Multi-model racing
│   ├── lib/                # Modular library system
│   │   ├── core            # Logging, caching, lazy loading
│   │   ├── validation      # Input validation functions
│   │   ├── security        # Command blacklist system
│   │   ├── api             # HTTP client with connection pooling
│   │   ├── parallel        # Parallel processing utilities
│   │   ├── errors          # Error handling with exit codes
│   │   └── providers/      # Provider plugins
│   │       ├── _base       # Base provider interface
│   │       ├── _loader     # Dynamic provider loader
│   │       ├── openrouter  # OpenRouter provider
│   │       ├── openai      # OpenAI provider
│   │       ├── anthropic   # Claude provider
│   │       ├── ollama      # Ollama provider
│   │       ├── lmstudio    # LM Studio provider
│   │       ├── deepseek    # DeepSeek provider
│   │       └── moonshot    # Moonshot provider
│   └── _agent/             # Agent subcommands
│       ├── ask
│       ├── chat
│       ├── create
│       └── list
├── tests/                  # BATS test suite
│   ├── unit/               # Unit tests
│   ├── security/           # Security tests
│   └── integration/        # Integration tests
└── docs/                   # Documentation
    ├── ARCHITECTURE.md
    ├── CODING_STANDARDS
    └── SECURITY.md
```
Ask a one-time question to an AI:

```bash
ai ask "Explain quantum computing"
ai ask "Write a python script" --provider openai --model gpt-4o
ai ask "Analyze this" --stream   # Stream response
```

Options: `--provider`, `--model`, `--temperature`, `--max_tokens`, `--stream`, etc.
Start an interactive conversation:

```bash
ai chat "Let's discuss architecture"
ai chat --title "Project Planning" --log
```

Features: chat history, customizable user/AI names, markdown logging.
Create and manage AI agents:

```bash
# Create agent
ai agent create MyAgent --prompt "You are an expert" --provider openai

# Chat with agent
ai agent chat MyAgent "Hello"

# Ask agent
ai agent ask MyAgent "Quick question"

# List agents
ai agent list
```

Execute bash commands within prompts:

```
You > Analyze these files: #!/ls -la; what do you see?
```

The command output is inserted into the prompt before being sent to the AI.

Security Note: Commands are filtered through a user-configurable blacklist (empty by default). Create `~/.config/ai-gents/command-blacklist` to block dangerous commands.
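The substitution step above can be sketched as a small bash function. This is an illustrative sketch, not the actual AI-Gents implementation; `expand_prompt_commands` is a hypothetical name:

```bash
#!/usr/bin/env bash
# Sketch: expand `#!/command;` markers in a prompt by running the embedded
# command and splicing its output into the text before sending it to the AI.
expand_prompt_commands() {
  local prompt="$1" out
  local re='#!/([^;]+);'                          # marker: #!/<command>;
  while [[ "$prompt" =~ $re ]]; do
    out="$(bash -c "${BASH_REMATCH[1]}")"         # run the embedded command
    prompt="${prompt/"${BASH_REMATCH[0]}"/$out}"  # replace marker with output
  done
  printf '%s\n' "$prompt"
}

expand_prompt_commands 'Current directory: #!/pwd; end'
```

In the real tool each extracted command would also pass through the blacklist check before execution.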
Agents can have predefined tasks in their YAML config:

```yaml
# ~/.config/ai-gents/agents/myagent.yml
tasks:
  summarize:
    description: "Summarize content"
    prompt: "Provide a concise summary of the following:"
```

Usage:

```
You > #/task summarize; analyze this code
```

Supported LLM providers:
| Provider | Default Model | Local/Cloud | Notes |
|---|---|---|---|
| openrouter | meta-llama/llama-3.2-1b-instruct:free | Cloud | Multi-provider aggregator |
| openai | gpt-4o-mini | Cloud | Full feature support |
| anthropic | claude-3-5-haiku | Cloud | Claude models |
| ollama | llama-3.2-1b-instruct | Local | Run models locally |
| lmstudio | llama-3.2-1b-instruct | Local | LM Studio integration |
| deepseek | deepseek-v3 | Cloud | DeepSeek API |
| moonshot | moonshot-v1-8k | Cloud | Think tag support |
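The plugin architecture behind this table (a `_base` interface plus a `_loader` that sources provider files on demand) can be sketched as follows. Function names here are illustrative, not the actual AI-Gents interface; the simulated in-memory provider stands in for sourcing a real file from `src/bash/lib/providers/`:

```bash
#!/usr/bin/env bash
# Sketch of the provider plugin pattern: each provider defines a set of
# functions under a common prefix; a dispatcher builds the function name.
provider_load() {
  local name="$1"
  # The real loader would `source src/bash/lib/providers/<name>`;
  # here we simulate it with inline definitions for "openai".
  case "$name" in
    openai)
      openai_default_model() { printf 'gpt-4o-mini\n'; }
      openai_endpoint()      { printf 'https://api.openai.com/v1/chat/completions\n'; }
      ;;
    *) echo "unknown provider: $name" >&2; return 1 ;;
  esac
}

provider_call() {   # dispatch: calls <provider>_<function>
  local name="$1" fn="$2"
  "${name}_${fn}"
}

provider_load openai
provider_call openai default_model
```

Because bash resolves function names at call time, adding a provider is just adding a file that defines the expected prefixed functions - no central registry edit needed.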
Set API keys via environment variables or files:

```bash
# Environment variable
export AI_OPENAI_API_KEY="your-key"

# Or file
mkdir -p ~/.config/ai-gents/credentials
echo "your-key" > ~/.config/ai-gents/credentials/openai

# Custom host (optional)
export AI_OPENAI_HOST="api.openai.com"
```

See docs/ARCHITECTURE.md for creating custom providers.
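The lookup order implied above (environment variable first, then a credentials file) can be sketched like this. `resolve_api_key` is an illustrative name, not the actual AI-Gents function:

```bash
#!/usr/bin/env bash
# Sketch: resolve a provider's API key, preferring AI_<PROVIDER>_API_KEY
# over ~/.config/ai-gents/credentials/<provider>.
resolve_api_key() {
  local provider="$1"
  local var="AI_$(tr '[:lower:]' '[:upper:]' <<<"$provider")_API_KEY"
  if [ -n "${!var:-}" ]; then        # 1. environment variable
    printf '%s\n' "${!var}"
    return 0
  fi
  local file="${HOME}/.config/ai-gents/credentials/${provider}"
  if [ -r "$file" ]; then            # 2. credentials file
    head -n1 "$file"
    return 0
  fi
  echo "no API key found for ${provider}" >&2
  return 1
}

export AI_OPENAI_API_KEY="sk-example"
resolve_api_key openai
```

Keeping keys in files under `~/.config/ai-gents/credentials/` avoids leaking them through `env` output or shell history.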
Create `~/.config/ai-gents/config`:

```bash
AI_DEFAULT_PROVIDER=openai
AI_OPENAI_MODEL=gpt-4o
```

Agent YAML files in `~/.config/ai-gents/agents/`:
```yaml
name: myagent
description: "My custom agent"
system:
  prompt: "You are a helpful assistant"
model:
  provider: openai
  name: gpt-4o-mini
  temperature: 0.7
tasks:
  example_task:
    description: "Example task"
    prompt: "Additional instructions"
```

Create `~/.config/ai-gents/command-blacklist`:

```
# Block dangerous commands
rm[[:space:]]+-rf[[:space:]]+/
mkfs\.
dd[[:space:]]+if=.*of=/dev
.*[[:space:]]>[[:space:]]+/dev
```

Patterns are bash regular expressions. The blacklist is empty by default - users configure their own security.
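A blacklist like this can be applied with bash's `=~` operator, one pattern per line, skipping comments and blanks. This is a sketch of the idea, not the actual AI-Gents code; `is_blacklisted` is a hypothetical name:

```bash
#!/usr/bin/env bash
# Sketch: return 0 (blocked) if the command matches any blacklist pattern.
is_blacklisted() {
  local cmd="$1" blacklist="${2:-$HOME/.config/ai-gents/command-blacklist}" pattern
  [ -r "$blacklist" ] || return 1           # no/empty blacklist: allow everything
  while IFS= read -r pattern; do
    [ -z "$pattern" ] && continue           # skip blank lines
    [ "${pattern:0:1}" = "#" ] && continue  # skip comments
    [[ "$cmd" =~ $pattern ]] && return 0    # blocked
  done < "$blacklist"
  return 1                                  # allowed
}
```

Usage: `is_blacklisted "rm -rf /" && echo "blocked"`. Note that an unquoted right-hand side in `[[ ... =~ ... ]]` is what makes bash treat the pattern as a regex rather than a literal string.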
Run the BATS test suite:

```bash
# Install BATS
brew install bats-core          # macOS
# or: sudo apt-get install bats # Debian/Ubuntu

# Run all tests
bats tests/

# Run specific test file
bats tests/unit/validation.bats
bats tests/security/blacklist.bats
```

See tests/README.md for details.
- Bash 4.3+
- jq (JSON processing)
- yq (YAML processing)
- curl
- bc (calculator)
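A quick preflight check for these dependencies can be done with `command -v`. This helper is illustrative, not part of AI-Gents:

```bash
#!/usr/bin/env bash
# Sketch: verify that each named dependency is available on PATH.
# (Bash 4.3+ itself can be checked separately via ${BASH_VERSINFO[@]}.)
check_deps() {
  local dep missing=()
  for dep in "$@"; do
    command -v "$dep" >/dev/null 2>&1 || missing+=("$dep")
  done
  if (( ${#missing[@]} )); then
    printf 'missing dependency: %s\n' "${missing[@]}" >&2
    return 1
  fi
}

check_deps jq yq curl bc && echo "all dependencies found"
```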
See docs/CODING_STANDARDS for naming conventions and standards.
See docs/ARCHITECTURE.md for detailed architecture documentation.
- Connection Pooling: HTTP keep-alive for reduced latency
- Streaming Optimization: 16ms batching targeting 60fps
- Lazy Loading: TTL-based caching for agents and providers
- Parallel Processing: Semaphore-based concurrency for multi-model racing
- Command blacklist system (user-configurable, empty by default)
- Input validation for all LLM parameters
- YAML safety using `yq --arg` (no string interpolation)
- Standardized error handling with exit codes
See docs/SECURITY.md for security details.
[Your License Here]
[Contributing Guidelines]
- Built with parseArger for CLI generation
- Inspired by the need for a simple, powerful terminal AI interface