
BRI (Brianna) - Video Analysis Agent

BRI is an open-source, empathetic multimodal agent for video processing that enables users to upload videos and ask natural language questions to receive context-aware, conversational responses.

🚀 New to BRI? Check out the Quick Start Guide to get up and running in 5 minutes!

Features

Core Capabilities

  • 🎥 Video Upload & Management: Drag-and-drop upload with library view and thumbnails
  • 💬 Conversational Interface: Chat naturally about your video content with context awareness
  • 🔍 Multimodal Analysis:
    • Frame extraction and captioning (BLIP)
    • Audio transcription with timestamps (Whisper)
    • Object detection and tracking (YOLOv8)
  • 🧠 Smart Memory: Maintains conversation history per video for seamless follow-ups
  • 🎨 Warm UI/UX: Feminine design touches with soft colors and friendly interactions
  • ⚡ Fast Responses: Intelligent Redis caching and optimized processing
  • 🎯 Intelligent Routing: Automatically determines which tools to use based on your question
  • 📍 Timestamp Navigation: Click timestamps in responses to jump to specific moments
  • 💡 Proactive Suggestions: Get relevant follow-up questions after each response

What You Can Ask

  • Content Questions: "What's happening in this video?"
  • Timestamp Queries: "What did they say at 2:30?"
  • Object Search: "Show me all the cats in this video"
  • Transcript Search: "Find when they mentioned 'deadline'"
  • Follow-ups: "Tell me more about that" (BRI remembers context!)

Design Philosophy

BRI is designed to be:

  • Empathetic: Warm, supportive tone throughout
  • Accessible: No technical knowledge required
  • Conversational: Like discussing content with a knowledgeable friend
  • Privacy-Focused: Local storage by default
  • Graceful: Friendly error messages and fallback strategies

Quick Start

🐳 Docker Deployment (Recommended for Testing)

The fastest way to get BRI up and running:

  1. Set your API key in .env:

    nano .env
    # Update: GROQ_API_KEY=your_actual_key
  2. Deploy with one command:

    ./deploy_test.sh
  3. Access the app: http://localhost:8501

📖 Full guide: DEPLOY_TO_TEST.md | ⚡ Quick reference: QUICK_START.md


💻 Local Development Setup

For development and customization:

Prerequisites

  • Python 3.9 or higher
  • Groq API key (available from the Groq console)
  • Redis (optional, for caching)

Installation

  1. Clone the repository:

    git clone <repository-url>
    cd bri-video-agent
  2. Install dependencies:

    pip install -r requirements.txt
  3. Set up environment variables:

    cp .env.example .env
    # Edit .env and add your GROQ_API_KEY
  4. Initialize the database:

    python scripts/init_db.py
  5. (Optional) Validate your setup:

    python scripts/validate_setup.py

Running the Application

  1. Start the MCP server (in one terminal):

    python mcp_server/main.py
  2. Start the Streamlit UI (in another terminal):

    streamlit run app.py
  3. Open your browser to http://localhost:8501

Project Structure

bri-video-agent/
β”œβ”€β”€ models/          # Data models and schemas
β”œβ”€β”€ services/        # Core business logic
β”œβ”€β”€ tools/           # Video processing tools
β”œβ”€β”€ mcp_server/      # Model Context Protocol server
β”œβ”€β”€ ui/              # Streamlit UI components
β”œβ”€β”€ storage/         # Database and file storage
β”œβ”€β”€ scripts/         # Utility scripts
└── tests/           # Test suite

Configuration

All configuration is managed through environment variables in the .env file. See .env.example for all available options.

Required Configuration

  • GROQ_API_KEY: Your Groq API key (required)

Optional Configuration

Groq API Settings

  • GROQ_MODEL: LLM model to use (default: llama-3.1-70b-versatile)
  • GROQ_TEMPERATURE: Response creativity (0-2, default: 0.7)
  • GROQ_MAX_TOKENS: Maximum response length (default: 1024)
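
These three settings map directly onto a chat-completion request. As a sketch of how they might be assembled (a hypothetical helper, not BRI's actual agent code; the commented `create` call assumes the official `groq` Python SDK):

```python
import os

def groq_chat_kwargs(messages: list[dict]) -> dict:
    """Assemble chat-completion parameters from the environment settings above."""
    return {
        "model": os.getenv("GROQ_MODEL", "llama-3.1-70b-versatile"),
        "temperature": float(os.getenv("GROQ_TEMPERATURE", "0.7")),
        "max_tokens": int(os.getenv("GROQ_MAX_TOKENS", "1024")),
        "messages": messages,
    }

# With the groq SDK this would be passed along as:
# client.chat.completions.create(**groq_chat_kwargs(messages))
```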

Redis Caching

  • REDIS_URL: Redis connection URL (default: redis://localhost:6379)
  • REDIS_ENABLED: Enable/disable Redis caching (default: true)
    • Falls back gracefully if Redis is unavailable

Storage Paths

  • DATABASE_PATH: SQLite database location (default: data/bri.db)
  • VIDEO_STORAGE_PATH: Uploaded videos directory (default: data/videos)
  • FRAME_STORAGE_PATH: Extracted frames directory (default: data/frames)
  • CACHE_STORAGE_PATH: Processing cache directory (default: data/cache)

MCP Server

  • MCP_SERVER_HOST: Server host (default: localhost)
  • MCP_SERVER_PORT: Server port (default: 8000)

Processing Settings

  • MAX_FRAMES_PER_VIDEO: Maximum frames to extract (default: 100)
  • FRAME_EXTRACTION_INTERVAL: Seconds between frames (default: 2.0)
  • CACHE_TTL_HOURS: Cache expiration time (default: 24)
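
The two frame settings interact: frames are sampled every FRAME_EXTRACTION_INTERVAL seconds until MAX_FRAMES_PER_VIDEO is reached. A small illustrative helper (not BRI's actual extractor) showing that arithmetic:

```python
def frame_timestamps(duration_s: float, interval_s: float = 2.0,
                     max_frames: int = 100) -> list[float]:
    """Timestamps (in seconds) at which frames would be sampled."""
    count = min(int(duration_s // interval_s) + 1, max_frames)
    return [i * interval_s for i in range(count)]
```

So a 30-second clip yields 16 sample points at the defaults, while anything longer than about 200 seconds is capped at 100 frames.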

Memory & Performance

  • MAX_CONVERSATION_HISTORY: Messages to keep in context (default: 10)
  • TOOL_EXECUTION_TIMEOUT: Tool timeout in seconds (default: 120)
  • REQUEST_TIMEOUT: Request timeout in seconds (default: 30)
  • LAZY_LOAD_BATCH_SIZE: Images per lazy load batch (default: 3)
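
MAX_CONVERSATION_HISTORY bounds how much prior chat is replayed to the LLM on each turn. Conceptually this is just a tail slice over the stored messages (an illustrative sketch, not BRI's actual memory service):

```python
def trim_history(messages: list[dict], max_messages: int = 10) -> list[dict]:
    """Keep only the most recent messages for the model's context window."""
    return messages[-max_messages:] if max_messages > 0 else []
```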

Application Settings

  • DEBUG: Enable debug mode (default: false)
  • LOG_LEVEL: Logging level - DEBUG, INFO, WARNING, ERROR (default: INFO)

Configuration Validation

The application validates configuration on startup and will:

  • ✗ Fail if required values are missing or invalid
  • ⚠️ Warn about suboptimal settings (e.g., Redis disabled, debug mode enabled)
  • ✓ Create necessary directories automatically
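
A rough sketch of what this startup validation can look like (hypothetical code illustrating the behavior described above, not BRI's actual implementation):

```python
import os
from pathlib import Path

def validate_config() -> list[str]:
    """Fail fast on missing required values; return warnings for the rest."""
    if not os.getenv("GROQ_API_KEY"):
        raise ValueError("GROQ_API_KEY is required")  # fail on startup
    warnings = []
    if os.getenv("REDIS_ENABLED", "true").lower() == "false":
        warnings.append("Redis caching disabled; responses may be slower")
    if os.getenv("DEBUG", "false").lower() == "true":
        warnings.append("Debug mode enabled; not recommended in production")
    # Create necessary storage directories automatically.
    for var, default in (("VIDEO_STORAGE_PATH", "data/videos"),
                         ("FRAME_STORAGE_PATH", "data/frames"),
                         ("CACHE_STORAGE_PATH", "data/cache")):
        Path(os.getenv(var, default)).mkdir(parents=True, exist_ok=True)
    return warnings
```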

Documentation

📚 Complete Documentation Index - Find all documentation in one place

User Documentation

Developer Documentation

Deployment Documentation

Development

Running Tests

# Run all tests
pytest

# Run with coverage
pytest --cov=. tests/

# Run specific test file
pytest tests/unit/test_memory.py

# Run integration tests
pytest tests/integration/

Development Workflow

  1. Setup Development Environment:

    python -m venv .venv
    source .venv/bin/activate  # Windows: .venv\Scripts\activate
    pip install -r requirements.txt
    cp .env.example .env
    # Edit .env with your API key
  2. Run in Development Mode:

    # Terminal 1: MCP Server with auto-reload
    python mcp_server/main.py
    
    # Terminal 2: Streamlit with auto-reload
    streamlit run app.py
  3. Run Tests Before Committing:

    pytest
  4. Check Code Quality:

    # Format code
    black .
    
    # Lint code
    flake8 .
    
    # Type checking
    mypy .

Architecture

BRI uses a modular, layered architecture:

┌─────────────────────────────────────────┐
│           Streamlit UI Layer            │
│    (Chat, Library, Player, History)     │
└────────────────────┬────────────────────┘
                     │
┌────────────────────▼────────────────────┐
│               Agent Layer               │
│  (Groq Agent, Router, Memory, Context)  │
└────────────────────┬────────────────────┘
                     │
┌────────────────────▼────────────────────┐
│            MCP Server Layer             │
│  (FastAPI, Tool Registry, Redis Cache)  │
└────────────────────┬────────────────────┘
                     │
┌────────────────────▼────────────────────┐
│         Video Processing Tools          │
│      (OpenCV, BLIP, Whisper, YOLO)      │
└────────────────────┬────────────────────┘
                     │
┌────────────────────▼────────────────────┐
│              Storage Layer              │
│     (SQLite Database, File System)      │
└─────────────────────────────────────────┘

Key Components

  • UI Layer: Streamlit-based interface with warm, approachable design
  • Agent Layer: Groq-powered conversational agent with intelligent tool routing
  • MCP Server: FastAPI server exposing video processing capabilities
  • Tools Layer: Open-source video processing tools (OpenCV, BLIP, Whisper, YOLO)
  • Storage Layer: SQLite for metadata and memory, file system for videos and frames

For detailed architecture documentation, see Design Document.

Technology Stack

Core Technologies

  • Frontend: Streamlit with custom CSS
  • LLM: Groq API (Llama 3.1 70B)
  • Video Processing:
    • OpenCV (frame extraction)
    • BLIP (image captioning)
    • Whisper (audio transcription)
    • YOLOv8 (object detection)
  • API Server: FastAPI with CORS support
  • Caching: Redis (optional but recommended)
  • Database: SQLite
  • Language: Python 3.9+

Why These Technologies?

  • Groq: Fast, high-quality LLM inference
  • Open-source tools: No vendor lock-in, community-driven
  • Streamlit: Rapid UI development with Python
  • SQLite: Simple, reliable, no separate server needed
  • Redis: Optional caching for performance boost

API Reference

BRI exposes a REST API through the MCP server for programmatic access to video processing tools.

Base URL

http://localhost:8000

Key Endpoints

  • GET / - Server information
  • GET /health - Health check
  • GET /tools - List available tools
  • POST /tools/{tool_name}/execute - Execute a tool
  • POST /videos/{video_id}/process - Batch process video
  • GET /cache/stats - Cache statistics
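
Any HTTP client can call these endpoints. A stdlib-only sketch against the tool-execution endpoint (the tool name and payload fields below are illustrative; the real request schema is in the MCP Server README):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def tool_url(tool_name: str, base_url: str = BASE_URL) -> str:
    """Build the execute URL for a named tool."""
    return f"{base_url}/tools/{tool_name}/execute"

def execute_tool(tool_name: str, payload: dict, base_url: str = BASE_URL) -> dict:
    """POST a tool-execution request to the MCP server and return its JSON reply."""
    req = urllib.request.Request(
        tool_url(tool_name, base_url),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical usage (requires the server started via `python mcp_server/main.py`):
# execute_tool("transcribe_audio", {"video_id": "abc123"})
```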

For complete API documentation, see MCP Server README.

Troubleshooting

Quick Fixes

Issue               Quick Fix
Missing API key     Copy .env.example to .env and add your Groq API key
Connection refused  Ensure both the MCP server and Streamlit are running
Redis errors        Set REDIS_ENABLED=false in .env (Redis is optional)
Slow processing     Reduce MAX_FRAMES_PER_VIDEO in .env
Out of memory       Reduce MAX_FRAMES_PER_VIDEO and LAZY_LOAD_BATCH_SIZE

Detailed Troubleshooting

For comprehensive troubleshooting, see the Troubleshooting Guide which covers:

  • Installation issues
  • Configuration problems
  • Server startup failures
  • Video processing errors
  • Performance optimization
  • Database and cache issues
  • Complete error message reference

Getting Help

  1. Check Documentation:

  2. Run Diagnostics:

    python scripts/validate_setup.py
  3. Enable Debug Mode:

    # In .env:
    DEBUG=true
    LOG_LEVEL=DEBUG
  4. Report Issues:

    • Open a GitHub issue with:
      • Error messages and logs
      • Configuration (mask sensitive values)
      • Steps to reproduce
      • System information

Contributing

We welcome contributions to BRI! Here's how you can help:

Ways to Contribute

  • πŸ› Report Bugs: Open an issue with details and reproduction steps
  • πŸ’‘ Suggest Features: Share your ideas for new capabilities
  • πŸ“ Improve Documentation: Help make docs clearer and more comprehensive
  • πŸ”§ Submit Pull Requests: Fix bugs or implement features
  • πŸ§ͺ Write Tests: Improve test coverage
  • 🎨 Enhance UI/UX: Suggest or implement design improvements

Development Setup

  1. Fork the repository
  2. Clone your fork: git clone https://github.com/your-username/bri-video-agent.git
  3. Create a branch: git checkout -b feature/your-feature-name
  4. Make your changes
  5. Run tests: pytest
  6. Commit: git commit -m "Add your feature"
  7. Push: git push origin feature/your-feature-name
  8. Open a Pull Request

Contribution Guidelines

  • Follow existing code style and conventions
  • Write tests for new features
  • Update documentation as needed
  • Keep commits focused and atomic
  • Write clear commit messages
  • Be respectful and constructive in discussions

Code of Conduct

  • Be welcoming and inclusive
  • Respect differing viewpoints
  • Accept constructive criticism gracefully
  • Focus on what's best for the community
  • Show empathy towards others

License

[Add your license here - e.g., MIT, Apache 2.0, GPL]

Acknowledgments

BRI is built with amazing open-source technologies:

  • Groq - Fast LLM inference
  • OpenCV - Computer vision library
  • Hugging Face - BLIP image captioning model
  • OpenAI Whisper - Audio transcription
  • Ultralytics YOLOv8 - Object detection
  • Streamlit - Web UI framework
  • FastAPI - Modern API framework

Special thanks to the open-source community for making projects like BRI possible! 💜

Support


Made with 💜 by the BRI community

Ask. Understand. Remember.
