```
 ██████╗ ██████╗ ████████╗██╗ ██████╗██╗   ██╗███████╗
██╔═══██╗██╔══██╗╚══██╔══╝██║██╔════╝██║   ██║██╔════╝
██║   ██║██████╔╝   ██║   ██║██║     ██║   ██║███████╗
██║   ██║██╔═══╝    ██║   ██║██║     ██║   ██║╚════██║
╚██████╔╝██║        ██║   ██║╚██████╗╚██████╔╝███████║
 ╚═════╝ ╚═╝        ╚═╝   ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝
```

Part of the Titan Protocol Initiative — System 04/300

Real-Time Object Detection • YOLOv8 Backbone • WSL Compatible • 80 Classes
```bash
# Clone the repository
git clone https://github.com/DaviBonetto/OPTICUS-L5-RealTime-Vision-Grid.git
cd OPTICUS-L5-RealTime-Vision-Grid

# Set up the environment
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Run the vision engine (auto-downloads the YOLOv8n model on first run)
python src/main_test.py
```

OPTICUS intelligently adapts to your environment:
**When running on a system with webcam access:**

```
📹 Camera opened: Device 0
👁️ Frame 0001: person (95%), laptop (87%)
```

**When the camera is unavailable (common in WSL):**

```
⚠️ Camera not available (WSL?)
⚡ Falling back to synthetic source
🎬 Synthetic source active: 640x480 @ 30fps
👁️ Frame 0012: sports ball (68%)
```
**Why Synthetic Mode?** WSL doesn't have direct USB/webcam access. OPTICUS generates synthetic frames with moving shapes that YOLOv8 can detect, validating the entire pipeline without hardware dependencies.
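The fallback above is straightforward to reason about. Here is a minimal sketch of the idea, assuming OpenCV and NumPy are installed; `open_camera` and `make_synthetic_frame` are illustrative names, not the repo's actual `CameraSource`/`SyntheticSource` API:

```python
# Illustrative sketch of the camera-or-synthetic fallback (hypothetical
# names; see src/inputs/sources.py for the real implementation).
import cv2
import numpy as np


def open_camera(device: int = 0):
    """Return an opened capture, or None when no webcam is reachable (WSL)."""
    cap = cv2.VideoCapture(device)
    if cap.isOpened():
        return cap
    cap.release()
    return None


def make_synthetic_frame(t: int, size=(480, 640)) -> np.ndarray:
    """Black 640x480 frame with a moving circle: enough for YOLOv8 to track."""
    frame = np.zeros((*size, 3), dtype=np.uint8)
    x = (t * 8) % size[1]  # the frame counter drives the motion
    cv2.circle(frame, (x, size[0] // 2), 40, (0, 165, 255), thickness=-1)
    return frame
```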
```mermaid
%%{init: {'theme': 'dark'}}%%
flowchart LR
    subgraph Input ["📹 Video Source"]
        CAM["🎥 Camera"]
        SYN["🎬 Synthetic"]
    end
    subgraph Core ["🧠 Processing Unit"]
        Engine["VisionEngine"]
        YOLO["🦅 YOLOv8"]
    end
    subgraph Output ["📤 Output"]
        JSON["📝 JSON Events"]
        Video["📺 Annotated Stream"]
    end
    CAM --> Engine
    SYN --> Engine
    Engine --> YOLO
    YOLO --> JSON
    YOLO --> Video

    style YOLO fill:#1e3a5f,stroke:#3b82f6,stroke-width:2px,color:#fff
    style Engine fill:#3b1f5f,stroke:#8b5cf6,stroke-width:2px,color:#fff
    style JSON fill:#14532d,stroke:#22c55e,stroke-width:2px,color:#fff
    style Video fill:#7f1d1d,stroke:#ef4444,stroke-width:2px,color:#fff
    style CAM fill:#1c1917,stroke:#78716c,stroke-width:1px,color:#fff
    style SYN fill:#1c1917,stroke:#78716c,stroke-width:1px,color:#fff
```
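The diagram maps onto a small processing loop: read a frame, run YOLOv8, emit events plus an annotated frame. A minimal sketch using the `ultralytics` API follows; `run_pipeline` is an illustrative name, not the repo's actual `VisionEngine`:

```python
# Sketch of the source -> YOLOv8 -> events flow from the diagram above
# (illustrative names; see src/core/detector.py for the real VisionEngine).
from ultralytics import YOLO


def run_pipeline(read_frame, num_frames: int = 10):
    model = YOLO("yolov8n.pt")  # auto-downloads the weights on first use
    for i in range(num_frames):
        frame = read_frame(i)
        results = model(frame, verbose=False)[0]
        events = [
            {
                "class": results.names[int(box.cls)],
                "class_id": int(box.cls),
                "confidence": round(float(box.conf), 2),
                "bbox": [round(v, 1) for v in box.xyxy[0].tolist()],
            }
            for box in results.boxes
        ]
        yield i, events, results.plot()  # plot() returns the annotated frame
```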
Each detection returns structured JSON:

```json
{
  "class": "person",
  "class_id": 0,
  "confidence": 0.95,
  "bbox": [x1, y1, x2, y2]
}
```

Terminal output:

```
👁️ Frame 0042: person (95%), dog (87%), car (72%)
```
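Because each event is a plain dictionary matching the schema above, downstream consumers stay simple. A hedged example of filtering by confidence and appending to a JSONL log (the threshold, file name, and sample events are assumptions, not part of the repo):

```python
# Sketch: keep only confident detections and persist them as JSON Lines.
# The sample events mirror the schema shown above.
import json

CONFIDENCE_THRESHOLD = 0.5  # assumed cutoff, tune to taste

events = [
    {"class": "person", "class_id": 0, "confidence": 0.95,
     "bbox": [12.0, 34.0, 200.0, 460.0]},
    {"class": "sports ball", "class_id": 32, "confidence": 0.41,
     "bbox": [300.0, 220.0, 360.0, 280.0]},
]

with open("detections.jsonl", "a") as log:
    for event in events:
        if event["confidence"] >= CONFIDENCE_THRESHOLD:
            log.write(json.dumps(event) + "\n")
```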
```
src/
├── core/
│   └── detector.py     # VisionEngine class
├── inputs/
│   └── sources.py      # CameraSource, SyntheticSource
├── utils/
│   └── logger.py       # Pretty terminal logging
└── main_test.py        # Demo script
models/                 # YOLO weights (auto-download)
venv/                   # Python environment
```
| Metric | Value |
|---|---|
| Model | YOLOv8n (Nano) |
| Inference | ~30 ms/frame |
| FPS | 30+ (CPU) |
| Classes | 80 (COCO) |
| Resolution | 640×480 |
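The latency figure depends heavily on hardware, so it is worth measuring locally. Below is a rough timing harness, assuming `ultralytics` and NumPy are installed; this is a sketch, not the repo's benchmark:

```python
# Rough benchmark for the ~30 ms/frame figure above; results vary by CPU.
import time

import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # matches the 640x480 input

model(frame, verbose=False)  # warm-up: the first call includes model setup

n = 50
start = time.perf_counter()
for _ in range(n):
    model(frame, verbose=False)
elapsed = time.perf_counter() - start
print(f"~{elapsed / n * 1000:.1f} ms/frame")
```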
OPTICUS is part of the Titan Protocol, a collection of 300 autonomous high-performance systems.
| System | Name | Technology | Repository |
|---|---|---|---|
| 01/300 | GENESIS | Rust + Bloom Filter | GitHub |
| 02/300 | VORTEX | Python + LangGraph | GitHub |
| 03/300 | NEXUS | Rust + Vector DB | GitHub |
| 04/300 | OPTICUS | Python + YOLOv8 | You are here |
This project is licensed under the MIT License - see the LICENSE file for details.
Built with 🐍 Python + 👁️ YOLOv8 by Davi Bonetto
Part of the Titan Protocol Initiative
⭐ Star this repo if you find it useful!