- LLM in a Box implementation ✔
- Groq implementation ✔
- Ollama implementation (offline) ✔
- Save logs offline, then send them to LLM in a Box with the chatid ✔
- Route between offline and online models for seamless interaction ✔
- Use WebSockets for the ElevenLabs API # not possible
- Use Groq token streaming ✔
- Llama.cpp implementation (offline) C
- Whisper.cpp (offline) ✔
- Other offline TTS ✔
- Gather dataset for Immy offline model ✔
- Finetune a model for offline Immy ✔
- LLM in a Box streaming ? # not possible
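The "route between offline and online models" item above can be sketched roughly as follows. This is a minimal, hypothetical Python sketch, not the repo's actual implementation: it assumes routing is decided by probing the Groq API endpoint for reachability and falling back to a local Ollama backend when offline. All function and backend names here are illustrative.

```python
import socket


def is_online(host="api.groq.com", port=443, timeout=2.0):
    """Probe the online endpoint; True if a TCP connection succeeds.

    NOTE: host/port are assumptions for illustration, not taken
    from the Immy-AI source.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def pick_backend(online_check=is_online):
    """Route to the online model when reachable, else the offline one.

    Returns a backend label; the caller would dispatch to the Groq
    client (with token streaming) or the local Ollama/llama.cpp model.
    """
    return "groq" if online_check() else "ollama"
```

In use, the router would be called once per user turn, so a dropped connection mid-conversation degrades to the offline model on the next request rather than failing outright; for example, `pick_backend(lambda: False)` returns `"ollama"`.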
Ammar-Alnagar/Immy-AI
About: Magical AI teddy bear