Ammar-Alnagar/Immy-AI

Timeline

  1. LLM in a Box implementation ✔
  2. Groq implementation ✔
  3. Ollama implementation (offline) ✔ (see the Ollama sketch after this list)
  4. Save logs offline, then send them to LLM in a Box with chatid ✔
  5. Route between offline and online models for seamless interaction ✔ (see the routing sketch after this list)
  6. Use WebSockets for the ElevenLabs API # not possible
  7. Use Groq token streaming ✔ (see the streaming sketch after this list)
  8. Llama.cpp implementation (offline) C
  9. Whisper.cpp (offline) ✔ (see the transcription sketch after this list)
  10. Other offline TTS ✔ (see the TTS sketch after this list)
  11. Gather dataset for Immy offline model ✔
  12. Fine-tune a model for offline Immy ✔
  13. LLM in a Box streaming? # not possible
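
A minimal sketch of the offline Ollama path (item 3), assuming the `ollama` Python client and a model already pulled locally; the model tag is a placeholder, not necessarily the one Immy uses.

```python
# Offline chat against a local Ollama server (started with `ollama serve`).
import ollama

response = ollama.chat(
    model="llama3",  # placeholder: any model fetched with `ollama pull`
    messages=[{"role": "user", "content": "Say good night to Immy."}],
)
print(response["message"]["content"])
```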
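
A minimal sketch of Groq token streaming (item 7), assuming the official `groq` Python SDK and a placeholder model name; swap in whatever model the bear actually runs against.

```python
# Groq token streaming sketch: print tokens as they arrive instead of
# waiting for the full reply, which keeps the voice pipeline responsive.
import os

from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

stream = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # placeholder model name
    messages=[{"role": "user", "content": "Tell me a short bedtime story."}],
    stream=True,
)

for chunk in stream:
    token = chunk.choices[0].delta.content
    if token:
        print(token, end="", flush=True)
print()
```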
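
A minimal sketch of the online/offline routing (item 5), combining the two paths above: try the hosted Groq model first and fall back to local Ollama when the request fails. Function names and model tags are illustrative, not the repository's actual code.

```python
# Route between online (Groq) and offline (Ollama) models: any network or
# API failure falls back to the local model so the bear keeps talking.
import os

import ollama
from groq import Groq


def ask_online(prompt: str) -> str:
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    resp = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ask_offline(prompt: str) -> str:
    resp = ollama.chat(
        model="llama3",  # placeholder local model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["message"]["content"]


def ask_immy(prompt: str) -> str:
    try:
        return ask_online(prompt)
    except Exception:
        # No network, timeout, or rate limit: use the offline model instead.
        return ask_offline(prompt)


if __name__ == "__main__":
    print(ask_immy("Tell me a joke about teddy bears."))
```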
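
A minimal sketch of offline speech-to-text with whisper.cpp (item 9), assuming the whisper.cpp CLI has already been built and a ggml model downloaded; the binary path, model path, and audio file are placeholders.

```python
# Transcribe a 16 kHz mono WAV offline by shelling out to the whisper.cpp CLI.
import subprocess

result = subprocess.run(
    [
        "./build/bin/whisper-cli",        # placeholder path (older builds name the binary `main`)
        "-m", "models/ggml-base.en.bin",  # placeholder ggml model file
        "-f", "recording.wav",            # input audio
    ],
    check=True,
    capture_output=True,
    text=True,
)
print(result.stdout)  # whisper.cpp prints the transcript to stdout
```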
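
A minimal sketch of one possible offline TTS path (item 10) using `pyttsx3`, which drives the operating system's built-in speech engines; the repository may use a different engine, so treat this as an example only.

```python
# Speak a reply offline using the OS speech engine via pyttsx3.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 160)  # slightly slower, kid-friendly speaking rate
engine.say("Good night! Sweet dreams.")
engine.runAndWait()
```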

About

Magical AI teddy bear
