Status: Open
Labels: question (Further information is requested)
Description
Due diligence
- I have done my due diligence in trying to find the answer myself.
Topic
The PyTorch implementation
Question
Hi,
While reviewing the codebase, I noticed there's a second `MimiModel` instance (`other_mimi`) instantiated in both `server.py` and `offline.py`:

```python
other_mimi = loaders.get_mimi(mimi_path, device=device)
other_mimi.set_num_codebooks(lm.dep_q)
```

This instance:
- Processes the same input audio as the primary `mimi`, via `other_mimi.encode(chunk)`
- Produces encoded output that is never used (no variable assignment, not passed to the LM)
- Consumes roughly 200 MB of additional GPU memory
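The ~200 MB figure is an estimate from summing parameter sizes. A minimal sketch of how one could check it, using a stand-in module (not the real `MimiModel`; swap in `loaders.get_mimi(...)` to measure the actual overhead):

```python
import torch.nn as nn

def model_bytes(model: nn.Module) -> int:
    """Approximate memory footprint of a module's parameters and
    buffers (excludes activations, optimizer state, and caches)."""
    return sum(t.numel() * t.element_size()
               for t in list(model.parameters()) + list(model.buffers()))

# Stand-in for MimiModel, just to exercise the helper.
toy = nn.Sequential(nn.Linear(512, 1024), nn.Linear(1024, 512))
print(f"{model_bytes(toy) / 1e6:.1f} MB")  # this much is duplicated per extra instance
```

Note this only counts weights; a second streaming encoder would also duplicate any internal streaming buffers at runtime.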
Looking at `handle_connection`:

```python
other_mimi.encode(chunk)    # result discarded
codes = mimi.encode(chunk)  # this one is actually used
```

Questions:
- What was the intended purpose of `other_mimi`?
- Was this meant for a separate voice prompt encoding path with isolated streaming state?
- Is this dead code that can be safely removed, or is there planned functionality?
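On the isolated-streaming-state hypothesis: a toy sketch (a hypothetical `StreamingEncoder`, not Mimi's actual API) of why a second instance would be needed if two independent audio streams were ever encoded concurrently:

```python
class StreamingEncoder:
    """Toy stateful encoder: output depends on how many chunks this
    instance has already seen (stands in for conv/streaming state)."""
    def __init__(self):
        self.chunks_seen = 0

    def encode(self, chunk: list[int]) -> list[int]:
        self.chunks_seen += 1
        # The offset models accumulated streaming state.
        return [x + self.chunks_seen for x in chunk]

# Interleaving two logical streams through ONE instance entangles their state:
shared = StreamingEncoder()
a1 = shared.encode([0, 0])  # stream A, chunk 1 -> [1, 1]
b1 = shared.encode([0, 0])  # stream B, chunk 1 -> [2, 2], contaminated by A

# Two instances keep the streams isolated:
enc_a, enc_b = StreamingEncoder(), StreamingEncoder()
assert enc_a.encode([0, 0]) == enc_b.encode([0, 0]) == [1, 1]
```

If no such second stream exists in the current code paths, though, the duplicate instance looks like pure overhead.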
Any clarification would be appreciated. We're working on optimizing memory usage for deployment on consumer GPUs.
Thanks!