
Clarification needed: Purpose of duplicate MimiModel instance (other_mimi) in server.py #46

@asultanoff

Description


Due diligence

  • I have done my due diligence in trying to find the answer myself.

Topic

The PyTorch implementation

Question

Hi,

While reviewing the codebase, I noticed there's a second MimiModel instance (other_mimi) instantiated in both server.py and offline.py:

other_mimi = loaders.get_mimi(mimi_path, device=device)
other_mimi.set_num_codebooks(lm.dep_q)

This instance:

  1. Processes the same input audio as the primary mimi via other_mimi.encode(chunk)
  2. Produces an encoded output that is never used (not assigned to a variable, not passed to the LM)
  3. Consumes ~200 MB of additional GPU memory (a rough way to sanity-check this is sketched right after this list)
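
One way to sanity-check that number, as a rough sketch only: mimi_path and lm are assumed to be the same objects server.py builds earlier, and torch.cuda.memory_allocated only tracks tensor allocations, so the real footprint may differ somewhat.

import torch
from moshi.models import loaders

device = "cuda"
torch.cuda.synchronize(device)
before = torch.cuda.memory_allocated(device)

# The same two lines as in server.py; mimi_path and lm come from the surrounding setup code.
other_mimi = loaders.get_mimi(mimi_path, device=device)
other_mimi.set_num_codebooks(lm.dep_q)

torch.cuda.synchronize(device)
after = torch.cuda.memory_allocated(device)
print(f"other_mimi adds ~{(after - before) / 2**20:.0f} MiB of GPU memory")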

Looking at handle_connection:

other_mimi.encode(chunk)  # Result discarded
codes = mimi.encode(chunk)  # This one is actually used

Questions:

  1. What was the intended purpose of other_mimi?
  2. Was this meant for a separate voice prompt encoding path with isolated streaming state? (If so, see the sketch after this list for what I had in mind.)
  3. Is this dead code that can be safely removed, or is there planned functionality?
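
Regarding (2): if the goal was only an isolated streaming state for a voice prompt, a single Mimi instance might already cover that by entering separate streaming contexts, which would avoid keeping a second copy of the weights. A rough sketch of what I mean, assuming Mimi exposes the streaming(batch_size) context manager used in the moshi PyTorch examples, and with prompt_chunk / chunk standing in for the real tensors:

import torch
from moshi.models import loaders

device = "cuda"
# mimi_path and lm.dep_q are assumed to be the same values server.py already uses.
mimi = loaders.get_mimi(mimi_path, device=device)
mimi.set_num_codebooks(lm.dep_q)

with torch.no_grad():
    # Encode the voice prompt with its own streaming state, then drop that state
    # by leaving the context manager.
    with mimi.streaming(1):
        prompt_codes = mimi.encode(prompt_chunk)

    # Fresh streaming state for the live connection, without a second model instance.
    with mimi.streaming(1):
        codes = mimi.encode(chunk)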

Any clarification would be appreciated. We're working on optimizing memory usage for deployment on consumer GPUs.

Thanks!
