Currently, according to the documentation, the way to use an emotion model (e.g., emotion2vec) is:
from funasr import AutoModel
model = AutoModel(model="iic/emotion2vec_plus_large")
wav_file = f"{model.model_path}/example/test.wav"
res = model.generate(wav_file, output_dir="./outputs", granularity="utterance", extract_embedding=False)
print(res)
But what if I wish to use the model as an assistant model (like the existing vad_model and spk_model parameters)? What should I do, or is there any plan to add such a parameter (ser_model="iic/emotion2vec_plus_large", maybe)?
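In the meantime, a possible workaround is to run a VAD model first and feed each detected segment to the emotion model manually. Below is a minimal sketch of that per-segment loop; the `vad` and `ser` callables are hypothetical stand-ins for the real FunASR AutoModel instances (which require downloading weights), and the segment format `[[start_ms, end_ms], ...]` mirrors what FunASR's VAD models return.

```python
# Sketch: apply a speech-emotion classifier to each VAD segment manually.
# `vad` and `ser` are hypothetical callables standing in for FunASR
# AutoModel instances; with real models, vad(...) would correspond to
# vad_model.generate(wav), returning [[start_ms, end_ms], ...] segments.

def emotion_per_segment(samples, sample_rate, vad, ser):
    """Run `vad` on the full audio, then `ser` on each detected segment."""
    results = []
    for start_ms, end_ms in vad(samples, sample_rate):
        # Convert millisecond boundaries to sample indices.
        lo = int(start_ms * sample_rate / 1000)
        hi = int(end_ms * sample_rate / 1000)
        results.append({
            "start_ms": start_ms,
            "end_ms": end_ms,
            "emotion": ser(samples[lo:hi], sample_rate),
        })
    return results

if __name__ == "__main__":
    # Stub models for illustration only.
    fake_vad = lambda x, sr: [[0, 1000], [1500, 2500]]
    fake_ser = lambda seg, sr: "happy"
    audio = [0.0] * 48000  # 3 s of audio at 16 kHz
    print(emotion_per_segment(audio, 16000, fake_vad, fake_ser))
```

This loses the convenience of a single `model.generate` call, which is why a built-in ser_model parameter would be nice.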