
interpreter.llm.completion = custom_language_model seems not working #1182

Open
sq2100 opened this issue Apr 7, 2024 · 2 comments · May be fixed by #1258

Comments

sq2100 commented Apr 7, 2024

Describe the bug

I attempted to run the custom model example from the docs, which is supposed to just echo back what the user said, but it was not successful.

from interpreter import interpreter

def custom_language_model(openai_message):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    users_content = openai_message[-1].get("content") # Get last message's content

    # To make it OpenAI-compatible, we yield this first:
    yield {"delta": {"role": "assistant"}}

    for character in users_content:
        yield {"delta": {"content": character}}

# Tell Open Interpreter to power the language model with this function
interpreter.llm.completion = custom_language_model

I also tried changing the assignment to interpreter.llm.completions = custom_language_model, but the parameters don't match up.
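
A minimal standalone sketch of the mismatch (the keyword names here are an assumption, mirroring what rbrisita reports later in this thread): Open Interpreter appears to invoke the completions function with keyword arguments, which a single positional openai_message parameter cannot bind.

def custom_language_model(openai_message):
    yield {"delta": {"role": "assistant"}}

try:
    # Hypothetical call, with keyword names taken from rbrisita's report below:
    custom_language_model(
        messages=[{"role": "user", "content": "hi"}],
        model="custom",
        stream=True,
        max_tokens=100,
    )
except TypeError as e:
    print(e)  # custom_language_model() got an unexpected keyword argument 'messages'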

Reproduce

Run the code block from the description above, taken from the custom models documentation:

https://docs.openinterpreter.com/language-models/custom-models

Expected behavior

The model just echoes back what the user said.

Screenshots

No response

Open Interpreter version

0.2.4

Python version

3.11

Operating System name and version

Windows 11

Additional context

No response

Delva0 commented May 1, 2024

This is probably a problem with the official documentation.
Change the custom_language_model function to:

def custom_language_model(**params):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    openai_message = params['messages']
    users_content = openai_message[-1].get("content")

    # To make it OpenAI-compatible, we yield this first:
    yield {"delta": {"role": "assistant"}}

    for character in users_content:
        yield {"delta": {"content": character}}

You can print params to inspect its structure.
Also, this function is called in core.llm.run_text_llm.py.
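
Following that suggestion, a minimal debugging sketch (the printed key names depend entirely on the installed version; the ones in the comment below are an example taken from this thread, not a documented contract):

def custom_language_model(**params):
    # Print the keyword arguments Open Interpreter actually passes in:
    print(sorted(params.keys()))  # e.g. ['max_tokens', 'messages', 'model', 'stream']
    users_content = params["messages"][-1].get("content")
    yield {"delta": {"role": "assistant"}}
    for character in users_content:
        yield {"delta": {"content": character}}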

rbrisita commented May 4, 2024

I ran into this problem myself. @Delva0 is correct about converting the function signature to take **params. The parameters I am getting are:

def custom_language_model(messages, model, stream, max_tokens):

But yielding is not working for me at the moment.
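
Putting the two comments together, a sketch of the intended wiring (the **params signature and the parameter names come from the comments above; whether the attribute is spelled completion or completions should be checked against your installed version, which is part of the confusion here, and per the report above streaming may still misbehave):

from interpreter import interpreter

def custom_language_model(**params):
    # Keyword arguments reported above: messages, model, stream, max_tokens
    users_content = params["messages"][-1].get("content")  # last message's content

    # OpenAI-compatible streaming shape: role delta first, then content deltas
    yield {"delta": {"role": "assistant"}}
    for character in users_content:
        yield {"delta": {"content": character}}

interpreter.llm.completions = custom_language_model  # attribute name is an assumption; verify in your version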

@rbrisita rbrisita linked a pull request May 5, 2024 that will close this issue