Cannot run Open Interpreter with a local llamafile #1641

@RoseDeSable

Description

Describe the bug

Hi,
I run Open Interpreter on a Debian-based Linux (Kali Linux) in a virtual environment. I start it with the command "interpreter --model rocket-3b.Q4_K_M.llamafile", but the interpreter always wants to talk to the model using the OpenAI protocol. I don't understand this, because I installed the llama-cpp-python interface. If I start the model separately, I can chat with it through my Firefox browser. Do I have to start the model first and then start the interpreter with the --api_base option? I mean the command "interpreter --api_base http://localhost:port(of the model)".

Best Regards
Rose
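
For reference, a minimal sketch of that two-step setup, assuming the llamafile's built-in llama.cpp server listens on the default port 8080 and exposes an OpenAI-compatible /v1 endpoint; the "openai/local" model name and "dummy" key are placeholders, not values confirmed for this exact setup:

    # 1. Run the llamafile on its own; by default it serves a web UI and an
    #    OpenAI-compatible API on http://localhost:8080
    ./rocket-3b.Q4_K_M.llamafile

    # 2. In a second terminal, point Open Interpreter at that local endpoint
    interpreter --api_base http://localhost:8080/v1 --api_key dummy --model openai/local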

Reproduce

Yes

Expected behavior

I believe that the interpreter should start the model file in the background and chat with it via llama-cpp-python, just like Firefox does.
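
As a rough illustration of what driving the model through llama-cpp-python could look like, here is a hedged sketch using llama-cpp-python's bundled OpenAI-compatible server. Note it needs a plain .gguf file (a .llamafile is a self-contained executable, not a loadable .gguf), and the file path, port, model name, and key below are placeholders:

    # needs: pip install 'llama-cpp-python[server]'
    python -m llama_cpp.server --model ./rocket-3b.Q4_K_M.gguf --port 8000

    # then connect Open Interpreter to that local endpoint
    interpreter --api_base http://localhost:8000/v1 --api_key dummy --model openai/local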

Screenshots

No response

Open Interpreter version

Version: 0.4.3

Python version

Python 3.11.9

Operating System name and version

Debian

Additional context
