Adding Groq Support #1237

Open · fire17 opened this issue Apr 26, 2024 · 1 comment

fire17 commented Apr 26, 2024

Is your feature request related to a problem? Please describe.

Hi there :)
I have not seen any other issue or PR on this.

I prefer not to use the OpenAI API; I don't trust them with my data.

I think if we can get Mixtral or Llama 3 70B (or the future 400B) to work with OpenInterpreter,
it will be a much-needed speed improvement and cost reduction.

Describe the solution you'd like

I would like to be able to use Groq's API instead of OpenAI's.

They offer best-in-the-world inference speeds.
They are not yet affiliated with any big corporation (that I am aware of).

Plus, their API is free, at least for now.

Describe alternatives you've considered

Honestly, since it is not mentioned anywhere (not in the README, the docs, issues, or PRs),
I assumed this wasn't supported, but reading the LiteLLM docs I can see that they already support Groq.
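
For reference, a minimal sketch of calling Groq through LiteLLM directly, using the groq/ provider prefix their docs describe (the model name is just the one from this issue):

import litellm

# LiteLLM routes "groq/<model>" through Groq's OpenAI-compatible endpoint
# and reads GROQ_API_KEY from the environment.
response = litellm.completion(
    model="groq/mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)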

Additional context

I have already played with the official Groq Python API quite a bit, using it in other advanced pipelines, and it seems it will fit well with OI,
but when using --model groq/mixtral-8x7b-32768
I am getting errors:

[Screenshot: error output, "Screen Shot 2024-04-26 at 5 40 35"]
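
For context, a minimal sketch of how I have been using the official Groq Python client (the groq package), in case it helps anyone reproduce:

import os
from groq import Groq

# The client can read GROQ_API_KEY from the environment by default;
# passing it explicitly here for clarity.
client = Groq(api_key=os.environ["GROQ_API_KEY"])
response = client.chat.completions.create(
    model="mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)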

I have created a new branch, groq, to try to fix this, bypassing LiteLLM and using the official API directly.
I've managed to hook it well into the existing OI flow - no errors :)
But I'm also not getting any code executed...

[Screenshot: chat output, "Screen Shot 2024-04-26 at 5 31 21"]

It's doing everything right, but it seems to hallucinate the result instead of actually running the code it wrote.

I understand that it has to output 'execute' and 'code', but I'm not sure in which format.
Can anyone tell me what format OI is expecting? (I currently don't have an OpenAI API key to test against.)
An example of a working output JSON would be great.
It seems it's just a matter of making Mixtral and other models produce the output OI expects.
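
If OI is parsing the OpenAI function-calling format (an assumption on my part; I haven't verified this against OI's source, and the schema may differ by version), the assistant message it expects would look roughly like this, written here as a Python dict:

# Hypothetical shape of the assistant message, assuming the OpenAI
# function-calling convention with an "execute" function that takes
# "language" and "code" arguments. Not verified against OI's source.
expected_message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "execute",
        # In this convention the arguments arrive as a JSON-encoded string.
        "arguments": '{"language": "python", "code": "print(2 + 2)"}',
    },
}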

(I'll do more research; maybe someone has solved this in another issue.)

Really hoping to get this to work.
Thanks a lot and all the best!

fire17 commented Apr 26, 2024

With @aj47's help ❤️
The key is to point --api_base at Groq's OpenAI-compatible endpoint: --api_base "https://api.groq.com/openai/v1" --api_key $GROQ_API_KEY --model "mixtral-8x7b-32768"

This is working

export GROQ_API_KEY='<your-key-here>'
poetry run interpreter --api_base "https://api.groq.com/openai/v1" --api_key $GROQ_API_KEY --model "mixtral-8x7b-32768" --context_window 32000

This is NOT working

export GROQ_API_KEY='<your-key-here>'
poetry run interpreter --model "groq/mixtral-8x7b-32768" --context_window 32000
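
For anyone driving OI from Python instead of the CLI, here is a sketch of the same working configuration (the interpreter.llm attribute names are my assumption based on recent OI versions; check your version's docs):

import os
from interpreter import interpreter

# Same settings as the working CLI invocation above.
interpreter.llm.api_base = "https://api.groq.com/openai/v1"
interpreter.llm.api_key = os.environ["GROQ_API_KEY"]
interpreter.llm.model = "mixtral-8x7b-32768"
interpreter.llm.context_window = 32000

interpreter.chat("Print the first 10 Fibonacci numbers.")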
