Version
Command-line (Python) version
Operating System
Windows 10
Your question
I'm running a local Ollama server with the llama3 model, but I have also tested llama2 and mistral; the same issue persists with all of them.
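For reference, here is the quick sanity check I would run first to confirm the local server is reachable and the model tag is actually available. This is just a generic check against Ollama's model-listing endpoint on the default port 11434 (nothing specific to this project); adjust the host and port if your setup differs.

```python
# Quick sanity check: is the local Ollama server up, and which models are pulled?
# Assumes Ollama's default port 11434; change the URL if your server runs elsewhere.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Models available on the local server:", models)
```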
I've created a new project and am stuck in a never-ending loop of incomplete JSON responses:
That is just a small sample of the output; I've let this run for about an hour with the same result. There's also no output showing exactly what the model is producing as a response, so question one would be: how can I enable verbose logging?
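Since the tool itself doesn't surface the raw reply, one workaround is to query the local server directly and inspect what comes back. The sketch below talks to Ollama's OpenAI-compatible chat endpoint on the default port 11434; the prompt and the max_tokens value are made-up stand-ins, not the actual prompt this program sends.

```python
# Sketch: ask the local Ollama server (OpenAI-compatible API) for a JSON reply,
# then check whether the reply parses and whether it was cut off by the token limit.
import json
import requests

payload = {
    "model": "llama3",
    "messages": [
        {"role": "user", "content": "Reply with a JSON object listing three colors."}
    ],
    "max_tokens": 256,  # lower this to reproduce truncation, raise it to rule it out
}
resp = requests.post(
    "http://localhost:11434/v1/chat/completions", json=payload, timeout=120
)
resp.raise_for_status()
choice = resp.json()["choices"][0]
content = choice["message"]["content"]

print("finish_reason:", choice.get("finish_reason"))  # "length" means truncated
print("raw content:\n", content)

try:
    json.loads(content)
    print("content is valid JSON")
except json.JSONDecodeError as exc:
    print("content is NOT valid JSON:", exc)
```

If finish_reason comes back as "length", the reply really is being cut off by the token limit; if it is "stop" but the content still fails to parse, the model is probably wrapping its JSON in extra prose or markdown fences rather than being truncated.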
The program did work when I used an OpenAI API key; however, I don't want to pay for their API anymore and would like to use a local solution.
I've also attempted increasing the MAX_TOKENS parameter in the .env file, with the same result. How can I debug this?
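Besides that, we officially only support the Pythagora subscription model and OpenAI for now. It is simply impossible to support all models at the same time, since prompts might differ. Therefore I am closing this question.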