[Howto]: reduce token #840
Comments
Hi, I'm using Anthropic as the LLM server. It works fine until I get error 429: "...Number of request tokens has exceeded your rate limit (https://docs.anthropic.com/claude/reference/rate-limits). Please reduce the prompt length or the maximum tokens requested, or try again later..." I set MAX_TOKENS=4096. Any ideas or suggestions?
You may have to rewrite a couple of prompts for this. Do not reduce MAX_TOKENS to 4096, as this makes GPT Pilot's prompts nearly unusable.
In my case it was a Python environment problem. Make sure to cd to the project root directory so nothing slips out of there. No worries: after looking through the manuals I found the right command and got the scripts sorted. =) Run `poetry scripts/setup` from your gpt-pilot directory.
Hi @Wladastic,
If you're using Apple hardware: I've been having issues with preinstalled Windows machines too, getting elevated privileges for the install and so on. Try getting a workstation with nothing preinstalled at all and build a Linux OS environment. =)
Hi, Pythagora works best on my PC with the Anthropic API when I'm using the following .env settings:
Version
VisualStudio Code extension
Operating System
MacOS
Your question
It always gets stuck and gives me that error. Is there a way to increase the tokens that can be used for this step?
Is GPT Pilot always sending the full code, including all files, with every step and every request? Can I somehow tell GPT Pilot to only send the information needed for the present task?
Error calling LLM API: The request exceeded the maximum token limit (request size: 8237) tokens.
--------- LLM Reached Token Limit ----------
Can I retry implementing the entire development step?
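To give a feel for why the request above overflows (8237 tokens for one request), here is a hedged sketch of the general idea behind trimming context to fit a token budget. GPT Pilot's actual context management is not shown in this thread; the function names and the rough 4-characters-per-token heuristic are my own assumptions for illustration:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text/code."""
    return len(text) // 4

def trim_context(files: dict, budget: int) -> dict:
    """Keep whole files, smallest first, until the token budget is spent."""
    kept = {}
    used = 0
    for name, content in sorted(files.items(), key=lambda kv: len(kv[1])):
        cost = estimate_tokens(content)
        if used + cost > budget:
            continue  # this file would blow the budget; skip it
        kept[name] = content
        used += cost
    return kept

# Demo with dummy file contents (sizes chosen to show the cutoff).
files = {
    "main.py": "x" * 400,     # ~100 tokens
    "utils.py": "y" * 4000,   # ~1000 tokens
    "README.md": "z" * 40,    # ~10 tokens
}
kept = trim_context(files, budget=200)
print(sorted(kept))  # -> ['README.md', 'main.py']
```

The point of the sketch: if the tool sends every file on every request, the only ways out are a smaller project context, prompts rewritten to reference fewer files, or a provider plan with a higher token rate limit.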