I hit this same issue. The trick is understanding how AutoGen handles message context. Here's what worked for me:

For automatic context sharing in sequential conversations, AutoGen passes previous messages along by default, but you still need to manage the token limit. The simplest lever is `max_consecutive_auto_reply`, which controls conversation depth; setting it to something like 10-15 exchanges prevents runaway token usage. If you need more control, you can implement custom message trimming: keep the system message plus the last 10-15 messages and drop everything in between. I also found that using GPT-3.5 for some agents instead of GPT-4 helps a lot, since it has better rate limits and …
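As a rough sketch of the trimming approach, here's a plain-Python helper that keeps the system message plus the last N messages from an OpenAI-style message list. `trim_messages` and `keep_last` are names I made up for illustration; this isn't a built-in AutoGen function, and you'd hook it into wherever your agent builds its message list.

```python
def trim_messages(messages, keep_last=10):
    """Keep the system message (if any) plus the last `keep_last` messages.

    `messages` is a list of OpenAI-style dicts, e.g.
    {"role": "system" | "user" | "assistant", "content": "..."}.
    Hypothetical helper for illustration, not part of AutoGen itself.
    """
    # Separate the system message from the rest of the history.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    # Keep at most one system message, then only the tail of the conversation.
    return system[:1] + rest[-keep_last:]


if __name__ == "__main__":
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    for i in range(30):
        history.append({"role": "user", "content": f"message {i}"})
    trimmed = trim_messages(history, keep_last=10)
    print(len(trimmed))  # system message + last 10 user messages
```

This drops everything between the system prompt and the recent tail, which is crude but keeps token usage roughly bounded; a token-counting variant (e.g. with tiktoken) would be more precise.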

Answer selected by ssb30