Hello,
I'm developing a Telegram bot with aiogram (version 3.20.0.post0), and I'm using the limited_aiogram library to throttle outgoing requests to the Telegram API so they stay within the official rate limits.
Right now I'm working on optimizing the bot for high concurrent load. Imagine a situation where 1000+ users press an inline button at the same time. The button triggers a handler that does the following (a simplified sketch follows the list):

- Fetches the user's data from the database
- Calls `callback.message.edit_text(...)` to update the message
- Sometimes also calls `callback.message.answer(...)` to send a reply
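For context, here is a simplified sketch of that handler (the `show_profile` callback data and the `fetch_user_data` helper are placeholders for my real data layer):

```python
from typing import Any

from aiogram import F, Router
from aiogram.types import CallbackQuery

router = Router()


async def fetch_user_data(user_id: int) -> dict[str, Any]:
    # Placeholder for the real database query.
    return {"name": f"user {user_id}", "has_pending_reply": False}


@router.callback_query(F.data == "show_profile")
async def show_profile(callback: CallbackQuery) -> None:
    user_data = await fetch_user_data(callback.from_user.id)

    # Outgoing Bot API call: edit the message under the pressed button.
    await callback.message.edit_text(f"Hello, {user_data['name']}!")

    # Sometimes a second outgoing call is made as well.
    if user_data["has_pending_reply"]:
        await callback.message.answer("You also have a pending reply.")

    # Acknowledge the callback so the client stops showing the loading spinner.
    await callback.answer()
```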
Here’s my hypothesis:
If I understand the Telegram documentation correctly, both `edit_text` and `answer` count as outgoing messages and are subject to the 30 messages per second limit per bot (not per user).
So if 1000 users press the button at once, the bot will need approximately:
1000 / 30 ≈ 33 seconds
That means the last user might not get a response for 30+ seconds, which is unacceptable: users will start dropping off because of the slow response time.
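Just to spell the estimate out (assuming each callback produces exactly one outgoing API call):

```python
# Rough worst-case estimate under the assumptions above.
concurrent_users = 1000
bot_wide_limit_per_second = 30  # messages per second, for the whole bot

worst_case_wait = concurrent_users / bot_wide_limit_per_second
print(f"The last user waits roughly {worst_case_wait:.0f} seconds")  # ~33
```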
Here are my questions:
1. Am I right in assuming that `edit_text` and `answer` count as outgoing messages under the bot-wide 30 messages/second limit?
2. How do large bots (with 50,000–100,000+ users) handle this limitation? Do they use queues, caching, or aggregation strategies?
3. What are the best practices for cases when thousands of users are interacting at the same time and each one needs personalized data?
I’d really appreciate any tips, examples, articles, or explanations on how large-scale Telegram bots manage to handle this kind of load while staying responsive.
Thanks in advance!