
feat: dispatcher #668

Open
usagi32 wants to merge 2 commits into develop

Conversation

@usagi32 commented Oct 25, 2024

addresses #406

Chainhook is intended to be used as a service, and its work is heavy in both computation and IO/network-related processing. These two kinds of work do not overlap at either the hardware or the software level, so performing them sequentially is a significant bottleneck that keeps chainhook from reaching its full potential.

This PR makes it possible to use chainhook at scale, and it does this in two main ways:

  • Decoupling the CPU-bound compute tasks from the IO-bound/network-related tasks.

  • Processing the IO-bound tasks asynchronously with tokio (I don't know why tokio was not being utilized this way before) rather than sequentially as before.

This will make chainhook capable of processing thousands more requests at the same time.
In theory, this could bring speed-ups of orders of magnitude over the previous implementation.
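To make the decoupling concrete, here is a minimal, hypothetical sketch of the pattern (not the actual code in this PR): CPU-bound predicate evaluation runs on a blocking thread, while the IO-bound deliveries are awaited concurrently on the tokio runtime. The function names `evaluate_predicates` and `deliver_payload` are placeholders.

```rust
// Hypothetical sketch of the decoupling pattern described above; the names
// are illustrative, not the actual items introduced by this PR.
use tokio::task::JoinSet;

fn evaluate_predicates(block: u64) -> Vec<String> {
    // CPU-bound work: predicate evaluation, serialization, etc.
    vec![format!("payload for block {block}")]
}

async fn deliver_payload(payload: String) {
    // IO-bound work: HTTP POST to the hook endpoint, DB writes, etc.
    println!("delivering: {payload}");
}

#[tokio::main]
async fn main() {
    let blocks = vec![1u64, 2, 3];

    // CPU-bound evaluation runs on a blocking thread so it never stalls
    // the async runtime...
    let payloads = tokio::task::spawn_blocking(move || {
        blocks
            .into_iter()
            .flat_map(evaluate_predicates)
            .collect::<Vec<_>>()
    })
    .await
    .expect("evaluation task panicked");

    // ...while the IO-bound deliveries run concurrently rather than
    // sequentially.
    let mut deliveries = JoinSet::new();
    for payload in payloads {
        deliveries.spawn(deliver_payload(payload));
    }
    while let Some(res) = deliveries.join_next().await {
        res.expect("delivery task panicked");
    }
}
```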

The heart of this feature is a dispatcher module that comes in two flavors: a single-threaded/lightweight one and a multi-threaded one.
The dispatcher module can be customized per use case, as it offers great flexibility and adaptability for different kinds of workloads at any scale desired.

The single-threaded one should suffice for most use cases. This mode can be used either centrally in the service or per component, like this:

[image: single-threaded dispatcher usage]

Which option to choose depends on the individual case. For example, if the service is more bitcoin-runloop-heavy or more observer-heavy (depending on the probable range of blocks), that part can be given its own dispatcher instance or more resources, while the lighter parts can be given fewer resources or can share an instance.
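Since the screenshot is not reproduced here, the following is a rough, hypothetical sketch of what per-component usage of a lightweight single-threaded dispatcher could look like; the actual dispatcher API introduced by this PR may differ.

```rust
// Hypothetical usage sketch, not the real API from this PR: a lightweight
// dispatcher owned by one component (e.g. the bitcoin runloop), fed over a
// channel and driven on a current-thread tokio runtime.
use tokio::sync::mpsc;
use tokio::task::JoinSet;

async fn dispatch(request: String) {
    // IO-bound delivery of a single request.
    println!("dispatching: {request}");
}

fn main() {
    let (tx, mut rx) = mpsc::unbounded_channel::<String>();

    // Single-threaded runtime dedicated to this component's dispatcher.
    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .expect("failed to build runtime");

    // The component enqueues work...
    tx.send("block 42 payload".to_string()).unwrap();
    drop(tx);

    // ...and the dispatcher drains the queue, interleaving deliveries on
    // its single thread instead of blocking on each one.
    runtime.block_on(async move {
        let mut in_flight = JoinSet::new();
        while let Some(request) = rx.recv().await {
            in_flight.spawn(dispatch(request));
        }
        while let Some(res) = in_flight.join_next().await {
            res.expect("dispatch task panicked");
        }
    });
}
```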

The multi-threaded one is more intended to be used as a central entity that processes and dispatches requests from different parts of the codebase, like so:

[image: multi-threaded dispatcher usage]

The number of threads can be configured as desired for the workload.
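Again purely as a hypothetical sketch (the PR's actual types may differ), a central multi-threaded variant could look like a shared sender handle cloned into each part of the codebase, backed by a multi-thread tokio runtime whose worker count is tuned to the workload:

```rust
// Hypothetical sketch of a central multi-threaded dispatcher; names and
// structure are illustrative, not the exact API added by this PR.
use tokio::sync::mpsc;
use tokio::task::JoinSet;

async fn dispatch(request: String) {
    // IO-bound delivery of a single request.
    println!("dispatching: {request}");
}

fn main() {
    // Worker count configured per workload.
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(4)
        .enable_all()
        .build()
        .expect("failed to build runtime");

    let (tx, mut rx) = mpsc::unbounded_channel::<String>();

    // Central dispatch loop: every incoming request becomes its own task,
    // spread across the worker threads.
    let dispatcher = runtime.spawn(async move {
        let mut in_flight = JoinSet::new();
        while let Some(request) = rx.recv().await {
            in_flight.spawn(dispatch(request));
        }
        while let Some(res) = in_flight.join_next().await {
            res.expect("dispatch task panicked");
        }
    });

    // Different parts of the codebase just clone the sender and push
    // requests to the central dispatcher.
    for part in ["bitcoin runloop", "stacks runloop", "observer"] {
        let tx = tx.clone();
        tx.send(format!("request from {part}")).unwrap();
    }
    drop(tx);

    runtime.block_on(dispatcher).expect("dispatcher task panicked");
}
```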

Finally, even a simple central single-threaded dispatcher is leagues better than the previous implementation, providing nearly the same performance as above with minimal resources.

As for the integration, I have provided a reference one in the second commit.
I was in a hurry, so it is just a reference for now; I had quite different ideas for the integration part and believe it could be approached and integrated in a much better way.

@usagi32 (Author) commented Oct 25, 2024

@rafaelcr

@smcclellan requested a review from rafaelcr on October 28, 2024
@rafaelcr (Collaborator) commented:
thanks for this PR @usagi32 !! looks super promising, I'll be taking a look and running some tests both locally and in our dev clusters to see how it works
