Addresses #406.
Chainhook is intended to be used as a service, and its workload is heavy in both CPU-bound computation and IO/network-bound processing. These two kinds of work do not overlap at either the hardware or the software level, so performing them sequentially is a real bottleneck that keeps chainhook from reaching its full potential.
This PR makes chainhook usable at scale, and it does so in two main ways:
Decoupling the CPU-bound compute tasks from the IO-bound/network-related tasks.
Processing the IO-bound tasks asynchronously with tokio (I am not sure why tokio was not being utilized this way before) rather than sequentially as before.
This makes chainhook capable of processing thousands more requests at the same time.
In theory, this could bring speed-ups orders of magnitude greater than before.
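The decoupling described above can be sketched roughly as follows. This is an illustrative, std-only sketch: plain threads and a channel stand in for the tokio tasks the PR actually uses, and all names here are hypothetical.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical compute stage: stands in for CPU-bound predicate evaluation.
fn evaluate_block(height: u64) -> String {
    format!("match for block {}", height)
}

// Run the two decoupled stages; returns the number of payloads dispatched.
fn run_pipeline(blocks: u64) -> usize {
    let (tx, rx) = mpsc::channel();

    // CPU-bound stage: evaluates blocks without ever waiting on the network.
    let compute = thread::spawn(move || {
        for height in 0..blocks {
            tx.send(evaluate_block(height)).unwrap();
        }
    });

    // IO-bound stage: in the PR this side is driven asynchronously by tokio;
    // a plain receive loop stands in for dispatching HTTP payloads here.
    let mut dispatched = 0;
    for msg in rx {
        println!("dispatching: {}", msg);
        dispatched += 1;
    }

    compute.join().unwrap();
    dispatched
}

fn main() {
    assert_eq!(run_pipeline(3), 3);
}
```

The key point is that the compute stage never blocks on the network: it streams results through the channel while the IO stage delivers them concurrently.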
The heart of this feature is a dispatcher module that comes in two flavors: a single-threaded/lightweight one and a multi-threaded one.
The dispatcher module can be customized for individual use cases, as it offers great flexibility and adaptability for different kinds of workloads at any desired scale.
The single-threaded one should suffice for most use cases. This mode can be used either centrally in the service or per component, like this:
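Since the original snippet does not appear here, a minimal sketch of what per-component usage of a single-threaded dispatcher could look like (hypothetical API, not the PR's actual dispatcher module):

```rust
use std::collections::VecDeque;

// Hypothetical single-threaded dispatcher: queues tasks and runs them
// in order on the current thread, with no synchronization overhead.
struct Dispatcher {
    queue: VecDeque<Box<dyn FnOnce()>>,
}

impl Dispatcher {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    fn dispatch(&mut self, task: impl FnOnce() + 'static) {
        self.queue.push_back(Box::new(task));
    }

    // Drain the queue; returns how many tasks ran.
    fn run(&mut self) -> usize {
        let mut ran = 0;
        while let Some(task) = self.queue.pop_front() {
            task();
            ran += 1;
        }
        ran
    }
}

fn main() {
    // e.g. the bitcoin runloop owning its own lightweight instance:
    let mut dispatcher = Dispatcher::new();
    dispatcher.dispatch(|| println!("deliver payload for block 1"));
    dispatcher.dispatch(|| println!("deliver payload for block 2"));
    assert_eq!(dispatcher.run(), 2);
}
```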
depending on the situation: for example, if the service is more bitcoin-runloop-heavy or more observer-heavy (depending on the probable range of blocks), that part can be given its own dispatcher instance or more resources, while the lighter parts can be given fewer resources or share an instance.
The multi-threaded one is intended more to be used as a central entity, processing and dispatching requests from different parts of the codebase, like so:
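A rough sketch of what such a central multi-threaded dispatcher could look like, again illustrative only (a fixed pool of std worker threads draining one shared queue; names are hypothetical and not the PR's actual API):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Hypothetical multi-threaded dispatcher: a fixed pool of worker threads
// draining one shared task queue; the thread count is set by the caller.
struct Dispatcher {
    tx: mpsc::Sender<Box<dyn FnOnce() + Send>>,
    workers: Vec<thread::JoinHandle<()>>,
}

impl Dispatcher {
    fn new(threads: usize) -> Self {
        let (tx, rx) = mpsc::channel::<Box<dyn FnOnce() + Send>>();
        let rx = Arc::new(Mutex::new(rx));
        let workers = (0..threads)
            .map(|_| {
                let rx = Arc::clone(&rx);
                thread::spawn(move || loop {
                    // Each worker takes the next task off the shared queue.
                    let task = match rx.lock().unwrap().recv() {
                        Ok(task) => task,
                        Err(_) => break, // channel closed: shut down
                    };
                    task();
                })
            })
            .collect();
        Self { tx, workers }
    }

    fn dispatch(&self, task: impl FnOnce() + Send + 'static) {
        self.tx.send(Box::new(task)).unwrap();
    }

    fn shutdown(self) {
        drop(self.tx); // closing the channel lets every worker exit
        for worker in self.workers {
            worker.join().unwrap();
        }
    }
}

// Dispatch `n` requests through a central dispatcher; returns how many ran.
fn run_demo(threads: usize, n: usize) -> usize {
    let dispatcher = Dispatcher::new(threads);
    let handled = Arc::new(AtomicUsize::new(0));
    for _ in 0..n {
        let handled = Arc::clone(&handled);
        dispatcher.dispatch(move || {
            handled.fetch_add(1, Ordering::SeqCst);
        });
    }
    dispatcher.shutdown();
    handled.load(Ordering::SeqCst)
}

fn main() {
    // e.g. the observer and both runloops all submitting to one instance:
    assert_eq!(run_demo(4, 8), 8);
}
```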
The number of threads can be configured to suit the workload.
Finally, even a simple central single-threaded dispatcher is leagues better than the previous implementation, providing almost the same performance as the above with minimal resources.
As for the integration, I have provided a reference one in the second commit.
I was in a hurry, so this is just a reference for now; I had rather different ideas for the integration part, and I believe it could be approached and integrated in a much better way.