[Draft] Add support for custom allocators #366
Conversation
You go from 316.4 ns/op to 220.4 ns/op. Can you provide a rationale as to why your code (which defers to the Go …
@lemire It's not deferring to the Go …

Before:

After:
The reason for this is that the following branches are always true: `if size <= cap(a.buf) && capacity <= cap(a.buf) {` and `if size <= cap(a.uint16s) && capacity <= cap(a.uint16s) {`. I only have a single allocator, so it's just reusing the single instance of …
(The benchmark uses a custom allocator that basically always returns the same slices, which is an overly simplified version of what we would do in our production code as well.)
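For concreteness, here is a minimal sketch of the kind of slice-reusing allocator described above. The struct and method names are assumptions for illustration only, not the interface proposed in this PR; the point is just that, with a single reused instance, the `cap` checks quoted above are always true after warm-up, so the hot path never allocates.

```go
package alloc

// reusingAllocator hands out slices backed by cached arrays, growing them
// only when a request no longer fits. Names are hypothetical; this is not
// the PR's actual interface.
type reusingAllocator struct {
	buf     []byte
	uint16s []uint16
}

// GetBytes returns a []byte of length size with at least the given capacity,
// reusing the cached backing array when it is already large enough.
func (a *reusingAllocator) GetBytes(size, capacity int) []byte {
	if size <= cap(a.buf) && capacity <= cap(a.buf) {
		return a.buf[:size]
	}
	a.buf = make([]byte, size, capacity)
	return a.buf
}

// GetUint16s does the same for []uint16.
func (a *reusingAllocator) GetUint16s(size, capacity int) []uint16 {
	if size <= cap(a.uint16s) && capacity <= cap(a.uint16s) {
		return a.uint16s[:size]
	}
	a.uint16s = make([]uint16, size, capacity)
	return a.uint16s
}
```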
@lemire Ah ok, I think I see the source of the confusion. My proposal is not to implement a "default" thread-safe allocator for everyone to use by default. My proposal is to support users injecting their own allocator, and users will write whatever type of allocator makes sense for them. They could write an allocator that uses manual refcounting and is globally shared with locks, or they can do what I plan to do, which is to spawn a dedicated allocator in critical-path single-threaded loops that allows each iteration of the loop to reuse data structures over and over again.
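To illustrate the injection side, a hedged sketch of what such a single-threaded critical-path loop could look like; `NewBitmapWithAllocator` is a hypothetical stand-in for whatever constructor or setter this PR ends up exposing, and `reusingAllocator` is the sketch from the previous comment.

```go
// Hypothetical usage: one dedicated, non-thread-safe allocator owned by the
// loop, so every iteration reuses the same cached slices.
func processBatches(batches [][]uint32) {
	alloc := &reusingAllocator{}
	for _, batch := range batches {
		bm := NewBitmapWithAllocator(alloc) // hypothetical constructor taking an injected allocator
		bm.AddMany(batch)
		// ... query or serialize bm ...
		// The next iteration reuses alloc's cached slices instead of allocating fresh ones.
	}
}
```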
TODO: It's error-prone to require `allocator` to be non-nil. Could just do a nil check on each usage so that users instantiating a bitmap like `&Bitmap{}` aren't broken. Will address if we decide to move forward with this approach.

Benchmark Before
Benchmark After
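Regarding the TODO above, one shape the nil-check-on-each-usage idea could take is a fallback to a package-level default allocator. The field and helper names below are hypothetical, assuming an `Allocator` interface along the lines of the earlier sketch; the actual PR may structure this differently.

```go
// Hypothetical sketch: fall back to a package-level default so that users
// who construct &Bitmap{} directly keep working.
var defaultAllocator Allocator = &goAllocator{} // plain make()-based allocator

func (rb *Bitmap) getAllocator() Allocator {
	if rb.allocator == nil {
		return defaultAllocator
	}
	return rb.allocator
}
```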