
ConnectionPool adapter #835

Open · wants to merge 4 commits into main

Conversation

@bkeepers (Collaborator) commented Feb 3, 2024

This is an attempt at creating a generic ConnectionPool adapter that can be composed with other adapters as discussed in #826 (review).

It just wraps each adapter method call in a pool.with { } call, but I think this mostly makes sense since the adapter API already represents atomic actions. It's not a lot of code and can be used with any adapter that requires a connection pool (Redis, RedisCache, and probably others).

ConnectionPool doesn't expose a mechanism for extending an existing pool instance, caching objects that depend on a connection, or observing lifecycle events, so this adapter has to keep its own cache of adapter instances, one per connection. I'm not 100% sure this is kosher.
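
Roughly, the wrapping and per-connection caching look something like this (a simplified sketch of the approach, not the exact code in this PR):

class ConnectionPool
  def initialize(pool, &initializer)
    @pool = pool
    @initializer = initializer
    @instances = {}
  end

  # Every adapter method follows the same pattern: check out a connection,
  # look up (or build) the adapter instance for it, and delegate.
  def get(feature)
    @pool.with { |connection| adapter_for(connection).get(feature) }
  end

  # ...features, add, remove, enable, disable, etc. are wrapped the same way

  private

  def adapter_for(connection)
    @instances[connection] ||= @initializer.call(connection)
  end
end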

Usage as primary storage adapter:

pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }
adapter = Flipper::Adapters::ConnectionPool.new(pool) do |client|
  Flipper::Adapters::Redis.new(client)
end

# Read about the `config.adapter` problem below
Flipper.configure do |config|
  config.adapter { adapter }
end

Usage with cache adapter:

pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }
storage = Flipper::Adapters::ActiveRecord.new
adapter = Flipper::Adapters::ConnectionPool.new(pool) do |client|
  Flipper::Adapters::RedisCache.new(storage, client)
end

# Read about the `config.adapter` problem below
Flipper.configure do |config|
  config.adapter { adapter }
end

The config.adapter problem

Currently, Flipper returns a new adapter instance (it calls the config.adapter block) for each thread. This guards against adapters that are not thread safe. Initializing this adapter inside the config.adapter block would probably work fine, but it would result in a separate adapter cache for each thread (n threads × pool size adapter instances).

I'm thinking we should add threadsafe? to the adapter API and change Flipper to reuse adapter instances of threadsafe adapters.
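
Hypothetically, something like this (just a sketch of the idea; threadsafe? is not an existing Flipper API, and the configuration internals below are made up for illustration):

# In the adapter: declare that a single instance can be shared across threads.
def threadsafe?
  true # every call goes through pool.with, so sharing one instance is safe
end

# In Flipper's configuration: reuse one instance for threadsafe adapters
# instead of calling the config.adapter block once per thread.
def adapter
  candidate = @adapter_block.call
  if candidate.respond_to?(:threadsafe?) && candidate.threadsafe?
    @shared_adapter ||= candidate
  else
    candidate # keep today's per-thread behavior for everything else
  end
end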

cc @cymen

@cymen commented Feb 3, 2024

@bkeepers Sounds good! I'm taking some time off work, so I won't be able to keep my PR moving forward, but this sounds like a good way to go (and makes my PR unnecessary).

def initialize(pool, &initializer)
  @pool = pool
  @initializer = initializer
  @instances = {}


Could this hang onto resources for too long in some cases, preventing them from being garbage collected? ConnectionPool has a reload capability that will release its hold on the existing resources, but this @instances hash map will retain them and then grow larger as the pool offers up new resources.

I was initially thinking that ObjectSpace::WeakKeyMap would solve the problem, but this is a funny situation in which the map keys and values both have references to the same resource.
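
To make that concrete with the usage example from the description (my understanding of the failure mode):

pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }
adapter = Flipper::Adapters::ConnectionPool.new(pool) do |client|
  Flipper::Adapters::Redis.new(client)
end

adapter.features      # caches an adapter instance keyed by the checked-out client
pool.reload(&:close)  # the pool releases its hold on the existing clients...
adapter.features      # ...but @instances still references the old client and grows by one more entry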

@bkeepers (Collaborator, Author)

Yes, good catch. I just pushed 18b2a1b which makes ConnectionPool observable. I might open an issue on the upstream repo and see if a change like this would be accepted.


That seems like a good solution. The only other thing that comes to mind is an LRU cache, but I don't know if there's a solid Ruby implementation.
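
For what it's worth, a bare-bones LRU can be built on Ruby's insertion-ordered Hash (just a sketch, not a vetted implementation):

class TinyLRU
  def initialize(max_size)
    @max_size = max_size
    @store = {}
  end

  # Returns the cached value for key, building it with the block on a miss and
  # evicting the least recently used entry when the cache is full.
  def fetch(key)
    if @store.key?(key)
      @store[key] = @store.delete(key) # re-insert to mark as most recently used
    else
      @store[key] = yield
      @store.delete(@store.first[0]) if @store.size > @max_size
    end
    @store[key]
  end
end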

@maleksiuk

Thanks for working on this, @bkeepers! I started down this same path this morning and then found this PR. I'm looking forward to using it with the Redis adapter (and ideally an existing Redis pool). If it doesn't get merged for whatever reason, maybe I can contribute a less generic version for Redis.

@maleksiuk left a comment

> Currently, Flipper returns a new adapter instance (it calls the config.adapter block) for each thread.

Thanks for pointing that out. I have a codebase that uses a single Redis object for Flipper (it does not instantiate a new one inside the block). I'm aware of the mutex inside the Redis object, which is the main reason I'd like Flipper to have pool support. It sounds like I can achieve "pooling" if I instantiate a new Redis in that block, as long as my Redis server has enough connections available to support one per thread. I'd still prefer to pass in my existing Redis pool (used for non-Flipper concerns), but this is good to know in the meantime.
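
For example (assuming the per-thread behavior described above stays as it is today):

Flipper.configure do |config|
  config.adapter do
    # this block is called once per thread, so each thread gets its own connection
    Flipper::Adapters::Redis.new(Redis.new)
  end
end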


end

def reset
  @instances.clear


This is threadsafe on MRI due to the GIL, but on other interpreters I could imagine a new object being added to @instances halfway through the clear and then getting wiped out immediately. I'm not sure that's a real problem here, though, since this adapter caching is just for performance purposes and if an extra one is initialized here or there it shouldn't matter. Am I thinking about this correctly?
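
If it ever did matter, the cache could be guarded with a mutex (a sketch; adapter_for is a hypothetical name for the instance-lookup helper):

def initialize(pool, &initializer)
  @pool = pool
  @initializer = initializer
  @instances = {}
  @lock = Mutex.new
end

def reset
  @lock.synchronize { @instances.clear }
end

def adapter_for(connection)
  @lock.synchronize { @instances[connection] ||= @initializer.call(connection) }
end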
