The following comment relates to the Envoy proxy (I don't know much about other proxy-wasm implementations).
Multi-threaded servers may use a separate Wasm runtime for each worker thread. At least in Envoy, the identifiers provided to proxy_on_context_create are only unique within a worker thread (that is, in the simplest scenario, all root contexts get an identifier equal to 1). Thus it is impossible to distinguish instances of HTTP filter contexts running on different threads.
Why users may want to distinguish them: suppose a system where a plugin has one context marked as a singleton (registered in the bootstrap_extensions config section) and multiple contexts registered as an HTTP filter. If the singleton context wants to send data to the worker contexts, then each worker context must register a shared queue with a unique name (since, as far as I can see, proxy_on_queue_ready notifications only arrive at the instance that made the most recent call to proxy_register_shared_queue for that name). There is no simple way to generate such unique queue names for the HTTP filter context instance in each worker thread (a sketch of this setup follows below).
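For illustration, a minimal sketch of this setup using the Rust proxy-wasm SDK (type names such as WorkerRoot/SingletonRoot, the queue name, and the vm_id are placeholders, and the set_root_context wiring is omitted):

```rust
use std::time::Duration;

use proxy_wasm::hostcalls;
use proxy_wasm::traits::{Context, RootContext};

const QUEUE_NAME: &str = "worker-inbox"; // placeholder queue name

// One copy of this root context exists per worker thread; each copy typically
// sees context_id == 1, so nothing here identifies which worker it is.
struct WorkerRoot;

impl Context for WorkerRoot {}

impl RootContext for WorkerRoot {
    fn on_vm_start(&mut self, _vm_configuration_size: usize) -> bool {
        // With a fixed name, every worker registers the same queue, and only the
        // most recent registrant receives proxy_on_queue_ready notifications.
        hostcalls::register_shared_queue(QUEUE_NAME).is_ok()
    }

    fn on_queue_ready(&mut self, queue_id: u32) {
        if let Ok(Some(_msg)) = hostcalls::dequeue_shared_queue(queue_id) {
            // handle a message from the singleton ...
        }
    }
}

// Singleton root context, registered via bootstrap_extensions (main thread only).
struct SingletonRoot;

impl Context for SingletonRoot {}

impl RootContext for SingletonRoot {
    fn on_vm_start(&mut self, _vm_configuration_size: usize) -> bool {
        self.set_tick_period(Duration::from_secs(5));
        true
    }

    fn on_tick(&mut self) {
        // "my_vm_id" is a placeholder; it must match the vm_id of the VM that
        // registered the queue.
        if let Ok(Some(queue_id)) = hostcalls::resolve_shared_queue("my_vm_id", QUEUE_NAME) {
            let _ = hostcalls::enqueue_shared_queue(queue_id, Some("ping".as_bytes()));
        }
    }
}
```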
I think providing an identifier that is unique across the entire proxy process would help solve this issue.
If I missed something and the proxy-wasm API already has a way to accomplish this, I would be very grateful if somebody could point to it.
While adding unique identifiers is definitely possible (at least in the trivial case, within a single proxy instance that's using monotonically increasing IDs), it would require introducing synchronization across threads and/or processes, which isn't desirable, and it would most likely open the door to people asking for unique identifiers for request IDs and other resources.
Is there any reason why your plugin cannot simply generate and use a random ID for each worker/queue?
Yes, random IDs are the approach we used in our workaround. I just thought it looked like a cumbersome hack; that's why I opened this issue.
unique identifiers for request IDs and other resources
I'm not sure this is the same thing: a plugin can easily generate request identifiers of its own by just incrementing a counter in a global variable (since the VM remains persistent across requests), whereas for worker threads there is no way to distinguish them besides generating sufficiently long random data (sketched at the end of this comment).
In any case, I don't insist on resolving this issue; you can close it as "won't fix" if you think that random IDs are the recommended way of solving this problem.
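For reference, a minimal sketch of that random-ID workaround (again with the Rust proxy-wasm SDK; it assumes the getrandom crate and a host that implements WASI random_get, and the way the singleton learns the generated names is left out):

```rust
use proxy_wasm::hostcalls;
use proxy_wasm::traits::{Context, RootContext};
use proxy_wasm::types::LogLevel;

struct WorkerRoot;

impl Context for WorkerRoot {}

impl RootContext for WorkerRoot {
    fn on_vm_start(&mut self, _vm_configuration_size: usize) -> bool {
        // Context ids do not distinguish worker threads, so derive a per-worker
        // queue name from random bytes instead; 16 bytes make collisions negligible.
        let mut buf = [0u8; 16];
        if getrandom::getrandom(&mut buf).is_err() {
            return false;
        }
        let suffix: String = buf.iter().map(|b| format!("{:02x}", b)).collect();
        let queue_name = format!("worker-inbox-{}", suffix);
        let _ = hostcalls::log(LogLevel::Info, &format!("registering queue {}", queue_name));
        // The worker still has to announce this name to the singleton, e.g. by
        // enqueueing it onto a well-known queue registered by the singleton.
        hostcalls::register_shared_queue(&queue_name).is_ok()
    }

    fn on_queue_ready(&mut self, queue_id: u32) {
        if let Ok(Some(_msg)) = hostcalls::dequeue_shared_queue(queue_id) {
            // message addressed specifically to this worker ...
        }
    }
}
```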