
Align togglz with virtual threads #1172

Open
wants to merge 3 commits into master

Conversation

wyhasany
Contributor

Calling togglz from virtual threads causes pinning to the platform thread (aka the carrier thread):

```
Thread[#457,ForkJoinPool-1-worker-13,5,CarrierThreads]
    java.base/java.lang.VirtualThread$VThreadContinuation.onPinned(VirtualThread.java:183)
    java.base/jdk.internal.vm.Continuation.onPinned0(Continuation.java:393)
    java.base/java.lang.VirtualThread.parkNanos(VirtualThread.java:621)
    java.base/java.lang.System$2.parkVirtualThread(System.java:2652)
    java.base/jdk.internal.misc.VirtualThreads.park(VirtualThreads.java:67)
    java.base/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:267)
    java.base/java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1866)
    java.base/java.util.concurrent.ForkJoinPool.unmanagedBlock(ForkJoinPool.java:3780)
    java.base/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3725)
    java.base/java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1939)
    java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2095)
    io.lettuce.core.protocol.AsyncCommand.await(AsyncCommand.java:83)
    io.lettuce.core.internal.Futures.awaitOrCancel(Futures.java:244)
    io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
    io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
    jdk.proxy2/jdk.proxy2.$Proxy182.hgetall(Unknown Source)
    java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
    java.base/java.lang.reflect.Method.invoke(Method.java:580)
    io.lettuce.core.support.ConnectionWrapping$DelegateCloseToConnectionInvocationHandler.handleInvocation(ConnectionWrapping.java:215)
    io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
    jdk.proxy2/jdk.proxy2.$Proxy182.hgetall(Unknown Source)
    org.togglz.redis.RedisLettuceStateRepository.getFeatureState(RedisLettuceStateRepository.java:52)
    org.togglz.core.repository.cache.CachingStateRepository.reloadFeatureState(CachingStateRepository.java:81) <== monitors:1
    org.togglz.core.repository.cache.CachingStateRepository.getFeatureState(CachingStateRepository.java:73)
    org.togglz.core.manager.DefaultFeatureManager.isActive(DefaultFeatureManager.java:67)
```

To avoid this, I replaced the synchronized blocks with ReentrantLock, as recommended by JEP 444:
https://openjdk.org/jeps/444
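The transformation is the one JEP 444 suggests: a virtual thread that blocks inside a `synchronized` block pins its carrier, while blocking on a `java.util.concurrent` lock lets it unmount. A minimal sketch of the before/after pattern (hypothetical class, not the actual `CachingStateRepository` code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical repository illustrating the synchronized -> ReentrantLock
// change. The critical section is identical; only the locking primitive
// differs, so a virtual thread blocking on I/O inside it can unmount
// instead of pinning its carrier thread.
class PinningFreeRepository {

    private final ReentrantLock lock = new ReentrantLock();
    private String cachedState;

    // Before: synchronized (this) { ... slow remote call ... }
    // After:  explicit lock()/unlock() around the same section.
    String reloadState() {
        lock.lock();
        try {
            if (cachedState == null) {
                cachedState = fetchFromRemote(); // may block on network I/O
            }
            return cachedState;
        } finally {
            lock.unlock();
        }
    }

    private String fetchFromRemote() {
        return "ACTIVE"; // stand-in for the Redis/Lettuce call in the trace above
    }
}
```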

@codecov-commenter

Codecov Report

Attention: Patch coverage is 80.24691%, with 16 lines in your changes missing coverage. Please review.

Project coverage is 55.44%. Comparing base (fda0e51) to head (d48aa8b).
Report is 10 commits behind head on master.

Files Patch % Lines
...core/repository/file/ReloadablePropertiesFile.java 70.83% 6 Missing and 1 partial ⚠️
.../togglz/console/security/TogglzCSRFTokenCache.java 0.00% 6 Missing ⚠️
.../core/repository/cache/CachingStateRepository.java 77.77% 1 Missing and 1 partial ⚠️
...ository/property/PropertyBasedStateRepository.java 97.61% 0 Missing and 1 partial ⚠️


Additional details and impacted files
@@             Coverage Diff              @@
##             master    #1172      +/-   ##
============================================
+ Coverage     55.40%   55.44%   +0.03%     
  Complexity      930      930              
============================================
  Files           180      180              
  Lines          4606     4610       +4     
  Branches        603      604       +1     
============================================
+ Hits           2552     2556       +4     
  Misses         1896     1896              
  Partials        158      158              


@wyhasany
Contributor Author

@bennetelli can you take a look?

Assume the following use case: a service serves, for example, 10k req/s,
and every request checks 10 feature flags. The feature flags have a TTL
of 60s, and access to the remote cache takes 10ms. If a refresh of all
feature flags happens at the very same moment, every request slows down
by 10 flags * 10 ms/flag = 100ms, which shows up as a visible increase
in latency. With the changes from this commit the flags are locked
separately, so we can update them in parallel.
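The per-flag locking idea can be sketched like this (a minimal illustration with hypothetical names, not the actual togglz code): instead of one monitor guarding every reload, each feature name gets its own `ReentrantLock`, so two expired flags can be refreshed concurrently by different request threads.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical per-feature locking: one lock per feature name instead of
// a single shared monitor, so refreshes of different flags don't queue
// behind each other.
class PerFeatureLocks {

    private final ConcurrentMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    String reload(String featureName) {
        ReentrantLock lock = locks.computeIfAbsent(featureName, n -> new ReentrantLock());
        lock.lock();
        try {
            // Only requests for the SAME feature wait here; requests for
            // other features proceed under their own locks in parallel.
            return cache.computeIfAbsent(featureName, n -> n + ":ACTIVE");
        } finally {
            lock.unlock();
        }
    }
}
```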
bennetelli self-requested a review on March 21, 2024 07:24
@wyhasany
Contributor Author

@bennetelli I'm going to add another commit here that fetches asynchronously from the remote repository, so the latency of requests that check feature flags is not affected.

Assume the following use case: a service serves, for example, 10k req/s,
every request checks 10 feature flags with a TTL of 60s, and access to
the remote cache takes 10ms. When a user passes a custom
ExecutorService to the CachingStateRepository, features are updated
asynchronously. The client thread is never blocked: it receives null
when a feature is unavailable in the cache, the cached value when it is
available and not expired, or the expired value while a scheduled job
reloads it asynchronously.

Returning null is not an issue, as DefaultFeatureManager replaces it
with the default value.

If the client doesn't pass an Executor, CachingStateRepository keeps
the old blocking behaviour.
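The three outcomes described above (null on a miss, fresh value, stale value with a background refresh) can be sketched as follows. This is a simplified illustration with hypothetical names, assuming a null executor means "old blocking mode" as described, not the actual CachingStateRepository implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the async-refresh behaviour: a cache miss
// returns null immediately, an expired entry is served as-is while a
// background task refreshes it on the supplied executor.
class AsyncCachingRepository {

    private record Entry(String state, long expiresAt) {
        boolean expired() { return System.currentTimeMillis() > expiresAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Map<String, AtomicBoolean> refreshing = new ConcurrentHashMap<>();
    private final ExecutorService executor; // null => old blocking mode
    private final long ttlMillis;

    AsyncCachingRepository(ExecutorService executor, long ttlMillis) {
        this.executor = executor;
        this.ttlMillis = ttlMillis;
    }

    String getFeatureState(String name) {
        Entry entry = cache.get(name);
        if (entry == null) {
            if (executor == null) {
                return load(name); // blocking path: fetch on the caller's thread
            }
            scheduleRefresh(name);
            return null; // caller (DefaultFeatureManager) falls back to the default state
        }
        if (entry.expired() && executor != null) {
            scheduleRefresh(name); // serve the stale value, refresh in the background
        }
        return entry.state();
    }

    private void scheduleRefresh(String name) {
        // compareAndSet ensures at most one in-flight refresh per feature.
        AtomicBoolean inFlight = refreshing.computeIfAbsent(name, n -> new AtomicBoolean());
        if (inFlight.compareAndSet(false, true)) {
            executor.submit(() -> {
                try { load(name); } finally { inFlight.set(false); }
            });
        }
    }

    private String load(String name) {
        String state = name + ":ACTIVE"; // stand-in for the remote repository call
        cache.put(name, new Entry(state, System.currentTimeMillis() + ttlMillis));
        return state;
    }
}
```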
@wyhasany
Contributor Author

@bennetelli I added the part of the code responsible for fetching feature flags asynchronously, feel free to review the code :)

@wyhasany
Contributor Author

@bennetelli kind reminder 🙏🏼

@wyhasany
Contributor Author

@bennetelli can you take a look at this PR? 🥺
