Use Async cache to avoid cache contention #7156
base: develop
Conversation
Several stack traces from our service show blocked threads on this cache when calling getRows. These generally occur during service restart, while the cache is cold. The async loading cache internally stores a CompletableFuture in the ConcurrentHashMap, which avoids locking inside the ConcurrentHashMap and blocking concurrent accesses to nodes in the same bucket. Initial capacity determines the number of buckets in the internal ConcurrentHashMap used by the cache; we want to raise the minimum to reduce contention under heavy concurrent access. This avoids situations where threads asking for cache values are blocked by a resize of the underlying hash map.
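The mechanism described above can be sketched with JDK primitives alone. This is an illustrative sketch, not Caffeine's actual implementation; the AsyncMemoizer class and all of its names are hypothetical:

```java
import java.util.concurrent.*;
import java.util.function.Function;

// Illustrative sketch of a future-valued cache: computeIfAbsent only holds
// the bucket lock long enough to create and store the CompletableFuture;
// the expensive load runs on the executor afterwards, so concurrent readers
// of the same bucket are not blocked behind an in-flight load.
public class AsyncMemoizer<K, V> {
    private final ConcurrentHashMap<K, CompletableFuture<V>> cache;
    private final Function<K, V> loader;
    private final Executor executor;

    public AsyncMemoizer(int initialCapacity, Function<K, V> loader, Executor executor) {
        // initialCapacity sizes the bucket table up front, reducing the
        // resizes (and resulting contention) of a cold-start thundering herd.
        this.cache = new ConcurrentHashMap<>(initialCapacity);
        this.loader = loader;
        this.executor = executor;
    }

    public CompletableFuture<V> get(K key) {
        return cache.computeIfAbsent(key,
                k -> CompletableFuture.supplyAsync(() -> loader.apply(k), executor));
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AsyncMemoizer<String, Integer> memo =
                new AsyncMemoizer<>(256, String::length, pool);
        System.out.println(memo.get("getRows").join()); // 7
        pool.shutdown();
    }
}
```

Repeated calls for the same key return the same cached future, so only one loader invocation is in flight per key.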
```diff
@@ -39,7 +39,11 @@ public class ConflictDetectionManager {
      * (This has always been the behavior of this class; I'm simply calling it out)
      */
     public ConflictDetectionManager(CacheLoader<TableReference, ConflictHandler> loader) {
-        this.cache = Caffeine.newBuilder().maximumSize(100_000).build(loader);
+        this.cache = Caffeine.newBuilder()
```
It seems the tests expect the loader to load null values, while the async cache does not allow that, which is why the test is failing.
Throws:
NullPointerException – if the specified key is null or if the future returned by the AsyncCacheLoader is null
Is a null ConflictHandler an expected state?
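For context, here is a minimal JDK-only illustration of the asymmetry, assuming the async cache is backed by a ConcurrentHashMap of futures as the PR description suggests: a synchronous computeIfAbsent can treat a null load result as "no mapping", but the map can never hold a null future.

```java
import java.util.concurrent.*;

public class NullLoads {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> sync = new ConcurrentHashMap<>();
        // A synchronous computeIfAbsent tolerates a null "load": returning
        // null simply creates no mapping, and the caller sees null.
        String v = sync.computeIfAbsent("missing", k -> null);
        System.out.println(v);            // null
        System.out.println(sync.size());  // 0

        // Once values are futures, the map entry itself must be non-null;
        // a loader that yields a null future breaks the scheme entirely.
        ConcurrentHashMap<String, CompletableFuture<String>> async = new ConcurrentHashMap<>();
        try {
            async.put("k", null);
        } catch (NullPointerException expected) {
            System.out.println("null future rejected");
        }
    }
}
```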
```java
this.cache = Caffeine.newBuilder()
        .initialCapacity(256)
        .maximumSize(100_000)
        .buildAsync(loader)
        .synchronous();
```
Can you share the original JFR or other traces indicating we're seeing contention here? I'm a little surprised, as I would expect most writes to these caches to occur via warmCacheWith in com.palantir.atlasdb.transaction.impl.ConflictDetectionManagers#create(com.palantir.atlasdb.keyvalue.api.KeyValueService, boolean) on a single thread for a given ConflictDetectionManager instance; since those writes are all serialized, they should not contend.
If we want the async loading, are we concerned that not providing an executor will saturate the default executor, ForkJoinPool.commonPool()?
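For reference, the default-executor concern can be illustrated with plain CompletableFuture, which has the same default: without an explicit Executor, supplyAsync schedules work on ForkJoinPool.commonPool(). The class and thread names below are illustrative only:

```java
import java.util.concurrent.*;

public class ExecutorChoice {
    public static void main(String[] args) {
        // Without an explicit executor, supplyAsync runs on
        // ForkJoinPool.commonPool(); many concurrent cache loads would then
        // compete with any other work sharing that pool.
        CompletableFuture<String> onCommon =
                CompletableFuture.supplyAsync(() -> Thread.currentThread().getName());

        // A dedicated executor isolates cache-load work from the shared pool.
        ExecutorService loads = Executors.newFixedThreadPool(4, r -> {
            Thread t = new Thread(r, "cache-loader");
            t.setDaemon(true);
            return t;
        });
        CompletableFuture<String> onDedicated =
                CompletableFuture.supplyAsync(() -> Thread.currentThread().getName(), loads);

        System.out.println(onCommon.join());    // e.g. ForkJoinPool.commonPool-worker-1
        System.out.println(onDedicated.join()); // cache-loader
        loads.shutdown();
    }
}
```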
@priyanshukm are you able to provide the above?
Hey, I provided the JFR in the Slack thread, since it could be unsafe to share here. Will tag you there too.
@schlosna @priyanshukm @ergo14 - what's the status on this one? Do you believe it valuable to continue reviewing, or is the issue solved some other way?
Main thing I had in mind was to check if things improved after #7224
Several stack traces from our service show blocked threads on this cache when calling getRows. These generally occur during service restart, while the cache is cold.
As per the Caffeine documentation:
The async loading cache will internally store a CompletableFuture in the ConcurrentHashMap to avoid locking inside the ConcurrentHashMap and blocking concurrent accesses to nodes in the same bucket.
Initial capacity determines the number of buckets in the internal ConcurrentHashMap used by the cache. We want to raise the minimum to reduce contention under heavy concurrent access. This avoids situations where threads asking for cache values are blocked by a resize of the underlying hash map.
General
Before this PR:
After this PR:
==COMMIT_MSG==
Use Async cache to avoid cache contention
==COMMIT_MSG==
Priority:
Concerns / possible downsides (what feedback would you like?):
Is documentation needed?:
Compatibility
Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:
Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:
The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):
Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:
Does this PR need a schema migration?:
Testing and Correctness
What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:
What was existing testing like? What have you done to improve it?:
If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:
If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:
Execution
How would I tell this PR works in production? (Metrics, logs, etc.):
Has the safety of all log arguments been decided correctly?:
Will this change significantly affect our spending on metrics or logs?:
How would I tell that this PR does not work in production? (monitors, etc.):
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:
If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:
Development Process
Where should we start reviewing?:
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:
Please tag any other people who should be aware of this PR:
@jeremyk-91
@sverma30
@raiju