
Use Async cache to avoid cache contention #7156

Open · wants to merge 3 commits into base: develop

Conversation

priyanshukm (Contributor)

Several stack traces from our service show threads blocked on this cache when trying to getRows. Generally these occur during service restarts, while the cache is cold.

As per the Caffeine documentation:

Some attempted update operations on this cache by other threads may be blocked while the computation is in 
progress, so the computation should be short and simple, and must not attempt to update any other mappings
of this cache.

The async loading cache internally stores a CompletableFuture in the ConcurrentHashMap, which avoids locking inside the ConcurrentHashMap and blocking concurrent accesses to nodes in the same bucket.

Initial capacity impacts the number of buckets in the internal ConcurrentHashMap used by the cache. We want to increase the minimum to reduce contention under heavy concurrent access. This avoids situations where threads asking for cache values are blocked by resizing of the underlying ConcurrentHashMap.
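The mechanism described above can be sketched with plain JDK types (no Caffeine; the class name is illustrative): only a cheap CompletableFuture insertion happens inside the map, so the expensive load runs outside any bucket lock and concurrent readers of the same bucket are never blocked on a slow load.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of the future-in-map pattern an async cache uses: instead of
// running the loader inside computeIfAbsent (which holds the bucket lock for
// the whole computation), insert a CompletableFuture immediately and complete
// it outside the map.
final class FutureCache<K, V> {
    private final ConcurrentHashMap<K, CompletableFuture<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    FutureCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        CompletableFuture<V> future = new CompletableFuture<>();
        CompletableFuture<V> existing = map.putIfAbsent(key, future);
        if (existing != null) {
            return existing.join(); // another thread is loading; wait outside the map lock
        }
        try {
            V value = loader.apply(key); // expensive load runs outside any bucket lock
            future.complete(value);
            return value;
        } catch (RuntimeException e) {
            map.remove(key, future); // drop the failed future so the key can be retried
            future.completeExceptionally(e);
            throw e;
        }
    }
}
```

This is only an illustration of the contention argument; Caffeine's actual implementation is more involved (weighing, eviction, refresh).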

General

Before this PR:

After this PR:

==COMMIT_MSG==
Use Async cache to avoid cache contention
==COMMIT_MSG==

Priority:

Concerns / possible downsides (what feedback would you like?):

Is documentation needed?:

Compatibility

Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:

Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:

The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):

Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:

Does this PR need a schema migration?

Testing and Correctness

What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:

What was existing testing like? What have you done to improve it?:

If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:

If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:

Execution

How would I tell this PR works in production? (Metrics, logs, etc.):

Has the safety of all log arguments been decided correctly?:

Will this change significantly affect our spending on metrics or logs?:

How would I tell that this PR does not work in production? (monitors, etc.):

If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?:

If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):

Scale

Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.:

Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?:

Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?:

Development Process

Where should we start reviewing?:

If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?:

Please tag any other people who should be aware of this PR:
@jeremyk-91
@sverma30
@raiju

@changelog-app

changelog-app bot commented Jun 13, 2024

Generate changelog in changelog/@unreleased

What do the change types mean?
  • feature: A new feature of the service.
  • improvement: An incremental improvement in the functionality or operation of the service.
  • fix: Remedies the incorrect behaviour of a component of the service in a backwards-compatible way.
  • break: Has the potential to break consumers of this service's API, inclusive of both Palantir services
    and external consumers of the service's API (e.g. customer-written software or integrations).
  • deprecation: Advertises the intention to remove service functionality without any change to the
    operation of the service itself.
  • manualTask: Requires the possibility of manual intervention (running a script, eyeballing configuration,
    performing database surgery, ...) at the time of upgrade for it to succeed.
  • migration: A fully automatic upgrade migration task with no engineer input required.

Note: only one type should be chosen.

How are new versions calculated?
  • ❗The break and manual task changelog types will result in a major release!
  • 🐛 The fix changelog type will result in a minor release in most cases, and a patch release version for patch branches. This behaviour is configurable in autorelease.
  • ✨ All others will result in a minor version release.

Type

  • Feature
  • Improvement
  • Fix
  • Break
  • Deprecation
  • Manual task
  • Migration

Description

Use Async cache to avoid cache contention

Check the box to generate changelog(s)

  • Generate changelog entry

@@ -39,7 +39,11 @@ public class ConflictDetectionManager {
* (This has always been the behavior of this class; I'm simply calling it out)
*/
public ConflictDetectionManager(CacheLoader<TableReference, ConflictHandler> loader) {
this.cache = Caffeine.newBuilder().maximumSize(100_000).build(loader);
this.cache = Caffeine.newBuilder()
Contributor Author:

Seems like the tests expect the loader to load null values, while the async cache does not allow that, which is why the test is failing.

Throws:
NullPointerException – if the specified key is null or if the future returned by the AsyncCacheLoader is null

Is having a null ConflictHandler an expected state?
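One way to reconcile a nullable loader with the async cache's null rejection, sketched with plain JDK types (the adapter name is hypothetical, not part of this PR), is to cache Optional.empty() instead of null:

```java
import java.util.Optional;
import java.util.function.Function;

// Hypothetical adapter: a sync cache treats a null loader result as "absent",
// but an async cache throws NullPointerException on null values. Wrapping the
// result in Optional makes "no handler configured" a cacheable value.
final class NullableLoaderAdapter {
    static <K, V> Function<K, Optional<V>> wrap(Function<K, V> nullableLoader) {
        return key -> Optional.ofNullable(nullableLoader.apply(key));
    }
}
```

Callers would then unwrap the Optional (and decide what an empty value means) at the call site.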

Comment on lines +42 to +46
this.cache = Caffeine.newBuilder()
        .initialCapacity(256)
        .maximumSize(100_000)
        .buildAsync(loader)
        .synchronous();
@schlosna (Contributor) commented Jun 17, 2024:

Can you share the original JFR or other traces indicating we're seeing contention here? I'm a little surprised, as I would expect most writes to these caches to occur via warmCacheWith in com.palantir.atlasdb.transaction.impl.ConflictDetectionManagers#create(com.palantir.atlasdb.keyvalue.api.KeyValueService, boolean) on a single thread for that ConflictDetectionManager instance; those writes are all serialized, so they should not contend.

If we want the async loading, are we concerned that not providing an executor will cause saturation of the default executor ForkJoinPool.commonPool()?
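The commonPool concern above can be illustrated with plain CompletableFuture (the pool name and size below are illustrative assumptions, not part of this PR); with Caffeine, a dedicated executor could presumably be supplied via the builder's executor(...) method instead of relying on the default:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Without an explicit executor, CompletableFuture.supplyAsync schedules work
// on ForkJoinPool.commonPool(), so long-running cache loads could starve
// unrelated commonPool users. A small dedicated pool isolates the loads.
final class CacheLoadExecutor {
    static final ExecutorService LOADER_POOL = Executors.newFixedThreadPool(4);

    static int loadValue(int key) {
        // The load runs on LOADER_POOL rather than the shared commonPool.
        return CompletableFuture.supplyAsync(() -> key * 2, LOADER_POOL).join();
    }
}
```

A bounded, named pool also makes the load work visible in thread dumps and metrics, which helps when diagnosing exactly the kind of contention this PR is about.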

Contributor:

@priyanshukm are you able to provide the above?

Contributor Author:

Hey, provided the JFR in the Slack thread as it could be unsafe. Will tag you there too.

Contributor:

[image attachment]

Contributor:

@schlosna @priyanshukm @ergo14 - what's the status on this one? Do you believe it valuable to continue reviewing, or is the issue solved some other way?

Contributor:

Main thing I had in mind was to check if things improved after #7224
