Avoid virtual thread pinning in RestClientsConfig #41948

Closed
wants to merge 1 commit

Conversation

mariofusco
Contributor

This is the alternative solution suggested by @franz1981 for the problem reported here. I see pros and cons in both implementations, so I'm not strongly opinionated about either of the two.

Please let me know which of the two you want to move forward with and I will close the other. /cc @radcortez

@@ -327,19 +328,27 @@ public RestClientConfig getClientConfig(String configKey) {
         if (configKey == null) {
             return RestClientConfig.EMPTY;
         }
-        return configs.computeIfAbsent(configKey, RestClientConfig::load);
+        CompletableFuture<RestClientConfig> futureConfig = configs.computeIfAbsent(configKey, name -> new CompletableFuture<>());
+        if (!futureConfig.isDone()) {
Contributor
@franz1981 franz1981 Jul 17, 2024

This won't work as expected.
You need to make sure that, if there is contention, the load is performed only once; here you risk two threads each trying to complete whatever they found or computed in the CHM.

Using a mix of get and putIfAbsent, comparing against what is found, would work better, unless you use a capturing lambda, which makes it possible to compare the computeIfAbsent result with the submitted one.

Contributor Author

I agree that it is possible for load to be performed more than once by concurrent threads (which is indeed suboptimal), but in this case the CompletableFuture acts as a sort of AtomicReference: only the first thread to complete the config loading wins and sets that value inside the CompletableFuture.

the only way is to make sure that whoever performed the load has won the race to set the completable future first; while here you risk 2 threads trying to complete whatever they found or computed in the CHM

Sorry, but I'm not understanding what you're suggesting here. Can you please clarify, possibly with a practical example, how I could avoid two threads concurrently calling the load method?
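
For reference, a hypothetical sketch of the flow under discussion, pieced together from the diff above and the description in this comment (it is not the PR's actual code): every caller obtains the shared CompletableFuture via computeIfAbsent, and any caller that finds it incomplete performs the load and tries to complete it, so under contention load can run more than once.

CompletableFuture<RestClientConfig> futureConfig =
        configs.computeIfAbsent(configKey, name -> new CompletableFuture<>());
if (!futureConfig.isDone()) {
    // Any thread that observes an incomplete future performs the load, so two
    // racing threads may both call load(); only the first complete() wins and
    // the later result is simply discarded.
    futureConfig.complete(RestClientConfig.load(configKey));
}
return futureConfig.join();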

Contributor
@franz1981 franz1981 Jul 17, 2024

The thing is that we shouldn't introduce behaviour different from what there was before, or we risk introducing bugs, because concurrent code is not always well tested or covered.

My suggestion is to preserve the original load behaviour; to do that you need both an atomic way to set the completable future (which is trivial) and a way to know whether we have successfully published it or not.

There are different ways to do it, but I have no computer at hand right now to give you a detailed example.
Before CHM::computeIfAbsent was a thing, the usual pattern was a fast-path get and a slow-path putIfAbsent that creates a completable future upfront (not completed yet): IIRC, part of the putIfAbsent contract is the ability to verify whether your entry landed (or there must be another method that allows it, for sure). Whoever won the race to set it performs the load and completes it; whoever lost just performs a waiting get. I hope this helps to clarify.
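
A minimal sketch of that fast-path get / slow-path putIfAbsent pattern, assuming a hypothetical ConfigCache holder; RestClientConfig and its load method come from the code under discussion, everything else is illustrative.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class ConfigCache {

    private final ConcurrentHashMap<String, CompletableFuture<RestClientConfig>> configs =
            new ConcurrentHashMap<>();

    RestClientConfig getClientConfig(String configKey) {
        // Fast path: the future has already been published, just wait on it.
        CompletableFuture<RestClientConfig> future = configs.get(configKey);
        if (future == null) {
            // Slow path: create an incomplete future upfront and try to publish it.
            CompletableFuture<RestClientConfig> created = new CompletableFuture<>();
            future = configs.putIfAbsent(configKey, created);
            if (future == null) {
                // putIfAbsent returned null, so we won the race and we alone perform the load.
                try {
                    created.complete(RestClientConfig.load(configKey));
                } catch (Throwable t) {
                    created.completeExceptionally(t);
                }
                future = created;
            }
            // Otherwise another thread published its future first; fall through and wait on it.
        }
        // join() blocks without holding a monitor, so a virtual thread parks here
        // instead of pinning its carrier while the winner completes the future.
        return future.join();
    }
}

The loser simply waits on the winner's future, so load runs exactly once per key, which preserves the original computeIfAbsent semantics while keeping the loading work outside the map's internal lock.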

Contributor Author

@franz1981 thanks for the clarification, I hadn't thought of using putIfAbsent to make sure that the load is performed only once. I've modified this pull request according to your suggestion, please review again.

@radcortez feedback is welcome.

Member

All good!

@mariofusco mariofusco marked this pull request as draft July 17, 2024 10:55
@mariofusco mariofusco marked this pull request as ready for review July 18, 2024 07:40
@geoand
Contributor

geoand commented Jul 18, 2024

For me this looks very weird and I would prefer #41926

@mariofusco
Contributor Author

For me this looks very weird and I would prefer #41926

I agree that this looks weird at first sight (and, as I wrote, I'm not strongly opinionated between these two implementations), but I adopted a similar solution when I made the RunnerClassLoader virtual-thread friendly. Moreover, this implementation has the advantage of avoiding locking the whole ConcurrentHashMap while adding a new entry.

@geoand
Contributor

geoand commented Jul 18, 2024

Sure, yeah, I'm not arguing that the implementation is not good, I am just saying that it looks very weird to someone who is used to the Quarkus config system.

@radcortez
Member

It should be temporary :)

@franz1981
Contributor

Yep, and at the same time it pushes further the idea that maybe this needs some concurrent structure... while maybe it doesn't.
As Roberto said, this is a temporary fix (famous last words... eheh)

@geoand
Contributor

geoand commented Jul 18, 2024

Please please please make it actually temporary :)

Asking because I am the one that ends up with almost all the REST Client issues (not by design)

@mariofusco
Contributor Author

Made obsolete and unnecessary by 4f31e0d

@mariofusco mariofusco closed this Aug 2, 2024
@quarkus-bot quarkus-bot bot added the triage/invalid (This doesn't seem right) label Aug 2, 2024