Is there a standard way to solve blocking that must happen? #1756
Comments
A few things to note here:
- Why would you want to move it back to the main thread? Is there any specific requirement?
- It will subscribe (as requested by `subscribeOn`) on the elastic scheduler.
- It solves the blocking problem by running the blocking call in a separate thread pool that does not suffer from thread starvation. Everything emitted from your inner `Mono` will be emitted on the same thread (elastic, in your case).
Based on your code:

```java
Mono<User> getUser(String id) {
    return Mono
        .fromCallable(() -> userRepository.findOne(id))
        .subscribeOn(Schedulers.elastic());
}

// ...
sourceFlux
    .map(Request::getUserId)
    .flatMap(this::getUser)
    .map(User::getName)
    .subscribe(/* ... */);
```

The reactive type (Mono/Flux/etc.) returned from anywhere must be non-blocking, and the consumer is safe to assume that. This is why we have
My team is having the exact same discussion. The example above is the one I keep coming back to (i.e. flatMapping over a blocking call). People expressed concern over thread explosion; however, that is constrained by the concurrency factor (256 by default) on the flatMap. Anyhow, if any other light can be shed here, it would be greatly appreciated.
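For reference, the concurrency factor mentioned above can also be lowered explicitly via flatMap's overload that takes a concurrency argument. A sketch based on the earlier snippet (the cap of 16 is an arbitrary illustration, not a recommendation):

```java
sourceFlux
    .map(Request::getUserId)
    // At most 16 inner Monos are subscribed at once (default is 256),
    // bounding how many elastic threads the blocking calls can occupy.
    .flatMap(this::getUser, 16)
    .map(User::getName)
    .subscribe(name -> System.out.println(name));
```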
@crankydillo instead of using
Thanks @bsideup. In our case, we are running a JAX-RS service that creates Mono/Fluxes to process work. Some of this processing will involve blocking calls. It ultimately looks something like this:
Since the requests come in concurrently, the best idea we have so far is to use a shared, fixed-size scheduler. Does this seem like the best option to you? @ZhangSanFengByGit apologies if you feel I've hijacked your question. I can start another question if you aren't getting the answers you need. Just let me know.
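Since the comment's actual code is elided, here is a minimal stdlib sketch of the "shared, fixed-size scheduler" idea: a fixed thread pool caps how many blocking calls run at once, no matter how many requests arrive concurrently. The class and method names are illustrative, and the Reactor wiring is only shown in comments:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FixedPoolDemo {
    // Runs n stand-ins for blocking repository calls on a fixed-size pool
    // and returns the sum of their results. At most poolSize calls execute
    // concurrently; the rest queue up inside the executor.
    static int runAll(int n, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        // In Reactor, the same shared pool could back a scheduler:
        //   Scheduler s = Schedulers.fromExecutorService(pool);
        //   mono.subscribeOn(s);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int id = i;
            results.add(pool.submit(() -> {
                Thread.sleep(10); // simulated blocking I/O
                return id * 2;
            }));
        }
        int sum = 0;
        for (Future<Integer> f : results) sum += f.get();
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runAll(10, 4)); // 2*(0+1+...+9) = 90
    }
}
```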
@crankydillo I would definitely suggest moving this conversation to Gitter/StackOverflow or at least a separate issue :)
@bsideup I created this Stack Overflow post. Any comments are appreciated!
I'm closing the issue since the question is answered. |
@crankydillo FYI, we're also considering either limiting the number of elastic threads or creating a new blocking-friendly but pool-limited scheduler:
@bsideup Thanks. Please consider updating the SO post :) I think we will mark it accepted if you propose a pool-limited 'elastic' scheduler.
@crankydillo updated with |
Expected behavior
In the reference guide, it mentions a way to wrap the blocking part:
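The wrapper in question is, paraphrased from the Reactor 3 reference guide's "How do I wrap a synchronous, blocking call?" section (`blockingCall()` stands in for whatever synchronous method is being wrapped):

```java
// Defer the blocking call and subscribe on the elastic scheduler,
// so the blocking work runs on a dedicated, growable thread pool.
Mono<String> blockingWrapper = Mono.fromCallable(() -> blockingCall());
blockingWrapper = blockingWrapper.subscribeOn(Schedulers.elastic());
```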
At a glance, this wrapper hands the blocking code off to an elastic thread.
However, there is a situation that makes me confused:
Imagine that one of the flatMap operators in a Flux operator/subscriber chain uses this blocking wrapper: what happens when that flatMap's FlatMapInners subscribe to the blockingWrapper?
It seems that those inners still have to wait for the result produced on the elastic thread before moving on to onNext in the downstream operators of the main chain (so does this actually solve the blocking at all?).
So, how can this solve real blocking scenarios in practice?
Actual behavior
After being confused for days, it occurred to me that one approach may solve the must-block scenarios:
In a word, I sandwich the blocking part between two publishOn operators. This seems slightly problematic, but is there a better way to solve standard blocking situations? Thanks a lot!
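A sketch of that two-publishOn arrangement, assuming Reactor 3 with the elastic scheduler (the operator chain and `userRepository` are illustrative, reusing names from the comments above):

```java
sourceFlux
    .publishOn(Schedulers.elastic())        // downstream work now runs on an elastic thread...
    .map(id -> userRepository.findOne(id))  // ...so this blocking call cannot starve the main threads
    .publishOn(Schedulers.parallel())       // hop back to a non-blocking scheduler for the rest
    .map(User::getName)
    .subscribe(name -> System.out.println(name));
```

The first publishOn moves the blocking map onto an elastic thread; the second hands its results back to a parallel worker so the remainder of the chain stays non-blocking.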