Closed connection re-establishment behaviour #149
Hi @FZambia,
Currently, rueidis only does periodic PINGs after it starts pipelining, because they are treated as alternatives to
In other words, currently, if there are no concurrent requests, client-side caching requests or pubsub subscriptions, then rueidis will just write the sole request to the
I agree. This should be better handled in this library, and combining both ways you proposed is the only solution, I think.
I tried 3748f2e which looks close to making automatic re-establishment work. It did not work for me initially though; it seems that for general pipeline connections the background goroutines were not started. With a change along these lines:

```diff
+	p.background()
 	if !r2 && !r2ps {
 		if ver, ok := p.info["version"]; ok {
 			if v := strings.Split(ver.string, "."); len(v) != 0 {
@@ -169,9 +173,6 @@ func _newPipe(connFn func() (net.Conn, error), option *ClientOption, r2ps bool)
 				p.version = int32(vv)
 			}
 		}
 	}
-	if p.onInvalidations = option.OnInvalidations; p.onInvalidations != nil {
-		p.background()
-	}
```

all works fine: background routines start and connection closing/re-establishment works as expected. It seems I am still lacking some understanding of
Hi @FZambia, 3748f2e only addresses the first part of your proposal:
I am still working on the second part:
When the second part is finished,
You are right.
I see, thanks for explaining! Do you think it's beneficial though? I mean, always starting a background reading/pinging goroutine seems sufficient – it will start in the hot path anyway once we have concurrent requests, so trying to avoid goroutines while the request rate is small or sequential should not make a big difference. Also, connection closing could be detected immediately in many cases. Or am I missing something here?
Indeed, it won't make a big difference. The only benefit is that per-request latency for sequential usage will be smaller than pipelining in the background, due to fewer goroutine switches.
Then do you think it's a good idea to reduce code complexity and reduce the time to detect closed connections by always running the background read and background ping? To me it seems logical, since in the concurrent usage case the application moves to that state anyway. Plus, reaching this state as early as possible should make behaviour slightly more predictable right from the app start.
Hi @FZambia,
Yes, the application can trigger the underlying
Therefore, synchronous mode can only be considered a tiny latency optimization for sparse traffic or sequential usage. Currently, the code complexity of the optimization is acceptable, I believe. It just uses the
Actually, I am wondering if it is possible to go back to synchronous mode to provide lower latencies automatically. I don't yet have an idea of how to implement such switching, and it might then really not be worth having in terms of code complexity.
Actually, I am not worried a lot about code complexity; that would just be a bonus of the change. I'd say my main concern is the possibility to detect closed connections immediately – which is what we get when the reading goroutine is always running. That way a quick connection glitch would go unnoticed by the app in most cases, reducing errors.
I see your point about optimizing that case – but perhaps we could add an option to control this, so we could inherit the closing behaviour and still have the fast sequential path? Something like
Sure, maybe
Thanks, works for me! Maybe also
Hi @FZambia,
And v0.0.88 has been released with a fix for the performance issue introduced in #151. Please use the latest version if you have downloaded v0.0.87.
Many Redis features rely on pipelining, such as client-side caching and pubsub, so I believe turning off pipelining completely is not an option. Therefore, there are probably no more modes I can imagine.
Awesome, many thanks - I updated
Hello @rueian!
I just came across one thing which I'd like to solve somehow. Let's imagine we have a `Client`, and then the connection with Redis is lost. For example, we can stop the Redis server, then quickly start it again. In this case we currently get behaviour where, even though Redis is back, the next command issued to Redis over `rueidis.Client` (for some specific `slot`; the command may be issued an hour after the Redis restart happened, for example) always returns an `EOF` error, and only the second command issued to Redis is successful.

It seems to me that the connection re-establishment mark happens inside the `mux.pipeline` method after we issue a command. At the same time, in `pipe._backgroundRead` we get an error from the read when the connection is closed (upon Redis stop), but it seems this does not mark the connection for re-establishment, so we will still get `EOF` when trying to issue the next command over `rueidis.Client`, and no errors on subsequent commands.

One more thing I do not understand: it seems that `rueidis` does not PING all connections, only some of them, so the fact that we have a closed connection cannot be discovered automatically soon after Redis becomes available again. Dedicated connections are periodically PINGed, but general pipeline connections seem not to be, so those connections are not re-established automatically.

This makes me think there are two possible ways to solve this:

1. Upon a read error, `readNextMessage` could mark the connection in a way such that the next attempt to write a command over it results in re-establishing the connection, so we could avoid the extra error on the first command from the application after some time.
2. PING general pipeline connections too, so that `rueidis` quickly comes back to life once Redis is available again.

Theoretically I could add retries at the app level if I get `EOF` from `rueidis` – but I'm not sure this is really required if it can be handled internally by the library.