Distributor: Use pooled buffers for reading/decompressing request body #6836
Conversation
Force-pushed 82e8ec6 to fa84566
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Force-pushed fa84566 to 9aaab58
Nice. I'm not sure if tying the two buffers (body / decoded) into a single struct is a good idea, and I'm wondering if we should instead use a struct that tracks all buffers used by a request. Something like:
```go
type requestBuffers struct {
	p                  *sync.Pool
	buffersFromThePool []*bytes.Buffer
}

func (rb *requestBuffers) getBuffer() *bytes.Buffer {
	b := rb.p.Get().(*bytes.Buffer)
	rb.buffersFromThePool = append(rb.buffersFromThePool, b)
	return b
}

func (rb *requestBuffers) cleanup() {
	for _, b := range rb.buffersFromThePool {
		rb.p.Put(b)
	}
	rb.buffersFromThePool = rb.buffersFromThePool[:0]
}
```
Then we could call cleanup to return all buffers.
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Thanks for the advice @pstibrany - I think I agree it could be beneficial to avoid coupling the two buffers together in the pool, so I implemented your suggested design instead (type requestBuffers). The revised design fares slightly worse in the distributor push benchmark, with one extra allocation, but I guess it makes little practical difference?
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
Thank you, the changes make sense to me. I left some non-blocking suggestions.
Co-authored-by: Peter Štibraný <pstibrany@gmail.com>
Signed-off-by: Arve Knudsen <arve.knudsen@gmail.com>
What this PR does
The distributor currently uses pooled buffers for decompressing snappy compressed remote write requests. However, it doesn't use pooled buffers when reading or gzip-decompressing request bodies (OTLP requests can be gzip compressed). In this PR I propose to also use buffer pooling for reading uncompressed and compressed write request bodies, which should particularly benefit OTLP requests, since they're not snappy compressed.
Benchmarks
Benchmarking shows some speedup and reduction in memory usage for OTLP. I'm currently testing these changes in a dev cell, with no problems detected.
BenchmarkPushHandler
BenchmarkOTLPHandler

Which issue(s) this PR fixes or relates to
Checklist
- CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX].
- about-versioning.md updated with experimental features.