
Receiver: reduce network bandwidth when ingesting data #5572

Closed
fpetkovski opened this issue Aug 5, 2022 · 4 comments · Fixed by #5575

Comments

@fpetkovski (Contributor)

Is your proposal related to a problem?

We noticed that our intra-AZ egress costs tripled when we migrated from HA Prometheus pairs to Receivers. We would have expected costs to increase by only around 50%, due to the added replication factor.

Describe the solution you'd like

Looking at the code, it seems that forwarding requests between receivers is done by encoding remote-write requests as uncompressed protobuf. This could be the reason for the increase, and we might be able to reduce bandwidth by compressing requests with snappy.
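To illustrate why compression should help a lot here: a serialized remote-write payload is dominated by repeated label strings, which compress extremely well. The sketch below is a rough, self-contained demonstration of that effect on a synthetic payload. It uses stdlib `zlib` purely as a stand-in for snappy (snappy requires a third-party package), and `fake_remote_write_payload` is a made-up helper, not anything from the Thanos codebase; real ratios and CPU cost will differ, which is part of what needs measuring.

```python
import zlib

def fake_remote_write_payload(num_series: int = 500) -> bytes:
    # Mimic the shape of a remote-write request: many series whose
    # label sets share most of their bytes (metric name, job, AZ, ...).
    lines = []
    for i in range(num_series):
        labels = (
            f"__name__=http_requests_total,job=api,"
            f"instance=pod-{i % 10},az=us-east-1a"
        )
        lines.append(f"{labels} value=1.0 ts=1659700000000".encode())
    return b"\n".join(lines)

payload = fake_remote_write_payload()
compressed = zlib.compress(payload)

print(f"raw:        {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes")

# Highly repetitive label data shrinks dramatically; the round trip
# must of course be lossless.
assert zlib.decompress(compressed) == payload
assert len(compressed) < len(payload) // 5
```

Snappy trades some compression ratio for much lower CPU overhead than zlib, which is why Prometheus remote write already uses it on the client-to-receiver hop; the proposal here is effectively to apply the same idea to the receiver-to-receiver forwarding hop.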

Describe alternatives you've considered

No alternatives found so far

@fpetkovski (Contributor, Author)

fpetkovski commented Aug 5, 2022

Cortex seems to support this already, so adding it to Thanos could be straightforward: cortexproject/cortex#2940

@bwplotka (Member)

bwplotka commented Aug 5, 2022

LGTM. thanks for noticing!

@squat (Member)

squat commented Aug 5, 2022

Cool, let's see. I didn't really understand how to read the network ingester graph from that issue, as it has negative values for the bandwidth.

@fpetkovski (Contributor, Author)

Yeah, the graphs are a bit strange. I'll give this a shot next week and measure the change in transmitted bytes from the receivers.
