
Investigate mechanisms to reduce scatter gather #2

Closed

jacksontj opened this issue Mar 8, 2018 · 6 comments

Comments

@jacksontj (Owner)

Right now each query must be sent to every server_group to see if it has data that matches. We could (for example) maintain a bloom filter of the metric names that exist on each remote server_group -- then only send queries there when a name match exists (this is only helpful if the metric isn't in all of them, since doing this with labels is probably impractical).
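
A rough sketch of what that pre-check could look like (not promxy's actual code -- the types here are hypothetical, and it uses the github.com/bits-and-blooms/bloom package for the filter). Because a bloom filter can return false positives but never false negatives, a negative result lets us safely skip a backend:

```go
package main

import (
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

// serverGroup pairs a backend with a bloom filter of the metric
// names it is known to hold (hypothetical type, not promxy's).
type serverGroup struct {
	name   string
	filter *bloom.BloomFilter
}

// refresh rebuilds the filter from a list of metric names, which
// could be fetched periodically from the server group's
// /api/v1/label/__name__/values endpoint.
func (sg *serverGroup) refresh(metricNames []string) {
	// Sized for ~1M names at a 1% false-positive rate.
	sg.filter = bloom.NewWithEstimates(1_000_000, 0.01)
	for _, n := range metricNames {
		sg.filter.Add([]byte(n))
	}
}

// mayHave reports whether the group might hold the metric; a
// "false" here means the backend can be skipped entirely.
func (sg *serverGroup) mayHave(metric string) bool {
	return sg.filter.Test([]byte(metric))
}

func main() {
	sg := &serverGroup{name: "us-east"}
	sg.refresh([]string{"node_cpu_seconds_total", "up"})

	for _, q := range []string{"up", "http_requests_total"} {
		if sg.mayHave(q) {
			fmt.Printf("fan out %q to %s\n", q, sg.name)
		} else {
			fmt.Printf("skip %s for %q\n", sg.name, q)
		}
	}
}
```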

@jacksontj (Owner, Author)

Exact matching based on the labels for the server group (SG) was added with #37
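
For reference, those labels are attached per server group in promxy's config, roughly like the snippet below (hostnames and label values made up -- see the promxy README for the authoritative schema). With this in place, a query like `up{region="us-east"}` only needs to fan out to the first group:

```yaml
promxy:
  server_groups:
    - static_configs:
        - targets:
            - prom-us-east:9090
      labels:
        region: us-east
    - static_configs:
        - targets:
            - prom-eu-west:9090
      labels:
        region: eu-west
```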

@raffraffraff

raffraffraff commented Oct 29, 2018

Edit 2: Ignore me. It exists! #88 and also, this would rock #41

There's another approach that might be useful (and possibly easier to add) and is similar to the idea of limiting queries by label... Limit by the date range in the query! Many people run large Prometheus instances to hold metrics at a lower resolution for months or more, and lots of smaller instances that function as 'scrapers', storing full-resolution data for a few days. You could reduce the scatter by enabling configuration options for 'min-age' and 'max-age'.

This could have benefits for other systems too. I haven't used Promxy with Thanos yet, but it seems that it should be possible to set up two groups of Thanos Query pools, one for sidecars (live data) and one for Thanos Store (S3 lookup). Putting Promxy in front of these and routing the queries based on date range could eliminate a lot of unnecessary Thanos Store lookups.

Edit: Now that I think of it, this would let us avoid querying via sidecars entirely and proxy the request straight to the Prometheus instances they're attached to -- so Thanos sidecars would only be responsible for uploading metrics to S3 and providing a store API for retrieving them on request.

Although... if you wanted to make that super beneficial, you could inspect the query date range and send sub-queries to different backends based on their min-age / max-age, and then aggregate the results ... the way Trickster does it.
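
A minimal sketch of that routing idea (hypothetical types and field names, not promxy's or Trickster's actual code): each backend advertises a min-age/max-age window, and the proxy clamps the query range per backend -- or skips the backend entirely when there is no overlap -- before fanning out:

```go
package main

import (
	"fmt"
	"time"
)

// backend holds hypothetical min-age/max-age bounds: a scraper might
// keep only the last 48h, a long-term store everything older than 2h.
type backend struct {
	name   string
	minAge time.Duration // youngest data it should serve (0 = now)
	maxAge time.Duration // oldest data it should serve
}

// clamp intersects the query range [start, end] with the window the
// backend covers. ok is false when there is no overlap, meaning the
// backend can be skipped entirely.
func (b backend) clamp(start, end, now time.Time) (s, e time.Time, ok bool) {
	oldest := now.Add(-b.maxAge)
	newest := now.Add(-b.minAge)
	if end.Before(oldest) || start.After(newest) {
		return time.Time{}, time.Time{}, false
	}
	s, e = start, end
	if s.Before(oldest) {
		s = oldest
	}
	if e.After(newest) {
		e = newest
	}
	return s, e, true
}

func main() {
	now := time.Now()
	backends := []backend{
		{name: "scraper", minAge: 0, maxAge: 48 * time.Hour},
		{name: "thanos-store", minAge: 2 * time.Hour, maxAge: 365 * 24 * time.Hour},
	}
	// A query covering the last 6 hours hits both backends, but each
	// one only sees the slice of the range it actually covers; the
	// proxy would then aggregate the two partial results.
	start, end := now.Add(-6*time.Hour), now
	for _, b := range backends {
		if s, e, ok := b.clamp(start, end, now); ok {
			fmt.Printf("%s: query %s -> %s\n", b.name, s.Format(time.Kitchen), e.Format(time.Kitchen))
		} else {
			fmt.Printf("%s: skipped\n", b.name)
		}
	}
}
```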

@jacksontj (Owner, Author)

@raffraffraff Thanks for the feedback!

> Edit 2: Ignore me. It exists! #88 and also, this would rock #41

No worries, good feedback though -- always nice to hear when they line up. I started working on #41 a little bit, but it requires a bunch more work, as the trickster code isn't set up as a library; this effectively means I need to either (1) refactor all that code or (2) re-implement it. Right now this has been somewhat low on my priority list because trickster in front of promxy works great, but it's definitely on the radar.

@raffraffraff

raffraffraff commented Apr 16, 2019

FYI: I'm now using Trickster -> Promxy -> Thanos Query, and find that the Thanos Sidecar's use of the remote_read API might be an issue. I'm experimenting with taking the Thanos Sidecar out of the query path entirely and querying the Prometheus scrapers directly from Promxy (along with the Thanos Store). To do this, I'm also removing the Thanos Sidecar from the Thanos Query config. I'll test this, and if it's interesting I'll post my results here. I'm hoping that removing the Thanos Sidecar hop and avoiding its use of the remote_read API (which cannot stream!) will reduce memory utilization on the scrapers and return results faster.

If this works, then my only request would be the ability to modify the start/end times of Promxy queries to each endpoint based on a configuration option. For example, I'd like to limit queries to Prometheus scrapers for data in the range 'now' to 'now+$duration' and limit queries to Thanos Store for data in the range 'now+$duration' to 'query end time'.
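
As a sketch only (the option names below are made up, not an actual promxy setting), the kind of config I mean would look something like this:

```yaml
# Sketch only -- min_age/max_age are hypothetical option names.
promxy:
  server_groups:
    - static_configs:
        - targets:
            - prom-scraper:9090
      min_age: 0s    # serves 'now' back to...
      max_age: 48h   # ...48 hours ago
    - static_configs:
        - targets:
            - thanos-store:10901
      min_age: 48h   # only data older than 48h
      max_age: 8760h # up to a year back
```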

BTW: I think the Trickster team has refactored their code to make it "easier to contribute", which may also mean it's easier to use as a library. But as you said, Trickster is doing a good job...

@jacksontj (Owner, Author)

> find that the Thanos Sidecar's use of the remote_read API might be an issue.

I would expect the results to be correct, but they may be fairly inefficient to produce (as you mention, memory churn etc.). For the Thanos chunks pushed to S3, remote_read will be required, but for all the data in the local prom you can query directly -- and it seems you are on the right track there.

> If this works, then my only request would be the ability to modify the start/end times of Promxy queries to each endpoint based on a configuration option.

This is on the TODO list :) #88

> BTW: I think the Trickster team has refactored their code to make it "easier to contribute", which may also mean it's easier to use as a library. But as you said, Trickster is doing a good job...

Definitely, I've actually contributed a variety of bugfixes upstream. I'm hopeful that the 1.x series will solve most of the caching issues there -- long-term I'd still like to integrate the caching with promxy (#41) so we can have complete control over cache busting.

@jacksontj (Owner, Author)

#560 is more-or-less the solution for this for now; so I'm going to close out this old issue. If more ideas come up we can capture those in another issue.
