Easy scaling when using non push based receivers #32869

Closed

gouthamve opened this issue May 6, 2024 · 4 comments
Labels: closed as inactive, discussion needed, enhancement, needs triage, Stale

Comments

@gouthamve (Member)

Component(s)

No response

Is your feature request related to a problem? Please describe.

When using OTel Collector receivers that are not push based, scaling out the Collectors becomes complicated.

For example, if you are using the mysqlreceiver and scale the Collector up to 2 replicas, you'll end up collecting the same metrics twice.

To handle this, we need multiple Collector deployments: one with the pull-based receivers and one with just the OTLP receiver. And when a single Collector cannot handle the load from the receivers, you then have to split the receivers across multiple deployments manually.
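As a rough illustration of that workaround, the sketch below splits the pipeline into a single-replica "agent" Collector that owns the pull-based receiver and a freely scalable "gateway" Collector that only receives OTLP. The mysql endpoint, the credentials, the otel-gateway Service name, and the debug exporter are placeholders chosen for illustration, not values from this issue; the two documents represent two separate Collector configs.

```yaml
# Agent Collector (pinned to 1 replica): owns the pull-based receiver.
receivers:
  mysql:
    endpoint: mysql:3306              # placeholder database endpoint
    username: otel                    # placeholder credentials
    password: ${env:MYSQL_PASSWORD}
    collection_interval: 30s
exporters:
  otlp:
    endpoint: otel-gateway:4317       # placeholder gateway Service
    tls:
      insecure: true
service:
  pipelines:
    metrics:
      receivers: [mysql]
      exporters: [otlp]
---
# Gateway Collector (N replicas): only the push-based OTLP receiver,
# so it can be scaled horizontally behind a regular Service.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
exporters:
  debug: {}                           # stand-in for the real backend exporter
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [debug]
```

Only the gateway tier scales here; the agent tier stays at one replica, and once it saturates, the receivers have to be split across agent deployments by hand, which is the pain point this issue describes.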

Describe the solution you'd like

A solution like the target allocator that automatically spreads the receivers across the Collector instances in a cluster and ensures that only one instance of each receiver is running at any given moment.

Describe alternatives you've considered

Config management to scale things out, but this is not easy to build or maintain.

Additional context

No response

gouthamve added the enhancement and needs triage labels on May 6, 2024
crobert-1 added the discussion needed label on May 6, 2024
@jaronoff97 (Contributor)

This is a great idea overall; I've had similar thoughts about the k8s cluster receiver. Through a few discussions with @swiatekm-sumo, we were thinking it would be best if the collector had generic support for a hash or shard key that the operator could automatically fill in. This would make it easier for receiver authors to take advantage of sharding when present.
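No such shard key exists in the collector or the operator today. Purely to sketch the shape of the idea, a sharding-aware receiver could be handed a shard identity that the operator templates in per pod (for example from a StatefulSet ordinal), roughly like the hypothetical config below. The `sharding` block and both environment variables are invented for illustration; only the `${env:...}` substitution syntax is an existing collector feature.

```yaml
# Hypothetical sketch only: no current receiver exposes a "sharding" block.
# The idea is that the operator injects a per-pod shard identity, and a
# sharding-aware receiver handles only the targets for which
# hash(target) % total_shards == shard_index.
receivers:
  k8s_cluster:
    sharding:                            # invented setting, not a real field
      shard_index: ${env:SHARD_INDEX}    # e.g. StatefulSet pod ordinal
      total_shards: ${env:TOTAL_SHARDS}  # total collector replicas
exporters:
  debug: {}
service:
  pipelines:
    metrics:
      receivers: [k8s_cluster]
      exporters: [debug]
```

With a key like this, receiver authors could split work deterministically across replicas without a central allocator, while the operator keeps the shard count in sync with the replica count.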

We could also look into more generic target support in the target allocator, but I worry that not all receivers want to separate their concerns in that way. Prometheus' native discovery mechanism is one that's easy to act as a middleman for; however, most receivers do not have that same type of discovery and work off of API calls instead. For endpoint-based receivers, we could conceivably have the target allocator work for them by having the calls proxy through the TA. However, this may require the TA to import collector components, which would result in a bad cycle. I'd love to hear other people's thoughts here.

@jpkrohling (Member)


github-actions bot commented Jul 8, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.


github-actions bot commented Sep 6, 2024

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned on Sep 6, 2024