Distributed collector configuration #1906

Open
swiatekm opened this issue Jul 11, 2023 · 9 comments
Labels: area:collector (Issues for deploying collector), question (Further information is requested)

@swiatekm
Contributor

swiatekm commented Jul 11, 2023

Note: This issue is intended to state the problem and collect use cases to anchor the design. It's neither a proposal nor even a high-level design doc.

Currently, configuration for a single Collector CR is monolithic. I'd like to explore the idea of allowing it to be defined in a distributed way, possibly by different users. It would be the operator's job to collect and assemble the disparate configuration CRs and create an equivalent collector configuration - much like how prometheus-operator creates a Prometheus configuration based on ServiceMonitors.

Prior art for similar solutions includes the Prometheus operator with its Monitor CRs and the logging-operator.
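
For readers unfamiliar with that model: a ServiceMonitor is a small, namespaced CR owned by an application team, and prometheus-operator discovers all of them and compiles them into the central Prometheus scrape configuration. Roughly (values are illustrative):

# A typical prometheus-operator ServiceMonitor.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: team-a
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: metrics
      interval: 30s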

Broadly speaking, the benefits of doing this could be:

  • Decoupling operational aspects of running the collector from functional aspects of configuring it.
    Application developers would write only the piece of the configuration relevant to their application, while a platform team would be responsible for running the collector.
  • Allowing users to share pieces of the configuration, for example exporters which might depend on some global set of Secrets.
  • Through the above two points, allowing the collector to be configured in a decentralized way, making it much easier to scale to a large number of cluster users with different telemetry needs.

Potential problems with doing this that are unique to the OTel operator:

  • The OTel Collector does far more than either Prometheus or Fluent Bit. Our solution should, at a minimum, support all three signal types.
  • Depending on the signal type, the collector's mode of operation differs: we want a DaemonSet for logs, but a StatefulSet for Prometheus metrics.
  • It may be difficult to create configuration CRs which guarantee validity of the generated collector configuration, given the number of possible components.

A somewhat related issue regarding new CRs for collector configuration: #1477

I'd like to request that anyone who is interested in this kind of feature post a comment in this issue describing their use case.

@frzifus added the question label on Jul 11, 2023
@rupeshnemade

rupeshnemade commented Jul 13, 2023

Based on our products, I feel this would be a much-needed feature.

Our setup currently has 30 Kubernetes clusters with more than 4,000 nodes and 70K pods.
We have multiple use cases which are difficult to implement today but would be easier if OTel had the ability to support distributed configuration:

  1. We need dynamic Kafka exporter configuration, but because the OTel config is purely static, it is very difficult to update it dynamically based on different sets of Kafka brokers.
  2. Right now the static OTel config ties us to a single set of config rules. If another team needs to add its own OTel rules in a different namespace, that is not possible, because OTel has no distributed-config option comparable to Prometheus's ServiceMonitor feature with service discovery.

Our teams have a growing need to forward logs to their own destinations for analysis and reporting, and to filter out logs. They need to frequently add and remove destinations from the pipeline, so dynamic configuration is really required to enable this at large scale.
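
To make the first use case concrete, this is a sketch of the kind of Kafka exporter block that today has to live in the single shared collector config (broker addresses and topic are made-up values); changing brokers, or letting another team add its own destination, currently means editing this one OpenTelemetryCollector CR:

exporters:
  kafka:
    brokers:                 # hypothetical broker addresses
      - kafka-0.kafka:9092
      - kafka-1.kafka:9092
    topic: otlp_logs
    encoding: otlp_proto

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [kafka]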

@wreed4

wreed4 commented Jul 14, 2023

This feature would be very advantageous to us. As we grow as a company, we want to move away from a central team needing to know about the many hundreds of other services running on our clusters. Each team that writes a service is responsible for deploying it and exposing any custom metrics or logs they want to pull off-cluster. We want a central team to manage the pipeline that pushes those metrics and logs to our central observability platform, but we do not want the owner of that pipeline to have to know which endpoints, logs, or metrics should be forwarded off-cluster and which should not, or what services exist in the first place. As stated in the initial problem statement of this issue, this is very similar to how the Prometheus operator works today, and in fact that is what we use. To move to an OTel-based solution and replace Prometheus as a forwarding agent, we really need this decentralization ability.

@jaronoff97
Contributor

jaronoff97 commented Aug 16, 2023

Thanks everyone for your feedback here. I've come around to this idea and think it would be beneficial to the community. @swiatekm-sumo I'm going to self-assign and work on this after #1876 is complete. Do you want to collaborate on the design?

@lsolovey

lsolovey commented Sep 6, 2023

I totally support this initiative and agree with the use cases already mentioned above.

Another use case I'd like to add is the ability for developers to manage tail sampling configuration. We run hundreds of applications in the cluster, with all observability data collected into a centralized platform. We want application developers to be able to configure tail sampling policies for their applications without touching the OpenTelemetryCollector CRD (which contains a lot of infrastructure-related settings and is managed by the platform team).
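
For context, the settings in question are the collector's tail_sampling processor policies; a minimal example of the fragment an application team would want to own (policy names and thresholds are illustrative):

processors:
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow-requests
        type: latency
        latency:
          threshold_ms: 500

Today a fragment like this has to be edited inside the platform team's OpenTelemetryCollector resource.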

@frzifus
Member

frzifus commented Sep 11, 2023

@lsolovey Could you give an example of what kind of configuration you would expect? I am working on a proposal.

@frzifus
Member

frzifus commented Sep 15, 2023

In summary, a good first step would be to separate the configuration of exporters from the collector configuration.
I had a conversation about this with @jaronoff97 yesterday. One possibility would be to start with a gateway and exporter CR. Here is an example of how these CRDs relate to each other.

graph TD;
    OpenTelemetryKafkaExporter-->OpenTelemetryExporter;
    OpenTelemetryOtlpExporter-->OpenTelemetryExporter;
    OpenTelemetryExporter-->OpenTelemetryGateway;
    OpenTelemetryExporter-->OpenTelemetryAgent;
    OpenTelemetryAgent-->OpenTelemetryCollector;
    OpenTelemetryGateway-->OpenTelemetryCollector;

Since all these CRDs are based on the OpenTelemetryCollector definition, supporting a native YAML configuration seems to me to be a prerequisite.

Once this is done, we can start prototyping the gateway and exporter CRD.
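
As a strawman for the diagram, none of these CRDs exist yet, but an exporter CR referenced by a gateway CR could look roughly like this (API group/version and all field names are made up for illustration):

# Hypothetical OpenTelemetryExporter: wraps a single exporter's configuration.
apiVersion: opentelemetry.io/v1alpha2
kind: OpenTelemetryExporter
metadata:
  name: vendor-otlp
  namespace: platform
spec:
  type: otlp
  config:
    endpoint: ingest.example.com:4317
    tls:
      ca_file: /etc/certs/ca.pem
---
# Hypothetical OpenTelemetryGateway: references exporters by name.
apiVersion: opentelemetry.io/v1alpha2
kind: OpenTelemetryGateway
metadata:
  name: gateway
  namespace: platform
spec:
  exporterRefs:
    - name: vendor-otlp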

@luolong

luolong commented Sep 15, 2023

My attempts so far at setting up and configuring the OTel Collector Operator have led me to thoughts similar to those mentioned here and in #1477.

The Prometheus Operator has the correct idea here, I believe.

There are basically two or three concerns here that would be useful to separate:

  • Running/operating OpenTelemetry Collector instances with all the best-practice boilerplate for monitoring a Kubernetes cluster baked in.
    • Running agent and gateway instances
    • Provisioning "preset" K8s telemetry collection (nodestat receiver, k8s_cluster receiver, k8s_attributes processor, etc.)
    • Perhaps a UI component to visualize and edit OTel configuration (low priority).
  • Collector configuration:
    • OpentelemetryReceiver resources for declaratively configuring telemetry sources
    • OpenTelemetryExporter resources for declaratively configuring telemetry destinations
    • OpenTelemetryPipeline resources for binding it all together (see the sketch below)
  • AutoInstrumentation resources
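
A purely illustrative sketch of the pipeline resource binding the others together (all kinds and field names are hypothetical):

apiVersion: opentelemetry.io/v1alpha2
kind: OpenTelemetryPipeline
metadata:
  name: traces-default
spec:
  signal: traces
  receiverRefs:
    - name: otlp-in        # hypothetical OpenTelemetryReceiver
  processors: [batch]
  exporterRefs:
    - name: vendor-otlp    # hypothetical OpenTelemetryExporter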

@pavolloffay
Member

pavolloffay commented Mar 11, 2024

I would like to restart this thread with a very simple proposal. The foundation for distributed collector configuration is the config-merging feature of the collector. However, merging currently overrides arrays; a proposal for an append-merge flag is tracked in open-telemetry/opentelemetry-collector#8754.
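
For background, the collector already merges multiple --config sources, which is what this builds on. A minimal sketch of the current behavior:

# base.yaml (platform team)
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [otlp]

# team.yaml (an application team adds its own processor)
service:
  pipelines:
    traces:
      processors: [tail_sampling]

# Running the collector with --config=base.yaml --config=team.yaml merges the
# maps, but the processors list from team.yaml replaces the one from base.yaml
# (the pipeline ends up with only [tail_sampling]), which is the behavior the
# linked append-merge proposal aims to address.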

Merging of configuration is order-dependent (e.g. the order of processors in the pipeline matters). Therefore the proposal is to introduce a new CRD, collectorgroup.opentelemetry.io. The CollectorGroup and collector CRs would initially need to be in the same namespace to play well with the k8s RBAC model.

apiVersion: opentelemetry.io/v1beta1
kind: CollectorGroup
metadata:
  name: simplest
spec:
  root: platform-collector
  collectors:
    - name: receivers
    - name: pii-remove-users
    - name: pii-remove-credit-cards
    - name: export-to-vendor
---
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: platform-collector
spec:
  collectorGroup: true
  config:

  • spec.root defines the root collector, which determines deployment mode, scaling, etc.
  • spec.collectors defines the list of collector CRs whose configs will be merged (maybe other fields as well, e.g. env vars)
  • the operator deploys a single collector per CollectorGroup
  • the OpenTelemetryCollector's spec.collectorGroup indicates that the collector is part of a group and should not be deployed independently

The operator could do some validation of the collector configs to make sure each config contains only unique components to avoid overrides.
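
To make this concrete, a member collector from the group above, e.g. pii-remove-users, would carry only its own configuration fragment (the processor shown is just an illustration, assuming members also set the proposed spec.collectorGroup flag):

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: pii-remove-users
spec:
  collectorGroup: true      # part of a group, not deployed on its own
  config:
    processors:
      attributes/remove-users:
        actions:
          - key: user.name
            action: delete

The operator would then merge this fragment with the other members' configs, in the order listed in the CollectorGroup, into the config of the single deployed collector.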

@frzifus
Member

frzifus commented Mar 12, 2024

I like the idea, but I have a few open questions/thoughts:

  • What would happen if export-to-vendor is based on a different image/version than platform-collector? Maybe it uses a component that does not exist in the image used by the platform collector.
  • How would we handle env variable, volume, etc. conflicts? We could limit these to the platform-collector, but that could become awkward when, for example, only export-to-vendor requires specific TLS certs.
  • If CollectorGroups are limited to a single namespace, what is the benefit compared to a single collector configuration?
