
agent/v1: perhaps make instance a field instead of a key in the attributes map #108

Open
odeke-em opened this issue Sep 14, 2018 · 4 comments

Comments

@odeke-em
Member

Migrating here from census-ecosystem/opencensus-go-exporter-ocagent#11.

If an application decides to create more than one exporter, the agent should be able to distinguish between each exporter instance. One scenario where this applies: an application using the library acts as a proxy and forks n children that aren't allowed to talk to any network except the proxy itself (firewalled); the proxy will then need more than one exporter.

The current remedy is to add "instance_id" to the attributes map, but that requires every language implementer to memorize that key. Anyone who forgets could misuse the spec and cause unexpected behavior, which is why it might be worth considering making InstanceID a field on Node instead of just a map key.
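
To make the two options concrete, here is a minimal Go sketch of today's workaround, assuming the generated agent/common/v1 types; the promoted field is hypothetical and only hinted at in the comment.

```go
package main

import (
	"fmt"
	"os"

	commonpb "github.com/census-instrumentation/opencensus-proto/gen-go/agent/common/v1"
)

// nodeForExporter builds the Node an exporter would send today, smuggling the
// instance marker through the free-form attributes map.
func nodeForExporter(instanceID string) *commonpb.Node {
	return &commonpb.Node{
		Identifier: &commonpb.ProcessIdentifier{
			Pid: uint32(os.Getpid()),
		},
		Attributes: map[string]string{
			// Ad-hoc key that every language implementer has to remember;
			// the proposal is to promote this to a real Node field instead
			// (hypothetically, something like Node.instance_id).
			"instance_id": instanceID,
		},
	}
}

func main() {
	fmt.Println(nodeForExporter("exporter-1"))
}
```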

@odeke-em
Member Author

/cc @bogdandrutu @songy23

@songy23
Contributor

songy23 commented Sep 14, 2018

> One scenario where this applies: an application using the library acts as a proxy and forks n children that aren't allowed to talk to any network except the proxy itself (firewalled); the proxy will then need more than one exporter.

Agreed, this is a legitimate scenario, though I have a few questions about it:

  1. Is it really necessary to distinguish different exporters within the same process? For the case you mentioned, would it make more sense for all the forked children to export as one Node? In particular, how do we deal with multiple Config streams to the forks? AFAICS the Config sent from the Agent should apply to all the forks, so in that sense the forks should belong to one Node.

  2. Is this use case generic enough that we have to add one more identifier field? If these cases are relatively rare, it may be enough to just look at the process_id field, and only look for additional info in the attributes map when needed; a rough sketch of that keying approach follows this comment. (I brought this up because I'm afraid we may end up adding more and more fields to the Node message that could have just been part of the attributes map.)

  3. instance_id may not be a good name since it overlaps with a concept in monitored resource metadata. Maybe use thread_id instead? I think each exporter within a process is guaranteed to be running in a different thread.

I don't object to this proposal but want to get a better understanding of the use cases.
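
Here is the rough sketch of what the process_id-first keying from (2) could look like on the agent side, assuming the generated agent/common/v1 Go types; the key format and helper name are only illustrative, not the agent's actual code.

```go
package agentkeys

import (
	"fmt"

	commonpb "github.com/census-instrumentation/opencensus-proto/gen-go/agent/common/v1"
)

// nodeKey derives a map key for a connected exporter: process identity first,
// with the attributes map consulted only when an instance marker is present.
func nodeKey(n *commonpb.Node) string {
	id := n.GetIdentifier()
	key := fmt.Sprintf("%s/%d", id.GetHostName(), id.GetPid())
	if inst, ok := n.GetAttributes()["instance_id"]; ok {
		key += "/" + inst
	}
	return key
}
```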

@odeke-em
Member Author

tl;dr: let's not spend much time on this issue, because creating more than one exporter currently also means that if any one of them receives and applies a config, the singleton tracer's config changes for every other exporter as well.

> Is it really necessary to distinguish different exporters within the same process? For the case you mentioned, would it make more sense for all the forked children to export as one Node?

Yes! A scenario close to home is a media processing service with a Go frontend that has to fork out to multiple instances of a program via a shell, since the algorithms and engine are written in C++; each fork/exec has its stdout, stderr, and stdin connected to the same parent program. To keep resource utilization high, videos/images are streamed to each forked process, and exec'd programs are reused even after a job completes. Multiple processes are thus forked by the same program/PID, and with tracing we need to be able to vary the trace config per exec.
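
For context, a stripped-down sketch of that shape, with a hypothetical engine binary path: one Go parent PID exec's several long-lived C++ workers and keeps their pipes open.

```go
package main

import (
	"io"
	"log"
	"os/exec"
)

// worker is one long-lived exec of the C++ engine, reused across jobs.
type worker struct {
	cmd    *exec.Cmd
	stdin  io.WriteCloser
	stdout io.ReadCloser
}

func main() {
	const n = 4 // number of engine instances forked by this single parent PID
	var workers []worker
	for i := 0; i < n; i++ {
		cmd := exec.Command("/usr/local/bin/media-engine") // hypothetical C++ binary
		in, err := cmd.StdinPipe()
		if err != nil {
			log.Fatal(err)
		}
		out, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		workers = append(workers, worker{cmd: cmd, stdin: in, stdout: out})
	}
	// Videos/images would be streamed over workers[i].stdin / stdout, and each
	// exec may need a different trace config even though the parent PID is one.
	_ = workers
}
```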

> AFAICS the Config sent from the Agent should apply to all the forks, so in that sense the forks should belong to one Node.

Not really, because each exec doesn't get equal work.

> Is this use case generic enough that we have to add one more identifier field? If these cases are relatively rare, it may be enough to just look at the process_id field, and only look for additional info in the attributes map when needed. (I brought this up because I'm afraid we may end up adding more and more fields to the Node message that could have just been part of the attributes map.)

Indeed, I raised this concern with:

> The current remedy is to add "instance_id" to the attributes map, but that requires every language implementer to memorize that key.

The danger here is that if a language implementer gets the spelling or capitalization of the key wrong, the lookup is botched for every agent-exporter program in that language. For example, if the key is "instance_id" for PHP, Python, Java, and Node.js but the Go implementation uses "instanceID", all Go users will have clashing nodes and the same config will be sent to every single user. This is a very subtle bug to catch, which is why I filed this issue and mentioned the danger in my original posting. A field number in the proto definition is forever, while a map key name is ephemeral.
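
A tiny illustrative sketch of that failure mode, assuming the generated agent/common/v1 Go types; the two keys are the whole point, everything else is incidental.

```go
package main

import (
	"fmt"

	commonpb "github.com/census-instrumentation/opencensus-proto/gen-go/agent/common/v1"
)

func main() {
	// Exporter written against the documented key.
	nodeA := &commonpb.Node{Attributes: map[string]string{"instance_id": "1"}}
	// Exporter whose implementer mis-remembered the capitalization.
	nodeB := &commonpb.Node{Attributes: map[string]string{"instanceID": "2"}}

	_, okA := nodeA.Attributes["instance_id"] // true
	_, okB := nodeB.Attributes["instance_id"] // false: nodeB looks like it has no instance at all
	fmt.Println(okA, okB)
	// A dedicated proto field with a fixed field number cannot drift like this.
}
```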

> instance_id may not be a good name since it overlaps with a concept in monitored resource metadata. Maybe use thread_id instead? I think each exporter within a process is guaranteed to be running in a different thread.

I don't think this assumption holds across languages; maybe for Java, but not for the others. Threads aren't a user-level concept in Go, and its runtime doesn't expose functionality for getting a thread ID. Even trickier, goroutines can be migrated between threads by the work-stealing scheduler.

However, I think I might be wasting valuable time with this issue: if there is more than one exporter and the agent sends a config to any one of them, then with our current singleton tracer implementations the config is applied globally, so there is no point in having more than one exporter until that's sorted out.

@odeke-em
Member Author

A nicer option would be to use a field, say "nonce", that is generated randomly; that should solve this problem without requiring locks or tracking instance counts.
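
A minimal sketch of that idea, generating the nonce with crypto/rand at exporter construction time; how it gets attached to the Node (an attribute today, a field later) is left open.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newNonce returns a random per-exporter identifier, with no locks or
// global instance counters involved.
func newNonce() (string, error) {
	b := make([]byte, 8)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

func main() {
	n, err := newNonce()
	if err != nil {
		panic(err)
	}
	fmt.Println("exporter nonce:", n) // e.g. attached to the Node sent to the agent
}
```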
