Log record mutations are visible in next registered processors #4067

Merged · 35 commits · Sep 4, 2024

Conversation

@pellared (Member) commented Jun 4, 2024

Current description

Closes #4065

After a few SIG meetings and many discussions, the Go SIG decided to follow the intention of the specification.

I propose clarifying the specification to avoid confusion for future readers.

Previous description

Per #4065 (comment)

Allow chaining processors and having the log record modifications local (see the sketch below).
This is how the C++ (stable) and Go (beta) Logs SDKs are currently designed.
For reference, the following SDKs implement shared mutations for log record processors: Java, .NET, Python, JavaScript, PHP.

For Go, it allows high performance by minimizing the amount of heap allocations while keeping the design clean and Go idiomatic (similar to structured logging in the Go standard library).
open-telemetry/opentelemetry-go#5470 is a PoC of making log record mutations visible in the next registered processors, but it suffers from a very awkward design (and it is still not compliant with the way the specification is currently written).
open-telemetry/opentelemetry-go#5478 is another PoC of making log record mutations visible in the next registered processors, but it incurs more heap allocations and adds synchronization (in my opinion, an unacceptable performance overhead).
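To make the copy semantics concrete, here is a minimal Go sketch using hypothetical `Record` and `Processor` types (not the actual opentelemetry-go API): each independently registered processor receives its own shallow copy of the record, so a mutation is visible downstream only when a processor explicitly chains to another one.

```go
package main

import "fmt"

// Record and Processor are simplified stand-ins for the SDK types; the real
// opentelemetry-go Logs SDK has a richer API. This only illustrates
// copy-vs-shared semantics.
type Record struct {
	Body string
}

type Processor interface {
	OnEmit(r Record) // the record is passed by value: a shallow copy
}

// redactProcessor mutates its copy and explicitly forwards it to a wrapped
// processor. The mutation is visible downstream only because of this
// explicit chaining.
type redactProcessor struct{ next Processor }

func (p redactProcessor) OnEmit(r Record) {
	r.Body = "[redacted]"
	p.next.OnEmit(r)
}

type printProcessor struct{}

func (printProcessor) OnEmit(r Record) { fmt.Println(r.Body) }

func main() {
	rec := Record{Body: "user signed in"}

	// Two independently registered processors: the redaction done by the
	// first one is not visible to the second, because each receives its
	// own copy of rec.
	registered := []Processor{
		redactProcessor{next: printProcessor{}}, // prints "[redacted]"
		printProcessor{},                        // prints "user signed in"
	}
	for _, p := range registered {
		p.OnEmit(rec)
	}
}
```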

EDIT:
A shared log record means that consecutive processors receive the same instance of the log record.
Not shared means that each processor receives its own copy of the log record.

Some implementations already use a shared record (Java, .NET, Python, JavaScript, Rust). However, in Go (and probably C++ as well) we prefer to use shallow copies to reduce the amount of heap allocations (this makes zero-allocation log processing possible). For example, it can drastically improve performance: trace and debug logs (when the info log level is set) would reduce memory pressure and make garbage collection a lot more efficient (for 95% of the cases they would not cause any heap allocation).

Even if the other implementations would benefit from using copies as well, they may not want to change, as it would be a breaking change.

In short, stack-allocated log records can be more performant. Sharing log records causes Go's escape analysis to force a heap allocation more often.
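A hedged illustration of the escape-analysis argument, again with a hypothetical `Record` type: a record passed by value can stay on the stack, while a pointer that may be retained forces a heap allocation.

```go
package escape

// Record is a hypothetical, simplified log record used only to illustrate
// the allocation argument; it is not the real SDK type.
type Record struct {
	Severity int
	Body     string
}

// byValue receives its own shallow copy, so the caller's Record is not
// forced to escape and can live on the stack.
func byValue(r Record) bool { return r.Severity >= 9 }

// retained simulates a processor that keeps a reference to the record
// (e.g. for batching); once a pointer may outlive the call, escape
// analysis moves the Record to the heap.
var retained *Record

func byPointer(r *Record) bool {
	retained = r
	return r.Severity >= 9
}

func Emit() {
	// A debug record that will ultimately be dropped.
	r := Record{Severity: 5, Body: "debug detail"}
	_ = byValue(r) // r can stay stack-allocated: dropping it is free

	r2 := Record{Severity: 5, Body: "debug detail"}
	_ = byPointer(&r2) // r2 escapes to the heap and adds GC pressure
}
```

Running `go build -gcflags=-m` on such a package reports which variables escape to the heap, which is one way to verify the allocation difference.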

@pellared: this comment was marked as outdated.

@pellared marked this pull request as ready for review June 4, 2024 16:46
@pellared requested review from a team June 4, 2024 16:46
@pellared requested a review from jmacd June 18, 2024 11:24
@pellared (Member, Author)

@open-telemetry/go-approvers @open-telemetry/cpp-approvers @open-telemetry/rust-approvers PTAL

@pellared added the spec:logs (Related to the specification/logs directory) label Jun 18, 2024
@XSAM: this comment was marked as resolved.

@pellared: this comment was marked as resolved.

@XSAM (Member) left a comment

We can leave the definition of shared logRecord in the doc, so future viewers don't need to check the PR to know the definition.

pellared and others added 2 commits June 21, 2024 08:23
Co-authored-by: Sam Xie <sam@samxie.me>
@pellared requested a review from cijothomas June 24, 2024 10:54
@jsuereth (Contributor) left a comment

Given all the discussions and options here, I think this is the right path forward with what we have today.

@github-actions bot

This PR was marked stale due to lack of activity. It will be closed in 7 days.

@github-actions bot added the Stale label Aug 14, 2024
@cijothomas (Member)

@pellared Can you resolve the conflict? This looks like it has enough approvals, so it should be good to merge.

@github-actions bot removed the Stale label Aug 17, 2024
@tigrannajaryan (Member)

@pellared Can you resolve the conflict? This looks like it has enough approvals, so it should be good to merge.

I think @pellared is out. We may need someone else to take over the PR.

@cijothomas (Member)

@pellared Can you resolve the conflict? This looks like it has enough approvals, so it should be good to merge.

I think @pellared is out. We may need someone else to take over the PR.

Looks like he just appeared to fix the conflict :)

@jmacd @arminru Tagging to request a review, as the PR seems to be assigned to you both!

@cijothomas (Member)

@open-telemetry/technical-committee Could you suggest next steps for moving this forward? There are approvals already, but no further movement, and the bot will mark this stale again soon.
It is totally fine if this is just waiting for more eyes to review and/or if people are busy.

@carlosalberto (Contributor)

Let's mention this in our spec call today; otherwise we should merge this in the next 1-2 days, as it has been open for a little while and no concerns have been raised.

@cijothomas (Member)

Let's mention this in our spec call today; otherwise we should merge this in the next 1-2 days, as it has been open for a little while and no concerns have been raised.

Thanks!
I won't be able to join the spec call, most likely :(
Could you help mention this in the call today?

@carlosalberto (Contributor)

Just to be super clear: what's the final opinion from the @open-telemetry/cpp-maintainers on this topic?

@trask (Member) left a comment

I think LogRecordProcessors which stamp additional log attributes prior to export are a common use case (e.g. I have implemented two of them here and here).
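As an illustration of that use case, here is a hedged sketch of an attribute-stamping processor, using hypothetical types rather than the real SDK interfaces or the linked implementations; whether the stamped attributes are visible to other registered processors is exactly the shared-vs-copy question this PR clarifies.

```go
package main

import "fmt"

// Hypothetical simplified types; not the opentelemetry-go SDK API.
type Record struct {
	Body  string
	Attrs map[string]string
}

type Processor interface{ OnEmit(r Record) }

// stampProcessor adds fixed attributes to every record before delegating
// to the wrapped processor (for example, an exporting processor).
type stampProcessor struct {
	next  Processor
	attrs map[string]string
}

func (p stampProcessor) OnEmit(r Record) {
	// Copy-on-write for the attribute map so the stamping stays local to
	// this chain and does not leak through a shared map.
	merged := make(map[string]string, len(r.Attrs)+len(p.attrs))
	for k, v := range r.Attrs {
		merged[k] = v
	}
	for k, v := range p.attrs {
		merged[k] = v
	}
	r.Attrs = merged
	p.next.OnEmit(r)
}

type printProcessor struct{}

func (printProcessor) OnEmit(r Record) { fmt.Println(r.Body, r.Attrs) }

func main() {
	p := stampProcessor{
		next:  printProcessor{},
		attrs: map[string]string{"deployment.environment": "production"},
	}
	p.OnEmit(Record{Body: "user signed in"})
	// Output: user signed in map[deployment.environment:production]
}
```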

@pellared (Member, Author)

pellared commented Sep 4, 2024

@open-telemetry/technical-committee, can you please check whether this PR is good to be merged given #4067 (comment)?

@carlosalberto merged commit 42dd11a into open-telemetry:main Sep 4, 2024
6 checks passed
Labels: clarification (clarify ambiguity in specification), spec:logs (Related to the specification/logs directory)