kinesis_streams plugin truncates time_key when using millisecond precision format #7538

Closed
watkinsmike opened this issue Jun 9, 2023 · 5 comments · Fixed by #9128

watkinsmike commented Jun 9, 2023

Bug Report

Describe the bug
kinesis_streams output plugin truncates sub-seconds when using millisecond precision with time_key and time_key_format. This only occurs when the resulting time_key millisecond value is below 100 ms, i.e. fractions of the form .00X and .0XX.

.00X becomes .X on the time_key field
.0XX becomes .XX on the time_key field

Raw message with focus on pertinent fields:

{"time": "2023-06-09T14:53:12.088865849Z", "timestamp": "2023-06-09T14:53:12.88"}

Expected behavior
The timestamp entry should retain its precision when using the millisecond format and not drop leading zeros.

Your Environment

  • Version used: 2.1.4
  • Configuration:
      [INPUT]
          Name tail
          Path /var/log/containers/*.log
          multiline.parser cri
          Tag kube.*
          Skip_Long_Lines On
          Refresh_Interval 10

      [FILTER]
          Name kubernetes
          Buffer_Size  96k   # was 32k
          Match kube.*
          Merge_Log Off
          Merge_Log_Key structured_log
          Keep_Log Off
          K8S-Logging.Parser On
          K8S-Logging.Exclude On
          Annotations Off

      [OUTPUT]
          Name  kinesis_streams
          Match *
          region ${REGION}
          stream ${KINESIS_STREAM}
          time_key timestamp
          time_key_format %Y-%m-%dT%H:%M:%S.%3N
  • Environment name and version (e.g. Kubernetes? What version?): EKS 1.26 and 1.27
  • Filters and plugins: kubernetes filter

Additional context
Our log files end up in Loggly after Kinesis, and Loggly places restrictions on the timestamp field in order to parse logs correctly.
https://documentation.solarwinds.com/en/success_center/loggly/content/admin/automated-parsing.htm#json

In order for Loggly to recognize the timestamp you send in with the event, follow these guidelines:
    The only timestamp format accepted is ISO 8601 (e.g., 2013-10-11T22:14:15.003Z).
    Loggly supports microseconds/seconds fraction up to 6 digits, per the spec in RFC5424.
    Must be a top level JSON field called either: Timestamp, timestamp, or eventTime.

This is why we cannot use the default time field, and also why we cannot use the 9-digit precision of the kinesis_streams output plugin with %9N.
The effect is that any log entry with a fraction between .000 and .099 ends up out of order, since it is indexed up to nearly a full second later than it should be. This makes troubleshooting difficult because entries no longer appear in the expected order. For example, a log entry that occurs at X.009 becomes X.9, which our logging aggregator interprets and indexes as X.900.
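
Below is a small standalone illustration (not Loggly's actual parser, just an assumed right-padding interpretation of the fraction) of why the ordering breaks: the fraction string is scaled by its digit count, so a truncated ".9" is read as 900 ms while ".009" is read as 9 ms:

    #include <stdio.h>
    #include <string.h>

    /* Interpret a fractional-seconds string at millisecond precision by
     * right-padding short fractions with zeros, as an aggregator would. */
    static long fraction_to_millis(const char *frac)
    {
        long value = 0;
        size_t digits = strlen(frac);
        for (size_t i = 0; i < digits && i < 3; i++) {
            value = value * 10 + (frac[i] - '0');
        }
        for (size_t i = digits; i < 3; i++) {
            value *= 10; /* ".9" -> 900, ".09" -> 90 */
        }
        return value;
    }

    int main(void)
    {
        printf("\".009\" -> %ld ms\n", fraction_to_millis("009")); /* 9   */
        printf("\".9\"   -> %ld ms\n", fraction_to_millis("9"));   /* 900 */
        return 0;
    }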

@PettitWesley (Contributor)

Ack.

https://github.com/fluent/fluent-bit/blob/master/src/aws/flb_aws_util.c#L1003

So the problem I see in the code is that we need to ensure millisecond_str and nanosecond_str are always a fixed length: milliseconds should always be 3 chars and nanoseconds 9, and if they come up short we need to add leading zeroes.
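
A rough standalone sketch of that padding idea (not the actual flb_aws_util.c patch; the helper name here is made up) using fixed-width, zero-padded conversions:

    #include <stddef.h>
    #include <stdio.h>

    /* Sketch only: format the fractional part with a fixed width so
     * leading zeroes are never dropped. */
    static void format_fractions(unsigned long nanoseconds,
                                 char *millisecond_str, size_t ms_size,
                                 char *nanosecond_str, size_t ns_size)
    {
        /* %03lu -> always 3 chars, %09lu -> always 9 chars */
        snprintf(millisecond_str, ms_size, "%03lu", nanoseconds / 1000000UL);
        snprintf(nanosecond_str, ns_size, "%09lu", nanoseconds);
    }

    int main(void)
    {
        char ms[4];
        char ns[10];
        format_fractions(88865849UL, ms, sizeof(ms), ns, sizeof(ns));
        printf("ms=%s ns=%s\n", ms, ns); /* ms=088 ns=088865849 */
        return 0;
    }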

I apologize for this bug, I should've noticed this during PR review.

@PettitWesley (Contributor)

I've got a fix: #7538

github-actions bot

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

github-actions bot added the Stale label Jan 30, 2024

github-actions bot commented Feb 5, 2024

This issue was closed because it has been stalled for 5 days with no activity.

github-actions bot closed this as not planned (stale) Feb 5, 2024
@watkinsmike (Author)

Still hoping for this to be fixed.
