AWS for Fluent Bit 2.28.0

@zwj102030 zwj102030 released this 11 Aug 17:47

This release includes:

  • Fluent Bit 1.9.7
  • Amazon CloudWatch Logs for Fluent Bit 1.9.0
  • Amazon Kinesis Streams for Fluent Bit 1.10.0
  • Amazon Kinesis Firehose for Fluent Bit 1.7.0

AWS for Fluent Bit new feature announcement:

  • New image tags - Added init-tagged images, which include an init process that downloads one or more config and parser files from S3 and sets ECS metadata as environment variables. Check out the docs for the new Fluent Bit ECS init image tags.
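
As a rough sketch, an init-tagged image might be referenced in an ECS task definition like the following. The bucket ARN, file names, and container fields here are placeholders, and the `aws_fluent_bit_init_s3_*` environment-variable convention is an assumption based on the init image docs; verify the exact names there.

```json
{
  "name": "log_router",
  "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:init-2.28.0",
  "essential": true,
  "environment": [
    {
      "name": "aws_fluent_bit_init_s3_1",
      "value": "arn:aws:s3:::example-bucket/extra-output.conf"
    },
    {
      "name": "aws_fluent_bit_init_s3_2",
      "value": "arn:aws:s3:::example-bucket/my-parsers.conf"
    }
  ],
  "firelensConfiguration": {
    "type": "fluentbit"
  }
}
```

The init process fetches each referenced file from S3 at container startup and merges it into the Fluent Bit configuration before the main process starts.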

Compared to 2.27.0 this release adds:

  • Feature - Add gzip compression support for multipart uploads in the S3 output plugin
  • Bug - Fix inconsistent rendering of $TAG[n] in S3 output key formatting (aws-for-fluent-bit#376)
  • Bug - Fix a concurrency issue in S3 key formatting
  • Bug - cloudwatch_logs plugin: skip counting empty events
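
The new gzip support for multipart uploads can be enabled with the S3 output's `compression` option. A minimal sketch, in which the bucket name, region, and size values are placeholders (multipart uploads are used when `use_put_object` is `Off`):

```
[OUTPUT]
    Name                s3
    Match               *
    bucket              example-bucket
    region              us-east-1
    use_put_object      Off
    compression         gzip
    total_file_size     50M
    upload_chunk_size   6M
```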

We ran the newly released image through our ECS load testing framework; the results are below. They benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.

| plugin | metric | 20Mb/s | 25Mb/s | 30Mb/s |
| --- | --- | --- | --- | --- |
| kinesis_firehose | Log Loss | | | |
| kinesis_firehose | Log Duplication | 0%(920) | | |
| kinesis_streams | Log Loss | | | |
| kinesis_streams | Log Duplication | 0%(500) | 0%(1000) | 0%(500) |
| s3 | Log Loss | | | |
| s3 | Log Duplication | | | |

| plugin | metric | 1Mb/s | 2Mb/s | 3Mb/s |
| --- | --- | --- | --- | --- |
| cloudwatch_logs | Log Loss | | | |
| cloudwatch_logs | Log Duplication | 2%(53893) | | |

Note:

  • The green check ✅ in the table means no log loss or no log duplication.
  • A number in parentheses is the count of affected records out of the total records. For example, 0%(1064/1.8M) under 30Mb/s throughput means 1064 duplicate records out of 1.8M input records, for a log duplication percentage that rounds to 0%.
  • For CloudWatch output, the only throughput at which we consistently see no log loss is 1 Mb/s. At 2 Mb/s and beyond, we occasionally see some log loss and throttling.
  • Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ because they are influenced by many factors, such as configuration and environment settings. Log duplication is caused exclusively by retries of partially succeeded batches, so it is effectively random.
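
The rounding convention in the parenthetical notation can be checked directly. A small sketch using the example numbers from the note above (1064 duplicates out of 1.8M input records):

```python
# Example numbers from the release note: 1064 duplicate records
# out of 1.8 million total input records.
duplicates = 1064
total = 1_800_000

# Duplication percentage, rounded to a whole percent as in the table.
pct = round(100 * duplicates / total)

# Rendered in the table's "percentage(count)" notation.
print(f"{pct}%({duplicates})")  # prints "0%(1064)"
```

So even a few thousand duplicated records can round down to 0% at these input rates.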