out_logdna: refactored LogDNA URI formation to support configurable endpoints #8051

Merged

Conversation

mirko-lazarevic
Contributor

@mirko-lazarevic mirko-lazarevic commented Oct 17, 2023

Previously, the LogDNA URI was hardcoded to use the /logs/ingest endpoint. This change introduces flexibility by allowing users to specify their own endpoint through the logdna_endpoint configuration. This can be particularly useful for users with different LogDNA setups or those using Super Tenancy.
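
For illustration only, a minimal standalone C sketch of the idea follows (it is not the plugin's actual code; the struct, field, and function names are hypothetical): fall back to the previously hardcoded /logs/ingest path when logdna_endpoint is not set, otherwise use the configured value when composing the request URI.

#include <stddef.h>
#include <stdio.h>

#define LOGDNA_DEFAULT_ENDPOINT "/logs/ingest"   /* the previously hardcoded path */

/* Hypothetical plugin context: only the fields relevant to URI formation. */
struct logdna_ctx {
    const char *logdna_host;      /* e.g. "logs.logdna.com" */
    int         logdna_port;      /* e.g. 443 */
    const char *logdna_endpoint;  /* NULL when the user did not set it */
};

/* Compose the request URI into 'buf'; the query string is purely illustrative.
 * Returns the number of characters written, or -1 if 'buf' is too small. */
static int logdna_compose_uri(const struct logdna_ctx *ctx,
                              const char *hostname,
                              char *buf, size_t buf_size)
{
    const char *endpoint = ctx->logdna_endpoint != NULL ?
                           ctx->logdna_endpoint : LOGDNA_DEFAULT_ENDPOINT;
    int n = snprintf(buf, buf_size, "%s?hostname=%s", endpoint, hostname);

    if (n < 0 || (size_t) n >= buf_size) {
        return -1;
    }
    return n;
}

int main(void)
{
    char uri[256];
    struct logdna_ctx ctx = { "logs.logdna.com", 443, "/custom/ingest" };

    if (logdna_compose_uri(&ctx, "my-node", uri, sizeof(uri)) > 0) {
        printf("POST https://%s:%d%s\n", ctx.logdna_host, ctx.logdna_port, uri);
    }
    return 0;
}

In the actual plugin the same fallback could instead be expressed as a default value on the option's config map entry, so existing configurations keep working unchanged.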


Enter [N/A] in the box if an item is not applicable to your change.

Testing
Before we can approve your change, please submit the following in a comment:

  • Example configuration file for the change
  • Debug log output from testing the change
  • Attached Valgrind output that shows no leaks or memory corruption was found

If this is a change to packaging of containers or native binaries then please confirm it works for all targets.

  • [N/A] Run local packaging test showing all targets (including any new ones) build.
  • [N/A] Set ok-package-test label to test for all targets (requires maintainer to do).

Documentation

  • Documentation required for this feature

fluent/fluent-bit-docs#1236

Backporting

  • [N/A] Backport to latest stable release.

Fluent Bit is licensed under Apache 2.0; by submitting this pull request I understand that this code will be released under the terms of that license.

out_logdna: refactored LogDNA URI formation to support configurable endpoints

Previously, the LogDNA URI was hardcoded to use the '/logs/ingest' endpoint.
This change introduces flexibility by allowing users to specify their own
endpoint through the 'logdna_endpoint' configuration. This can be particularly
useful for users with different LogDNA setups or those using Super Tenancy.

Signed-off-by: Mirko Lazarevic <mirko.lazarevic@ibm.com>
@mirko-lazarevic
Contributor Author

mirko-lazarevic commented Oct 17, 2023

Testing

Example configuration file for the change

[SERVICE]
    Flush           5
    Daemon          Off
    Log_Level       debug
    Parsers_File    parsers.conf

[INPUT]
    Name    dummy
    Dummy   {"message":"This is test log statement"}

[OUTPUT]
    name            logdna
    match           *
    api_key         <your_api_key>
    logdna_host     logs.logdna.com
    logdna_port     443
    logdna_endpoint /logs/ingest
    tags            fluentbit
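
Purely as an illustration of the new option: a deployment that ingests through a non-default path would only change the logdna_endpoint line (the /custom/ingest value below is hypothetical).

[OUTPUT]
    name            logdna
    match           *
    api_key         <your_api_key>
    logdna_host     logs.logdna.com
    logdna_port     443
    logdna_endpoint /custom/ingest
    tags            fluentbit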

Debug log output from testing the change

# ./bin/fluent-bit -c ../conf/out_logdna.conf
Fluent Bit v2.2.0
* Copyright (C) 2015-2023 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/10/17 09:10:26] [ info] Configuration:
[2023/10/17 09:10:26] [ info]  flush time     | 5.000000 seconds
[2023/10/17 09:10:26] [ info]  grace          | 5 seconds
[2023/10/17 09:10:26] [ info]  daemon         | 0
[2023/10/17 09:10:26] [ info] ___________
[2023/10/17 09:10:26] [ info]  inputs:
[2023/10/17 09:10:26] [ info]      dummy
[2023/10/17 09:10:26] [ info] ___________
[2023/10/17 09:10:26] [ info]  filters:
[2023/10/17 09:10:26] [ info] ___________
[2023/10/17 09:10:26] [ info]  outputs:
[2023/10/17 09:10:26] [ info]      logdna.0
[2023/10/17 09:10:26] [ info] ___________
[2023/10/17 09:10:26] [ info]  collectors:
[2023/10/17 09:10:26] [ info] [fluent bit] version=2.2.0, commit=39e52bf041, pid=21381
[2023/10/17 09:10:26] [debug] [engine] coroutine stack size: 24576 bytes (24.0K)
[2023/10/17 09:10:26] [ info] [storage] ver=1.5.1, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/10/17 09:10:26] [ info] [cmetrics] version=0.6.3
[2023/10/17 09:10:26] [ info] [ctraces ] version=0.3.1
[2023/10/17 09:10:26] [ info] [input:dummy:dummy.0] initializing
[2023/10/17 09:10:26] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2023/10/17 09:10:26] [debug] [dummy:dummy.0] created event channels: read=21 write=22
[2023/10/17 09:10:26] [debug] [logdna:logdna.0] created event channels: read=23 write=24
[2023/10/17 09:10:26] [ info] [output:logdna:logdna.0] configured, hostname=(null)
[2023/10/17 09:10:26] [ info] [sp] stream processor started
[2023/10/17 09:10:27] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:28] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:29] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:30] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:31] [debug] [task] created task=0x7f383c0d12c0 id=0 OK
[2023/10/17 09:10:31] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:31] [debug] [upstream] KA connection #31 to logs.logdna.com:443 is connected
[2023/10/17 09:10:31] [debug] [http_client] not using http_proxy for header
[2023/10/17 09:10:32] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"0c6bc3cc-0909-4377-89a8-e9471fd89101:5887:ld82","status":"ok"}
[2023/10/17 09:10:32] [debug] [upstream] KA connection #31 to logs.logdna.com:443 is now available
[2023/10/17 09:10:32] [debug] [out flush] cb_destroy coro_id=0
[2023/10/17 09:10:32] [debug] [task] destroy task=0x7f383c0d12c0 (task_id=0)
[2023/10/17 09:10:32] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:33] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:34] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:35] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:36] [debug] [task] created task=0x7f383c0d1620 id=0 OK
[2023/10/17 09:10:36] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:36] [debug] [upstream] KA connection #31 to logs.logdna.com:443 has been assigned (recycled)
[2023/10/17 09:10:36] [debug] [http_client] not using http_proxy for header
[2023/10/17 09:10:36] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"d641c240-c09f-46f7-b9a1-9eb657928df2:21022:ld82","status":"ok"}
[2023/10/17 09:10:36] [debug] [upstream] KA connection #31 to logs.logdna.com:443 is now available
[2023/10/17 09:10:36] [debug] [out flush] cb_destroy coro_id=1
[2023/10/17 09:10:36] [debug] [task] destroy task=0x7f383c0d1620 (task_id=0)
[2023/10/17 09:10:37] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:38] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:39] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:40] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:41] [debug] [task] created task=0x7f383c0d1510 id=0 OK
[2023/10/17 09:10:41] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
[2023/10/17 09:10:41] [debug] [upstream] KA connection #31 to logs.logdna.com:443 has been assigned (recycled)
[2023/10/17 09:10:41] [debug] [http_client] not using http_proxy for header
[2023/10/17 09:10:41] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"6055713e-56f8-424b-8b5e-02a333b5ba73:52345:ld82","status":"ok"}
[2023/10/17 09:10:41] [debug] [upstream] KA connection #31 to logs.logdna.com:443 is now available
[2023/10/17 09:10:41] [debug] [out flush] cb_destroy coro_id=2
[2023/10/17 09:10:41] [debug] [task] destroy task=0x7f383c0d1510 (task_id=0)
[2023/10/17 09:10:42] [debug] [input chunk] update output instances with new chunk size diff=265, records=1, input=dummy.0
^C[2023/10/17 09:10:42] [engine] caught signal (SIGINT)
[2023/10/17 09:10:42] [debug] [task] created task=0x7f383c0d1710 id=0 OK
[2023/10/17 09:10:42] [ warn] [engine] service will shutdown in max 5 seconds
[2023/10/17 09:10:42] [ info] [input] pausing dummy.0
[2023/10/17 09:10:42] [debug] [upstream] KA connection #31 to logs.logdna.com:443 has been assigned (recycled)
[2023/10/17 09:10:42] [debug] [http_client] not using http_proxy for header
[2023/10/17 09:10:42] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"39080b63-e383-465f-9f0d-c6ecb60b56e5:12819:ld82","status":"ok"}
[2023/10/17 09:10:42] [debug] [upstream] KA connection #31 to logs.logdna.com:443 is now available
[2023/10/17 09:10:42] [debug] [out flush] cb_destroy coro_id=3
[2023/10/17 09:10:42] [debug] [task] destroy task=0x7f383c0d1710 (task_id=0)
[2023/10/17 09:10:43] [ info] [engine] service has stopped (0 pending tasks)
[2023/10/17 09:10:43] [ info] [input] pausing dummy.0

Valgrind output

# valgrind ./bin/fluent-bit -c ../conf/out_logdna.conf
==21376== Memcheck, a memory error detector
==21376== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==21376== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info
==21376== Command: ./bin/fluent-bit -c ../conf/out_logdna.conf
==21376==
Fluent Bit v2.2.0
* Copyright (C) 2015-2023 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2023/10/17 08:57:29] [ info] [fluent bit] version=2.2.0, commit=39e52bf041, pid=21376
[2023/10/17 08:57:29] [ info] [storage] ver=1.5.1, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2023/10/17 08:57:29] [ info] [cmetrics] version=0.6.3
[2023/10/17 08:57:29] [ info] [ctraces ] version=0.3.1
[2023/10/17 08:57:29] [ info] [input:dummy:dummy.0] initializing
[2023/10/17 08:57:29] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2023/10/17 08:57:29] [ info] [output:logdna:logdna.0] configured, hostname=(null)
[2023/10/17 08:57:29] [ info] [sp] stream processor started




[2023/10/17 08:57:36] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"3340f2f7-decf-42a0-9716-e0b87f339267:37724:ld82","status":"ok"}
[2023/10/17 08:57:39] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"cc2d3f1b-2400-4b0a-9690-66f1f23019a0:28387:ld82","status":"ok"}
[2023/10/17 08:57:44] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"cf020579-f952-4454-a139-372fca5a0883:23594:ld82","status":"ok"}
[2023/10/17 08:57:49] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"e2a38645-b04a-42aa-99ca-c3f392594f75:26025:ld82","status":"ok"}
[2023/10/17 08:57:54] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"d0bee337-6d03-4361-8746-15975e02ce74:32680:ld82","status":"ok"}
[2023/10/17 08:57:59] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"f05cf839-fb31-4fd9-aaa0-df79e03a739b:41117:ld82","status":"ok"}
[2023/10/17 08:58:04] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"cf5c456f-8ff8-4aa7-a594-3cd22b4f29da:38296:ld82","status":"ok"}
^C[2023/10/17 08:58:09] [engine] caught signal (SIGINT)
[2023/10/17 08:58:09] [ warn] [engine] service will shutdown in max 5 seconds
[2023/10/17 08:58:09] [ info] [input] pausing dummy.0
[2023/10/17 08:58:09] [ info] [task] dummy/dummy.0 has 1 pending task(s):
[2023/10/17 08:58:09] [ info] [task]   task_id=0 still running on route(s): logdna/logdna.0
[2023/10/17 08:58:09] [ info] [input] pausing dummy.0
[2023/10/17 08:58:09] [ info] [output:logdna:logdna.0] logs.logdna.com:443, HTTP status=200
{"batchID":"6e6a1364-fc96-44b2-af8d-4dc686ba5348:30685:ld82","status":"ok"}
[2023/10/17 08:58:10] [ info] [engine] service has stopped (0 pending tasks)
[2023/10/17 08:58:10] [ info] [input] pausing dummy.0
==21376==
==21376== HEAP SUMMARY:
==21376==     in use at exit: 0 bytes in 0 blocks
==21376==   total heap usage: 20,270 allocs, 20,270 frees, 6,736,563 bytes allocated
==21376==
==21376== All heap blocks were freed -- no leaks are possible
==21376==
==21376== For lists of detected and suppressed errors, rerun with: -s
==21376== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

@ScarletTanager

This is particularly important for us, as we need the endpoint flexibility.

@mirko-lazarevic
Contributor Author

I noticed that not all checks passed for the pull request. Is there anything I need to address or adjust on my end? Please let me know if I can assist in moving this forward.

@mirko-lazarevic
Contributor Author

A friendly heads-up to code owners @edsiper @leonardo-albertovich @fujimotos and @koleini to check out the pull request when they can. Many thanks.

@agup006
Member

agup006 commented Nov 16, 2023

@cosmo0920 would you be able to check this again? Specifically the failing unit tests.

@cosmo0920
Contributor

cosmo0920 commented Nov 17, 2023

The failed unit tests are core_chunk_trace and flb-it-log:

  • 2 - flb-rt-core_chunk_trace (Subprocess aborted)
  • 105 - flb-it-log (Failed)

ref: https://github.com/fluent/fluent-bit/actions/runs/6544903788/job/18323010415?pr=8051#step:4:4018
ref: https://github.com/fluent/fluent-bit/actions/runs/6544903788/job/18323010281?pr=8051#step:4:3810

They are not related to this PR; they are flaky tests.

@ScarletTanager

@cosmo0920 Thanks for looking - but how can we move this forward when unrelated flaky tests fail?

@ScarletTanager

@agup006 According to GH, we need a review from one of @edsiper @fujimotos @koleini @leonardo-albertovich.

@mirko-lazarevic
Contributor Author

🏓

@mirko-lazarevic
Contributor Author

🏓

@mirko-lazarevic
Contributor Author

🏓

Contributor

@tchrono tchrono left a comment

LGTM

@mirko-lazarevic
Contributor Author

🏓

@agup006
Member

agup006 commented Jul 2, 2024

@edsiper could we get this for 3.1.1?

@mirko-lazarevic
Contributor Author

🏓

@edsiper edsiper merged commit 9e5111b into fluent:master Aug 27, 2024
48 of 50 checks passed
@mirko-lazarevic mirko-lazarevic deleted the out-logdna-configurable-endpoints branch August 29, 2024 18:17