Adding jetty HTTP client queue size metric #17100

Open · Pankaj260100 wants to merge 6 commits into master
Conversation

@Pankaj260100 (Contributor) commented Sep 18, 2024

Fixes #XXXX.

Description

  • Adds a new metric to expose the queue size of the Jetty HTTP client.
  • Currently, we only print a log line when the queue exceeds the 1024 limit.

`jetty/httpClient/threadpool/queueSize`: Size of the worker queue of the Jetty HTTP client. Less than or equal to 1024 (the default limit).
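For reference, a minimal sketch (not part of this PR's diff; the class name is made up) of where the reported value would come from, assuming the client's executor is Jetty's default `QueuedThreadPool`:

```java
// Illustrative sketch only: reads the worker-queue size of a Jetty HttpClient.
// Assumes the client's executor is a QueuedThreadPool, which is what Jetty
// creates by default when no executor is configured explicitly.
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public final class JettyClientQueueGauge
{
  private JettyClientQueueGauge() {}

  /** Returns the current worker-queue size, or -1 if the executor is not a QueuedThreadPool. */
  public static int queueSize(HttpClient httpClient)
  {
    if (httpClient.getExecutor() instanceof QueuedThreadPool) {
      return ((QueuedThreadPool) httpClient.getExecutor()).getQueueSize();
    }
    return -1;
  }
}
```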


Release note



This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@@ -133,6 +133,7 @@ Most metric values reset each emission period, as specified in `druid.monitoring
|`jetty/threadPool/min`|Number of minimum threads allocatable.|`druid.server.http.numThreads` plus a small fixed number of threads allocated for Jetty acceptors and selectors.|
|`jetty/threadPool/max`|Number of maximum threads allocatable.|`druid.server.http.numThreads` plus a small fixed number of threads allocated for Jetty acceptors and selectors.|
|`jetty/threadPool/queueSize`|Size of the worker queue.|Not much higher than `druid.server.http.queueSize`.|
|`jetty/httpClient/threadpool/queueSize`|Size of the worker queue of the Jetty HTTP client.|Less than or equal to 1024 (the default limit).|
Contributor

How do you intend to use this metric?

Contributor Author

This is to monitor whether we have a sufficient number of connections per destination. Suppose latency is high at the Broker: it's possible the request was waiting in the Jetty client-side queue, causing the latency to go up.

Contributor

The Jetty client module is only being used on the Coordinator and Router, though. And it's unusual that you are hitting the 1024 limit.

Contributor

Which service did you see this log line in?

> `jetty/httpClient/threadpool/queueSize`: Size of the worker queue of the Jetty HTTP client. Less than or equal to 1024 (the default limit).

@Pankaj260100 (Contributor Author) commented Sep 18, 2024

We do see this log line from the Router:

Max requests queued per destination 1024 exceeded for HttpDestination[$(broker)]@4b9ee947,queue=1024,pool=DuplexConnectionPool@7698916c[c=0/100/100,a=100,i=0,q=1024] 
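For context (illustrative only, not something this PR changes): the 1024 in that message is Jetty's `maxRequestsQueuedPerDestination` default, and the pool stats appear to reflect the per-destination connection limit. On the raw Jetty API the relevant knobs look roughly like the sketch below; how Druid actually wires these values through its own configs is not shown here.

```java
import org.eclipse.jetty.client.HttpClient;

// Hypothetical tuning sketch, not code from this PR.
public final class JettyClientTuningSketch
{
  public static HttpClient newClient() throws Exception
  {
    HttpClient client = new HttpClient();
    // The "c=0/100/100" pool stats in the log line appear to reflect a
    // per-destination connection limit of 100.
    client.setMaxConnectionsPerDestination(100);
    // "Max requests queued per destination 1024 exceeded" refers to this
    // limit; 1024 is Jetty's default.
    client.setMaxRequestsQueuedPerDestination(2048);
    client.start();
    return client;
  }
}
```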

@Pankaj260100 (Contributor Author) commented Sep 18, 2024

> The Jetty client module is only being used on the Coordinator and Router, though.

Ohh, but the Broker also communicates with data nodes, so what does the Broker use to communicate with them?

Contributor Author

Actually, I am more interested in the broker-side client queue. Can you point me to which client we use there?

Contributor

`public class HttpClientModule implements Module`

Contributor Author

@abhishekagarwal87, we are not setting the thread pool executor in HttpClientModule. In the Jetty server module we expose the queue size from the thread pool, so is there another way we can emit the queue size here in HttpClientModule?
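Not an authoritative answer, but for illustration: when no executor is set, Jetty's `HttpClient` creates a `QueuedThreadPool` internally on start, so a monitor could still read the queue size from `getExecutor()`. A rough sketch under that assumption (the class name is made up, and the Druid emitter builder method names follow recent versions and may differ):

```java
import org.apache.druid.java.util.emitter.service.ServiceEmitter;
import org.apache.druid.java.util.emitter.service.ServiceMetricEvent;
import org.apache.druid.java.util.metrics.AbstractMonitor;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

// Hypothetical monitor, not the PR's actual implementation.
public class JettyHttpClientQueueMonitor extends AbstractMonitor
{
  private final HttpClient httpClient;

  public JettyHttpClientQueueMonitor(HttpClient httpClient)
  {
    this.httpClient = httpClient;
  }

  @Override
  public boolean doMonitor(ServiceEmitter emitter)
  {
    // HttpClient falls back to a QueuedThreadPool when no executor is configured.
    if (httpClient.getExecutor() instanceof QueuedThreadPool) {
      int queueSize = ((QueuedThreadPool) httpClient.getExecutor()).getQueueSize();
      emitter.emit(ServiceMetricEvent.builder().setMetric("jetty/httpClient/threadpool/queueSize", queueSize));
    }
    return true;
  }
}
```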
