Clamp GOMAXPROCS when higher than runtime.NumCPU #8201
Conversation
cc @agardiman since you raised this at the last community call
LGTM. I'm wondering if there should be a minimum GOMAXPROCS for components using mmap (ingesters), regardless of the number of CPU cores in the node, due to the "golang mmap issue".
AFAIK we have no visibility into how much of the process capacity mmap might be using, so I'm not sure how to make this call - is 2 enough? 10? Do you have any ideas?
I had a look at a single ingester process via Linux's tooling. Unfortunately, this is an extremely narrow view of ingester usage. I'll merge this PR since I don't think the clamping is likely to be a negative change. Happy to continue discussing mmap and GOMAXPROCS.
Background
We are trying to automatically set GOMAXPROCS based on the number of CPUs that an ingester pod requests in Kubernetes, going with 2x the requested cores. The reason is that the default value of GOMAXPROCS is NumCPU. Running on a large node while utilizing only a small percentage of it results in high scheduling overhead.
Problem
Sometimes the GOMAXPROCS setting might exceed the number of cores on the node, and we don't want to restrict the nodes on which pods run. In those cases, setting GOMAXPROCS higher than NumCPU has the opposite effect: it increases scheduling overhead instead of reducing it.
What this PR does
Clamps the value of GOMAXPROCS to runtime.NumCPU. The idea of this PR is to make automating the GOMAXPROCS setting in deployment tooling easier by having some support from the code.
Considerations
It's possible that NumCPU is not correctly detected, or that the operator intended to run with a higher GOMAXPROCS. I didn't want to add more configuration options, so instead we log a warning message. If there is a real use case for GOMAXPROCS > NumCPU, we can add a configuration option later.
Checklist
- CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX].
- about-versioning.md updated with experimental features.