Note all resource limits in capacity planning #732

Closed
minrk opened this issue Jun 19, 2018 · 1 comment

Comments

@minrk
Member

minrk commented Jun 19, 2018

Recently, Binder ran into surprising autoscaling behavior when it hit the 110-pods-per-node limit, which we didn't know about. Yesterday, I ran into the fact that GKE only allows 16 persistent volumes to be attached per node. This is a bug with a planned alpha fix in Kubernetes 1.11.
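
For reference, the pod limit shows up in each node's `status.allocatable`. A minimal sketch with the official `kubernetes` Python client (assuming the library is installed and credentials are in the default kubeconfig):

```python
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    alloc = node.status.allocatable  # dict of resource name -> quantity string
    print(
        node.metadata.name,
        "cpu:", alloc.get("cpu"),
        "memory:", alloc.get("memory"),
        "pods:", alloc.get("pods"),  # 110 by default
    )
    # On clusters with the 1.11+ attach-limit feature, per-node volume limits
    # may also appear here, e.g. an "attachable-volumes-gce-pd" key on GKE.
```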

We've accounted for CPU and RAM limits in our capacity planning docs, but there are other limits we aren't covering that we should include.

All the exhaustible resources I'm aware of right now:

  • CPU
  • RAM
  • pods (110 by default)
  • persistent volumes (16 by default; sometimes overridable, but not on managed providers such as GKE)

Some deployments may also have to account for GPUs, etc. A back-of-the-envelope sketch combining these limits follows below.
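
To make the point concrete, here is a sketch of per-node capacity where every per-user figure is a made-up assumption, not a measured value:

```python
# Back-of-the-envelope capacity per node: the binding constraint is the
# *minimum* across every exhaustible resource, not just CPU and RAM.

NODE_CPU = 4.0            # allocatable cores (assumption)
NODE_RAM_GB = 15.0        # allocatable memory (assumption)
NODE_MAX_PODS = 110       # Kubernetes default pods-per-node limit
NODE_MAX_PVS = 16         # GKE attachable persistent volumes per node

CPU_PER_USER = 0.1        # guaranteed cores per user pod (assumption)
RAM_PER_USER_GB = 0.5     # guaranteed memory per user pod (assumption)
PVS_PER_USER = 1          # persistent volumes per user pod (assumption)

users_per_node = min(
    NODE_CPU / CPU_PER_USER,        # 40 users by CPU
    NODE_RAM_GB / RAM_PER_USER_GB,  # 30 users by RAM
    NODE_MAX_PODS,                  # 110 users by pod count
    NODE_MAX_PVS / PVS_PER_USER,    # 16 users by persistent volumes
)
print(f"users per node: {int(users_per_node)}")  # 16
```

With these numbers the PV limit binds first, which CPU- and RAM-only planning would miss entirely.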

@consideRatio
Member

This is still relevant, but I'm embedding and summarizing this task into another issue to get the number of issues in this repo down to a manageable size.
