
Don't scale down ASGs when placeholder nodes are discovered #5846

Closed
wants to merge 1 commit

Conversation

@theintz commented Jun 9, 2023

What type of PR is this?

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #5829

Special notes for your reviewer:

Does this PR introduce a user-facing change?

[AWS] ASGs are no longer scaled down when there are capacity issues within the ASG

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 9, 2023
@linux-foundation-easycla

CLA Missing ID · CLA Not Signed

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: theintz
Once this PR has been reviewed and has the lgtm label, please assign jaypipes for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. area/provider/aws Issues or PRs related to aws provider labels Jun 9, 2023
@k8s-ci-robot (Contributor)

Welcome @theintz!

It looks like this is your first PR to kubernetes/autoscaler 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/autoscaler has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Jun 9, 2023
@linux-foundation-easycla bot commented Jun 9, 2023

CLA Signed

The committers listed below are authorized under a signed CLA.

  • ✅ login: theintz / name: Tobias Heintz (dfd4f23)

@k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jun 9, 2023
@gjtempleton (Member)

/assign @gjtempleton

@theintz (Author) commented Jun 20, 2023

We deployed this patch to our clusters a week ago (patched into v1.26.3), and it appears to have stopped the problems outlined in #5829. Looking forward to your feedback.

@qianlei90 (Contributor)

I'm not familiar with AWS's ASG, but I think one use case should be considered (correct me if I'm wrong):

If EC2 is out of stock and the ASG cannot create new instances, CA will delete these nodes after MaxNodeProvisionTime (default 15m). If the desired instance count of the ASG is not modified, new instances will still be created and will register with the cluster when stock is restored, which is not the expected behavior.
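To make this concrete, here is a minimal, self-contained sketch (illustrative only, not autoscaler code; the asg struct and the numbers are invented) of how a stale desired capacity can resurface:

package main

import "fmt"

// asg models only the desired-capacity counter of an AWS Auto Scaling group.
type asg struct {
	desired int // instances the ASG keeps trying to launch
	running int // instances actually launched so far
}

func main() {
	g := asg{desired: 5, running: 2} // scale-up asked for 5, stock ran out at 2

	// After MaxNodeProvisionTime (default 15m) CA gives up on the three
	// placeholders. If it only forgets them locally and never lowers
	// g.desired, the request is still pending on the AWS side.
	fmt.Printf("placeholders forgotten: %d, ASG still wants: %d\n",
		g.desired-g.running, g.desired)

	// When stock is restored the ASG launches the remaining instances,
	// which then register with the cluster although nothing needs them.
	g.running = g.desired
	fmt.Printf("instances after stock is restored: %d\n", g.running)
}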

@theintz (Author) commented Aug 9, 2023

@jaypipes @gjtempleton any feedback?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@@ -308,11 +304,9 @@ func (m *asgCache) DeleteInstances(instances []*AwsInstanceRef) error {
 
 	for _, instance := range instances {
 		// check if the instance is a placeholder - a requested instance that was never created by the node group
-		// if it is, just decrease the size of the node group, as there's no specific instance we can remove
+		// if it is, simply remove it from the cache
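For orientation, a minimal, self-contained sketch of the behavior this hunk describes (the types and helper below are simplified stand-ins, not the real asgCache implementation): placeholders are now dropped from the local cache instead of decreasing the ASG's desired capacity.

package main

import "fmt"

// Simplified stand-ins for the real types; illustrative only.
type instanceRef struct{ name string }

type asgCache struct {
	desired      int             // mirrors the ASG's desired capacity
	placeholders map[string]bool // requested instances never actually created
}

func (m *asgCache) isPlaceholderInstance(i instanceRef) bool {
	return m.placeholders[i.name]
}

// deleteInstances sketches the patched flow: a placeholder is only removed
// from the local cache, and the ASG's desired capacity is left untouched.
// (Before this patch, the placeholder branch decreased desired by one.)
func (m *asgCache) deleteInstances(instances []instanceRef) {
	for _, i := range instances {
		if m.isPlaceholderInstance(i) {
			delete(m.placeholders, i.name) // remove from cache only
			continue
		}
		m.desired-- // real instances are terminated and the ASG shrinks
	}
}

func main() {
	m := &asgCache{desired: 5, placeholders: map[string]bool{"i-ph-1": true}}
	m.deleteInstances([]instanceRef{{"i-ph-1"}})
	fmt.Println("desired capacity after deleting a placeholder:", m.desired) // still 5
}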
Contributor

By removing the node(s) from the local cache, but not actually scaling down the corresponding ASG, the ongoing/pending request for additional capacity on the provider may impact what capacity is accessible to subsequent requests. Basically, the ongoing request is a capacity spoof at best and a capacity deadlock at worst.

Put another way, if you request 100 instances in ASG_1 and get 25 before running into an InsufficientInstanceCapacity error, leaving that request ongoing is probably not the right answer. It's not unreasonable to think that the ongoing request for an additional 75 nodes in ASG_1 could affect subsequent requests to provision instances of that type in the same or other ASGs.

@theintz (Author)

Thanks a lot for taking a look at this. I'm not very familiar with what the placeholder instances are intended to do. There is probably a good reason why they are created, but when I wrote this patch they were only ever checked in this one location. Maybe that has changed since then?
If you take a look at the linked bug report, I've mapped out how, under certain circumstances, the flow of the code leads to unsafe decommissioning of instances. I see your point as well, but in our experience, once a request runs into the insufficient capacity issue, it is considered completed (or rather failed) and no longer ongoing. So by sending requests to scale the ASG back down, we're not canceling the failed request, but rather removing already existing capacity.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 7, 2024
@apy-liu commented Apr 2, 2024

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 2, 2024
@k8s-ci-robot (Contributor)

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 27, 2024
@Shubham82 (Contributor)

Closing this PR, as issue #5829 is fixed by PR #6911

Please feel free to reopen if you have any concerns.

/close

@k8s-ci-robot (Contributor)

@Shubham82: Closed this PR.

In response to this:

Closing this PR, as issue #5829 is fixed by PR #6911

Please feel free to reopen if you have any concerns.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Labels

area/cluster-autoscaler
area/provider/aws Issues or PRs related to aws provider
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
kind/bug Categorizes issue or PR as related to a bug.
needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD.
size/S Denotes a PR that changes 10-29 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

[AWS] Unsafe decommissioning of nodes when ASGs are out of instances
8 participants