
Cluster with reused name in different namespace fails to boot #969

Closed
liztio opened this issue Aug 7, 2019 · 17 comments
Labels
  • kind/api-change: Categorizes issue or PR as related to adding, removing, or otherwise changing an API
  • kind/bug: Categorizes issue or PR as related to a bug.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

liztio (Contributor) commented Aug 7, 2019

/kind bug

What steps did you take and what happened:

  1. Boot a cluster with a given name in the default namespace
  2. Wait for that cluster to successfully come up
  3. Create a new namespace
  4. Boot a cluster with an identical name to the first in the new namespace
  5. Cluster does not create a new VPC or any associated resources

What did you expect to happen:
A new VPC, set of machines, and associated resources should have been created
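
As a rough illustration of why this can happen (this is not the provider's actual code, and the exact tag key format is an assumption): the AWS-side lookup of owned resources is derived only from the cluster name, so a second cluster with the same name in a different namespace resolves to the same tag key and adopts the first cluster's VPC instead of creating a new one.

```go
package main

import "fmt"

// clusterTagKey builds a name-derived tag key like the one used to find
// resources owned by a cluster (format is illustrative). Only the cluster
// name appears in the key; the namespace does not.
func clusterTagKey(clusterName string) string {
	return "sigs.k8s.io/cluster-api-provider-aws/cluster/" + clusterName
}

func main() {
	// Both clusters produce the same lookup key, so reconciling the second
	// cluster "finds" the first cluster's VPC rather than creating a new one.
	fmt.Println(clusterTagKey("my-cluster")) // cluster in namespace "default"
	fmt.Println(clusterTagKey("my-cluster")) // cluster in namespace "team-a"
}
```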

Anything else you would like to add:

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):
k8s-ci-robot added the kind/bug label Aug 7, 2019
ncdc (Contributor) commented Aug 7, 2019

The cluster name needs to be unique per <AWS account, region>.

detiber (Member) commented Aug 7, 2019

Currently we treat the cluster name as a unique value. We have a few options here:

  • Add some type of validation that the cluster name is indeed unique
    • This would still present issues where the controllers are scoped to a single namespace or running under different management clusters
  • Prefix the cluster name with the namespace name everywhere we use it for naming, tagging, etc.
    • This would still present issues where multiple management clusters are in use
    • This could also lead to exceeding string lengths in various places and would need to be accounted for
  • Generate some type of unique identifier that can be prepended or appended to the cluster name (see the sketch below)
    • This unique identifier would need to be generated and saved to the spec so it persists across a pivot or a backup/restore (the resource UUID does not persist, nor does Status)
    • String lengths would also need to be accounted for
    • We may also need to tag resources with the namespace (or use some other method) to make it easier to identify resources in AWS and to differentiate between clusters running on a single management cluster
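
A minimal sketch of what the third option could look like, assuming a hypothetical IdentifierSuffix field on the spec (the real API may use a different field name and generation scheme); because the value lives in the spec, it would survive a pivot or a backup/restore, unlike the resource UUID or anything in Status:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// AWSClusterSpec stands in for the provider's spec type; IdentifierSuffix is
// a hypothetical field that would persist the generated value across a pivot
// or a backup/restore.
type AWSClusterSpec struct {
	IdentifierSuffix string
}

// ensureIdentifier generates a short random suffix once and stores it in the
// spec, so later reconciles (and restored copies of the object) reuse it.
func ensureIdentifier(spec *AWSClusterSpec) (string, error) {
	if spec.IdentifierSuffix != "" {
		return spec.IdentifierSuffix, nil
	}
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	spec.IdentifierSuffix = hex.EncodeToString(b) // e.g. "a1b2c3d4"
	return spec.IdentifierSuffix, nil
}

func main() {
	spec := &AWSClusterSpec{}
	suffix, _ := ensureIdentifier(spec)
	// Names and tags would then combine the cluster name with the suffix:
	fmt.Printf("my-cluster-%s\n", suffix)
}
```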

detiber (Member) commented Aug 7, 2019

@ncdc that limitation only exists because of the current design, but we do need to set a unique "cluster name" that is used by the integrated or external cloud-provider integration.

ncdc (Contributor) commented Aug 7, 2019

@detiber which is still per account+region, unless I'm mistaken?

detiber (Member) commented Aug 7, 2019

@ncdc correct

vincepri (Member) commented Aug 8, 2019

I really like the 3rd option @detiber suggested, which might be part of a larger naming refactor. AWS resource names are notoriously limited in length; it'd be great if we could come up with a consistent naming scheme that can be used across all resources and solves this issue as well.
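
One possible shape for such a scheme (a sketch only, not what the project settled on): keep a readable prefix and append a short hash so every derived name stays deterministic and under a given limit, e.g. the 32-character cap on classic ELB names.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// boundedName returns name unchanged if it fits within limit, otherwise a
// truncated prefix plus an 8-character hash of the full name, so the result
// stays deterministic while respecting the length cap.
func boundedName(name string, limit int) string {
	if len(name) <= limit {
		return name
	}
	sum := sha256.Sum256([]byte(name))
	hash := hex.EncodeToString(sum[:])[:8]
	return name[:limit-9] + "-" + hash
}

func main() {
	// A namespace- or identifier-prefixed name can easily exceed AWS limits.
	long := "team-a-namespace-my-production-cluster-apiserver"
	fmt.Println(boundedName(long, 32))
}
```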

detiber (Member) commented Aug 12, 2019

/priority important-soon
/milestone v0.4

k8s-ci-robot added the priority/important-soon label Aug 12, 2019
k8s-ci-robot added this to the v0.4 milestone Aug 12, 2019
liztio (Contributor, Author) commented Oct 8, 2019

/assign
/lifecycle active

k8s-ci-robot added the lifecycle/active label Oct 8, 2019
ncdc modified the milestones: v0.4.x, v0.5.0 Oct 10, 2019
liztio (Contributor, Author) commented Oct 11, 2019

/remove-lifecycle active

k8s-ci-robot removed the lifecycle/active label Oct 11, 2019
ncdc unassigned liztio Dec 20, 2019
ncdc (Contributor) commented Jan 17, 2020

Going through open unassigned issues in the v0.5.0 milestone. We have a decent amount of work left to do on CAPI features (control plane, clusterctl, etc.). While this is an unfortunately ugly bug, I think we need to defer it past v0.5.0.

/milestone Next
/remove-priority important-soon
/priority important-longterm

k8s-ci-robot modified the milestones: v0.5.0, Next Jan 17, 2020
k8s-ci-robot added the priority/important-longterm label and removed the priority/important-soon label Jan 17, 2020
fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Apr 16, 2020
detiber (Member) commented Apr 16, 2020

/lifecycle frozen

k8s-ci-robot removed the lifecycle/stale label Apr 16, 2020
k8s-ci-robot added the lifecycle/frozen label Apr 16, 2020
randomvariable added the kind/api-change label Aug 14, 2020
randomvariable modified the milestones: Next, v0.7.0 Mar 11, 2021
randomvariable modified the milestones: v0.7.0, v0.7.x Jun 28, 2021
randomvariable modified the milestones: v0.7.x, v1.x Nov 8, 2021
richardcase (Member) commented

/remove-lifecycle frozen

k8s-ci-robot removed the lifecycle/frozen label Jul 8, 2022
k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Oct 6, 2022
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 5, 2022
richardcase removed this from the v1.x milestone Nov 10, 2022
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Dec 10, 2022
k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
