CAPZ Managed Kubernetes evolution proposal #2739
/hold
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/cc @richardcase @pydctw
Force-pushed from 868f157 to 67452b6
## Conclusions
Achieving durable, consistent Managed Kubernetes interfaces for the entire Cluster API provider community is our highest priority. Doing that work is necessarily a large investment of engineering time and resources, and will involve some short-term inconvenience for existing customers. If we wish to embark on that work with the greatest chance for long-term success, we will want to do that work in Cluster API itself, and not across the provider community only.
100% on this.
## AKS-CAPI API Affinity Observations
As stated above, Cluster API defines a "cluster" as distinct from a "control plane". AKS, however, does not declare such a clean boundary in its own API. Using the `az aks` CLI as an example, the only definite abstraction boundary at present (as distinct from the cluster itself) is "nodepool". E.g.:
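For instance (an illustrative sketch of the public `az aks nodepool` command group; `my-rg` and `my-cluster` are placeholder names):

```shell
# "nodepool" is the only cluster sub-resource with its own command group
# under `az aks`:
az aks nodepool list --resource-group my-rg --cluster-name my-cluster
az aks nodepool add --resource-group my-rg --cluster-name my-cluster \
  --name pool2 --node-count 3

# Control-plane properties, by contrast, have no dedicated sub-resource;
# they are read and updated on the cluster object itself:
az aks show --resource-group my-rg --name my-cluster
```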
Thanks for explaining it with examples. When we wrote the initial managed k8s proposal, we also had difficulties placing some fields into two separate CRDs as it was not very clear whether they belonged to control plane or cluster. Great to understand the perspective from AKS architecture better.
- **<Provider>ManagedControlPlane**: this represents a provider's actual managed control plane. Its spec would only contain properties that are specific to the provisioning & management of a provider's cluster (excluding worker nodes). It would not contain any properties related to a provider's general operating infrastructure, like the networking or project.
- **<Provider>ManagedCluster**: this represents the properties needed to provision and manage a provider's general operating infrastructure for the cluster (i.e. project, networking, IAM). It would contain similar properties to **<Provider>Cluster**, and its reconciliation would be very similar.
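As a hypothetical illustration of that split for CAPZ (the field values below are indicative sketches, not the final API surface):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedControlPlane
metadata:
  name: my-cluster
spec:
  # control-plane-specific properties only (no worker-node or
  # general-infrastructure settings)
  version: v1.25.2
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureManagedCluster
metadata:
  name: my-cluster
# general operating infrastructure for the cluster
# (networking, resource group/project, IAM), mirroring AzureCluster
```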
I'm a little bit confused about the goal stated here versus what is written in kubernetes-sigs/cluster-api#7494 (it might just be me struggling to keep up with different threads 😅).
The capz document in this PR is meant to communicate to the capz managed k8s community whether we will implement option #3, or keep the existing option #2, from this already merged proposal:
kubernetes-sigs/cluster-api#6988
@richardcase and @pydctw are doing a similar evaluation for how to evolve managed k8s on capa in the near term.
This document does not speak to the longer term effort (in this issue kubernetes-sigs/cluster-api#7494) to standardize managed k8s in capi itself. We will work that out as a larger capi community, on a longer time scale.
Hope that clears it up a bit!
Force-pushed from 67452b6 to 919c783
@CecileRobertMichon @nojnhuh @dtzar I know I suggested we might just close this PR during yesterday's office hours, but after a quick re-read I think it captures the CAPZ perspective that contributed to the creation of a Cluster API "Feature Group" to solve this in capi itself. I've updated the doc conclusions to incorporate that outcome, and I think it would be beneficial for the historical record to merge this PR. wdyt?
re-reading this proposal it's unclear to me what the goals/non-goals and concrete implementation proposed are. It's a great summary of the state of the world but I don't know if it fits the "proposal" part of the CAEP process.
@CecileRobertMichon Fair enough, we can keep it via github's seemingly indefinite data store :)
What type of PR is this?
/kind documentation
What this PR does / why we need it:
This PR attempts to document considerations for evolving the CAPZ Managed Kubernetes implementation in response to kubernetes-sigs/cluster-api#6988.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
TODOs:
Release note: