
Update to Cluster API 1.5.3 #100

Merged
7 commits merged into kubernetes-sigs:main on Nov 6, 2023

Conversation

@elmiko (Contributor) commented Nov 3, 2023

What this PR does / why we need it:

Update the provider to use CAPI 1.5.3.

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged):
fixes #93

Special notes for your reviewer:

Release notes:

Updated for Cluster API version 1.5.3

This change updates the dependencies and fixes a minor issue in the code.
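For context, a provider bump like this is mostly a go.mod dependency update. Below is a minimal sketch of the kind of pins involved; only the sigs.k8s.io/cluster-api v1.5.3 pin is confirmed by this PR, and every other module and version number is an illustrative assumption (chosen to match the Kubernetes 1.27 line the author tested against), not taken from the actual diff:

```
// Hypothetical go.mod excerpt -- versions other than cluster-api's are assumed.
module sigs.k8s.io/cluster-api-provider-kubemark

go 1.20

require (
	k8s.io/api v0.27.3
	k8s.io/apimachinery v0.27.3
	k8s.io/client-go v0.27.3
	sigs.k8s.io/cluster-api v1.5.3
	sigs.k8s.io/controller-runtime v0.15.3
)
```

In practice such a bump is usually applied with `go get sigs.k8s.io/cluster-api@v1.5.3` followed by `go mod tidy`, with any resulting compile errors (the "minor issue in the code" class of fix) resolved by hand.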
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Nov 3, 2023
@k8s-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: elmiko

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Nov 3, 2023
@elmiko elmiko changed the title Update to capi 15 Update to Cluster API 1.5.3 Nov 3, 2023
@elmiko (Contributor, Author) commented Nov 3, 2023

It took some modifications, but I've updated everything to use Kubernetes 1.27 and CAPI 1.5. Here is the resultant cluster, created after I applied the CNI:

mike@capivm:~/cluster-api-provider-kubemark$ kubectl --kubeconfig=/tmp/km.kubeconfig get nodes
NAME                        STATUS     ROLES           AGE     VERSION
km-cp-control-plane-dwl5z   NotReady   control-plane   6m8s    v1.27.3
km-wl-kubemark-md-0-9slzx   Ready      <none>          5m54s   v1.27.3
km-wl-kubemark-md-0-b4gd7   Ready      <none>          5m55s   v1.27.3
km-wl-kubemark-md-0-hsgzx   Ready      <none>          5m54s   v1.27.3
km-wl-kubemark-md-0-jm4n6   Ready      <none>          5m55s   v1.27.3
mike@capivm:~/cluster-api-provider-kubemark$ kubectl --kubeconfig=/tmp/km.kubeconfig get pods -A
NAMESPACE     NAME                                                READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-784cc4bcb7-mcjrf            1/1     Running    0          44s
kube-system   calico-node-hbbpt                                   0/1     Init:0/3   0          44s
kube-system   calico-node-mbjrr                                   0/1     Init:0/3   0          44s
kube-system   calico-node-sxjh6                                   0/1     Init:0/3   0          44s
kube-system   calico-node-wnlbc                                   0/1     Init:0/3   0          44s
kube-system   calico-node-zlckl                                   0/1     Init:0/3   0          44s
kube-system   coredns-5d78c9869d-9j5bm                            1/1     Running    0          6m3s
kube-system   coredns-5d78c9869d-cqdvd                            1/1     Running    0          6m3s
kube-system   etcd-km-cp-control-plane-dwl5z                      1/1     Running    0          6m11s
kube-system   kube-apiserver-km-cp-control-plane-dwl5z            1/1     Running    0          6m11s
kube-system   kube-controller-manager-km-cp-control-plane-dwl5z   1/1     Running    0          6m11s
kube-system   kube-proxy-7h6n7                                    1/1     Running    0          5m58s
kube-system   kube-proxy-fn5qd                                    1/1     Running    0          5m58s
kube-system   kube-proxy-p6lgs                                    1/1     Running    0          5m58s
kube-system   kube-proxy-tfprt                                    1/1     Running    0          6m3s
kube-system   kube-proxy-wszl8                                    1/1     Running    0          5m59s
kube-system   kube-scheduler-km-cp-control-plane-dwl5z            1/1     Running    0          6m11s

@killianmuldoon (Contributor) left a comment


/lgtm

/hold

(not sure if you squash commits on this repo, but feel free to unhold if it's okay)

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 6, 2023
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 6, 2023
@elmiko
Copy link
Contributor Author

elmiko commented Nov 6, 2023

thanks @killianmuldoon !

we haven't been squashing commits here, not for any specific reason other than perhaps laziness and the testing nature of this provider.

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 6, 2023
@k8s-ci-robot k8s-ci-robot merged commit 4c1cd9a into kubernetes-sigs:main Nov 6, 2023
5 checks passed
Development
Successfully merging this pull request may close this issue: "CAPI v1.5.0-beta.0 has been released and is ready for testing" (#93)
3 participants