🌱 cluster/topology: use cached Cluster get in Reconcile #8936
Conversation
/assign @fabriziopandini @killianmuldoon @ykakarap
/hold
/hold cancel
e2e tests green, also works very well in scale tests
return s.Current.Cluster.GetResourceVersion() != cachedCluster.GetResourceVersion(), nil
})
if err != nil {
	return errors.Wrapf(err, "failed to patch %s: failed waiting for Cluster to be updated in cache", tlog.KObj{Obj: s.Current.Cluster})
How is this "failed to patch" given we are beyond l439?
Also, how does a wait here impact the controller workflow at all? As soon as the patch request above (or any other) is sent, a new event will trigger reconciliation and another worker thread of the controller will pick it up, right?
How is this "failed to patch" given we are beyond l439?
Good point. You're right
Also how does a wait here impact the controller workflow at all? As soon as the patch request above or any other is sent a new event will trigger reconciliation and another worker thread of the controller will pick it, right?
Controller-runtime guarantees that there is at most one worker reconciling a specific object (a Cluster, in this case). So while the Patch call will trigger a new reconcile (to be very specific, by marking the Cluster item as dirty in the queue), the next reconcile will be deferred until the current reconcile is finished.
Signed-off-by: Stefan Büringer buringerst@vmware.com
Force-pushed from d2bd005 to abdcd98 (Compare)
@enxebre Thanks for the review. Should be fixed / answered. I had to rebase onto main so I could also fix similar error messages in other cases.
/test pull-cluster-api-e2e-full-main
/lgtm
LGTM label has been added. Git tree hash: 65cbcfb70ce828a00b3f40d64d6bfb268120145e
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: enxebre
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/area clusterclass
Signed-off-by: Stefan Büringer buringerst@vmware.com
What this PR does / why we need it:
Basically the same optimization as in #8922. More information in the added godoc
Which issue(s) this PR fixes (optional):
Related #8814