Simplify update-vendor script #3566
Conversation
Force-pushed from 9783f39 to 1e19168
I think every other k8s release we end up having to make some last-minute fix in the scheduler that has to be cherry-picked onto the release branch just before the release is cut. Until we figure out a process solution to this, I'd prefer to at least keep the old script around so we can use it in an emergency. Maybe for now we can just have both scripts and use the new one when we can and the old one when we have to?
i tried out the script with the 1.20.0-alpha.1 tag and everything worked as expected. i have one small question, but it's not a blocker for me.
/lgtm
Is it possible to make it also work for a k8s commit ID?
@feiskyer I don't think so. Technically I think you can have go mod target a commit hash in kubernetes/kubernetes, but the problem is with the other k8s repos (ex. client-go). The source of truth for those is the staging/ directory in the kubernetes repo and, absent a tag, it's hard to be sure which commit should be used across those repos to be in a consistent state. This is exactly the reason the old script vendors in the contents of the staging/ directory instead of the actual content of those repos (via local commits).
You can develop against it locally with
We discussed on Slack some potential solutions if we need to switch back to an unreleased version of k8s:
I'm strongly against the second option - I think it is very risky to let the different k8s repos get out of sync. Also, the actual scheduler logic is in the kubernetes/kubernetes repo (mostly the pkg/scheduler directory); I don't think there is any particular need to submodule the kube-scheduler repo (which I think is just scheduler-related APIs). The benefit of the first approach is that most of the time we could ignore the existence of that option. But if we ever had to use it, that particular version would end up working very differently from other versions (ex. you wouldn't be able to run unit tests without GO111MODULE=off, etc). @vivekbagade @towca - I think you have quite a bit of experience in this area. I wonder if you have any opinions on this?
Indeed it's hard to get all the staging repos' pseudo-versions from a k/k commit ID. Given we don't always depend on a k/k commit ID, once there is a need, we can get the staging repos' pseudo-versions and update accordingly. It's not just CA that has this problem; people discussed it a lot in this issue. It seems using a tagged version can solve most of the cases.
Hey folks, I want to add a new SDK for Huawei Cloud; can we make a quick decision and push forward? Can we merge this PR first and create a new issue for a follow-up? What do you think? @feiskyer @benmoss @MaciekPytel
Good point on kubernetes commit hashes in commit messages. It seems like it would be possible to write a script that goes through each repo and checks whether the latest commit in that repo is also the last commit to a given subdirectory of staging. It may be a bit messy, but it seems like a reasonable solution for vendoring a specific commit ID. I think vendoring kubernetes in a way that doesn't break go mod and vendoring provider-specific dependencies are completely separate issues though. The problem with the latter is how to avoid non-core dependencies when compiling CA with just one provider, or when maintaining a provider in a fork (AFAIK there are still more forked providers than in-repo ones). I don't really have any good ideas for how to deal with that, but I'd be very happy to discuss any proposals.
Yes. To be precise, the script should get the commit ID from the staging repo for any Kubernetes commit, not only the latest.
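A script like the one discussed above could lean on the fact that each published staging repo is cut from the commits that touch its staging subdirectory in k/k. The sketch below is hypothetical (the function name and arguments are my own, not from the PR) and is meant to be run inside a kubernetes/kubernetes checkout:

```shell
# Hypothetical sketch: given a kubernetes/kubernetes commit and a staging
# subdirectory, print the last commit reachable from that commit which
# touched the subdirectory. Published staging repos (e.g. k8s.io/client-go)
# are cut from exactly these commits, so this is the commit to correlate
# against the standalone repo's history.
staging_commit_for() {
  kk_commit="$1"      # a k/k SHA, tag, or ref
  staging_dir="$2"    # e.g. staging/src/k8s.io/client-go
  git log -1 --format=%H "$kk_commit" -- "$staging_dir"
}
```

For example, `staging_commit_for v1.20.0-alpha.1 staging/src/k8s.io/client-go` would print the k/k commit from which that client-go snapshot was published; a full script would repeat this per staging repo.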
Force-pushed from 1e19168 to 8c4c1ff
New changes are detected. LGTM label has been removed.
I think we need more than one script, just like k/k does:
And another script should take responsibility for retrieving the commit IDs from all staging repos according to a k/k commit ID.
I'm not sure about pin-dependency.sh, as I'm not sure whether we currently have any dependency that is not already a dependency of kubernetes (in which case we probably want to use whatever version they use). Otherwise I agree.
Force-pushed from 691441e to ec2c2bf
I just added the I think
@benmoss
SGTM. I love iteration. |
I don't think I understand, there's no way to have go.mod point to a specific commit. I think submoduling is the only way that works. |
I think you can do it using the v0.0.0-<commit_timestamp>-<commit_hash> syntax. This is mentioned in https://github.com/golang/go/wiki/Modules#can-a-module-consume-a-package-that-has-not-opted-in-to-modules and we already have some examples of that in our go.mod: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/go.mod#L42. Or is there a problem with this approach that I'm not aware of?
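For reference, the timestamp in that pseudo-version is the commit time in UTC and the hash is the 12-character abbreviated commit SHA, so one way to derive it is straight from git. This is a sketch under assumptions: it needs git 2.7+ for `--date=format-local`, and it assumes no earlier semver tag exists on that history (otherwise Go derives the base from the tag instead of v0.0.0):

```shell
# Sketch: derive a Go pseudo-version of the form
# v0.0.0-<UTC commit timestamp>-<12-char abbreviated hash> for a commit.
# Assumes no earlier semver tag on the commit's history; if one exists,
# Go would use a tag-derived base instead of v0.0.0.
pseudo_version() {
  TZ=UTC git show -s \
    --format='v0.0.0-%cd-%h' \
    --date=format-local:'%Y%m%d%H%M%S' \
    --abbrev=12 \
    "$1"
}
```

Run inside a clone of the staging repo (e.g. kubernetes/client-go), `pseudo_version <sha>` prints a string that can be pasted into a go.mod require line.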
Ah, yeah, that's true. I still don't see why it's preferable; it seems a lot more complicated than submodules.
In my opinion, the submodule way is git-ish, and we'd prefer go-ish, right?
I don't think that's a fair way to contrast them. Submodules solve the problem more simply, in my opinion: you point to a specific SHA of k/k and then all the staging repos are pulled from inside of it. With the proposed bash script solution we have to correlate the Git commit inside k/k to the commits in each of the staging repos, which isn't trivial. Either way, both solutions work, and I don't really know why we're debating it endlessly. If people really want to go with that approach, then by all means let's write that script and go with it. It seems like a mostly hypothetical problem at this point, considering we don't need to be on an unreleased commit of k/k right now as far as I know. I wish we could just move forward on this and iterate as we encounter problems.
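For comparison, the submodule approach described above could look like the following illustrative go.mod fragment (the module list and the `./kubernetes` submodule path are assumptions for the example, not part of this PR):

```
// Illustrative go.mod fragment, assuming kubernetes/kubernetes has been
// added as a git submodule checked out at ./kubernetes. The replace
// directives point the staging modules at the submodule's staging/
// tree, so they always stay in sync with whichever k/k SHA the
// submodule is pinned to.
replace (
	k8s.io/api => ./kubernetes/staging/src/k8s.io/api
	k8s.io/apimachinery => ./kubernetes/staging/src/k8s.io/apimachinery
	k8s.io/client-go => ./kubernetes/staging/src/k8s.io/client-go
)
```

The trade-off, as noted earlier in the thread, is that a checkout then needs `git submodule update --init` before it builds, which makes that version behave differently from tag-based ones.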
+1
Force-pushed from ec2c2bf to 05f5fb4
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: benmoss, elmiko The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
The changes to the scripts LGTM; one question about the Makefile. The vendor update got out of sync because we had to do an update using the old script in the meantime. Could you change the PR to use a newer tag for the update, or just remove that commit altogether and only leave the script changes?
Hey @benmoss! This is a very valuable contribution, would you consider reopening the PR and rebasing? If not, would you mind if I copied the changes and merged them myself? |
@benmoss: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Glad to see this PR could move forward. @MaciekPytel What do you say?
@elmiko @benmoss @RainbowMango The changes have been merged in #3915. |
Fantastic!! Thanks, @towca for your hard work. |
Change our update-vendor script so we use go modules in a normal-ish way, avoiding all the /tmp staging folder stuff. I split up the commits just so that we can backport the script changes independently.
/area dependency
/area cluster-autoscaler