Add CI for disabled holder pods #13922

Open
BlaineEXE opened this issue Mar 12, 2024 · 7 comments · May be fixed by #14159
@BlaineEXE (Member)

The CI Multus and non-host-network tests currently verify that legacy behavior is preserved. Let's follow up in a new PR to ensure the new CSI_DISABLE_HOLDER_POD: "true" behavior also works for greenfield deployments, beyond the manual testing I've done.

Follow-up from #13890 and a continuation of the work described in #13055.
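
For reference, a minimal sketch of where that setting lives, assuming the standard rook-ceph-operator-config ConfigMap from the Rook example manifests (names are illustrative, and the setting key is spelled as referenced in this issue):

```yaml
# Sketch only: enable the behavior discussed above by setting the flag in
# the operator's ConfigMap. Namespace and ConfigMap name follow the usual
# Rook example manifests and may differ in a given deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph # namespace where the Rook operator runs
data:
  CSI_DISABLE_HOLDER_POD: "true" # setting key as written in this issue
```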

subhamkrai self-assigned this Mar 13, 2024
@subhamkrai (Contributor)

@BlaineEXE, for this change I just need to start fresh with CSI_DISABLE_HOLDER_POD: "true". What else do I need to verify? Also, will the new test be part of the canary tests?

@BlaineEXE (Member, Author) commented Mar 29, 2024

We should duplicate the two CI canary tests that were modified here and ensure the copies run with CSI_DISABLE_HOLDER_POD: "true".

https://github.com/rook/rook/pull/13890/files#diff-273ee0fef4a59873aa9c60d2cf9a5e039a83d30a459b86f052b731154cfecea2

The csi-hostnetwork-disabled test should work without any host changes because it will use pod networking. But multus-cluster-network should run with host networking, and we will need to set up host routing for it.

[Update]
It might also be good to add a third test that exercises Multus with host networking disabled. That would be a combination of the two tests above and shouldn't require host config changes.
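
To make the duplication idea concrete, here is a rough sketch of what a copied canary job could look like. The job name, runner image, and steps are hypothetical and do not reflect Rook's actual canary workflow; the only point is that the copied jobs set CSI_DISABLE_HOLDER_POD: "true" before the CephCluster is created:

```yaml
# Hypothetical sketch, not Rook's real canary workflow definition.
name: canary-disabled-holder-pods
on: pull_request

jobs:
  csi-hostnetwork-disabled-no-holder: # hypothetical copy of csi-hostnetwork-disabled
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      # ... set up the test Kubernetes cluster and deploy the Rook operator,
      # mirroring the existing csi-hostnetwork-disabled job ...
      - name: disable holder pods
        run: |
          # patch the operator config before the CephCluster is created (sketch)
          kubectl -n rook-ceph patch configmap rook-ceph-operator-config \
            --type merge -p '{"data":{"CSI_DISABLE_HOLDER_POD":"true"}}'
      # ... remaining steps would mirror the existing canary test ...
```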

@travisn (Member) commented Mar 29, 2024

Could we get away with adding only a single new test, in the interest of avoiding too many tests in the long term? But if we can remove these tests in a near-future release anyway, assuming disabling the holder pod is only a temporary transitional setting, then I'm not concerned about the test count.

@subhamkrai (Contributor)

  • CI for csi-hostnetwork-disabled
  • CI for multus-cluster-network
  • CI for combo of above two

@BlaineEXE (Member, Author)

After discussion, we are clarifying for users that we don't support CSI without host networking, so we can remove those tests. That includes the existing test for CSI with host networking disabled, as well as Multus with host networking disabled. With that, the combo test above need not be added.

@subhamkrai (Contributor)

After discussion, we are clarifying for users that we don't support CSI without host networking, so we can remove those tests. That includes the existing test for CSI with host networking disabled, as well as Multus with host networking disabled. With that, the combo test above need not be added.

  • remove existing csi-hostnetwork-disabled test
  • create new multus test without holder pod deployment

@BlaineEXE sounds good?

@BlaineEXE (Member, Author)

Yes, I think that sounds good. 💯

For other devs/users, the reasoning (based on discussion during today's huddle) is as follows:

The non-Multus scenario with host networking disabled was never explicitly supported, so that test can indeed be removed (as mentioned). There are no alternate network modes for non-Multus CSI, and the default networking case is already tested implicitly by all of the normal CI tests.

At the end of this work, there should be 2 CSI networking tests for Multus clusters:

  1. Multus with holder pod (host networking disabled), to ensure legacy deployments still work (already exists)
  2. Multus with host networking, to ensure the new development works (needs to be added)

subhamkrai linked pull request #14159 on May 6, 2024 that will close this issue