tests: fix upgrade tests not to skip versions #7310
Some upgrade tests start out pinned to a specific version as a means to test the onboarding behavior when going from a version that doesn't support feature Foo to one that does. Depending on how the test is written, it may only make sense to ever test from a specific older version and below. Perhaps our guidance should be to always write two kinds of upgrade test when introducing a new feature:
We discussed this a bit and we concluded that moving forward we should categorize our existing upgrade tests:
We also agreed that we do still want the ability to test several upgrades in a row, though without the cost of making all of our existing fixed-version feature-onboarding upgrade tests perform sequential upgrades. Instead, we should add a single test that starts at specific older versions (22.1, 22.2, etc) and runs through the sequential upgrades until dev, performing some smoke checks for various new features as we go. Developers of new features should consider adding basic checks that ensure their feature works on each version after the version that introduced the feature. Something along the lines of the following pseudo-code:
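The original pseudo-code for this comment didn't survive the page scrape. A minimal sketch of the idea, assuming hypothetical names (`VERSIONS`, `FEATURE_CHECKS`, `run_sequential_upgrade_test`, and the per-feature check callables are all illustrative; the real test would drive a Redpanda cluster through ducktape rather than call plain functions):

```python
# Sketch of the single sequential-upgrade test: walk every version in order,
# and at each step run the smoke checks for features that exist at that version.

VERSIONS = ["22.1", "22.2", "22.3", "23.1", "dev"]

def check_compacted_topic(version):
    # Illustrative stand-in: the real check would write enough data to roll a
    # segment and then wait for compaction to run on the upgraded cluster.
    return f"compaction ok on {version}"

# Each entry pairs the version that introduced a feature with its smoke check.
FEATURE_CHECKS = [
    ("22.1", check_compacted_topic),
]

def run_sequential_upgrade_test(versions, checks):
    results = []
    for i, version in enumerate(versions):
        # (Here the real test would upgrade every node to `version`,
        # never skipping an intermediate release.)
        for introduced_in, check in checks:
            # Only run a feature's check on versions that already have it.
            if versions.index(introduced_in) <= i:
                results.append(check(version))
    return results
```

New features would then extend `FEATURE_CHECKS` with one entry, which keeps the test generic and easy to grow.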
It'd be great if the above could be made generic and easy to extend. The test's runtime is expected to grow with every release, hence the desire to have very few of these in the tree. We can revisit which versions to start with (or perhaps have variants of the test that start from different feature versions) as more versions fall out of the support window.
Thinking about delivering this, I think there are probably ~3 PRs:
#7687 is a good example of the kind of thing we can gain coverage of with the (3) changes: stepping through each version, writing some data to a compacted topic at each step (enough to fill a segment and cause an index to be written), and then waiting at each version for some compaction to happen, to ensure that the versions can handle one another's data. This ticket doesn't require testing every possible feature, but it should add the framework for easily hooking a new aspect into the upgrade test to cover a different feature.
Tackling this one at #7836. Instead of hardcoding all the "before" redpanda versions, I added a function latest_for_line that retrieves the most recent version for a line.
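The actual implementation lives in #7836 and may differ; a hedged sketch of what such a helper could look like, with an assumed in-memory list of released versions standing in for however the real code discovers releases:

```python
def latest_for_line(line, released=("22.1.11", "22.2.9", "22.3.5")):
    """Return the most recent released version on a line such as "22.2".

    `released` is an illustrative stand-in for the real source of release
    metadata (e.g. a release manifest or tag listing).
    """
    candidates = [v for v in released if v.startswith(line + ".")]
    if not candidates:
        raise ValueError(f"no released versions on line {line}")
    # Compare numerically so e.g. a ".10" patch release beats ".9".
    return max(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))
```

This lets each onboarding test ask for "the latest 22.2" rather than pinning a patch release that goes stale.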
For this one, I'm thinking of a function releases_sequence(first_release, last_release, list_of_releases_not_to_skip = []) that will accept a range of releases (or a line of releases) and produce the list of versions to install.
Then a base class that can be subclassed for simple tests that have to run after the whole cluster is upgraded. For tests that need partial updates, it will probably need an intermediate hook that returns which nodes to upgrade and which to roll back.
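A minimal sketch of that proposal, under stated assumptions: `SUPPORTED_LINES` is a hypothetical hardcoded list, the signature swaps the mutable `= []` default for the idiomatic `None`, and the base class runs plain method calls where the real test would upgrade cluster nodes:

```python
# Assumed, illustrative list of release lines in upgrade order.
SUPPORTED_LINES = ["22.1", "22.2", "22.3", "23.1"]

def releases_sequence(first_release, last_release, releases_not_to_skip=None):
    """Produce the ordered list of versions to install, never skipping a line."""
    start = SUPPORTED_LINES.index(first_release)
    stop = SUPPORTED_LINES.index(last_release)
    seq = SUPPORTED_LINES[start:stop + 1]
    for extra in releases_not_to_skip or []:
        if extra not in seq:
            seq.append(extra)
    # Stable sort by (major, minor) keeps the upgrade order correct even
    # when extra point releases are mixed in.
    seq.sort(key=lambda v: tuple(int(p) for p in v.split(".")[:2]))
    return seq

class UpgradeTestBase:
    """Base for simple tests that run a check after each full-cluster upgrade."""

    def __init__(self, first_release, last_release):
        self.versions = releases_sequence(first_release, last_release)

    def check_after_upgrade(self, version):
        """Hook: subclasses verify their feature once every node runs `version`."""

    def run(self):
        for version in self.versions:
            # (The real base class would upgrade all nodes here; tests needing
            # partial upgrades would override an intermediate hook instead.)
            self.check_after_upgrade(version)
```

A subclass only overrides `check_after_upgrade`; the sequencing and (eventually) the node-rolling mechanics stay in the base class.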
Where an upgrade test has an explicit starting version, it should not try to upgrade straight to HEAD -- we do not support skipping versions, and this risks missing problems that only occur when an intermediate release is used.
Fixing is probably some mixture of: