cluster: enable controller replay if last_applied is ahead of log #5703
Merged: jcsp merged 4 commits into redpanda-data:dev from jcsp:controller-replay-after-dataloss on Nov 28, 2022
Conversation
jcsp force-pushed the controller-replay-after-dataloss branch 2 times, most recently from 30887d4 to 2089526 on August 2, 2022 13:08
jcsp requested review from dotnwat, NyaliaLui, mmaslankaprv, ztlpn and VadimPlh as code owners on August 2, 2022 13:09
mmaslankaprv reviewed 3 times on Aug 2, 2022
jcsp force-pushed the controller-replay-after-dataloss branch from 2089526 to d1e3ff2 on November 24, 2022 20:04
jcsp requested review from mmaslankaprv and removed review requests for dotnwat, ztlpn, NyaliaLui and VadimPlh on November 24, 2022 20:55
Noticed this while writing a test with an off-by-one on a node id. Internally the error handling is safe, but in the API we're returning a 500 instead of a 400.
This situation happens if someone deletes controller log segments while leaving the kvstore in place. Previously, the kvstore last_applied would cause the node to hang waiting for the controller log to replay to that offset. Now, we log an error about the apparent inconsistency and proceed. In general we do not want to ignore data inconsistency, but this is a special case: deleting the controller log is something a user might legitimately do in order to work around another issue and force redpanda to rebuild the local copy of the controller log.
For tests that want to know "did node X log message Y?" rather than just "was message Y logged anywhere?"
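A per-node log search utility of the kind described above might look like the following. This is a hypothetical sketch; the function and parameter names are invented for illustration and are not the actual helper added in this PR.

```python
import re


def node_logged(node_lines, pattern):
    """Return True if any log line from a single node matches the pattern."""
    return any(re.search(pattern, line) for line in node_lines)


def nodes_that_logged(logs_by_node, pattern):
    """Map a message pattern to the set of node ids whose logs contain it.

    This answers "did node X log message Y?" per node, rather than the
    coarser "was message Y logged anywhere in the cluster?".
    """
    return {
        node
        for node, lines in logs_by_node.items()
        if node_logged(lines, pattern)
    }
```

The per-node variant matters for a test like this one, where exactly one node is expected to report the inconsistency after its controller log is removed.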
This test validates that it is possible to reset the controller log on a single node by removing it, a procedure occasionally used in the field when interference with a node's storage has left the cluster in a split-brain situation.
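The scenario the test exercises can be sketched as a loose simulation. The real test drives live redpanda nodes; everything below (the dict layout, step names, message text) is invented for illustration only.

```python
def wipe_controller_log(node):
    """Delete the controller log segments but leave the kvstore untouched.

    This models the field procedure: only the log data is removed, so the
    kvstore's last_applied offset now points past the end of the log.
    """
    node["controller_log"] = []


def restart(node):
    """Model node startup after the wipe.

    With this PR's behavior, the node notices that last_applied is ahead
    of the log tip, records an error, and proceeds instead of hanging.
    """
    events = []
    tip = len(node["controller_log"]) - 1
    if node["last_applied"] > tip:
        events.append("ERROR: last_applied ahead of controller log")
    node["running"] = True
    return events


node = {
    "controller_log": ["op0", "op1", "op2"],
    "last_applied": 2,
    "running": False,
}
wipe_controller_log(node)
events = restart(node)
```

The key assertion in such a test is that the node comes back up (rather than hanging on replay) and that it logged the inconsistency.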
jcsp force-pushed the controller-replay-after-dataloss branch from d1e3ff2 to 0eda42c on November 25, 2022 15:29
mmaslankaprv approved these changes Nov 28, 2022
Cover letter

This situation happens if someone deletes controller log segments while leaving the kvstore in place. Previously, the kvstore last_applied would cause the node to hang waiting for the controller log to replay to that offset. This is not a redpanda bug per se, as it only happens when the underlying system violates invariants about storage, but it is a case where we can be more helpful.

Now, we log an error about the apparent inconsistency and proceed.

In general we do not want to ignore data inconsistency, but this is a special case: deleting the controller log is something a user might legitimately do in order to work around another issue and force redpanda to rebuild the local copy of the controller log.
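The startup decision described above can be reduced to a single comparison. The actual redpanda code is C++; the minimal sketch below uses invented names to show only the shape of the check, not the real API.

```python
def choose_start_behavior(kvstore_last_applied, log_dirty_offset):
    """Decide how controller replay should begin at startup.

    Before this PR, the node waited for the controller log to reach
    kvstore_last_applied, which hangs forever if the log segments were
    deleted. After this PR, the node logs the apparent inconsistency
    and proceeds with whatever log it has.
    """
    if kvstore_last_applied > log_dirty_offset:
        return (
            "warn_and_proceed",
            f"last_applied {kvstore_last_applied} is ahead of "
            f"log tip {log_dirty_offset}",
        )
    # Normal case: the log covers last_applied, replay as usual.
    return ("normal_replay", None)
```

The deliberate design choice is that the warn-and-proceed branch is taken only when last_applied is strictly ahead of the log; any genuinely consistent state still goes through ordinary replay.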
Fixes #4950
UX changes
None
Release notes
Improvements