config: correct value of expected retention #5750
Conversation
The config type of "retention.ms" is defined in `topic_properties` as:

```c++
tristate<std::chrono::milliseconds> retention_duration{std::nullopt};
```

We use `describe_as_string()` to format the value in the config description, and `describe_as_string()` in turn uses `fmt::format()` to stringify the value. In this case, the type of "retention.ms" is `std::chrono::milliseconds`, which is formatted as `1234ms` when its value is `std::chrono::milliseconds(1234)` in fmtlib v6.x, v7.x, v8.x and v9.0.

In this change, the expected value is changed from `"1234"` to `fmt::format("{}", 1234ms)` to be more future-proof, in case fmtlib changes its formatting in the future. For the same reason, calls like `config::shard_local_cfg().delete_retention_ms().value_or(-1ms).count()` are also changed to `config::shard_local_cfg().delete_retention_ms().value_or(-1ms)`.

Signed-off-by: Kefu Chai <tchaikov@gmail.com>
@LenaAn hi Lena, could you please take a look? See https://godbolt.org/z/rEedzG7WG for a minimal reproducer demonstrating the formatting behavior. I am able to reproduce this test failure locally.
Hi @tchaikov, thanks so much for your contribution! I don't quite understand how this test works now, though.
@LenaAn sorry, I was looking at the wrong terminal.
No, we don't expect "1234". What we expect is the output of:

```
$ bin/test_kafka_server_fixture_rpunit 2>&1 | tee foo.out
$ grep 1234 foo.out
INFO 2022-08-02 20:11:56,559 [shard 0] cluster - topics_frontend.cc:82 - Create topics {{configuration: { topic: {ns: {kafka}, topic: {topicC}}, partition_count: 3, replication_factor: 1, properties: { compression: {nullopt}, cleanup_policy_bitflags: {delete}, compaction_strategy: {nullopt}, retention_bytes: {1234567}, retention_duration_ms: {}, segment_size: {7654321}, timestamp_type: {nullopt}, recovery_enabled: {nullopt}, shadow_indexing: {nullopt}, read_replica: {nullopt}, read_replica_bucket: {nullopt} }}, custom_assignments: {}}, {configuration: { topic: {ns: {kafka}, topic: {topicD}}, partition_count: 3, replication_factor: 1, properties: { compression: {nullopt}, cleanup_policy_bitflags: {delete}, compaction_strategy: {nullopt}, retention_bytes: {1234567}, retention_duration_ms: {}, segment_size: {7654321}, timestamp_type: {nullopt}, recovery_enabled: {nullopt}, shadow_indexing: {nullopt}, read_replica: {nullopt}, read_replica_bucket: {nullopt} }}, custom_assignments: {}}}
../src/v/kafka/server/tests/alter_config_test.cc(226): fatal error: in "test_alter_single_topic_config": critical check cfg_it->value == value has failed [{1234ms} != 1234]
../src/v/kafka/server/tests/alter_config_test.cc(226): fatal error: in "test_alter_multiple_topics_config": critical check cfg_it->value == value has failed [{1234ms} != 1234]
../src/v/kafka/server/tests/alter_config_test.cc(226): fatal error: in "test_alter_configuration_should_override": critical check cfg_it->value == value has failed [{1234ms} != 1234]
../src/v/kafka/server/tests/alter_config_test.cc(226): fatal error: in "test_incremental_alter_config": critical check cfg_it->value == value has failed [{1234ms} != 1234]
../src/v/kafka/server/tests/alter_config_test.cc(226): fatal error: in "test_incremental_alter_config_remove": critical check cfg_it->value == value has failed [{1234ms} != 1234]
../src/v/kafka/server/tests/alter_config_test.cc(226): fatal error: in "test_alter_config_does_not_rewrite_data_policy": critical check cfg_it->value == value has failed [{1234ms} != 1234]
```
Hmmm, I can't reproduce it locally; the test looks to pass. @tchaikov can you tell me the steps to reproduce the issue?
Sure, will get back to you with the steps early tonight.
LGTM. No need for backporting since this isn't a bugfix.
@LenaAn ping?
LGTM. Restarted CI.
Failure is #5276