metrics double registration (storage_log_readers) in partition balancer test #5938

Closed
dotnwat opened this issue Aug 10, 2022 · 0 comments · Fixed by #5939
dotnwat commented Aug 10, 2022

https://buildkite.com/redpanda/redpanda/builds/13909#0182884c-653b-428d-9b29-0601f4c8788b/1461-3506



test_id:    rptest.tests.partition_balancer_test.PartitionBalancerTest.test_unavailable_nodes
status:     FAIL
run time:   3 minutes 57.028 seconds

<BadLogLines nodes=docker-rp-22(1) example="ERROR 2022-08-10 15:38:18,810 [shard 1] cluster - controller_backend.cc:693 - exception while executing partition operation: {type: update, ntp: {kafka/topic-rhsqukwrtm/21}, offset: 350, new_assignment: { id: 21, group_id: 22, replicas: {{node_id: 5, shard: 1}, {node_id: 4, shard: 1}, {node_id: 3, shard: 1}} }, previous_replica_set: {{{node_id: 4, shard: 1}, {node_id: 3, shard: 1}, {node_id: 1, shard: 1}}}} - seastar::metrics::double_registration (registering metrics twice for metrics: storage_log_readers_added)">
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/ducktape/tests/runner_client.py", line 135, in run
    data = self.run_test()
  File "/usr/local/lib/python3.10/dist-packages/ducktape/tests/runner_client.py", line 227, in run_test
    return self.test_context.function(self.test)
  File "/root/tests/rptest/services/cluster.py", line 48, in wrapped
    self.redpanda.raise_on_bad_logs(allow_list=log_allow_list)
  File "/root/tests/rptest/services/redpanda.py", line 1126, in raise_on_bad_logs
    raise BadLogLines(bad_lines)
rptest.services.utils.BadLogLines: <BadLogLines nodes=docker-rp-22(1) example="ERROR 2022-08-10 15:38:18,810 [shard 1] cluster - controller_backend.cc:693 - exception while executing partition operation: {type: update, ntp: {kafka/topic-rhsqukwrtm/21}, offset: 350, new_assignment: { id: 21, group_id: 22, replicas: {{node_id: 5, shard: 1}, {node_id: 4, shard: 1}, {node_id: 3, shard: 1}} }, previous_replica_set: {{{node_id: 4, shard: 1}, {node_id: 3, shard: 1}, {node_id: 1, shard: 1}}}} - seastar::metrics::double_registration (registering metrics twice for metrics: storage_log_readers_added)">
@dotnwat dotnwat added kind/bug Something isn't working ci-failure labels Aug 10, 2022
NyaliaLui added a commit to NyaliaLui/redpanda that referenced this issue Aug 10, 2022
We recently ran into a double registration issue where a reader cache
was still alive even though stop() had been called on it. Because the
old cache's metrics were never released, a new reader cache attempting
to register the same metrics triggered a double_registration error.

Fixes redpanda-data#5938
vbotbuildovich pushed a commit to vbotbuildovich/redpanda that referenced this issue Aug 11, 2022
(cherry picked from commit 9bd1c45)
This issue was closed.