
Consumer Offsets Topic defaults to 1 partition #5222

Closed
tmgstevens opened this issue Jun 24, 2022 · 7 comments · Fixed by #5412
Assignees
Labels
area/redpanda kind/enhance New feature or request

Comments

@tmgstevens

Version & Environment

22.1

What went wrong?

The consumer offsets topic defaults to one partition, which means that on a busy cluster all consumer offset reads and writes have to go to the single node hosting that partition. It would be better practice to distribute this across at least as many nodes as are in the cluster, if not more.
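For context on why a single partition becomes a bottleneck: in the Kafka protocol, the group coordinator for a consumer group is the broker leading the `__consumer_offsets` partition selected by hashing the group id modulo the partition count. The sketch below illustrates that mapping, assuming Java `String.hashCode` semantics as in the Apache Kafka implementation (the exact hashing in Redpanda is not confirmed here; this is illustrative only):

```python
def java_string_hashcode(s: str) -> int:
    """Java's String.hashCode(): h = 31*h + char, in signed 32-bit arithmetic."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h > 0x7FFFFFFF else h

def coordinator_partition(group_id: str, num_partitions: int) -> int:
    # The __consumer_offsets partition that owns a group; its leader
    # broker is the group coordinator.
    return abs(java_string_hashcode(group_id)) % num_partitions

# With the old default of 1 partition, every group maps to partition 0,
# so a single broker serves all offset traffic:
assert all(coordinator_partition(g, 1) == 0 for g in ("billing", "analytics", "audit"))

# With 16 partitions, groups spread across coordinators:
print({g: coordinator_partition(g, 16) for g in ("billing", "analytics", "audit")})
```

With more partitions, group coordination load spreads across whichever brokers lead those partitions, which is why the partition count matters even on a small cluster.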

Additional information

We can fix this through documentation in the short term and in the product in the longer term. The challenge is getting this right when bootstrapping a cluster (with a single seed) and then adding nodes. This could also go into the operator at some stage in the future.

@tmgstevens tmgstevens added the kind/bug Something isn't working label Jun 24, 2022
@emaxerrno
Contributor

@dotnwat doesn't this get automatically increased on a cluster of 3 nodes or more to something like 16?

I thought it was the same as the other internal topics

cc @mattschumpert

@tmgstevens
Author

So I'm looking at a 3-node FMC cluster, which does have it set to 16. But the customer cluster today was definitely set to 1. Do we know at what point it gets increased?

@piyushredpanda
Contributor

@senior7515 : That is my understanding as well. Perhaps @dlex could help take a look on the internals around this?

@dotnwat
Member

dotnwat commented Jun 28, 2022

Replication count is probably increased going from 1 to 3 nodes (it should be). I doubt the partition count increases automatically; at least I don't recall that occurring, and I'm not entirely sure that would work.

@dotnwat
Member

dotnwat commented Jun 28, 2022

I think that Kafka defaults to 50 partitions for __consumer_offsets

@dotnwat
Member

dotnwat commented Jun 28, 2022

In the short term at least it probably makes sense to increase the default

@mmedenjak mmedenjak added area/redpanda kind/enhance New feature or request and removed kind/bug Something isn't working labels Jul 6, 2022
dlex added a commit to dlex/redpanda that referenced this issue Jul 8, 2022
When a consumer tries to locate a cluster's consumer group coordinator
for the first time, the __consumer_offsets topic is created with the
number of partitions given by the group_topic_partitions property.
The default value for that property was 1, which meant that unless
a different value was explicitly specified by the customer at a very
early stage of the cluster's life, all OffsetCommit requests from all
consumers would go to a single broker. This change increases
the default value to 16 as a reasonable trade-off between OffsetCommit
parallelism for the clusters that will use consumer groups
later in their life, and the overhead for the clusters that
won't use consumer groups.

redpanda-data#5222
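For clusters created before this change, a workaround is to raise the property before the topic is first created. A sketch of the short-term documentation fix, assuming `rpk` with centralized cluster configuration (Redpanda 22.1+); note that `group_topic_partitions` only takes effect if set before `__consumer_offsets` exists, since the partition count of an already-created topic is not raised retroactively:

```shell
# Raise the default before any consumer group is used,
# so __consumer_offsets is created with 16 partitions.
rpk cluster config set group_topic_partitions 16

# Verify the property and the resulting topic layout:
rpk cluster config get group_topic_partitions
rpk topic describe __consumer_offsets
```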
@dlex
Contributor

dlex commented Jul 14, 2022

Related: redpanda-data/docs#489
