Version & Environment

Redpanda version (use rpk version): v22.3.1-rc4

What went wrong?
Increasing the cluster's replicas triggers a rolling restart, which means that new Redpanda pods get scheduled onto nodes that already host existing pods.
For example, in an N-node cluster scaled up to M nodes:

1. The operator triggers a rolling restart.
2. Pod 0 is deleted (restarted).
3. Pod N+1 is scheduled on Node 0.
4. Pod 0, which has an affinity for Node 0, becomes unschedulable (see the PV sketch below).
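For context, Pod 0's affinity typically isn't on the pod spec itself: it comes from the local PersistentVolume bound to Pod 0's data PVC, whose required nodeAffinity pins the pod to the node holding its data. A minimal sketch of such a PV, assuming local-path storage; all names and values here are hypothetical:

```yaml
# Hypothetical local PV backing Pod 0's data PVC (names illustrative).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-redpanda-0
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /var/lib/redpanda/data        # assumed data path
  nodeAffinity:                         # this is what pins Pod 0 to Node 0
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: ["node-0"]
```

Once Pod N+1 occupies Node 0, the scheduler cannot place Pod 0 anywhere else without violating this volume affinity, so the rolling restart wedges.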
What should have happened instead?
Existing pods shouldn't be restarted, and new pods should be scheduled on available nodes.
How to reproduce the issue?
1. Deploy an N-broker Redpanda cluster on an M-node k8s cluster using the operator.
2. Edit the cluster CR, increasing replicas from N to M (see the sketch after this list).
3. Monitor the pods in the redpanda namespace (kubectl get pods -n redpanda -w).
4. Watch a rolling restart be attempted: a new pod gets scheduled on Node 0, and Pod 0 then becomes unschedulable due to a persistent volume conflict.
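For step 2, the change is just the replicas field on the Cluster custom resource. A sketch, assuming the operator's v1alpha1 Cluster CRD; names and values are illustrative:

```yaml
apiVersion: redpanda.vectorized.io/v1alpha1
kind: Cluster
metadata:
  name: redpanda
  namespace: redpanda
spec:
  replicas: 5   # previously 3, i.e. scaling N -> M
```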
Additional information
Please attach any relevant logs, backtraces, or metric charts.
Deleting the pod scheduled on Pod 0's node allows the rolling restart to continue, but of course a pod inevitably becomes unschedulable in the end.
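The manual unblock is a plain pod deletion; the pod name below is hypothetical (whichever new pod landed on Pod 0's node):

```shell
# Free up Node 0 so Pod 0 can be rescheduled; pod name is hypothetical.
kubectl -n redpanda delete pod redpanda-4
```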