Setting lagThreshold to 0 causes the keda-operator to panic #3366

Closed
Stii opened this issue Jul 14, 2022 · 3 comments · Fixed by #3367
Labels
bug Something isn't working

Stii commented Jul 14, 2022

Report

The lagThreshold for a ScaledObject was set to 0 in an attempt to disable the ScaledObject, but this caused the keda-operator to panic and go into a CrashLoopBackOff state.

Expected Behavior

Warn that the ScaledObject has an invalid value, or prevent lagThreshold from being set to 0.

Actual Behavior

The keda-operator panics with "runtime error: integer divide by zero" and goes into a CrashLoopBackOff state.

Steps to Reproduce the Problem

  1. Edit a ScaledObject that uses a Kafka trigger
  2. Set its lagThreshold to 0

Logs from KEDA operator

panic: runtime error: integer divide by zero

goroutine 1021 [running]:
github.com/kedacore/keda/v2/pkg/scalers.(*kafkaScaler).GetMetrics(0xc0079837a0, {0x1, 0x4}, {0x3435e92, 0xb}, {0x0, 0xa})
        /workspace/pkg/scalers/kafka_scaler.go:485 +0x4ac
github.com/kedacore/keda/v2/pkg/scaling/cache.(*ScalersCache).getScaledJobMetrics(0xc00384a550, {0x3ae9008, 0xc0079ed680}, 0xc0078d0a00)
        /workspace/pkg/scaling/cache/scalers_cache.go:274 +0x523
github.com/kedacore/keda/v2/pkg/scaling/cache.(*ScalersCache).IsScaledJobActive(0xc00074f8b0, {0x3ae9008, 0xc0079ed680}, 0xc0078d0a00)
        /workspace/pkg/scaling/cache/scalers_cache.go:124 +0xda
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers(0xc000617340, {0x3ae9008, 0xc0079ed680}, {0x3375f00, 0xc0078d0a00}, {0x3ac7030, 0xc0079f3810})
        /workspace/pkg/scaling/scale_handler.go:286 +0x317
github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop(0xc000617340, {0x3ae9008, 0xc0079ed680}, 0xc0078d8640, {0x3375f00, 0xc0078d0a00}, {0x3ac7030, 0xc0079f3810})
        /workspace/pkg/scaling/scale_handler.go:149 +0x31c
created by github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).HandleScalableObject
        /workspace/pkg/scaling/scale_handler.go:108 +0x465

KEDA Version

2.7.1

Kubernetes Version

1.19

Platform

Other

Scaler Details

Kafka

Anything else?

No response

Stii added the bug label on Jul 14, 2022
zroubalik (Member) commented

Thanks for reporting. However, I am not able to reproduce the problem: if I set lagThreshold to 0, the HPA is not even created:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: test
spec:
  minReplicaCount: 0
  maxReplicaCount: 5
  scaleTargetRef:
    name: test
  triggers:
  - type: kafka
    metadata:
      topic: my-topic
      bootstrapServers: my-cluster-kafka.svc:9092
      consumerGroup: my-group
      lagThreshold: '0'
      offsetResetPolicy: 'latest'
ERROR   controller.scaledobject Failed to update HPA    {"reconciler group": "keda.sh", "reconciler kind": "ScaledObject", "name": "test", "namespace": "test", "HPA.Namespace": "test", "HPA.Name": "keda-hpa-test", "error": "HorizontalPodAutoscaler.autoscaling \"keda-hpa-test\" is invalid: spec.metrics[0].external.target.averageValue: Invalid value: resource.Quantity{i:resource.int64Amount{value:0, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"0\", Format:\"DecimalSI\"}: must be positive"}
keda-operator-78b964d4cd-z9zvd keda-operator github.com/kedacore/keda/v2/controllers/keda.(*ScaledObjectReconciler).ensureHPAForScaledObjectExists
keda-operator-78b964d4cd-z9zvd keda-operator    /workspace/controllers/keda/scaledobject_controller.go:385
keda-operator-78b964d4cd-z9zvd keda-operator github.com/kedacore/keda/v2/controllers/keda.(*ScaledObjectReconciler).reconcileScaledObject
keda-operator-78b964d4cd-z9zvd keda-operator    /workspace/controllers/keda/scaledobject_controller.go:231
keda-operator-78b964d4cd-z9zvd keda-operator github.com/kedacore/keda/v2/controllers/keda.(*ScaledObjectReconciler).Reconcile
keda-operator-78b964d4cd-z9zvd keda-operator    /workspace/controllers/keda/scaledobject_controller.go:182
keda-operator-78b964d4cd-z9zvd keda-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
keda-operator-78b964d4cd-z9zvd keda-operator    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:114
keda-operator-78b964d4cd-z9zvd keda-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
keda-operator-78b964d4cd-z9zvd keda-operator    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:311
keda-operator-78b964d4cd-z9zvd keda-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
keda-operator-78b964d4cd-z9zvd keda-operator    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:266
keda-operator-78b964d4cd-z9zvd keda-operator sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
keda-operator-78b964d4cd-z9zvd keda-operator    /go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:227

But adding an extra check before the division doesn't hurt.
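
For illustration, a minimal sketch of such a guard, applied when the trigger metadata is parsed rather than at the division site. The function name, the default of 10, and the error wording are hypothetical and not the actual KEDA code or the change in #3367:

package main

import (
    "fmt"
    "strconv"
)

// parseLagThreshold is a hypothetical helper illustrating the kind of check
// the maintainer mentions: validate lagThreshold while parsing the trigger
// metadata so the scaler never reaches a division by zero.
func parseLagThreshold(raw string) (int64, error) {
    const defaultLagThreshold = 10 // illustrative default, not necessarily KEDA's

    if raw == "" {
        return defaultLagThreshold, nil
    }
    t, err := strconv.ParseInt(raw, 10, 64)
    if err != nil {
        return 0, fmt.Errorf("error parsing lagThreshold %q: %w", raw, err)
    }
    if t <= 0 {
        // Fail fast with a clear error instead of panicking later with
        // "runtime error: integer divide by zero".
        return 0, fmt.Errorf("lagThreshold must be a positive number, got %d", t)
    }
    return t, nil
}

func main() {
    if _, err := parseLagThreshold("0"); err != nil {
        fmt.Println(err) // prints: lagThreshold must be a positive number, got 0
    }
}

Rejecting the value at parse time surfaces the misconfiguration as an error on the ScaledObject instead of crashing the operator.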

zroubalik (Member) commented

@Stii also, to pause scaling, you should use the annotation:
https://keda.sh/docs/2.7/concepts/scaling-deployments/#pause-autoscaling
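
For reference, a minimal sketch of the manifest above with that pause annotation added (annotation name per the linked KEDA 2.7 docs; the positive lagThreshold is just an illustrative value):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: test
  annotations:
    # Pause autoscaling and hold the scale target at 0 replicas.
    autoscaling.keda.sh/paused-replicas: "0"
spec:
  minReplicaCount: 0
  maxReplicaCount: 5
  scaleTargetRef:
    name: test
  triggers:
  - type: kafka
    metadata:
      topic: my-topic
      bootstrapServers: my-cluster-kafka.svc:9092
      consumerGroup: my-group
      lagThreshold: '10'   # any positive value; '0' is what triggers the bug above
      offsetResetPolicy: 'latest'

Per the linked docs, removing the annotation resumes autoscaling.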

Stii (Author) commented Jul 18, 2022

Ah, thank you so much!
