Update Operator Guide #1456

Open · wants to merge 2 commits into main
@@ -1,5 +1,5 @@
+++
title = "Operate"
title = "Operator Guide"
description = "Guidance and requirements for operating KEDA"
weight = 1
+++
Expand Down
12 changes: 12 additions & 0 deletions content/docs/2.15/Operator Guide/caching-metrics.md
@@ -0,0 +1,12 @@
+++
title = "Caching Metrics"
weight = 600
+++

## Caching Metrics

This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s). This request is routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes that behavior: the KEDA Metrics Server tries to read the metric from the cache first, and the cache is refreshed periodically, once per polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

This feature is not supported for the `cpu`, `memory`, or `cron` scalers.
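
This page doesn't show the trigger-level switch; in recent KEDA versions the cache is enabled per trigger via a `useCachedMetrics` field (verify against your KEDA version). A minimal sketch, with illustrative names and addresses:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cached-metrics-example        # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment               # hypothetical target
  pollingInterval: 30                 # the cache is refreshed on this interval
  triggers:
    - type: prometheus
      useCachedMetrics: true          # serve HPA queries from the cache instead of querying the scaler
      metadata:
        serverAddress: http://prometheus.monitoring:9090   # hypothetical address
        query: sum(rate(http_requests_total[2m]))          # hypothetical query
        threshold: "100"
```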
@@ -0,0 +1,28 @@
+++
title = "Pause Auto-Scaling with deployments"
weight = 600
+++

## Pausing autoscaling

It can be useful to instruct KEDA to pause the autoscaling of objects, in order to perform cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads.

This is preferable to deleting the resource because it takes the running instances out of operation without touching the applications themselves. When ready, you can then reenable scaling.

You can pause autoscaling by adding these annotations to your `ScaledObject` definition:

```yaml
metadata:
  annotations:
    autoscaling.keda.sh/paused-replicas: "0"
    autoscaling.keda.sh/paused: "true"
```

The presence of either annotation will pause autoscaling, regardless of the number of replicas specified.

The annotation `autoscaling.keda.sh/paused` pauses scaling immediately and keeps the current instance count, while the annotation `autoscaling.keda.sh/paused-replicas: "<number>"` scales your workload to the specified number of replicas and then pauses autoscaling. You can set the paused replica count to any arbitrary number.

Typically, only one or the other is used, given they serve different purposes/scenarios. However, if both `paused` and `paused-replicas` are set, KEDA will scale your workload to the count specified in `paused-replicas` and then pause autoscaling.

To unpause (reenable) autoscaling, remove all pause annotations from the `ScaledObject` definition. If you paused with `autoscaling.keda.sh/paused`, you can also unpause by setting the annotation to `false`.
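
For example, to resume autoscaling for a workload that was paused with `autoscaling.keda.sh/paused`, flip the annotation value instead of removing it:

```yaml
metadata:
  annotations:
    autoscaling.keda.sh/paused: "false"
```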
26 changes: 26 additions & 0 deletions content/docs/2.15/Operator Guide/pause-autoscaling-jobs.md
@@ -0,0 +1,26 @@
+++
title = "Pause Auto-Scaling jobs"
weight = 600
+++

## Pausing autoscaling

It can be useful to instruct KEDA to pause the autoscaling of objects, in order to perform cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads.

This is preferable to deleting the resource because it takes the running instances out of operation without touching the applications themselves. When ready, you can then reenable scaling.

You can pause autoscaling by adding this annotation to your `ScaledJob` definition:

```yaml
metadata:
  annotations:
    autoscaling.keda.sh/paused: "true"
```

To reenable autoscaling, remove the annotation from the `ScaledJob` definition or set the value to `false`.

```yaml
metadata:
  annotations:
    autoscaling.keda.sh/paused: "false"
```
23 changes: 23 additions & 0 deletions content/docs/2.15/Operator Guide/prevention-rules.md
@@ -0,0 +1,23 @@
+++
title = "Prevention Rules"
description = "Rules to prevent misconfigurations and ensure proper scaling behavior"
weight = 600
+++

There are several misconfiguration scenarios that can produce scaling problems in production workloads. For example, in Kubernetes a single workload should never be scaled by two or more HPAs, because that produces conflicts and unintended behavior.

Some data-format errors can be detected during model validation, but these misconfigurations can't be caught at that step because the model itself is correct. To detect them early and keep them away from the data plane, admission webhooks validate all incoming (KEDA) resources (new or updated) and reject any resource that matches the conditions below.

### Prevention Rules

KEDA will block any incoming change to a `ScaledObject` that matches one of the following conditions:

- The scaled workload (`scaledobject.spec.scaleTargetRef`) is already autoscaled by other sources (another ScaledObject or an HPA).
- A CPU and/or memory trigger is used and the scaled workload doesn't have resource requests defined. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
- A CPU and/or memory trigger is **the only trigger used** and the ScaledObject defines `minReplicaCount: 0`. **This rule doesn't apply to all workload types, only to `Deployment` and `StatefulSet`.**
- Multiple triggers share the same `name` (when a `name` is **specified**, it must be **unique**).
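
For illustration, here is a sketch (all names hypothetical) of a `ScaledObject` that the admission webhook would reject under the rules above, because a `cpu` trigger is the only trigger while `minReplicaCount` is 0:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rejected-example       # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment        # would also be rejected if this Deployment lacks CPU requests
  minReplicaCount: 0           # not allowed when cpu/memory is the only trigger
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "60"
```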

KEDA will block any incoming change to a `TriggerAuthentication`/`ClusterTriggerAuthentication` that matches the following condition:

- The specified identity ID for Azure AD Workload Identity and/or Pod Identity is empty. (A default/unset identity ID will be passed through.)
> NOTE: This only applies if the `TriggerAuthentication`/`ClusterTriggerAuthentication` overrides the default identityId provided to KEDA during installation.
@@ -22,3 +22,7 @@ kubectl logs -n keda {keda-pod-name} -c keda-operator
## Reporting issues

If you are having issues or hitting a potential bug, please file an issue [in the KEDA GitHub repo](https://github.com/kedacore/keda/issues/new/choose) with details, logs, and steps to reproduce the behavior.

## Common issues and their solutions

{{< troubleshooting >}}
@@ -1,5 +1,5 @@
+++
title = "Operate"
title = "Operator Guide"
description = "Guidance and requirements for operating KEDA"
weight = 1
+++
12 changes: 12 additions & 0 deletions content/docs/2.16/Operator Guide/caching-metrics.md
@@ -0,0 +1,12 @@
+++
title = "Caching Metrics"
weight = 600
+++

## Caching Metrics

This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s). This request is routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes that behavior: the KEDA Metrics Server tries to read the metric from the cache first, and the cache is refreshed periodically, once per polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

This feature is not supported for the `cpu`, `memory`, or `cron` scalers.
@@ -16,6 +16,7 @@ As a reference, this compatibility matrix shows supported k8s versions per KEDA

| KEDA | Kubernetes |
| ----- | ------------- |
| v2.16 | TBD |
| v2.15 | v1.28 - v1.30 |
| v2.14 | v1.27 - v1.29 |
| v2.13 | v1.27 - v1.29 |
211 changes: 211 additions & 0 deletions content/docs/2.16/Operator Guide/migration.md
@@ -0,0 +1,211 @@
+++
title = "Migration Guide"
+++

## Migrating from KEDA v1 to v2

Please note that you **cannot** run both KEDA v1 and v2 on the same Kubernetes cluster. You need to [uninstall](../../1.5/deploy) KEDA v1 first in order to [install](../deploy) and use KEDA v2.

> 💡 **NOTE:** When uninstalling KEDA v1, make sure the v1 CRDs are removed from the cluster as well.

KEDA v2 uses a new API group for its Custom Resource Definitions (CRDs): `keda.sh` instead of `keda.k8s.io`, and introduces a new Custom Resource for scaling Jobs. See full details on KEDA Custom Resources [here](../concepts/#custom-resources-crd).

Here's an overview of what's changed:

- [Scaling of Deployments](#scaling-of-deployments)
- [Scaling of Jobs](#scaling-of-jobs)
- [Improved flexibility & usability of trigger metadata](#improved-flexibility--usability-of-trigger-metadata)
- [Scalers](#scalers)
- [TriggerAuthentication](#triggerauthentication)

### Scaling of Deployments

In order to scale `Deployments` with KEDA v2, you only need to make a few modifications to existing v1 `ScaledObject` definitions so that they comply with v2:

- Change the value of the `apiVersion` property from `keda.k8s.io/v1alpha1` to `keda.sh/v1alpha1`
- Rename the property `spec.scaleTargetRef.deploymentName` to `spec.scaleTargetRef.name`
- Rename the property `spec.scaleTargetRef.containerName` to `spec.scaleTargetRef.envSourceContainerName`
- The `deploymentName` label (in `metadata.labels`) no longer needs to be specified on a v2 ScaledObject (it was mandatory on older versions of v1)

Please see the examples below or refer to the full [v2 ScaledObject Specification](./reference/scaledobject-spec).

**Example of v1 ScaledObject**

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: { scaled-object-name }
  labels:
    deploymentName: { deployment-name }
spec:
  scaleTargetRef:
    deploymentName: { deployment-name }
    containerName: { container-name }
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
    # {list of triggers to activate the deployment}
```

**Example of v2 ScaledObject**

```yaml
apiVersion: keda.sh/v1alpha1 # <--- Property value was changed
kind: ScaledObject
metadata: # <--- labels.deploymentName is not needed
  name: { scaled-object-name }
spec:
  scaleTargetRef:
    name: { deployment-name } # <--- Property name was changed
    envSourceContainerName: { container-name } # <--- Property name was changed
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
    # {list of triggers to activate the deployment}
```

### Scaling of Jobs

In order to scale `Jobs` with KEDA v2, you only need to make a few modifications to existing v1 `ScaledObject` definitions so that they comply with v2:

- Change the value of the `apiVersion` property from `keda.k8s.io/v1alpha1` to `keda.sh/v1alpha1`
- Change the value of the `kind` property from `ScaledObject` to `ScaledJob`
- Remove the property `spec.scaleType`
- Remove the properties `spec.cooldownPeriod` and `spec.minReplicaCount`

You can configure `successfulJobsHistoryLimit` and `failedJobsHistoryLimit`, which automatically clean up old job history.

Please see the examples below or refer to the full [v2 ScaledJob Specification](./reference/scaledjob-spec/).

**Example of v1 ScaledObject for Jobs scaling**

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: { scaled-object-name }
spec:
  scaleType: job
  jobTargetRef:
    parallelism: 1
    completions: 1
    activeDeadlineSeconds: 600
    backoffLimit: 6
    template:
      # {job template}
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
    # {list of triggers to create jobs}
```

**Example of v2 ScaledJob**

```yaml
apiVersion: keda.sh/v1alpha1 # <--- Property value was changed
kind: ScaledJob # <--- Property value was changed
metadata:
  name: { scaled-job-name }
spec: # <--- spec.scaleType is not needed
  jobTargetRef:
    parallelism: 1
    completions: 1
    activeDeadlineSeconds: 600
    backoffLimit: 6
    template:
      # {job template}
  pollingInterval: 30 # <--- spec.cooldownPeriod and spec.minReplicaCount are not needed
  successfulJobsHistoryLimit: 5 # <--- Property is added
  failedJobsHistoryLimit: 5 # <--- Property is added
  maxReplicaCount: 100
  triggers:
    # {list of triggers to create jobs}
```

### Improved flexibility & usability of trigger metadata

We've introduced more options to configure trigger metadata to give users more flexibility.

> 💡 **NOTE:** Changes only apply to trigger metadata and don't impact usage of `TriggerAuthentication`

Here's an overview:

| Scaler | 1.x | 2.0 |
| -------------------- | ---------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
| `azure-blob` | `connection` (**Default**: `AzureWebJobsStorage`) | `connectionFromEnv` |
| `azure-monitor` | `activeDirectoryClientId` `activeDirectoryClientPassword` | `activeDirectoryClientId` `activeDirectoryClientIdFromEnv` `activeDirectoryClientPasswordFromEnv` |
| `azure-queue` | `connection` (**Default**: AzureWebJobsStorage) | `connectionFromEnv` |
| `azure-servicebus` | `connection` | `connectionFromEnv` |
| `azure-eventhub` | `storageConnection` (**Default**: `AzureWebJobsStorage`) `connection` (**Default**: `EventHub`) | `storageConnectionFromEnv` `connectionFromEnv` |
| `aws-cloudwatch` | `awsAccessKeyID` (**Default**: `AWS_ACCESS_KEY_ID`) `awsSecretAccessKey` (**Default**: `AWS_SECRET_ACCESS_KEY`) | `awsAccessKeyID` `awsAccessKeyIDFromEnv` `awsSecretAccessKeyFromEnv` |
| `aws-kinesis-stream` | `awsAccessKeyID` (**Default**: `AWS_ACCESS_KEY_ID`) `awsSecretAccessKey` (**Default**: `AWS_SECRET_ACCESS_KEY`) | `awsAccessKeyID` `awsAccessKeyIDFromEnv` `awsSecretAccessKeyFromEnv` |
| `aws-sqs-queue` | `awsAccessKeyID` (**Default**: `AWS_ACCESS_KEY_ID`) `awsSecretAccessKey` (**Default**: `AWS_SECRET_ACCESS_KEY`) | `awsAccessKeyID` `awsAccessKeyIDFromEnv` `awsSecretAccessKeyFromEnv` |
| `kafka` | _(none)_ | _(none)_ |
| `rabbitmq` | `apiHost` `host` | ~~`apiHost`~~ `host` `hostFromEnv` |
| `prometheus` | _(none)_ | _(none)_ |
| `cron` | _(none)_ | _(none)_ |
| `redis` | `address` `host` `port` `password` | `address` `addressFromEnv` `host` `hostFromEnv` ~~`port`~~ `passwordFromEnv` |
| `redis-streams` | `address` `host` `port` `password` | `address` `addressFromEnv` `host` `hostFromEnv` ~~`port`~~ `passwordFromEnv` |
| `gcp-pubsub` | `credentials` | `credentialsFromEnv` |
| `external` | _(any matching value)_ | _(any matching value with `FromEnv` suffix)_ |
| `liiklus` | _(none)_ | _(none)_ |
| `stan` | _(none)_ | _(none)_ |
| `huawei-cloudeye` | _(none)_ | _(none)_ |
| `postgresql` | `connection` `password` | `connectionFromEnv` `passwordFromEnv` |
| `mysql` | `connectionString` `password` | `connectionStringFromEnv` `passwordFromEnv` |
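
As a sketch of the new `FromEnv` style (using the `azure-queue` scaler; names are illustrative), the v2 parameter references an environment variable on the scale target's container instead of embedding the value in the trigger:

```yaml
triggers:
  - type: azure-queue
    metadata:
      queueName: orders                      # hypothetical queue name
      queueLength: "5"
      connectionFromEnv: AzureWebJobsStorage # env var on the target container holding the connection string
```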

### Scalers

**Azure Service Bus**

- `queueLength` was renamed to `messageCount`

**Kafka**

- The `authMode` property was replaced with the `sasl` and `tls` properties. Please refer to the [documentation](../scalers/apache-kafka/#authentication-parameters) for details on Kafka authentication parameters.

**RabbitMQ**

In KEDA 2.0 the RabbitMQ scaler has only the `host` parameter, and the protocol for communication can be specified via the `protocol` parameter (`http` or `amqp`). The default value is `amqp`. The behavior changes only for scalers that were using the HTTP protocol.

Example of RabbitMQ trigger before 2.0:

```yaml
triggers:
  - type: rabbitmq
    metadata:
      queueLength: "20"
      queueName: testqueue
      includeUnacked: "true"
      apiHost: "https://guest:password@localhost:443/vhostname"
```

The same trigger in 2.0:

```yaml
triggers:
  - type: rabbitmq
    metadata:
      queueLength: "20"
      queueName: testqueue
      protocol: "http"
      host: "https://guest:password@localhost:443/vhostname"
```

### TriggerAuthentication

In order to use authentication via `TriggerAuthentication` with KEDA v2, you only need to make one change:

- Change the value of the `apiVersion` property from `keda.k8s.io/v1alpha1` to `keda.sh/v1alpha1`

For more details, please refer to the full
[v2 TriggerAuthentication Specification](../concepts/authentication/#re-use-credentials-and-delegate-auth-with-triggerauthentication).
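
A minimal sketch of that change (everything except `apiVersion` stays as it was in v1):

```yaml
apiVersion: keda.sh/v1alpha1 # <--- was keda.k8s.io/v1alpha1
kind: TriggerAuthentication
metadata:
  name: { trigger-auth-name }
spec:
  # {authentication provider configuration, unchanged from v1}
```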