
Kafka Scaler scaleToZeroOnInvalidOffset flag is only working for 'latest' offsetresetpolicy #4910

Closed
jeevanragula opened this issue Aug 25, 2023 · 7 comments · Fixed by #5689
Labels
bug Something isn't working

Comments

@jeevanragula
Contributor

Report

We have configured offsetResetPolicy as "earliest" in our Kafka ScaledObject, and scaleToZeroOnInvalidOffset as "true".

The pods are not scaled to zero when the consumer offset comes back as -1.

As per the code below in the getLagForPartition function, this property is only applied when offsetResetPolicy == latest.

Is that the expected behavior?

    consumerOffset := block.Offset
    if consumerOffset == invalidOffset && s.metadata.offsetResetPolicy == latest {
        retVal := int64(1)
        if s.metadata.scaleToZeroOnInvalidOffset {
            retVal = 0
        }
        msg := fmt.Sprintf(
            "invalid offset found for topic %s in group %s and partition %d, probably no offset is committed yet. Returning with lag of %d",
            topic, s.metadata.group, partitionID, retVal)
        s.logger.V(1).Info(msg)
        return retVal, retVal, nil
    }

If offsetResetPolicy is earliest and we get an invalid offset (-1) for one partition, that partition's lag is still factored into the total lag computed in the for loop below (it resolves to latestOffset, per the second snippet), so the workload never scales down.
Can't we honor scaleToZeroOnInvalidOffset: "true" in this case as well? A rough sketch of what that could look like follows the snippets.

    for partition := range partitionsOffsets {
        lag, lagWithPersistent, err := s.getLagForPartition(topic, partition, consumerOffsets, producerOffsets)
        if err != nil {
            return 0, 0, err
        }
        totalLag += lag
        totalLagWithPersistent += lagWithPersistent
    }

    if consumerOffset == invalidOffset && s.metadata.offsetResetPolicy == earliest {
        return latestOffset, latestOffset, nil
    }
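
For illustration, here is a minimal sketch (my assumption about one possible shape, not the change that was eventually merged) of how the invalid-offset handling in getLagForPartition could honor the flag for both reset policies, reusing the names from the snippets above:

```go
// Sketch only: handle an invalid (uncommitted) offset the same way for both
// reset policies instead of special-casing latest.
if consumerOffset == invalidOffset {
	if s.metadata.scaleToZeroOnInvalidOffset {
		// The user opted in: report zero lag so the workload can scale to zero.
		return 0, 0, nil
	}
	if s.metadata.offsetResetPolicy == latest {
		// Keep a single consumer alive by reporting a lag of 1.
		return 1, 1, nil
	}
	// offsetResetPolicy == earliest: a new consumer would start from the
	// beginning of the partition, so report the whole partition as lag.
	return latestOffset, latestOffset, nil
}
```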

Expected Behavior

I think the behavior either needs to be documented clearly or the logic needs to be changed.

Actual Behavior

The property only takes effect when offsetResetPolicy is latest.

Steps to Reproduce the Problem

  1. Configure the Kafka scaler.

  2. Make sure no messages are produced to Kafka for the retention period of 7 days (the default value in Kafka).

  3. Configure the properties below:

    offsetResetPolicy: earliest
    scaleToZeroOnInvalidOffset: "true"

Logs from KEDA operator

No response

KEDA Version

2.11.2

Kubernetes Version

1.24

Platform

Any

Scaler Details

Kafka

Anything else?

No response

@jeevanragula jeevanragula added the bug Something isn't working label Aug 25, 2023
@somabodonyi

somabodonyi commented Sep 29, 2023

I think I'm also having this issue.

I use Azure Container Apps, which uses KEDA version 2.10.0.

No events were produced to the topic for one week, after which the number of replicas went up to the maximum value (3 in my case); the topic has 6 partitions.

After setting offsetResetPolicy to latest, the replicas are scaled down to 0. This setting works fine for a while (maybe a couple of days), after which the number of replicas stays at zero and it does not scale up.


stale bot commented Dec 1, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Dec 1, 2023

stale bot commented Dec 8, 2023

This issue has been automatically closed due to inactivity.

@stale stale bot closed this as completed Dec 8, 2023
@zroubalik zroubalik reopened this Dec 11, 2023
@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Dec 11, 2023
@DmitrySigalov

Hi,
we have reproduced the same issue.
Any progress on the fix?


stale bot commented Mar 30, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Mar 30, 2024
@zroubalik zroubalik removed the stale All issues that are marked as stale due to inactivity label Apr 10, 2024
@zroubalik
Member

@dttung2905 WDYT? It sounds reasonable to me; are there any problems on the Kafka side?

@dttung2905
Contributor

Hi @jeevanragula,
just to summarise your issue here: you would like to be able to set scaleToZeroOnInvalidOffset: "true" and have the scaler go to 0 replicas regardless of whether the offset reset policy is latest or earliest. Currently, scaleToZeroOnInvalidOffset: "true" only works for the latest offset reset policy, which conflicts with the current documentation 🤔

scaleToZeroOnInvalidOffset - This parameter controls what the scaler does when a partition doesn’t have a valid offset. If ‘false’ (the default), the scaler will keep a single consumer for that partition. Otherwise (’true’), the consumers for that partition will be scaled to zero. See the discussion about this parameter.
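
Read literally, that contract does not depend on offsetResetPolicy at all. As a purely hypothetical illustration (this helper does not exist in KEDA), the documented per-partition behaviour boils down to:

```go
// Hypothetical helper, not part of KEDA: the lag the documentation implies for
// a partition with no valid committed offset, independent of the reset policy.
func documentedLagForInvalidOffset(scaleToZeroOnInvalidOffset bool) int64 {
	if scaleToZeroOnInvalidOffset {
		return 0 // "true": the consumers for that partition scale to zero
	}
	return 1 // "false" (default): keep a single consumer for that partition
}
```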

If @zroubalik and @JorTurFer agree, I can issue a PR to fix this issue this week.
