
kops terraform output for aws_launch_template includes tags with empty/null values which causes trouble #12071

Closed
gwohletz opened this issue Jul 30, 2021 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@gwohletz

gwohletz commented Jul 30, 2021

/kind bug

1. What kops version are you running? The command kops version will display
this information.

kops 21.0
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

kubernetes 1.17.17

3. What cloud provider are you using?
aws

4. What commands did you run? What is the simplest way to reproduce this issue?
kops update cluster XXXX --create-kube-config=false --target=terraform --out=XXXX

5. What happened after the commands executed?
The resulting Terraform contains tags in the aws_launch_template (and other resources) whose values are empty strings. This causes Terraform (version 0.12.31 with AWS provider 3.51.0) to endlessly think it needs to update the launch template's tags, no matter how many times you plan/apply.

6. What did you expect to happen?
Apply the changes once and then get an empty Terraform plan on subsequent runs.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

kind: Cluster
metadata:
  creationTimestamp: null
  name: XXX
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["autoscaling:DescribeTags"],
          "Resource": ["*"]
        }
      ]
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    environment: XXX-K8S
    kubernetes: ""
  cloudProvider: aws
  clusterDNSDomain: cluster.local
  configBase: s3://XXX
  dnsZone: XXX
  docker:
    logOpt:
    - max-file=3
    - max-size=1g
  etcdClusters:
  - enableEtcdTLS: true
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-XXX-a
      kmsKeyId: XXX
      name: a
    - encryptedVolume: true
      instanceGroup: master-XXX-b
      kmsKeyId: XXX
      name: b
    - encryptedVolume: true
      instanceGroup: master-XXX-c
      kmsKeyId: XXX
      name: c
    name: main
    version: 3.4.3
  - enableEtcdTLS: true
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-XXX-a
      kmsKeyId: XXX
      name: a
    - encryptedVolume: true
      instanceGroup: master-XXX-b
      kmsKeyId: XXX
      name: b
    - encryptedVolume: true
      instanceGroup: master-XXX-c
      kmsKeyId: XXX
      name: c
    name: events
    version: 3.4.3
  fileAssets:
  - content: |
      apiVersion: audit.k8s.io/v1beta1
      kind: Policy
      rules:
        # The following requests were manually identified as high-volume and low-risk,
        # so drop them.
        - level: None
          users: ["system:kube-proxy"]
          verbs: ["watch"]
          resources:
            - group: "" # core
              resources: ["endpoints", "services", "services/status"]
        - level: None
          users: ["system:unsecured"]
          namespaces: ["kube-system"]
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["configmaps"]
        - level: None
          users: ["kubelet"] # legacy kubelet identity
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["nodes", "nodes/status"]
        - level: None
          userGroups: ["system:nodes"]
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["nodes", "nodes/status"]
        - level: None
          users:
            - system:kube-controller-manager
            - system:kube-scheduler
            - system:serviceaccount:kube-system:endpoint-controller
          verbs: ["get", "update"]
          namespaces: ["kube-system"]
          resources:
            - group: "" # core
              resources: ["endpoints"]
        - level: None
          users: ["system:apiserver"]
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["namespaces", "namespaces/status", "namespaces/finalize"]
        - level: None
          users: ["cluster-autoscaler"]
          verbs: ["get", "update"]
          namespaces: ["kube-system"]
          resources:
            - group: "" # core
              resources: ["configmaps", "endpoints"]
        # Don't log HPA fetching metrics.
        - level: None
          users:
            - system:kube-controller-manager
          verbs: ["get", "list"]
          resources:
            - group: "metrics.k8s.io"
        # Don't log these read-only URLs.
        - level: None
          nonResourceURLs:
            - /healthz*
            - /version
            - /swagger*
        # Don't log events requests.
        - level: None
          resources:
            - group: "" # core
              resources: ["events"]
        # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
        - level: Request
          users: ["kubelet", "system:node-problem-detector", "system:serviceaccount:kube-system:node-problem-detector"]
          verbs: ["update","patch"]
          resources:
            - group: "" # core
              resources: ["nodes/status", "pods/status"]
          omitStages:
            - "RequestReceived"
        - level: Request
          userGroups: ["system:nodes"]
          verbs: ["update","patch"]
          resources:
            - group: "" # core
              resources: ["nodes/status", "pods/status"]
          omitStages:
            - "RequestReceived"
        # deletecollection calls can be large, don't log responses for expected namespace deletions
        - level: Request
          users: ["system:serviceaccount:kube-system:namespace-controller"]
          verbs: ["deletecollection"]
          omitStages:
            - "RequestReceived"
        # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
        # so only log at the Metadata level.
        - level: Metadata
          resources:
            - group: "" # core
              resources: ["secrets", "configmaps"]
            - group: authentication.k8s.io
              resources: ["tokenreviews"]
          omitStages:
            - "RequestReceived"
        # Get responses can be large; skip them.
        - level: Request
          verbs: ["get", "list", "watch"]
          resources:
            - group: "" # core
            - group: "admissionregistration.k8s.io"
            - group: "apiextensions.k8s.io"
            - group: "apiregistration.k8s.io"
            - group: "apps"
            - group: "authentication.k8s.io"
            - group: "authorization.k8s.io"
            - group: "autoscaling"
            - group: "batch"
            - group: "certificates.k8s.io"
            - group: "extensions"
            - group: "metrics.k8s.io"
            - group: "networking.k8s.io"
            - group: "policy"
            - group: "rbac.authorization.k8s.io"
            - group: "scheduling.k8s.io"
            - group: "settings.k8s.io"
            - group: "storage.k8s.io"
          omitStages:
            - "RequestReceived"
        # Default level for known APIs
        - level: RequestResponse
          resources:
            - group: "" # core
            - group: "admissionregistration.k8s.io"
            - group: "apiextensions.k8s.io"
            - group: "apiregistration.k8s.io"
            - group: "apps"
            - group: "authentication.k8s.io"
            - group: "authorization.k8s.io"
            - group: "autoscaling"
            - group: "batch"
            - group: "certificates.k8s.io"
            - group: "extensions"
            - group: "metrics.k8s.io"
            - group: "networking.k8s.io"
            - group: "policy"
            - group: "rbac.authorization.k8s.io"
            - group: "scheduling.k8s.io"
            - group: "settings.k8s.io"
            - group: "storage.k8s.io"
          omitStages:
            - "RequestReceived"
        # Default level for all other requests.
        - level: Metadata
          omitStages:
            - "RequestReceived"
    name: kubernetes-audit
    path: /srv/kubernetes/audit.yaml
    roles:
    - Master
  hooks:
  - manifest: |
      [Unit]
      Description=Allow iptables forwarding

      [Service]
      Type=oneshot
      ExecStart=/sbin/iptables -P FORWARD ACCEPT

      [Install]
      WantedBy=multi-user.target
    name: iptables-forward.service
  iam:
    legacy: false
  kubeAPIServer:
    admissionControl:
    - NamespaceLifecycle
    - LimitRanger
    - ServiceAccount
    - PersistentVolumeLabel
    - DefaultStorageClass
    - DefaultTolerationSeconds
    - NodeRestriction
    - Priority
    - ResourceQuota
    - PodSecurityPolicy
    anonymousAuth: true
    auditLogPath: '-'
    auditPolicyFile: /srv/kubernetes/audit.yaml
    oidcClientID: XXX
    oidcIssuerURL: XXX
    oidcUsernameClaim: XXX
  kubeDNS:
    provider: CoreDNS
  kubeScheduler:
    usePolicyConfigMap: true
  kubelet:
    anonymousAuth: false
    enforceNodeAllocatable: pods
    kubeReserved:
      cpu: 125m
      memory: 512Mi
    systemReserved:
      cpu: 125m
      memory: 512Mi
  kubernetesApiAccess:
  - 10.0.0.0/8
  kubernetesVersion: 1.17.17
  networkCIDR: XXX
  networkID: XXX
  networking:
    canal: {}
  nonMasqueradeCIDR: 172.16.0.0/12
  serviceClusterIPRange: 172.31.0.0/16
  sshAccess:
  - XXX
  - XXX
  sshKeyName: XXX
  subnets:
  - cidr: XXX
    name: XXX
    type: Private
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Private
    zone: XXX
  - cidr: XXX
    name: XXX
    type: XXX
    zone: XXX
  - cidr: XXX
    name: XXX
    type: XXX
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Utility
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Utility
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Utility
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Utility
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Public
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Public
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Public
    zone: XXX
  - cidr: XXX
    name: XXX
    type: Public
    zone: XXX
  topology:
    dns:
      type: Private
    masters: private
    nodes: private

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-07-29T18:48:51Z"
  labels:
    kops.k8s.io/cluster: XXX
  name: XXX
spec:
  additionalSecurityGroups:
  - XXX
  image: XXX
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  role: Master
  rootVolumeEncryption: true
  rootVolumeEncryptionKey: XXX
  rootVolumeType: gp2
  subnets:
  - XXX

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-07-29T18:48:52Z"
  labels:
    kops.k8s.io/cluster: XXX
  name: XXX
spec:
  additionalSecurityGroups:
  - XXX
  image: XXX
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  role: Master
  rootVolumeEncryption: true
  rootVolumeEncryptionKey: XXX
  rootVolumeType: gp2
  subnets:
  - XXX

---

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2021-07-29T18:48:52Z"
  labels:
    kops.k8s.io/cluster: XXX
  name: XXX
spec:
  additionalSecurityGroups:
  - XXX
  image: XXX
  machineType: m5a.large
  maxSize: 1
  minSize: 1
  role: Master
  rootVolumeEncryption: true
  rootVolumeEncryptionKey: XXX
  rootVolumeType: gp2
  subnets:
  - XXX
[additional instancegroup's elided]

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

Plans continue to show the following for each launch template; note that only the tags with empty values are flagged as needing an update.

  ~ resource "aws_launch_template" "XXX" {
        arn                     = "XXX"
        default_version         = 1
        disable_api_termination = false
        id                      = "XXX"
        image_id                = "XXX"
        instance_type           = "m5a.large"
        key_name                = "XXX"
      ~ latest_version          = 5 -> (known after apply)
        name                    = "XXX"
        security_group_names    = []
      ~ tags                    = {
            "KubernetesCluster"                                                                                     = "kube-us-west-2-beta.addepar.com"
            "Name"                                                                                                  = "master-us-west-2-beta-b.masters.kube-us-west-2-beta.addepar.com"
            "environment"                                                                                           = "BETA-K8S"
          + "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki"                         = ""
            "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                                      = "master"
          + "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"                   = ""
            "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"                          = ""
          + "k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
            "k8s.io/role/master"                                                                                    = "1"
            "kops.k8s.io/instancegroup"                                                                             = "master-us-west-2-beta-b"
            "kubernetes"                                                                                            = ""
            "kubernetes.io/cluster/kube-us-west-2-beta.addepar.com"                                                 = "owned"
        }
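The never-converging plan can be sketched with a toy reconciliation loop (an illustration of the observed behavior, not actual provider code; the assumption is that the AWS API does not round-trip empty-string tag values for these resource types):

```python
# Toy model: Terraform compares the desired tags from kubernetes.tf against
# what it reads back from AWS. If empty-valued tags are dropped on read,
# the diff never goes away.

desired = {
    "k8s.io/role/master": "1",
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master": "",
}

def observed_after_apply(tags):
    # Assumption for illustration: empty-valued tags are not returned by the API.
    return {k: v for k, v in tags.items() if v != ""}

def plan_diff(desired, actual):
    # Tags whose observed value differs from (or is missing vs.) the desired value.
    return {k: v for k, v in desired.items() if actual.get(k) != v}

actual = observed_after_apply(desired)
# The empty-valued tag shows up as an addition on every plan, forever.
print(plan_diff(desired, actual))
```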

9. Anything else do we need to know?

If I hand-edit kubernetes.tf and set these tags to a value of "1" instead of "", things work correctly (after applying the plan, subsequent terraform plan runs show no changes required). Some resources, such as EBS volumes, seem to support tags with empty values; others do not. Tags on aws_iam_role, aws_iam_instance_profile, aws_autoscaling_group, and aws_launch_template had to be changed from "" to "1" to make Terraform stop trying to change them on every run.
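The hand edit described above can be automated with a small sketch like the following (an assumption-laden illustration, not part of kops: it treats every `"key" = ""` line in the generated HCL as a tag value, so review the resulting diff before applying it to a real kubernetes.tf):

```python
import re

def fix_empty_tag_values(hcl: str) -> str:
    # Rewrites lines of the form:   "some/tag-key" = ""
    # to use "1" as the value, mirroring the manual workaround.
    return re.sub(r'(^\s*"[^"]+"\s*=\s*)""(\s*)$', r'\1"1"\2', hcl, flags=re.M)

sample = '''  tags = {
    "k8s.io/role/master" = "1"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master" = ""
  }'''
print(fix_empty_tag_values(sample))
```

To apply it to a generated file, read kubernetes.tf, pass the contents through `fix_empty_tag_values`, and write it back; regenerating with kops will of course reintroduce the empty values.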

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jul 30, 2021
@gwohletz
Author

Correction: the only resources that required setting the tags to non-empty values were:

  • aws_iam_instance_profile
  • aws_iam_role
  • aws_launch_template

The repeated updating of aws_autoscaling_group was simply a side effect of the aws_launch_template needing to be updated.

@gwohletz
Author

Also filed as hashicorp/terraform-provider-aws#20371.

@gwohletz
Author

Further correction: the empty tags that showed up in aws_iam_instance_profile and aws_iam_role were not auto-generated by kops; they resulted from a cloudLabels directive in our cluster YAML, which I have since removed.

The following tags with blank values do appear in (and cause problems in) aws_launch_template resource blocks:

  • k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki
  • k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane
  • k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master
  • k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers
  • k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/node

@gwohletz
Author

gwohletz commented Jul 31, 2021

Example of a problematic aws_launch_template resource block:

resource "aws_launch_template" "master-us-west-2-a-masters-XXX" {
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      delete_on_termination = true
      encrypted             = true
      kms_key_id            = "XXX"
      volume_size           = 64
      volume_type           = "gp2"
    }
  }
  iam_instance_profile {
    name = aws_iam_instance_profile.masters-XXX.id
  }
  image_id      = "ami-XXX"
  instance_type = "c5a.xlarge"
  key_name      = aws_key_pair.XXX.id
  lifecycle {
    create_before_destroy = true
  }
  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "optional"
  }
  monitoring {
    enabled = false
  }
  name = "master-us-west-2-a.masters.XXX"
  network_interfaces {
    associate_public_ip_address = false
    delete_on_termination       = true
    security_groups             = [aws_security_group.masters-XXX.id, "sg-XXX"]
  }
  tag_specifications {
    resource_type = "instance"
    tags = {
      "KubernetesCluster"                                                                                     = "XXX"
      "Name"                                                                                                  = "master-us-west-2-a.masters.XXX"
      "environment"                                                                                           = "PROD"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki"                         = ""
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                                      = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"                   = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"                          = ""
      "k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
      "k8s.io/role/master"                                                                                    = "1"
      "kops.k8s.io/instancegroup"                                                                             = "master-us-west-2-a"
      "kubernetes.io/cluster/kube-us-west-2.XXX"                                                      = "owned"
    }
  }
  tag_specifications {
    resource_type = "volume"
    tags = {
      "KubernetesCluster"                                                                                     = "XXX"
      "Name"                                                                                                  = "master-us-west-2-a.masters.XXX"
      "environment"                                                                                           = "PROD"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki"                         = ""
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                                      = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"                   = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"                          = ""
      "k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
      "k8s.io/role/master"                                                                                    = "1"
      "kops.k8s.io/instancegroup"                                                                             = "master-us-west-2-a"
      "kubernetes.io/cluster/kube-us-west-2.XXX"                                                      = "owned"
    }
  }
  tags = {
    "KubernetesCluster"                                                                                     = "XXX"
    "Name"                                                                                                  = "master-us-west-2-a.masters.XXX"
    "environment"                                                                                           = "PROD"
    "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki"                         = ""
    "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                                      = "master"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"                   = ""
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"                          = ""
    "k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
    "k8s.io/role/master"                                                                                    = "1"
    "kops.k8s.io/instancegroup"                                                                             = "master-us-west-2-a"
    "kubernetes.io/cluster/XXX"                                                      = "owned"
  }
  user_data = filebase64("${path.module}/data/aws_launch_template_master-us-west-2-a.masters.XXX_user_data")
}

Example of the same resource block, manually edited to eliminate the blank tag values that cause trouble:

resource "aws_launch_template" "master-us-west-2-a-masters-XXX" {
  block_device_mappings {
    device_name = "/dev/sda1"
    ebs {
      delete_on_termination = true
      encrypted             = true
      kms_key_id            = "XXX"
      volume_size           = 64
      volume_type           = "gp2"
    }
  }
  iam_instance_profile {
    name = aws_iam_instance_profile.masters-XXX.id
  }
  image_id      = "ami-XXX"
  instance_type = "c5a.xlarge"
  key_name      = aws_key_pair.XXX.id
  lifecycle {
    create_before_destroy = true
  }
  metadata_options {
    http_endpoint               = "enabled"
    http_put_response_hop_limit = 1
    http_tokens                 = "optional"
  }
  monitoring {
    enabled = false
  }
  name = "master-us-west-2-a.masters.XXX"
  network_interfaces {
    associate_public_ip_address = false
    delete_on_termination       = true
    security_groups             = [aws_security_group.masters-XXX.id, "sg-XXX"]
  }
  tag_specifications {
    resource_type = "instance"
    tags = {
      "KubernetesCluster"                                                                                     = "XXX"
      "Name"                                                                                                  = "master-us-west-2-a.masters.XXX"
      "environment"                                                                                           = "PROD"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki"                         = ""
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                                      = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"                   = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"                          = ""
      "k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
      "k8s.io/role/master"                                                                                    = "1"
      "kops.k8s.io/instancegroup"                                                                             = "master-us-west-2-a"
      "kubernetes.io/cluster/kube-us-west-2.XXX"                                                      = "owned"
    }
  }
  tag_specifications {
    resource_type = "volume"
    tags = {
      "KubernetesCluster"                                                                                     = "XXX"
      "Name"                                                                                                  = "master-us-west-2-a.masters.XXX"
      "environment"                                                                                           = "PROD"
      "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki"                         = ""
      "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                                      = "master"
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"                   = ""
      "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"                          = ""
      "k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = ""
      "k8s.io/role/master"                                                                                    = "1"
      "kops.k8s.io/instancegroup"                                                                             = "master-us-west-2-a"
      "kubernetes.io/cluster/kube-us-west-2.XXX"                                                      = "owned"
    }
  }
  tags = {
    "KubernetesCluster"                                                                                     = "XXX"
    "Name"                                                                                                  = "master-us-west-2-a.masters.XXX"
    "environment"                                                                                           = "PROD"
    "k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/kops-controller-pki"                         = "1"
    "k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role"                                      = "master"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/control-plane"                   = "1"
    "k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master"                          = "1"
    "k8s.io/cluster-autoscaler/node-template/label/node.kubernetes.io/exclude-from-external-load-balancers" = "1"
    "k8s.io/role/master"                                                                                    = "1"
    "kops.k8s.io/instancegroup"                                                                             = "master-us-west-2-a"
    "kubernetes.io/cluster/XXX"                                                      = "owned"
  }
  user_data = filebase64("${path.module}/data/aws_launch_template_master-us-west-2-a.masters.XXX_user_data")
}

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 29, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 28, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
