
Bump CoreDNS version to 1.6.5 and update manifest #85108

Merged

Conversation

rajansandeep
Contributor

@rajansandeep rajansandeep commented Nov 11, 2019

What type of PR is this?

/kind cleanup

What this PR does / why we need it:

  • Bumps the CoreDNS version to 1.6.5
  • Updates the corefile-migration library to v1.0.4 which includes migration support up to CoreDNS v1.6.5

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:
This PR depends on the CoreDNS 1.6.5 image being pushed to gcr.io, for which #84993 has been opened.

/hold

Does this PR introduce a user-facing change?:

Kubeadm now includes CoreDNS version 1.6.5
 - The `kubernetes` plugin adds metrics to measure Kubernetes control plane latency.
 - The `health` plugin now includes the `lameduck` option by default, which waits for a configured duration before shutting down (see the sketch below).
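
For context, a minimal sketch of the relevant portion of the kubeadm-managed Corefile after this change; the surrounding plugins are assumed from the default kubeadm configuration (the new `kubernetes` plugin metrics are exposed via `prometheus`), and the `lameduck 5s` value reflects what this review settles on below:

    .:53 {
        errors
        health {
            lameduck 5s
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
    }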

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Nov 11, 2019
@k8s-ci-robot k8s-ci-robot requested review from dchen1107, detiber and a team November 11, 2019 21:32
@k8s-ci-robot k8s-ci-robot added area/dependency Issues or PRs related to dependency changes area/kubeadm sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Nov 11, 2019
@rajansandeep
Contributor Author

/priority important-soon
/assign @neolit123 @BenTheElder
/cc @chrisohaver

@k8s-ci-robot k8s-ci-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Nov 11, 2019
@neolit123
Member

@rajansandeep this is very close to the release, but i'm going to try to review it later.

pull-kubernetes-e2e-kind — Job failed.
ERROR: error building node image: command "docker save -o /tmp/kind-node-image755345787/bits/images/6.tar k8s.gcr.io/coredns:1.6.5" failed with error: exit status 1

we should push the image first.
the kind job is now PR blocking.

@BenTheElder
Member

image is pushing #84993 (comment)

@@ -223,7 +223,10 @@ metadata:
labels:
k8s-app: kube-dns
spec:
replicas: 2
# replicas: not specified here:
# 1. In order to make Addon Manager do not reconcile this replicas parameter.
Member

does kubeadm have an "addon manager" ? @neolit123

Member

no, it does not have one, per se.
it has "phases" that manage addons.

Member

the comment can be:
# Default replica count is 1

Member

right, that's what I thought re: phases.
I think the rest of the details in this comment make more sense for kube-up and less sense for kubeadm (presuming this is referring to the "Addon manager" in cluster/)

@BenTheElder
Member

/test pull-kubernetes-e2e-kind

@@ -313,7 +325,9 @@ data:
Corefile: |
.:53 {
errors
health
health {
lameduck 12s
Member

is 12s the timeout for the health check in this case?
the timeout for CP components is 15s, so we may want to match that.

Contributor

Actually - as I was describing the reasoning behind this, I realized that a timeout of 5 seconds should be all that is necessary. When picking 12s, I was conflating the issue with the readiness/health check periods, which I don't think actually come into play. The function of lameduck is to finish processing in-flight queries before shutting down. A lameduck duration longer than 5 seconds would typically be pointless, since most clients have a default timeout of 5 seconds (and thus would have stopped listening for a response by then).

Member

yep, 5 seems good if that is sufficient.

Member

@rajansandeep do you agree with the change to 5 seconds?

Contributor Author

Yes, I agree. I'll push a commit to reflect those changes.

@BenTheElder
Member

BenTheElder commented Nov 11, 2019

/retest
kind passed [now that the CoreDNS image is live]

@rajansandeep
Contributor Author

/hold cancel
Since the image seems to have been pushed to gcr

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 11, 2019
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
Member

@rajansandeep could you please explain the motivation?

my understanding is the following:

  • we reduce the replica count to 1.
  • coredns will deploy on the primary CP node (where kubeadm init is called).
  • the anti-affinity rule makes sure that the Pod will not schedule on a Node that already has it.

if i'm not mistaken, this will not improve much over what we have right now.
a problem we have currently is that both replicas land on the same primary CP Node.

ideally what we want is a coredns instance to be deployed on all CP Nodes.
one way of doing that is with static-pods, but given we treat coredns as an addon we should use a DaemonSet with a NodeSelector that matches the kubeadm "master" node-role.

i'm going to experiment with that in a bit.
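
A rough sketch of that DaemonSet idea, assuming the labels and image already used by the kubeadm CoreDNS addon and the node-role label/taint that kubeadm applied to control-plane nodes at the time; this is only an illustration of the proposal, not something in this PR:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
    spec:
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          # schedule only on control-plane ("master") nodes
          nodeSelector:
            node-role.kubernetes.io/master: ""
          # tolerate the control-plane taint so the pods can land there
          tolerations:
          - key: node-role.kubernetes.io/master
            effect: NoSchedule
          containers:
          - name: coredns
            image: k8s.gcr.io/coredns:1.6.5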

Member
@neolit123 neolit123 Nov 11, 2019

sadly, by changing the coredns object type we are going to break a lot of users that have automation around kubectl patch deployment coredns..., so such a change is not a great idea without a grace period.

Contributor Author

@neolit123
With pod anti-affinity enabled and 2 coredns replicas:

  • If a user has only a master node installed via kubeadm init, there will be one coredns pod in running state and one in pending state.
  • The other coredns pod will remain in pending state and waits for scheduling until another worker node is created via kubeadm join.

Contributor Author

a problem we have currently, is that both replicas land on the same primary CP Node.

Pod anti-affinity solves this problem.

Member
@neolit123 neolit123 Nov 12, 2019

With pod anti-affinity enabled and 2 coredns replicas:

  • If a user has only a master node installed via kubeadm init, there will be one coredns pod in running state and one in pending state.
  • The other coredns pod will remain in pending state and waits for scheduling until another worker node is created via kubeadm join.

this is true for 2 replicas and anti-affinity; we don't want Pending pods because that will break e2e tests using our test suite, where all pods are expected to be Ready.

a problem we have currently, is that both replicas land on the same primary CP Node.

Pod anti-affinity solves this problem.

yes. but we reduce the replicas to 1, so if the primary CP node becomes NotReady (e.g. shutdown) the coredns service will still go down and the pod will not reschedule on a Ready node. (same happens for 2 replicas, without anti-affinity).

i guess i'm trying to see how 1 replica with anti-affinity is an improvement over 2 replicas without it.

like i've mentioned earlier, ideally we want a coredns DS for all CP nodes.

Member
@neolit123 neolit123 Nov 12, 2019

if continuing to use a Deployment we might want to add these tolerations: #55713 (comment)

^ this issue BTW is one where users are being quite confused by some scheduling aspects of k8s.

Member

@chrisohaver PTAL too.

so basically i'm proposing that we keep the replica count at 2.
and introduce the following:

spec:
...
  tolerations:
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 15
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 15

this will improve the current deployment by rescheduling the coredns Pods if a Node becomes NotReady after 15 seconds.

i don't think the anti-affinity rule is needed here:

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: k8s-app
                operator: In
                values: ["kube-dns"]
            topologyKey: kubernetes.io/hostname

because with the current setup the deployment already does that.

Contributor

Let's start with the more trivial questions here. Is this required for the CoreDNS version bump?
If so, why is this a patch release and not a minor version bump? If it's not required, can we split it and move it into a separate PR?

Contributor Author

Yes, I've removed the pod anti-affinity changes from this PR and will move them to another PR.

@aojea
Member

aojea commented Nov 12, 2019

/test pull-kubernetes-e2e-kind-ipv6

Contributor
@rosti rosti left a comment

Thanks @rajansandeep !


@k8s-ci-robot k8s-ci-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Nov 12, 2019
@chrisohaver
Contributor

Let's start with the more trivial questions here. Is this required for the CoreDNS version bump?

No it's not.

If it's not required, can we split it and move it into a separate PR?

Yes - makes sense.

@neolit123
Member

/approve
i think that there are a lot of flakes in CI right now.

@neolit123
Member

/lgtm
thanks @rajansandeep

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 12, 2019
@aojea
Member

aojea commented Nov 12, 2019

/test pull-kubernetes-e2e-kind-ipv6

@rajansandeep
Contributor Author

@neolit123 Does this need the milestone tag?

@neolit123
Member

@neolit123 Does this need the milestone tag?

i don't think it does yet.

/retest

Contributor
@soltysh soltysh left a comment

/approve
dep updates

@neolit123
Member

neolit123 commented Nov 13, 2019

/assign @BenTheElder
PTAL for approval.

@rajansandeep
Contributor Author

/assign @liggitt
For root approval of the vendor dependency changes.

@@ -632,7 +632,9 @@ func TestCreateCoreDNSConfigMap(t *testing.T) {
}`,
expectedCorefileData: `.:53 {
errors
health
health {
lameduck 5s
Member

is this a required change? will users with a custom dns config be broken if they don't make this change as well?

Contributor

It's not required. It's just an improvement that reduces query failures during rolling upgrades. The setting allows CoreDNS to complete in-flight DNS queries before exiting.

Without the setting, CoreDNS will not be broken.

@liggitt
Member

liggitt commented Nov 13, 2019

/approve
vendor update looks good

/hold on the config compatibility question
kubeadm maintainers can unhold at will

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 13, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: liggitt, neolit123, rajansandeep, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 13, 2019
@BenTheElder
Member

looks like I got scooped @neolit123 ... :prow_fire: 😞

@neolit123
Member

looks like I got scooped @neolit123 ... :prow_fire:

np

@neolit123
Member

canceling the hold as per @chrisohaver 's explanation here:
#85108 (comment)

thanks
/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 13, 2019
@k8s-ci-robot k8s-ci-robot merged commit c33af5b into kubernetes:master Nov 13, 2019
@k8s-ci-robot k8s-ci-robot added this to the v1.17 milestone Nov 13, 2019