
1.14 Release Notes: "Known Issues" #74425

Closed
onyiny-ang opened this issue Feb 22, 2019 · 17 comments
Labels: kind/documentation, sig/release
Milestone: v1.14

@onyiny-ang

This issue is a bucket placeholder for collaborating on the "Known Issues" additions for the 1.14 Release Notes. If you know of issues or API changes that are going out in 1.14, please comment here so that we can coordinate incorporating information about these changes in the Release Notes.

/assign @dstrebel @jeefy @onyiny-ang @alenkacz

/sig release
/milestone v1.14

@k8s-ci-robot (Contributor)

@onyiny-ang: You must be a member of the kubernetes/kubernetes-milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your SIG leads and have them propose you as an additional delegate for this responsibility.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the sig/release Categorizes an issue or PR as relevant to SIG Release. label Feb 22, 2019
@k8s-ci-robot (Contributor)

@onyiny-ang: GitHub didn't allow me to assign the following users: alenkacz.

Note that only kubernetes members and repo collaborators can be assigned and that issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide



@spiffxp (Member) commented Feb 24, 2019

/milestone v1.14
We're entering burndown and this looks relevant to the 1.14 release. Please /milestone clear if I am incorrect

@k8s-ci-robot k8s-ci-robot added this to the v1.14 milestone Feb 24, 2019
@spiffxp (Member) commented Mar 18, 2019

/kind documentation

@k8s-ci-robot k8s-ci-robot added the kind/documentation Categorizes issue or PR as related to documentation. label Mar 18, 2019
@fturib commented Mar 18, 2019

@chrisohaver : can you add here a comment about the known issue coredns/coredns#2629 ? Thanks!

@chrisohaver (Contributor) commented Mar 18, 2019

There is a known issue (coredns/coredns#2629) in CoreDNS 1.3.1, wherein if the Kubernetes API shuts down while CoreDNS is connected, CoreDNS will crash. The issue is fixed in CoreDNS 1.4.0 in coredns/coredns#2529.

Let me know if more detail is needed.

@fturib commented Mar 20, 2019

A workaround for this issue is to use a different version of CoreDNS: 1.3.0 or 1.4.0.
Another possible solution is to mount an EmptyDir volume at /tmp.
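The EmptyDir workaround above can be sketched as a JSON patch against the CoreDNS Deployment. This is a sketch, not from the thread: it assumes a kubeadm-style cluster where CoreDNS runs as the `coredns` Deployment in `kube-system`, and the volume name `tmp` is illustrative.

```shell
# Append an emptyDir volume and mount it at /tmp in the CoreDNS container.
# Assumes the pod template already has "volumes" and "volumeMounts" arrays
# (true for the default kubeadm CoreDNS manifest).
kubectl -n kube-system patch deployment coredns --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/volumes/-",
   "value": {"name": "tmp", "emptyDir": {}}},
  {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts/-",
   "value": {"name": "tmp", "mountPath": "/tmp"}}
]'
```

The patch triggers a rolling restart of the CoreDNS pods, after which /tmp is writable pod-local scratch space.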

@BenTheElder (Member)

FYI, working links: the CoreDNS issue is coredns/coredns#2629 and the fix is coredns/coredns#2529.

@clkao (Contributor) commented Mar 21, 2019

kubelet might fail to restart if an existing flexvolume-mounted PVC contains a large number of directories, or is full. #75019

@rochacon commented Mar 26, 2019

The v1.14.0 release notes are missing a note on the removal of the apiserver --enable-swagger-ui flag. API server 1.14 will fail to start if the flag is present.

Ref: 9229399
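Before upgrading, one way to check for the removed flag is to grep the apiserver's static pod manifest. A minimal sketch; the manifest path is the common kubeadm default and is an assumption, not something stated in this thread:

```shell
# Check whether the apiserver is started with the removed flag.
# /etc/kubernetes/manifests/kube-apiserver.yaml is the kubeadm default
# location (assumption); override MANIFEST for other setups.
MANIFEST="${MANIFEST:-/etc/kubernetes/manifests/kube-apiserver.yaml}"
if [ -f "$MANIFEST" ] && grep -q -- '--enable-swagger-ui' "$MANIFEST"; then
  echo "--enable-swagger-ui is set; remove it before upgrading to 1.14"
else
  echo "flag not found (or manifest not present on this node)"
fi
```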

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 24, 2019
@BenTheElder (Member)

@justaugustus is there something we should be doing here?

@onyiny-ang (Author)

@BenTheElder @justaugustus These issues are created to catch known issues prior to the release so that they can be included in the release notes. The only reason I haven't closed this is that I'm not sure whether it's useful to keep the issue open once the release has been cut, since other "known issues" may arise. If not, I'm happy to close this as well as the issue for 1.15.

@spiffxp (Member) commented Jul 25, 2019

/remove-lifecycle rotten
/close
I looked back at other "known issues" issues going back to 1.9, and they never stayed open long enough to hit rotten. Most were open and active for roughly a month, which I suspect lines up with when a patch release or two had been cut. I think we can call this done.

@k8s-ci-robot (Contributor)

@spiffxp: Closing this issue.



@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 25, 2019
@BenTheElder (Member)

Makes sense! Thanks 😅
