containerd: use systemd cgroup driver by default? #1726
Comments
This seems a simple change. @AkihiroSuda, do you know whether there can be issues due to running in nested containers?
Maybe there is no problem? Currently my Docker is using systemd as the cgroup driver, while kind uses cgroupfs. I will test locally and post the results as soon as possible.
That page talks about having two controllers, but if it intends to recommend systemd, the language there could use some updating; I wouldn't really call that a recommendation. Nowhere does it say "you should do this". I'd want to know that we're not going to cause a regression here. We also want to adopt something like the changes in https://d2iq.com/blog/running-kind-inside-a-kubernetes-cluster-for-continuous-integration (custom kubepods path, etc.) sooner rather than later. cc @jieyu xref: #1614
For cgroup v2, runc explicitly recommends the systemd driver, though it is still opt-in. Podman and Docker have already switched their default on v2 to systemd.
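Since the recommendation above applies specifically to cgroup v2 hosts, it can help to check which hierarchy a node is actually running. A minimal sketch (assuming a standard Linux mount at `/sys/fs/cgroup`; the `cgroup.controllers` file only exists on the v2 unified hierarchy):

```python
from pathlib import Path

def cgroup_version() -> int:
    """Return 2 on a cgroup v2 (unified) host, else 1.

    On cgroup v2 the file /sys/fs/cgroup/cgroup.controllers exists at the
    root of the hierarchy; on v1 it does not.
    """
    return 2 if Path("/sys/fs/cgroup/cgroup.controllers").exists() else 1

print(cgroup_version())
```

An equivalent shell check is `stat -fc %T /sys/fs/cgroup/`, which reports `cgroup2fs` on a v2 host.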
cc @neolit123 |
We should strongly consider this in v0.11.0; see also kubernetes/kubernetes#96594 for some excitement around kind's current (poor) behavior with cgroup v1.
Noting that there are currently some issues upstream with the runc upgrade and systemd.
There is some cri-o CI in Kubernetes related to this now, but the driver continues to have problems: kubernetes/kubernetes#102508 (comment)
This driver is getting more testing upstream but still seems to have major issues kubernetes/kubernetes#104280 😬 |
Now there is a systemd cgroup job for containerd as well.
I was just talking to @stevekuznetsov about this; we are in a weird state right now where we are cgroupfs + systemd. kubeadm 1.22+ switched to systemd by default, and I think this is fairly well tested now. The current setup is also likely problematic on cgroup v2 (though kind is working there, at least in CI environments and for some users). We should update the default. That will fix this even for older kind binaries, and it's cheaper than another runtime check. It will still be possible to override at runtime with config patches.
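The runtime override mentioned above can be expressed through kind's `containerdConfigPatches` in the cluster config. A sketch (the exact plugin path should be verified against the containerd version shipped in the node image):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  # Force runc to use the systemd cgroup driver inside the kind node.
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```

Saved as e.g. `cluster.yaml`, this would be applied with `kind create cluster --config cluster.yaml`.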
P.S. Apologies in advance to anyone who takes this up; that code in particular is a bit messy 😬, but it should not be a difficult change.
/remove-help The existing proposal is not quite enough if we want to avoid breaking changes, because the kubeadm config we emit from existing kind binaries specifies the kubelet cgroup driver (to avoid kubeadm 1.21+ overriding the kubelet's actual default). Considering alternatives. Will PR soon.
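For reference, the kubelet-side setting being discussed is the `cgroupDriver` field of the `KubeletConfiguration` that kubeadm propagates to nodes. A minimal fragment (field names as in the `kubelet.config.k8s.io/v1beta1` API):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Must match the container runtime's cgroup driver, or pods will fail to start.
cgroupDriver: systemd
```

Because kind bakes a value for this into the kubeadm config it emits, simply changing containerd's default would leave the two sides mismatched on older binaries, hence the concern above.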
In #2737, the thinking is to basically do what kubeadm did and just say "1.24.0 onwards uses systemd" (which will require the next kind release), treating that as the breaking point (instead of 1.21, which we already patched over). It's not unusual for a new Kubernetes release to require a new kind version because of some change in Kubernetes; the fact that we deferred this a few releases seems fine.
kubernetes v1.24.0+ and kind v0.13.0+ will do this. |
Kind images for k8s v1.24 require `cgroup-driver: systemd` to be set kubernetes-sigs/kind#1726
What would you like to be added:
Use systemd cgroup driver by default
Why is this needed:
Because it is recommended on systemd hosts: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
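Concretely, the requested default amounts to flipping one setting in containerd's CRI config. A sketch of the relevant `config.toml` fragment (plugin path as in containerd 1.x; verify against the containerd version in use):

```toml
# Tell containerd's CRI plugin to have runc delegate cgroup
# management to systemd instead of writing to cgroupfs directly.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```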