📑 update termination handler 🖋 #5627

13 changes: 13 additions & 0 deletions docs/proposals/20200330-spot-instances.md
@@ -23,6 +23,8 @@ superseded-by:

<!--ts-->
* [Add support for Spot Instances](#add-support-for-spot-instances)
* [Termination handler](#termination-handler)
* [Termination handler security](#termination-handler-security)
* [Table of contents](#table-of-contents)
* [Glossary](#glossary)
* [Summary](#summary)
@@ -496,6 +498,17 @@ Azure Spot VMs support two types of eviction policy:
- Delete: This deletes the VM and all associated disks and networking when the node is preempted.
This is *only* supported on Scale Sets backed by Spot VMs.


#### Running the termination handler
The Termination Pod will be part of a DaemonSet that can be deployed using [ClusterResourceSet](https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/proposals/20200220-cluster-resource-set.md). The DaemonSet will select Nodes that are labelled as spot instances, ensuring the Termination Pod runs only on instances that require a termination handler.

The spot label will be added to the Node by the machine controller as described [here](#interruptible-label), provided the infrastructure provider supports spot instances and the instance is a spot instance. A sketch of such a DaemonSet follows.
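
As a minimal sketch (not part of this proposal's API), the DaemonSet could look like the following; the label key, image, and tolerations are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: spot-termination-handler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: spot-termination-handler
  template:
    metadata:
      labels:
        app: spot-termination-handler
    spec:
      # Run only on Nodes carrying the spot label added by the machine
      # controller (label key assumed for illustration).
      nodeSelector:
        cluster.x-k8s.io/interruptible: ""
      # Tolerate any taints applied to spot Nodes so the handler is not
      # evicted from the Nodes it is meant to watch.
      tolerations:
      - operator: Exists
      containers:
      - name: termination-handler
        image: registry.example.com/termination-handler:v0.1.0 # hypothetical image
```
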
#### Termination handler security
The metadata services hosted by the cloud providers are only accessible from the hosts themselves, so the pod will need to run within the host network.
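
In the sketch above, this would mean adding the following to the pod template spec; `dnsPolicy` is adjusted so in-cluster DNS keeps working on the host network:

```yaml
    spec:
      # The metadata (IMDS) endpoint, e.g. 169.254.169.254, is link-local
      # and only reachable from the host's network namespace.
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
```
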

To restrict the possible effects of the termination handler, it should re-use the Kubelet credentials, which pass through the [NodeRestriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller. This limits the termination handler to modifying only the Node on which it is running; for example, it would not be able to set conditions on a different Node.
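
A sketch of reusing the Kubelet credentials, assuming the kubeconfig lives at `/var/lib/kubelet/kubeconfig` (the exact path varies by OS image): the handler mounts it read-only from the host, and any API request it makes with those credentials is then subject to NodeRestriction.

```yaml
      containers:
      - name: termination-handler
        image: registry.example.com/termination-handler:v0.1.0 # hypothetical image
        env:
        # The Node the handler may update: its own.
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: kubeconfig
          mountPath: /var/lib/kubelet/kubeconfig
          readOnly: true
      volumes:
      # Kubelet's own credentials, mounted from the host (path assumed).
      - name: kubeconfig
        hostPath:
          path: /var/lib/kubelet/kubeconfig
          type: File
```

With NodeRestriction enabled, a compromised handler is limited to the Kubelet's own scope: its Node object and the Pods bound to that Node.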


## Implementation History

- [x] 12/11/2019: Proposed idea in an [issue](https://github.com/kubernetes-sigs/cluster-api/issues/1876)