diff --git a/content/en/docs/concepts/configuration/manage-compute-resources-container.md b/content/en/docs/concepts/configuration/manage-compute-resources-container.md
index fe3cd36444f6a..b1ba81939b90a 100644
--- a/content/en/docs/concepts/configuration/manage-compute-resources-container.md
+++ b/content/en/docs/concepts/configuration/manage-compute-resources-container.md
@@ -143,9 +143,7 @@ When using Docker:
   multiplied by 100. The resulting value is the total amount of CPU time that a container can use
   every 100ms. A container cannot use more than its share of CPU time during this interval.
 
-  {{< note >}}
-  **Note**: The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.
-  {{< /note >}}
+  {{< note >}}**Note**: The default quota period is 100ms. The minimum resolution of CPU quota is 1ms.{{< /note >}}
 
 - The `spec.containers[].resources.limits.memory` is converted to an integer, and
   used as the value of the
@@ -208,12 +206,10 @@ $ kubectl describe nodes e2e-test-minion-group-4lw4
 Name:            e2e-test-minion-group-4lw4
 [ ... lines removed for clarity ...]
 Capacity:
- alpha.kubernetes.io/nvidia-gpu: 0
  cpu:                            2
  memory:                         7679792Ki
  pods:                           110
 Allocatable:
- alpha.kubernetes.io/nvidia-gpu: 0
  cpu:                            1800m
  memory:                         7474992Ki
  pods:                           110
@@ -299,10 +295,10 @@ Container in the Pod was terminated and restarted five times.
 You can call `kubectl get pod` with the `-o go-template=...` option to fetch the status
 of previously terminated Containers:
 
-```shell
+```shell{% raw %}
 [13:59:01] $ kubectl get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}' simmemleak-hra99
 Container Name: simmemleak
-LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
+LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]{% endraw %}
 ```
 
 You can see that the Container was terminated because of `reason:OOM Killed`,
@@ -544,6 +540,4 @@ consistency across providers and platforms.
 
 * [ResourceRequirements](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#resourcerequirements-v1-core)
 
-{{% /capture %}}
-
-
+{{% /capture %}}
\ No newline at end of file
diff --git a/content/en/docs/tasks/administer-cluster/extended-resource-node.md b/content/en/docs/tasks/administer-cluster/extended-resource-node.md
index 5c2419f061675..7eb65c32512c0 100644
--- a/content/en/docs/tasks/administer-cluster/extended-resource-node.md
+++ b/content/en/docs/tasks/administer-cluster/extended-resource-node.md
@@ -83,7 +83,6 @@ The output shows that the Node has a capacity of 4 dongles:
 
 ```
 "capacity": {
-  "alpha.kubernetes.io/nvidia-gpu": "0",
   "cpu": "2",
   "memory": "2049008Ki",
   "example.com/dongle": "4",
@@ -99,7 +98,6 @@ Once again, the output shows the dongle resource:
 
 ```yaml
 Capacity:
- alpha.kubernetes.io/nvidia-gpu: 0
  cpu: 2
  memory: 2049008Ki
  example.com/dongle: 4
@@ -205,6 +203,3 @@ kubectl describe node | grep dongle
 
 
 {{% /capture %}}
-
-
-
diff --git a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md
index 5f548e435269b..1059de12f511c 100644
--- a/content/en/docs/tasks/manage-gpus/scheduling-gpus.md
+++ b/content/en/docs/tasks/manage-gpus/scheduling-gpus.md
@@ -143,68 +143,3 @@ spec:
 
 This will ensure that the pod will be scheduled to a node that has the GPU type
 you specified.
-
-## v1.6 and v1.7
-To enable GPU support in 1.6 and 1.7, a special **alpha** feature gate
-`Accelerators` has to be set to true across the system:
-`--feature-gates="Accelerators=true"`. It also requires using the Docker
-Engine as the container runtime.
-
-Further, the Kubernetes nodes have to be pre-installed with NVIDIA drivers.
-Kubelet will not detect NVIDIA GPUs otherwise.
-
-When you start Kubernetes components after all the above conditions are true,
-Kubernetes will expose `alpha.kubernetes.io/nvidia-gpu` as a schedulable
-resource.
-
-You can consume these GPUs from your containers by requesting
-`alpha.kubernetes.io/nvidia-gpu` just like you request `cpu` or `memory`.
-However, there are some limitations in how you specify the resource requirements
-when using GPUs:
-- GPUs are only supposed to be specified in the `limits` section, which means:
-  * You can specify GPU `limits` without specifying `requests` because
-    Kubernetes will use the limit as the request value by default.
-  * You can specify GPU in both `limits` and `requests` but these two values
-    must be equal.
-  * You cannot specify GPU `requests` without specifying `limits`.
-- Containers (and pods) do not share GPUs. There's no overcommitting of GPUs.
-- Each container can request one or more GPUs. It is not possible to request a
-  fraction of a GPU.
-
-When using `alpha.kubernetes.io/nvidia-gpu` as the resource, you also have to
-mount host directories containing NVIDIA libraries (libcuda.so, libnvidia.so
-etc.) to the container.
-
-Here's an example:
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
-  name: cuda-vector-add
-spec:
-  restartPolicy: OnFailure
-  containers:
-    - name: cuda-vector-add
-      # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidia-cuda/Dockerfile
-      image: "k8s.gcr.io/cuda-vector-add:v0.1"
-      resources:
-        limits:
-          alpha.kubernetes.io/nvidia-gpu: 1 # requesting 1 GPU
-      volumeMounts:
-        - name: "nvidia-libraries"
-          mountPath: "/usr/local/nvidia/lib64"
-  volumes:
-    - name: "nvidia-libraries"
-      hostPath:
-        path: "/usr/lib/nvidia-375"
-```
-
-The `Accelerators` feature gate and `alpha.kubernetes.io/nvidia-gpu` resource
-works on 1.8 and 1.9 as well. It will be deprecated in 1.10 and removed in
-1.11.
-
-## Future
-- Support for hardware accelerators in Kubernetes is still in alpha.
-- Better APIs will be introduced to provision and consume accelerators in a scalable manner.
-- Kubernetes will automatically ensure that applications consuming GPUs get the best possible performance.
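The CPU-limit-to-CFS-quota conversion described in the context lines of the first hunk can be checked with a small worked example. This is not part of the patch; the pod name, image, and limit value are illustrative only, and the 100ms quota period is the Docker default mentioned in the note:

```yaml
# Illustrative pod (not part of the patch above); name and image are arbitrary.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo
spec:
  containers:
    - name: app
      image: nginx
      resources:
        limits:
          cpu: 500m   # 500 millicores
          # With the Docker runtime: 500 * 100 = 50000 is passed as --cpu-quota,
          # i.e. the container may use 50ms of CPU time per 100ms --cpu-period.
```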
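The section deleted from `scheduling-gpus.md` covered the pre-device-plugin `alpha.kubernetes.io/nvidia-gpu` resource, which the deleted text notes was deprecated in 1.10 and removed in 1.11. Its device-plugin replacement is requested the same way, but under the vendor resource name and without the hostPath library mounts. A minimal sketch, assuming the NVIDIA device plugin is already running on the node, and reusing the `cuda-vector-add` image from the deleted example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          # Device-plugin resource name; no hostPath mounts for the NVIDIA
          # libraries are needed with this mechanism.
          nvidia.com/gpu: 1 # requesting 1 GPU
```

The limits-only rules from the deleted list still apply to this resource: requests and limits must match if both are set, and fractional GPUs cannot be requested.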