Update docs/faq.md
Adding @starbops feedback

Co-authored-by: Zespre Chang <zespre.chang@suse.com>
LucasSaintarbor and starbops committed Jul 20, 2023
1 parent 8be53a4 commit 358c347
Showing 2 changed files with 10 additions and 10 deletions.
10 changes: 5 additions & 5 deletions docs/faq.md
@@ -51,12 +51,12 @@ As of Harvester v1.0.2, we no longer support adding additional partitioned disks

### Why are there some Harvester pods that become ErrImagePull/ImagePullBackOff?

This is likely because your Harvester cluster is an air-gapped setup, and some pre-loaded container images are missing. Kubernetes has a mechanism that does garbage collection against bloated image stores. When the partition which stores container images is over 85% full, `kubelet` will try to prune some least used images to save disk space until the occupancy is lower than 80%. These numbers (85% and 80%) are default High/Low thresholds that come with Kubernetes.
This is likely because your Harvester cluster is an air-gapped setup, and some pre-loaded container images are missing. Kubernetes has a mechanism that does garbage collection against bloated image stores. When the partition which stores container images is over 85% full, `kubelet` tries to prune the images based on the last time they were used, starting with the oldest, until the occupancy is lower than 80%. These numbers (85% and 80%) are default High/Low thresholds that come with Kubernetes.
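
These 85%/80% defaults correspond to kubelet's image garbage collection high/low thresholds (the `--image-gc-high-threshold` and `--image-gc-low-threshold` settings). As a quick check of how close a node is to the high threshold, something like the following can be run on the node; the image-store path and the containerd namespace are assumptions for a typical Harvester (RKE2) node:

```bash
# Usage of the partition backing the container image store; kubelet starts
# pruning least-recently-used images once this passes the 85% high threshold,
# and keeps pruning until usage drops below the 80% low threshold.
df -h /var/lib/rancher

# Ask the CRI runtime how much space the image filesystem is using
crictl imagefsinfo

# Count the images currently held in containerd's k8s.io namespace
ctr -n k8s.io images ls -q | wc -l
```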

To recover from this state, do one of the following depending on the cluster's configuration:
- Pull the missing images from sources outside of the cluster (if it's an air-gapped environment, you might need to set up an HTTP proxy beforehand)
- Manually import the images from the Harvester ISO image
- Find the missing images on one node on the other nodes, and then export the images from the node still with them and import them on the missing one
- Pull the missing images from sources outside of the cluster (if it's an air-gapped environment, you might need to set up an HTTP proxy beforehand).
- Manually import the images from the Harvester ISO image.
- Check the other nodes for the missing images, export them from a node that still has them, and then import them on the node where they are missing (a `ctr` sketch follows this list).
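
As a rough sketch of the last option, assuming `ctr` and containerd are available on the Harvester nodes; the image reference, tarball path, and node name are placeholders:

```bash
# On a node that still has the image: export it to a tarball
ctr -n k8s.io images export /tmp/missing-image.tar \
  docker.io/rancher/some-image:v1.2.0    # placeholder image reference

# Copy the tarball to the node that is missing the image
scp /tmp/missing-image.tar node2:/tmp/   # "node2" is a placeholder

# On the node with the ErrImagePull/ImagePullBackOff pods: import the tarball
ctr -n k8s.io images import /tmp/missing-image.tar
```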

To prevent this from happening, we recommend cleaning up unused container images from the previous version after each successful Harvester upgrade if the image store disk space is stressed. We provided a [harv-purge-images script](https://github.com/harvester/upgrade-helpers/blob/main/bin/harv-purge-images.sh) that makes cleaning up disk space easy, especially for container image storage. The script has to be executed on each Harvester node. For example, if the cluster was originally in v1.1.2, and now it gets upgraded to v1.2.0, you can do the following to discard the container images that are only used in v1.1.2 but no longer needed in v1.2.0:

@@ -68,6 +68,6 @@ $ ./harv-purge-images.sh v1.1.2 v1.2.0
:::caution

- The script only downloads the image lists and compares the two to calculate the difference between the two versions. It does not communicate with the cluster and, as a result, does not know what version the cluster was upgraded from.
- We published image lists for each version released since v1.1.0. For clusters older than v1.1.0, users have to clean up the old images manually.
- We published image lists for each version released since v1.1.0. For clusters older than v1.1.0, you have to clean up the old images manually.

:::
10 changes: 5 additions & 5 deletions versioned_docs/version-v1.1/faq.md
@@ -51,12 +51,12 @@ As of Harvester v1.0.2, we no longer support adding additional partitioned disks

### Why are there some Harvester pods that become ErrImagePull/ImagePullBackOff?

This is likely because your Harvester cluster is an air-gapped setup, and some pre-loaded container images are missing. Kubernetes has a mechanism that does garbage collection against bloated image stores. When the partition which stores container images is over 85% full, `kubelet` will try to prune some least used images to save disk space until the occupancy is lower than 80%. These numbers (85% and 80%) are default High/Low thresholds that come with Kubernetes.
This is likely because your Harvester cluster is an air-gapped setup, and some pre-loaded container images are missing. Kubernetes has a mechanism that does garbage collection against bloated image stores. When the partition which stores container images is over 85% full, `kubelet` tries to prune the images based on the last time they were used, starting with the oldest, until the occupancy is lower than 80%. These numbers (85% and 80%) are default High/Low thresholds that come with Kubernetes.

To recover from this state, do one of the following depending on the cluster's configuration:
- Pull the missing images from sources outside of the cluster (if it's an air-gapped environment, you might need to set up an HTTP proxy beforehand)
- Manually import the images from the Harvester ISO image
- Find the missing images on one node on the other nodes, and then export the images from the node still with them and import them on the missing one
- Pull the missing images from sources outside of the cluster (if it's an air-gapped environment, you might need to set up an HTTP proxy beforehand).
- Manually import the images from the Harvester ISO image.
- Check the other nodes for the missing images, export them from a node that still has them, and then import them on the node where they are missing.

To prevent this from happening, we recommend cleaning up unused container images from the previous version after each successful Harvester upgrade if the image store disk space is stressed. We provided a [harv-purge-images script](https://github.com/harvester/upgrade-helpers/blob/main/bin/harv-purge-images.sh) that makes cleaning up disk space easy, especially for container image storage. The script has to be executed on each Harvester node. For example, if the cluster was originally in v1.1.1, and now it gets upgraded to v1.1.2, you can do the following to discard the container images that are only used in v1.1.1 but no longer needed in v1.1.2:

@@ -68,6 +68,6 @@ $ ./harv-purge-images.sh v1.1.1 v1.1.2
:::caution

- The script only downloads the image lists and compares the two to calculate the difference between the two versions. It does not communicate with the cluster and, as a result, does not know what version the cluster was upgraded from.
- We published image lists for each version released since v1.1.0. For clusters older than v1.1.0, users have to clean up the old images manually.
- We published image lists for each version released since v1.1.0. For clusters older than v1.1.0, you have to clean up the old images manually.

:::
