diff --git a/docs/docs-content/release-notes/known-issues.md b/docs/docs-content/release-notes/known-issues.md index 707355ea96..88deee3f86 100644 --- a/docs/docs-content/release-notes/known-issues.md +++ b/docs/docs-content/release-notes/known-issues.md @@ -16,6 +16,7 @@ The following table lists all known issues that are currently active and affecti | Description | Workaround | Publish Date | Product Component | | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------- | ----------------- | +| When you upgrade VerteX from version 4.3.x to 4.4.x, a few system pods may remain unhealthy and experience _CrashLoopBackOff_ errors. This issue only impacts VMware vSphere-based installations and occurs because the internal Mongo DNS is incorrectly configured in the configserver ConfigMap. 
| Refer to the [Mongo DNS Configmap Value is Incorrect](../troubleshooting/palette-upgrade.md#mongo-dns-configmap-value-is-incorrect) troubleshooting guide for detailed workaround steps. This issue may also impact Enterprise Cluster backup operations. | June 24, 2024 | VerteX | | [Sonobuoy](../clusters/cluster-management/compliance-scan.md#conformance-testing) scans fail to generate reports on airgapped Palette Edge clusters. | No workaround is available. | June 24, 2024 | Edge | | Clusters configured with OpenID Connect (OIDC) at the Kubernetes layer encounter issues when authenticating with the [non-admin Kubeconfig file](../clusters/cluster-management/kubeconfig.md#cluster-admin). Kubeconfig files using OIDC to authenticate will not work if the SSL certificate is set at the OIDC provider level. | Use the admin Kubeconfig file to authenticate with the cluster, as it does not use OIDC to authenticate. | June 21, 2024 | Clusters | | During the platform upgrade from Palette 4.3 to 4.4, Virtual Clusters may encounter a scenario where the pod `palette-controller-manager` is not upgraded to the newer version of Palette. The virtual cluster will continue to be operational, and this does not impact its functionality. | Refer to the [Controller Manager Pod Not Upgraded](../troubleshooting/palette-dev-engine.md#scenario---controller-manager-pod-not-upgraded) troubleshooting guide. | June 15, 2024 | Virtual Clusters | diff --git a/docs/docs-content/troubleshooting/enterprise-install.md b/docs/docs-content/troubleshooting/enterprise-install.md index 2e4ce86b95..3a743b5d54 100644 --- a/docs/docs-content/troubleshooting/enterprise-install.md +++ b/docs/docs-content/troubleshooting/enterprise-install.md @@ -45,3 +45,35 @@ This error may occur if the self-hosted pack registry specified in the installat After a few moments, a system profile will be created and Palette or VerteX will be able to self-link successfully. 
If you continue to encounter issues, contact our support team by emailing [support@spectrocloud.com](mailto:support@spectrocloud.com) so that we can provide you with further guidance. + +## Scenario - Enterprise Backup Stuck + +If an enterprise backup is stuck, restarting the management pod may resolve the issue. Use the +following steps to restart the management pod. + +### Debug Steps + +1. Open up a terminal session in an environment that has network access to the Kubernetes cluster. Refer to the + [Access Cluster with CLI](../clusters/cluster-management/palette-webctl.md) guide for additional guidance. + +2. Identify the `mgmt` pod in the `hubble-system` namespace. Use the following command to list all pods in the + `hubble-system` namespace and filter for the `mgmt` pod. + + ```shell + kubectl get pods --namespace hubble-system | grep mgmt + ``` + + ```shell hideClipboard + mgmt-f7f97f4fd-lds69 1/1 Running 0 45m + ``` + +3. Restart the `mgmt` pod by deleting it. Use the following command to delete the pod. Replace `<mgmt-pod-name>` + with the name of the `mgmt` pod that you identified in step 2. + + ```shell + kubectl delete pod <mgmt-pod-name> --namespace hubble-system + ``` + + ```shell hideClipboard + pod "mgmt-f7f97f4fd-lds69" deleted + ``` diff --git a/docs/docs-content/troubleshooting/palette-upgrade.md index 8d2a94a274..fcf71f653e 100644 --- a/docs/docs-content/troubleshooting/palette-upgrade.md +++ b/docs/docs-content/troubleshooting/palette-upgrade.md @@ -168,3 +168,105 @@ cluster. If you continue to encounter issues, contact our support team by emailing [support@spectrocloud.com](mailto:support@spectrocloud.com) so that we can provide you with further guidance. + +## Mongo DNS ConfigMap Value is Incorrect + +In VMware vSphere VerteX installations, if you encounter an error during the upgrade process where the MongoDB DNS +ConfigMap value is incorrect, use the following steps to resolve the issue. 
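The two debug steps in the Enterprise Backup Stuck scenario above, listing pods to find the `mgmt` pod and then deleting it by name, can be sketched as a single scripted lookup. This is an illustrative sketch, not part of the official guide: the sample output is copied from step 2, and the live `kubectl delete` invocation is shown only as a comment.

```shell
# Sample output from `kubectl get pods --namespace hubble-system` (from step 2 above).
pods='mgmt-f7f97f4fd-lds69 1/1 Running 0 45m
mongo-0 2/2 Running 0 29m'

# Extract the name of the pod whose name starts with "mgmt-".
mgmt_pod="$(printf '%s\n' "$pods" | awk '/^mgmt-/ {print $1}')"
echo "$mgmt_pod"
# Prints: mgmt-f7f97f4fd-lds69

# Against a live cluster, the same filter would drive the restart:
#   kubectl delete pod "$mgmt_pod" --namespace hubble-system
```

Deleting the pod is safe because its Deployment recreates it; this assumes a single `mgmt` replica, as shown in the guide's output.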
+ +### Debug Steps + +1. Open up a terminal session in an environment that has network access to the Kubernetes cluster. Refer to the + [Access Cluster with CLI](../clusters/cluster-management/palette-webctl.md) guide for additional guidance. + +2. Verify that the pods in the `hubble-system` namespace are not starting correctly by issuing the following command. + + ```shell + kubectl get pods --namespace=hubble-system + ``` + +3. Verify that the ConfigMap for the _configserver_ in the _hubble-system_ namespace contains the incorrect host value + `mongo-1.mongohubble-system.svc.cluster.local`. Use the following command to describe the ConfigMap and search for the + host value. + + ```shell + kubectl describe configmap configserver --namespace hubble-system | grep host + ``` + + ```shell hideClipboard + host: mongo-0.mongo.hubble-system.svc.cluster.local,mongo-1.mongohubble-system.svc.cluster.local,mongo-2.mongo.hubble-system.svc.cluster.local + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + host: '0.0.0.0' + ``` + +4. If the host value is incorrect, log in to the System Console. Refer to the + [Access the System Console](../vertex/system-management/system-management.md#access-the-system-console) + documentation for guidance. + +5. Navigate to the **Main Menu** and select **Enterprise Cluster**. From the **System Profiles** page, select the + **Spectro** pack. + + ![A view of the Spectro pack in the System Profiles page](/troubleshooting_enterprise_install_system-profile-pack.webp) + +6. In the YAML editor, locate the parameter `databaseUrl` and update the value + `mongo-1.mongohubble-system.svc.cluster.local` to `mongo-1.mongo.hubble-system.svc.cluster.local`. + + Below is what the updated `databaseUrl` value should look like. 
+ + ```yaml + databaseUrl: "mongo-0.mongo.hubble-system.svc.cluster.local,mongo-1.mongo.hubble-system.svc.cluster.local,mongo-2.mongo.hubble-system.svc.cluster.local" + ``` + +7. Click **Save** to apply the changes. + +8. Verify the system pods are starting correctly by issuing the following command. + + ```shell + kubectl get pods --namespace=hubble-system + ``` + + ```shell hideClipboard + NAME READY STATUS RESTARTS AGE + auth-64b88d97dd-5z7ph 1/1 Running 0 31m + auth-64b88d97dd-bchr7 1/1 Running 0 31m + cloud-b8796c57d-5r7d9 1/1 Running 0 31m + cloud-b8796c57d-xpbx7 1/1 Running 0 31m + configserver-778bd7c4c9-mrtc6 1/1 Running 0 31m + event-5869c6bd75-2n7jl 1/1 Running 0 31m + event-5869c6bd75-xnvmj 1/1 Running 0 31m + foreq-679c7b7f6b-2ts2v 1/1 Running 0 31m + hashboard-9f865b6c8-c52bb 1/1 Running 0 31m + hashboard-9f865b6c8-rw6p4 1/1 Running 0 31m + hutil-54995bfd6b-sh4dt 1/1 Running 0 31m + hutil-54995bfd6b-tlqbj 1/1 Running 0 31m + memstore-7584fdd94f-479pj 1/1 Running 0 31m + mgmt-68c8dbfd58-8gxsx 1/1 Running 0 31m + mongo-0 2/2 Running 0 29m + mongo-1 2/2 Running 0 30m + mongo-2 2/2 Running 0 30m + msgbroker-7d7655559b-zxxfq 1/1 Running 0 31m + oci-proxy-6fdf95885f-qw58g 1/1 Running 0 31m + reloader-reloader-845cfd7fdf-2rq5t 1/1 Running 0 31m + spectrocluster-5c4cb4ff58-658w9 1/1 Running 0 31m + spectrocluster-5c4cb4ff58-fn8g5 1/1 Running 0 31m + spectrocluster-5c4cb4ff58-zvwfp 1/1 Running 0 31m + spectrocluster-jobs-5b54bf6bcf-mtgh8 1/1 Running 0 31m + system-6678d47874-464n6 1/1 Running 0 31m + system-6678d47874-rgn55 1/1 Running 0 31m + timeseries-6564699c7d-b6fnr 1/1 Running 0 31m + timeseries-6564699c7d-hvv94 1/1 Running 0 31m + timeseries-6564699c7d-jzmnl 1/1 Running 0 31m + user-866c7f779d-drf9w 1/1 Running 0 31m + user-866c7f779d-rm4hw 1/1 Running 0 31m + ``` diff --git a/docs/docs-content/vertex/upgrade/upgrade-notes.md new file mode 100644 index 0000000000..b1bc1882ca --- /dev/null +++ 
b/docs/docs-content/vertex/upgrade/upgrade-notes.md @@ -0,0 +1,23 @@ +--- +sidebar_label: "Upgrade Notes" +title: "Upgrade Notes" +description: "Learn how to upgrade self-hosted Palette VerteX instances." +icon: "" +sidebar_position: 0 +tags: ["vertex", "self-hosted", "airgap", "kubernetes", "upgrade"] +keywords: ["vertex", "enterprise", "airgap", "kubernetes"] +--- + +This page offers version-specific guidance to help you prepare for upgrading self-hosted Palette VerteX instances. + +## Upgrade VerteX 4.3.x to 4.4.x + + +Prior to upgrading VMware vSphere VerteX installations from version 4.3.x to 4.4.x, complete the +steps outlined in the +[Mongo DNS ConfigMap Value is Incorrect](../../troubleshooting/palette-upgrade.md#mongo-dns-configmap-value-is-incorrect) troubleshooting guide. +Addressing this Mongo DNS issue prevents system pods from experiencing _CrashLoopBackOff_ errors after the upgrade. + +After the upgrade, if Enterprise Cluster backups are stuck, refer to the +[Enterprise Backup Stuck](../../troubleshooting/enterprise-install.md#scenario---enterprise-backup-stuck) +troubleshooting guide for resolution steps. diff --git a/docs/docs-content/vertex/upgrade/upgrade.md index ebce0f230c..90a92921b8 100644 --- a/docs/docs-content/vertex/upgrade/upgrade.md +++ b/docs/docs-content/vertex/upgrade/upgrade.md @@ -43,6 +43,7 @@ Before upgrading Palette VerteX to a new major version, you must first update it Refer to the respective guide for guidance on upgrading your self-hosted Palette VerteX instance. 
+- [Upgrade Notes](upgrade-notes.md) - [Non-Airgap VMware](upgrade-vmware/non-airgap.md) - [Airgap VMware](upgrade-vmware/airgap.md) - [Non-Airgap Kubernetes](upgrade-k8s/non-airgap.md) diff --git a/static/assets/docs/images/troubleshooting_enterprise_install_system-profile-pack.webp b/static/assets/docs/images/troubleshooting_enterprise_install_system-profile-pack.webp new file mode 100644 index 0000000000..034f2fb8ca Binary files /dev/null and b/static/assets/docs/images/troubleshooting_enterprise_install_system-profile-pack.webp differ
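As a pre-upgrade companion to the Mongo DNS ConfigMap troubleshooting steps in this change, the malformed `host` value can be detected from a terminal before starting the 4.3.x to 4.4.x upgrade. This is an illustrative sketch, not part of the product docs: the sample `host` string is copied from the guide's `kubectl describe configmap` output, and the `jsonpath` key shown in the comment is a hypothetical assumption to verify against your actual ConfigMap layout.

```shell
# Comma-separated host value from the configserver ConfigMap. In a live cluster
# it could be fetched with something like (hypothetical data key, verify first):
#   host="$(kubectl get configmap configserver --namespace hubble-system \
#     --output jsonpath='{.data.host}')"
host='mongo-0.mongo.hubble-system.svc.cluster.local,mongo-1.mongohubble-system.svc.cluster.local,mongo-2.mongo.hubble-system.svc.cluster.local'

# Each member must end in ".mongo.hubble-system.svc.cluster.local";
# print any that do not, such as the "mongohubble-system" entry above.
printf '%s\n' "$host" | tr ',' '\n' | grep -v '\.mongo\.hubble-system\.svc\.cluster\.local$'
# Prints: mongo-1.mongohubble-system.svc.cluster.local
```

Any line printed indicates an entry missing the `.mongo.` service segment and should be corrected via the `databaseUrl` fix described in the troubleshooting guide.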