Can't log in to Rancher: Rancher UI continuously loading #10735
Labels
kind/bug
QA/dev-automation
status/needs-info
Setup
Describe the bug
Starting in March, roughly once every 1–2 weeks we can't log in to or browse Rancher: the UI loads indefinitely, and the container logs show only errors related to WebSocket disconnection.
Rancher stuck on loading -> restarted Rancher -> one node became unavailable -> restarted the node -> cluster is back to normal (sometimes restarting the node doesn't help).
To Reproduce
No reliable reproduction scenario; the issue occurs intermittently.
Result
Rancher UI loads indefinitely
Expected Result
Rancher UI loads normally and users can log in without issues impacting the user experience.
Screenshots
Additional context
Infrastructure description:
The Rancher container is running on a separate server in Azure - Linux (Ubuntu 18.04) / Standard D8s v3
The cluster was created through Rancher - 3 Linux nodes (control plane, etcd, worker) and 5 Windows nodes
Kubernetes Version: v1.24.16+rke2r1
NAME STATUS ROLES AGE VERSION
qa-neo-wnode1 Ready worker 103d v1.24.16
qa-neo-wnode2 Ready worker 103d v1.24.16
qa-neo-wnode3 Ready worker 103d v1.24.16
qa-neo-wnode4 Ready worker 96d v1.24.16
qa-neo-wnode5 Ready worker 41d v1.24.16
qa-neo-worker1 Ready control-plane,etcd,master,worker 226d v1.24.16+rke2r1
qa-neo-worker2 Ready control-plane,etcd,master,worker 227d v1.24.16+rke2r1
qa-neo-worker3 Ready control-plane,etcd,master,worker 39d v1.24.16+rke2r1
Logs
{"log":"W0402 08:17:33.040326 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:33.040648063Z"}
{"log":"W0402 08:17:33.153651 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:33.153831208Z"}
{"log":"W0402 08:17:33.153684 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:33.153862008Z"}
{"log":"W0402 08:17:33.153711 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:33.153881508Z"}
{"log":"W0402 08:17:33.282101 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:33.282265573Z"}
{"log":"W0402 08:17:35.946516 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:35.94684618Z"}
{"log":"W0402 08:17:37.544349 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:37.544720791Z"}
{"log":"W0402 08:17:37.601628 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:37.60176672Z"}
{"log":"W0402 08:17:37.971121 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr","time":"2024-04-02T08:17:37.971340089Z"}
{"log":"2024/04/02 08:18:09 [ERROR] Error during subscribe websocket: close sent\n","stream":"stdout","time":"2024-04-02T08:18:09.520165786Z"}
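For triage, the repeating transport warnings and WebSocket errors can be counted directly from the JSON-per-line container log. This is a minimal sketch: the log file name `rancher-json.log` and the sample lines are placeholders mirroring the excerpt above; point the `grep` commands at your real Docker json-file log instead.

```shell
# Placeholder log file with lines shaped like the excerpt above
# (replace with the real Rancher container log, e.g. from the
# Docker json-file logging driver).
LOG=rancher-json.log
printf '%s\n' \
  '{"log":"W0402 08:17:33.040326 48 transport.go:301] Unable to cancel request for *client.addQuery\n","stream":"stderr"}' \
  '{"log":"2024/04/02 08:18:09 [ERROR] Error during subscribe websocket: close sent\n","stream":"stdout"}' \
  > "$LOG"

# Count each class of message; sudden spikes in the first counter
# tend to line up with the periods where the UI hangs.
grep -c 'Unable to cancel request' "$LOG"   # -> 1
grep -c 'subscribe websocket' "$LOG"        # -> 1
```

Tracking these counts over time (e.g. per hour) makes it easier to correlate the warning bursts with the login outages than eyeballing the raw log.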