
[BUG] GPU config disappears permanently when GPU is hot-unplugged #906

Open
lapsio opened this issue Aug 8, 2024 · 1 comment
Labels: bug (Something isn't working)


lapsio commented Aug 8, 2024

Describe the bug

Adding gpuX to the displayed boxes makes the setting fall back to the stock shown_boxes = "cpu mem net proc" if that GPU is not currently registered by a driver (e.g. it is temporarily bound to the VFIO driver for pass-through to a VM).

To Reproduce

  1. Add gpu0 and gpu1 to shown_boxes on a system with two GPUs.
  2. Unbind the second GPU from its driver, or remove it from the system.
  3. Observe that all GPU boxes are missing from the displayed boxes.

Expected behavior

Btop should preserve the configuration and skip temporarily missing GPUs at runtime, without permanently rewriting the config in response to a temporary environmental change.

Info (please complete the following information):

  • btop++ version: 1.3.2
  • Binary: from repo
  • Architecture: x86_64
  • Platform: Linux
  • (Linux) Kernel: 6.10.1-arch1-1
  • Terminal used: Konsole
  • Font used: dunno
@lapsio lapsio added the bug Something isn't working label Aug 8, 2024
lapsio commented Aug 8, 2024

To be fair, multi-GPU support in general is quite lacking. The CPU status display shows only the first GPU's load:

[screenshot: CPU box showing only the first GPU's load]

The GPU power totals (gpu-pwr-totals) don't show any unit, so you can't really tell how much power the GPUs draw:

[screenshot: GPU box power readings shown without units]

These are obviously not very critical bugs, but I'd like to point out that this area of btop has quite severe practicality issues. My use case: NVIDIA GPUs require manually re-applying persistence mode after being detached from a guest VM, otherwise they idle at 120 W. I'd like to monitor the power draw of all GPUs that are not currently passed through to KVM guests, i.e. all GPUs registered in the host OS, where that subset changes dynamically depending on which guests are running and which GPUs KVM assigns to them.
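For the persistence-mode part of this use case, a minimal shell sketch using the stock nvidia-smi CLI (hardware-specific; requires root for -pm, and passed-through GPUs are simply invisible to the host's nvidia-smi, so they are skipped automatically):

```shell
#!/bin/sh
# Re-apply persistence mode on every GPU the host driver currently
# registers, e.g. after a guest VM releases one back to the host.
nvidia-smi -pm 1

# Print index, name, and current power draw for all visible GPUs.
nvidia-smi --query-gpu=index,name,power.draw --format=csv
```

This only covers the manual workaround; the monitoring itself is what btop's GPU boxes would ideally handle once missing GPUs are skipped at runtime.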
