GPU hotplug support #130
Conversation
Looking at some of the changes, like closing fds, this might fix/help #83.
This patch also allows me to dissociate Hyprland from nouveau if I launch Hyprland with nouveau loaded from the start. I tested it with AQ_DRM_DEVICES=/dev/dri/card1(amd):/dev/dri/card0(nvidia) to set the AMD card as the primary DRM backend.
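A minimal sketch of the env var usage described in the comment above. The card numbering (card1 = AMD iGPU, card0 = NVIDIA dGPU) is taken from that comment and varies per machine; the "(amd)"/"(nvidia)" parts there read as annotations, so plain device paths are assumed here.

```shell
# AQ_DRM_DEVICES is a colon-separated list of DRM nodes; the first entry
# is treated as the primary backend. Here the AMD iGPU (card1) is primary.
export AQ_DRM_DEVICES="/dev/dri/card1:/dev/dri/card0"

# Then launch Hyprland with the variable set (not run here):
# Hyprland
```

Check `/dev/dri/by-path/` or `lspci` to confirm which card node belongs to which GPU before relying on the numbering.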
lgtm, can you please clang-format all the files though?
Done
I realize, but we need a release here first. |
I forgot I had this hack because connectors were not being destroyed, so I forced them to be. What I wanted was for conn.disconnect() to destroy the outputs so that the conn refcount works out. As a plus, it looks much cleaner now.
some final style nits. Content lgtm
thanks!
Do you have to set your iGPU as the primary DRM backend to hotplug the dGPU? I am currently running an AMD iGPU+dGPU setup with card0 (the dGPU) set as primary via the env var. Hyprland crashes when I release it to pass to vfio-pci while keeping the iGPU on the host.
Your use case is similar to mine. You can't release the primary DRM device without Hyprland crashing, so the iGPU should be your primary in this case. You can then release Hyprland's resources on the dGPU with the udevadm command above, rmmod nouveau, and bind vfio-pci for passthrough. I don't have to set AQ_DRM_DEVICES because I've configured the dGPU to be off before login, so the iGPU is always the primary, and I can then load whichever module I want for the dGPU.
I wonder if forcing the iGPU for DRM would impact gaming performance on Linux. I am on a desktop, so ideally I should run everything directly off the dGPU whenever possible. Regardless, this is something I've wanted for a long while, so thank you for your PR!
Use Case
I have a Lenovo Legion 5 Pro (16ACH6H) laptop with an AMD iGPU and an NVIDIA dGPU. I want the flexibility to use different modules for the dGPU depending on my needs:
- vfio-pci: for passthrough to a VM.
- nouveau: to use the connectors on the dGPU exclusively for driving additional monitors.
- nvidia (without modesetting): for CUDA workloads, offloaded rendering, etc.

Workflow involving Hyprland
I can now release the dGPU and its outputs from Hyprland with:

```
udevadm trigger --type=devices --action=remove --subsystem-match=drm --property-match="MINOR=0"
```
I'm happy to take this to the finish line in case any changes are required.
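The full release flow described in this thread (drop the device from Hyprland, unload nouveau, hand the card to vfio-pci) can be sketched as a dry-run script. The DRM minor and PCI address below are hypothetical placeholders, and the real steps need root plus a vfio-pci setup (ids or driver_override) that accepts the device.

```shell
#!/bin/sh
# Dry-run sketch of the dGPU release flow; run() only prints each command.
# Swap its body for "$@" to actually execute (as root, at your own risk).
run() { echo "+ $*"; }

MINOR=0                 # DRM minor of the dGPU card node (machine-specific)
PCI_ADDR="0000:01:00.0" # dGPU PCI address (hypothetical; check lspci)

# 1. Make Hyprland/aquamarine drop the device via a simulated remove uevent.
run udevadm trigger --type=devices --action=remove \
    --subsystem-match=drm --property-match="MINOR=$MINOR"

# 2. Unload the GPU driver now that nothing holds the node open.
run rmmod nouveau

# 3. Hand the device to vfio-pci for VM passthrough.
run modprobe vfio-pci
run sh -c "echo $PCI_ADDR > /sys/bus/pci/drivers/vfio-pci/bind"
```

The udevadm step is the one this PR makes safe: Hyprland now tears down the connectors and outputs instead of crashing when the non-primary device disappears.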