Update: Released the first bug-fix release (v0.1.1). Notable fixes include:
- Defaulting to the SDL display mode when enabling 3D acceleration for Linux VMs, for better graphics performance.
- Changing the display mode of existing VMs is now available in Manage.
- Adding multiple VMs for the same OS is now supported.
- Custom VM naming now persists.
- Renaming VMs is now supported.
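For anyone curious what that display-mode default means at the QEMU level, here is a hand-written sketch of a virgl-accelerated Linux guest using the SDL display (the paths, sizes, and CPU counts are placeholders, not vm-curator's actual output):

```shell
# Sketch: 3D-accelerated Linux guest with virgl + SDL display
# (all values below are illustrative placeholders)
qemu-system-x86_64 \
  -enable-kvm -cpu host -smp 4 -m 8G \
  -device virtio-vga-gl \
  -display sdl,gl=on \
  -drive file=linux.qcow2,if=virtio
```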
Next major feature: full PCI passthrough support, including GPU passthrough via Looking Glass.
btreecat 4 days ago [-]
I look forward to testing this out. I keep a windows VM mostly for updating hardware when there's no other way. USB passthrough is a feature that doesn't always work in a stable manner with libvirt.
IOMMU passthrough is the next feature I'm working on, but I felt it was time to release v1. Currently, vm-curator supports:
- VM creation with over 100 different OS profiles, built for KVM and emulation
- 3D paravirtualization support using virtio-vga-gl (virgl)
- UEFI and TPM support (auto-configured for OSes that need it, like Windows 11)
- QCOW2 snapshot support
- USB passthrough support and management
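For reference, the auto-configured UEFI + TPM setup for a Windows 11 guest looks roughly like the following at the QEMU level. This is a sketch I wrote for this comment, not vm-curator's generated script; the OVMF firmware paths and the TPM state directory are assumptions that vary by distro:

```shell
# Software TPM 2.0 for the guest (Windows 11 requires one)
swtpm socket --tpmstate dir=/tmp/mytpm \
  --ctrl type=unixio,path=/tmp/mytpm/sock &

# UEFI (OVMF) firmware plus the emulated TPM wired into QEMU
qemu-system-x86_64 \
  -enable-kvm -machine q35 -cpu host -m 8G \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -chardev socket,id=chrtpm,path=/tmp/mytpm/sock \
  -tpmdev emulator,id=tpm0,chardev=chrtpm \
  -device tpm-tis,tpmdev=tpm0 \
  -drive file=win11.qcow2,if=virtio
```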
There is also a rich metadata library with ASCII art, OS descriptions, and fun facts.
VM creation with IOMMU will require the following for GPU passthrough:
- a motherboard capable of proper IOMMU support (clean IOMMU grouping)
- 2+ GPUs, plus a dummy HDMI or DP 1.4 plug for the passed-through GPU
- Looking Glass for display
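For context, the usual host-side preparation for a profile like this looks roughly as follows (the PCI vendor:device IDs and bus address are placeholders; find yours with `lspci -nn`):

```shell
# 1) Kernel command line: enable the IOMMU
#      amd_iommu=on iommu=pt        (use intel_iommu=on on Intel)
#
# 2) /etc/modprobe.d/vfio.conf: let vfio-pci claim the GPU before the host driver
#      options vfio-pci ids=10de:2484,10de:228b
#      softdep nvidia pre: vfio-pci
#
# 3) Hand the GPU (and its HDMI audio function) to the guest at launch
qemu-system-x86_64 ... \
  -device vfio-pci,host=0000:0b:00.0,multifunction=on \
  -device vfio-pci,host=0000:0b:00.1
```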
vm-curator can host and manage other GPU-passthrough configurations, since the application supports editing each VM's launch script, but the profile above is what I'm planning to build into the creator system.
I have a TRX40 (Threadripper) motherboard, which will serve as an ample test-bed, but I still need to acquire a second GPU.
theYipster 24 hours ago [-]
Btw, this feature is now available in v0.2.x! vm-curator supports single-GPU passthrough (tested locally) and multi-GPU passthrough via Looking Glass (experimental; needs testing).
Single-GPU passthrough relies on a script (run outside the app) to disconnect the GPU from the current X.org or Wayland session and attach it to the running VM. When the VM shuts down, the script reverses the process. This means you can only run one VM at a time with your main display and peripherals, and while that VM is running you can't access your host through them (though you can always SSH into it).
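Scripts like this typically do the classic sysfs driver_override dance. Below is a simplified sketch of what a generator for such a script might emit, written for this comment; the PCI addresses and the nvidia module list are placeholders, not vm-curator's actual output:

```shell
# Generate a minimal single-GPU passthrough hook script (illustration only)
cat > vm-gpu-hook.sh <<'EOF'
#!/bin/bash
set -e
GPU=0000:0b:00.0       # GPU PCI address (placeholder; see lspci)
AUDIO=0000:0b:00.1     # its HDMI audio function

systemctl stop display-manager              # tear down the graphical session
echo 0 > /sys/class/vtconsole/vtcon0/bind   # release the virtual console
modprobe -r nvidia_drm nvidia_modeset nvidia  # or amdgpu

# Rebind both functions from the host driver to vfio-pci
for dev in "$GPU" "$AUDIO"; do
  echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
  echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2>/dev/null || true
  echo "$dev" > /sys/bus/pci/drivers_probe
done
# ...launch the VM here; on shutdown, run these steps in reverse.
EOF
chmod +x vm-gpu-hook.sh
```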
This is the common process for getting single-GPU passthrough to work; vm-curator helps prepare the system and generates the scripts automatically.
Multi-GPU passthrough is designed to run with Looking Glass, but it can also support physical KVM switching if the user prefers.
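For reference, Looking Glass rides on an ivshmem shared-memory device that the guest's capture app writes frames into. The QEMU side is roughly the following (the 64M size is an assumption; the required size depends on guest resolution):

```shell
# Shared-memory frame relay for Looking Glass
qemu-system-x86_64 ... \
  -object memory-backend-file,id=lgshm,share=on,mem-path=/dev/shm/looking-glass,size=64M \
  -device ivshmem-plain,memdev=lgshm

# On the host, the client reads the same file (/dev/shm/looking-glass is its default):
# looking-glass-client
```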
westurner 4 days ago [-]
Systems with iGPU (CPU RAM) + dGPU (dedicated GPU RAM) support GPU passthrough IIUC.
EnvyControl and supergfxctl support selecting between modes (integrated / hybrid / nvidia) to specify whether processes run on the iGPU or the dGPU(s).
https://github.com/bayasdev/envycontrol#hybrid
Bazzite ships supergfxctl and the Nvidia modules in its OCI system images ("Native Containers"; ublue-os).
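Typical usage of the two tools, as I understand their CLIs (check `--help`; flags may differ by version):

```shell
# EnvyControl: set the graphics mode (takes effect after a reboot)
sudo envycontrol -s hybrid     # or: integrated | nvidia
envycontrol -q                 # query the current mode

# supergfxctl: same idea, different mode names
supergfxctl -m Hybrid          # set mode (Integrated | Hybrid | Vfio ...)
supergfxctl -g                 # print the current mode
```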
IIRC from trying (a while ago) to run a Windows VM with GPU passthrough to the dGPU, a device-selection GUI would've helped.
> After running this, the terminal will display a list of all your PCI devices, listed by their IOMMU group. Skim through the list until you find the IOMMU group that contains your dGPU.
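That skim step is easy to script. Here is a small helper written for this comment; it takes the sysfs root as an argument so it can be pointed at a test tree (on a real host, call it with /sys/kernel/iommu_groups):

```shell
# Print every PCI device address under each IOMMU group.
list_iommu_groups() {
  local root=$1 group dev
  for group in "$root"/*; do
    [ -d "$group" ] || continue
    for dev in "$group"/devices/*; do
      [ -e "$dev" ] || continue
      printf 'IOMMU group %s: %s\n' "${group##*/}" "${dev##*/}"
    done
  done
}
# Real host: list_iommu_groups /sys/kernel/iommu_groups
# (pipe each address through `lspci -nns <addr>` for a readable name)
```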
How is this possible? I remember reading something about 3D para-virtualization not being supported on NVIDIA consumer GPUs.
theYipster 4 days ago [-]
I think you’re referring to the ability to split a physical NVIDIA GPU into multiple virtual GPUs so that you can do full GPU passthrough with one card (without having to resort to hacks like disconnecting host sessions).
What vm-curator provides is an easy way to use QEMU’s built-in paravirtualization (virtio-vga-gl, a.k.a. virgl) in a manner that works with NVIDIA cards. This is not possible with libvirt-based tools because of a bug between libvirt and NVIDIA’s Linux drivers.
unixhero 4 days ago [-]
Does it fetch the hard-drive images for all these preconfigured OSes from somewhere? That would be a huge timesaver.
theYipster 4 days ago [-]
No. I thought about that but decided it would be too much of a hassle to maintain, and it becomes legally problematic for non-FOSS, non-abandonware profiles (i.e., modern Windows and macOS).
Instead, for many profiles, it provides a link to the OS’s website (or archive.org) where you can download the installation media.
unixhero 4 days ago [-]
I see...
To be fair, the archive.org links probably don't change much.
> IOMMU GPU passthrough with device selection would be a helpful feature: https://www.google.com/search?q=gpu+passthrough+qemu
rutabaga_gfx does GPU paravirtualization: https://github.com/magma-gpu/rutabaga_gfx
With the proprietary Nvidia Linux module, these environment variables cause processes to run on an Nvidia dGPU instead of the iGPU: https://download.nvidia.com/XFree86/Linux-x86_64/435.17/READ...
Arch wiki > Supergfxctl > 5.1 Using supergfxctl for GPU passthrough (VFIO): https://wiki.archlinux.org/title/Supergfxctl#Using_supergfxc...
Linux for ROG notebooks > VFIO dGPU Passthrough Guide > VM Creation Walkthrough: https://asus-linux.org/guides/vfio-guide/#vm-creation-walkth...
But then under "SELinux considerations" it says: https://asus-linux.org/guides/vfio-guide/#selinux-considerat...
> /etc/libvirt/qemu.conf and find this line:
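The variables that NVIDIA README describes are the PRIME render-offload pair; per-process dGPU selection looks like:

```shell
# Run a single program on the Nvidia dGPU via PRIME render offload
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxgears
```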