Gaming on Arch Linux and Windows 10 with VFIO/IOMMU GPU Passthrough
I’m going to focus more on practical tips not covered in a lot of guides I’ve seen, since the basic technical setup is covered very well elsewhere.
TL;DR? Here’s my win10.xml for reference (which you’re probably going to get very familiar with), or skip to the end to check out my results.
It seems like the majority of modern CPUs support IOMMU nowadays. The biggest exception is some of Intel’s “K” range of overclocking CPUs. Look for VT-d on Intel or AMD-Vi on AMD. Also be aware that brand new platforms may take a while to work out all the kinks. I have an Intel i5 4570, so I didn’t run into any trouble. For Intel, an i7 is probably better than an i5 for this though as it gives you a lot more cores to play with.
Update: Got an i7 4790, lot more to go around now.
Motherboards are a bit trickier, as it’s very hard to tell whether they support IOMMU, and how the groups and buses are laid out. I have an Asus B85M-E for example, but the only official reference to VT-d I could find was one line in the manual. You might want to find endorsements online from people who have got it working if you’re buying.
I have 8GB of (DDR3) RAM, but I’d recommend 16GB so you can just assign 8GB to each and not worry about it. Running two operating systems with their own full desktop environments (and probably their own memory hungry web browsers) can easily push 4GB each.
Update: I have 4x4GB now and give 8GB to the guest, check out huge pages below.
You can use basically any GPUs you want, but you need at least two (which can and probably will include your CPU’s integrated GPU). I’d definitely recommend AMD cards, since Nvidia is actively hostile to running in VMs (and consumers in general), but you can get either working. I have an XFX R9 280X, so again I didn’t run into any trouble. You’ll also need to worry about the combination of ports on your GPUs and monitors (which you probably want at least two of as well, by the way), but I’ll talk about that more later.
I had a hard time because I decided to juggle lots of partitions between my SSD and HDDs, so I won’t go into that, and I’d recommend just giving Windows its own disk, preferably an SSD. You can shrink the C: partition and move your Linux and GRUB onto it as well if you really want. I’ll talk about this more later. Anything is possible really though.
Update: I have a 250GB SSD for the host and a 1TB SSD for the guest now, so both have good performance and no space issues. I also have a 3TB HDD and use Steam Library Manager to move games between the 1TB SSD and the HDD.
Not really hardware, but still.
There’s no reason not to use Windows 10 as the guest, as it’s the most optimised and has exclusive features. The built-in malware/telemetry isn’t such an issue inside a VM either. Basic/Pro doesn’t really matter, but you can even get it effectively for free by putting in a matching Windows 7/8/etc licence key, which is what I did.
I used Arch with GNOME as the host as it’s well supported and I’m very familiar with it. Fedora is the next best option, but I haven’t used it much personally. Just make sure you have as new a kernel as possible, with KVM and IOMMU configurations enabled.
Update: I’ve switched to i3-gaps now, otherwise all the same. btw I use arch
I followed the Arch Wiki guide pretty religiously, and so should you. It’s the best single source around, but you will probably have to look for tips in many other guides (such as this one!) before you’re done. This section just covers what I ran into that isn’t necessarily mentioned in the guide.
Enable IOMMU on Host
I won’t go into too much detail as it varies with your hardware, but what I ended up doing was:
- Turned on VT-d in motherboard settings, set the default GPU to “onboard”, and plugged my monitors into the I/O panel (easy to forget!)
- Added intel_iommu=on to my kernel command line
- Disabled my dedicated GPU using vfio-pci
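After rebooting, it’s worth checking that the kernel actually enabled IOMMU and seeing how your devices are grouped. A sketch (the helper function name is my own; the sysfs layout is standard):

```shell
# List every IOMMU group and the PCI devices in it.
# No output at all means the kernel hasn't enabled IOMMU.
list_iommu_groups() {
    base=${1:-/sys/kernel/iommu_groups}
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue      # glob didn't match anything
        group=${dev#"$base"/}
        group=${group%%/*}
        printf 'IOMMU group %s: %s\n' "$group" "${dev##*/}"
    done
}

list_iommu_groups
```

Ideally your GPU and its HDMI audio function end up in a group with no unrelated devices in it.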
Installing Windows on Guest
Unfortunately virt-manager doesn’t let you use real disks through the “New Virtual Machine” wizard, so just give it a temporary virtual disk and hook up your Windows 10 ISO, which you remembered to download in the meantime, right? Before running the VM, add the real disk as a custom storage device by selecting SATA and browsing to e.g. /dev/sdb, and remove the virtual disk.
I also had to specify “host-passthrough” as the CPU model, and manually specify the topology to prevent each core being its own socket.
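For reference, the relevant XML ends up looking roughly like this (a sketch assuming a 4-core CPU with no hyperthreading, like my i5; adjust the counts for your own):

```xml
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='1'/>
</cpu>
```

Without the topology element, libvirt tends to present each vCPU as its own socket, which some software handles badly.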
I ran into an issue where /dev/kvm wasn’t in the right group, so virt-manager couldn’t access it, but it fixed itself after a few reboots. You can manually chown it to the kvm group if necessary.
Now for the exciting part!
Note that you might be tempted to cut off all the virtual hardware straight away, but it’s not that simple; read through if you don’t want to struggle like I did.
Passing the GPU should be as simple as adding it to the VM as a “PCI Host Device”, and usually its accompanying HDMI audio device as well. If it’s an Nvidia card, there are some more hoops to jump through to install the driver, but I won’t cover them. My VM wouldn’t boot without the QXL video device for some reason, so I just left it in and disabled the device in Windows.
Note that up until this point you will have been using the VM through the remote display in virt-manager, but you should be able to see output on a monitor plugged into your GPU now. Clicking in the remote display will still let you use your mouse and keyboard to control it until you press LCtrl+LAlt. This confused me for a while!
How you connect your GPUs and monitors is totally up to you. I’ve plugged both of my monitors into the motherboard, then one also into the GPU using a different port, so I can manually switch input to see the guest, and still use dual monitors in the host if I want to. If you have the VM running all the time, then you probably only need one connected to each. If you only have one monitor, then you will need to plug it into both and switch back and forth, which would be very annoying. Supposedly there are monitors which switch to the active input automatically, which would be ideal, but I’ve never seen one. A possible idea would be to automatically disable the unused host monitor while the VM is running.
1. Turns out my Philips monitor at work switches inputs automatically, and it’s really annoying when my laptop goes to sleep!
2. I use ARandR to quickly disable the hidden output via a GUI. Haven’t found it enough of an issue to make a script let alone trigger automatically.
3. I have LookingGlass set up to be able to efficiently view the guest output in a window in the host. It’s got about 3–4 frames delay in my testing which is on the edge of noticeable, but you can use it for capturing for streaming for example.
This is surprisingly the most annoying part of the entire setup!
The remote display is what is passing your audio to the host, even if no screen is being displayed, but you probably don’t want to have to keep it open all the time. I set up PulseAudio passthrough to get around this. The wiki guide covers this, and it’s very easy to get working.
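The PulseAudio passthrough boils down to pointing QEMU at your user’s PulseAudio socket via environment variables in the XML. A sketch (the uid 1000 path is an assumption, and the qemu namespace has to be declared on the domain element for libvirt to accept it):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the domain definition ... -->
  <qemu:commandline>
    <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
    <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
  </qemu:commandline>
</domain>
```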
The problem is… neither gives perfect audio quality. There was hitching in the remote display passthrough, just like there’s graphics hitching, as you might expect. PulseAudio gives crackling, but I managed to get it to almost completely disappear by setting Windows to use 44.1kHz instead of 48kHz. It still occasionally skips under heavy load though, so it seems to depend on CPU performance, and you might have a better time with an i7 for example.
The ideal method would probably be a hardware solution, passing guest audio out via HDMI and mixing it with the host audio output.
Then you also have to worry about your microphone input. I’m happy to just run Discord in the host, but for in-game VOIP you’ll need the guest to see the input. I haven’t tried any solutions for this yet, but again hardware is probably the way to go.
Update: So, this is still by far the most annoying part of the setup.
1. I pass the guest audio out via GPU HDMI to the monitor, then monitor 3.5mm into my Logitech Z337 speakers.
2. The host audio is also passed out from the motherboard 3.5mm to the speakers which mixes both physical inputs and also bluetooth from my phone.
3. THEN, I have Logitech G933 wireless headphones which I use sometimes. I pass 3.5mm out from the speakers to the USB dongle, and DON’T use its USB audio output device directly!
4. However, do use the USB audio input for the headphone’s microphone. The dongle doesn’t pass microphone out via 3.5mm for some reason, but you can plug the headset 3.5mm directly into some 3.5mm input but then it’s not wireless…
5. To access microphone input in the guest, you can pass the USB dongle to the guest; just remember to disable the output device like in the host, and you’ll still get mixed output via 3.5mm to the dongle. To get it in both guest and host simultaneously, you can pass the input via PulseAudio, but I couldn’t get it to not sound horrible so I gave up. This can be used for in-game VoIP, but I still just use the host.
6. To access guest audio in the host, you can use PulseAudio but you have to use qemu-patched to make it sound good. In the guest just switch the audio output device. You should still be able to hear the guest audio via the host audio, so you don’t need a hardware mixer if the quality is acceptable to you. This can be used for capturing for streaming for example though.
Input and other USB
Next you have to figure out how you’re going to pass mouse and keyboard control between the guest and host.
Yet again a hardware solution is probably ideal here. There are USB hubs which can switch between two hosts at the press of a button for example.
You can do something similar in software using evdev. See this thread for details, but make sure to use the by-id path instead of event0 etc. Essentially: add your user to the input group, add the input devices to qemu.conf, and add the command line args to your XML. You also probably want to use the virtio input drivers instead of PS/2, installed the same way as above. You can press both Ctrl keys at once to toggle between host and guest.
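The XML side of the evdev setup looks roughly like this (a sketch; the by-id device names are placeholders for your own, and the qemu namespace must be declared on the domain element):

```xml
<qemu:commandline>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=kbd1,evdev=/dev/input/by-id/usb-Example_Keyboard-event-kbd,grab_all=on,repeat=on'/>
  <qemu:arg value='-object'/>
  <qemu:arg value='input-linux,id=mouse1,evdev=/dev/input/by-id/usb-Example_Mouse-event-mouse'/>
</qemu:commandline>
```

The same by-id paths also need to be added to the cgroup_device_acl list in /etc/libvirt/qemu.conf so QEMU is allowed to open them.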
A more flexible version of the evdev solution is to set up some scripts to attach and detach devices to and from the VM on demand. This is detailed in this blog post, but essentially you set up the scripts, bind them to global hotkeys, and set up ssh from the host to guest to trigger the detach. It takes a lot more effort to set up, but the tradeoffs are the best in my opinion.
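The attach/detach scripts boil down to feeding virsh a small hostdev XML snippet. A sketch (the VM name win10 matches my setup; the vendor/product IDs are placeholders, get yours from lsusb):

```shell
# Build the hostdev XML for a USB device given its vendor and product IDs.
usb_xml() {
    cat <<EOF
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x$1'/>
    <product id='0x$2'/>
  </source>
</hostdev>
EOF
}

# Bind these to global hotkeys; the detach side is triggered over ssh
# from the guest when you want control back.
attach_usb() { usb_xml "$1" "$2" | virsh attach-device win10 /dev/stdin; }
detach_usb() { usb_xml "$1" "$2" | virsh detach-device win10 /dev/stdin; }
```

Usage would be e.g. attach_usb 046d c332 for a mouse with that ID.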
Synergy is an excellent software solution (and well worth the price in general), but it has its limits. You’ll want to run the server on the guest for maximum performance and compatibility — everyone might not notice sluggishness, but I quickly ran into games not handling the mouse speed properly when running as the client. This means you have to pass the mouse and keyboard devices through, which is annoying as you can’t control the host while the guest is booting or shutting down, and could even get stuck if the VM hangs. I’d recommend adding and removing the USB devices manually each time you boot and shut down until you’re sure it works.
The last thing to mention here is that you’ll probably want to pass through an entire USB bus instead of individual devices. This is covered very well in the wiki guide. This allows the guest total control over devices plugged into certain ports, instead of having to manually assign each device (e.g. bluetooth dongle, controller, USB drive). It also stops virt-manager from complaining if the device isn’t present when starting the VM. It also allows you to plug things into the host instead if needed — unfortunately my motherboard runs all case ports off of one bus so I can’t do this currently.
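Passing a whole USB controller is just another PCI hostdev, like the GPU. A sketch (the 00:14.0 address is typical for an Intel onboard xHCI controller, but check lspci for yours):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
  </source>
</hostdev>
```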
Update: I pretty much only use Synergy now since getting Scroll Lock to lock the cursor to the screen properly in games; it’s easy and I haven’t noticed any lag. See below for setup details.
You can totally remove the Spice remote display from the VM now, and no longer need to keep virt-manager open at all. You should be able to run games with basically native performance now!
There’s a lot of tricks to improve performance. You’ll probably want to try these out one by one after you have everything at least working.
A lot of these are from this great thread and elsewhere.
With all of these applied I get excellent performance, I just wish I had more CPU cores to spare!
Pinning CPU Cores
Prevents the host moving the VM between cores, which is inefficient. Dead simple, check out my XML for an example.
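In XML terms it’s just a cputune block. A sketch for a 4-core CPU giving every core to the guest (with more cores you’d normally keep one or two back for the host):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
```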
Huge Pages
Prevents the host swapping out the VM’s memory as much. Also very simple. I passed hugepages=2048 in my kernel command line to allocate 4GB, then see my XML for how to get the VM to use it.
Update: I pass twice as much now.
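With the default 2MB page size, hugepages=2048 on the kernel command line reserves 2048 × 2MB = 4GB. The VM side is just a memoryBacking element:

```xml
<memoryBacking>
  <hugepages/>
</memoryBacking>
```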
Virtio Disk
Removes the overhead of emulating SATA. Add another fake disk and set it to use virtio, and hook up the driver ISO. When Windows complains about the device missing drivers, tell it to look on the CD drive. Reboot, remove the fake drive, then switch your main disk to virtio.
You can do this for the network device as well, in the same way.
Update: I stopped using this because the weird drivers seem to break Windows feature updates, and the performance is good enough normally.
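For reference, the end state in the XML is roughly this (a sketch using the /dev/sdb example disk from earlier):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdb'/>
  <target dev='vda' bus='virtio'/>
</disk>
```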
IO Threads
Gives each virtio device its own host thread, which you can even pin to a particular host CPU. Again, see my XML for an example.
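A sketch of the XML (pinning the IO thread to a spare host core; the iothreadpin goes inside the cputune element, and the disk’s driver element additionally gets iothread='1'):

```xml
<iothreads>1</iothreads>
<cputune>
  <iothreadpin iothread='1' cpuset='3'/>
</cputune>
```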
Bridged Networking
Makes it as if the VM is directly connected to your network. This is pretty important, as a lot of games rely on UPnP/NAT tricks when using P2P networking, and having a second NAT is exponentially more annoying. Supposedly bridging doesn’t work with WiFi, but I didn’t try.
Essentially you want to create a bridge network device (br0), enslave your ethernet NIC ($IFNAME below, e.g. enp3s0) to it, then connect through the bridge. This can all be done with your favourite network manager. Once set up, in your VM’s NIC settings select “Specify shared device name” and enter br0 as the bridge name.
NetworkManager: Delete any existing connections and run:
nmcli con add type bridge ifname br0
nmcli con add type bridge-slave ifname $IFNAME master br0
systemd: Delete any existing connections and add these files to /etc/systemd/network, and remember to enable systemd-resolved if not already:
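A minimal sketch of those files (assuming the enp3s0 NIC name from earlier, with DHCP on the bridge):

```ini
# /etc/systemd/network/br0.netdev
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/br0.network
[Match]
Name=br0

[Network]
DHCP=yes

# /etc/systemd/network/uplink.network
[Match]
Name=enp3s0

[Network]
Bridge=br0
```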
The best part about all this is that you’re using real hardware, so you can really take it as far as you want.
Normally you’ll be using your integrated GPU for your host graphics, which is significantly less powerful than a dedicated GPU, so you can’t run intensive games in the host.
One option is to just use two dedicated GPUs, if your motherboard can fit them and they’re in separate IOMMU groups, but obviously that would be expensive and probably wasteful since you’ll only want to play one game at a time anyway.
Another option is to not disable the dedicated GPU with vfio-pci at all, and run on it while not using the VM. This is pretty annoying though, since you have to restart everything when starting the VM.
Supposedly you can also use the dedicated GPU just for rendering games (with a performance penalty), and not the whole desktop, therefore not requiring restarting. Here’s a thread about it, but it probably depends on your hardware. I didn’t have any luck with this.
Anyone who’s dual booted before knows that Windows doesn’t like to play along, so it’s best to just leave it alone on its own entire disk. Two SSDs are again expensive and probably wasteful though.
My setup has Windows in the first two partitions, then Arch as another at the end of the disk. I initially installed Windows on a HDD and shrunk, copied, and expanded it, but that broke the Windows boot process (not fun to fix at all), so I’d do it the opposite way if I were to do it again.
Sharing both on the same disk has the problem of having to select the right OS in the GRUB menu. I’ve settled with defaulting to booting Windows after 5 seconds (which you can actually do outside of a VM if you want, by the way!), so that the VM boots without input. I have to remember to select Arch when booting initially though. [Edit] I’ve come up with a short GRUB script to automatically detect whether it’s running in the VM. It simply checks whether a filesystem which isn’t visible to the guest is present (replace $FSUUID with that filesystem’s UUID, and the default index with your Arch menu entry):
search --no-floppy --fs-uuid --set default_hack $FSUUID
if [ x$default_hack != x ]; then
    set default=2
fi
Regardless, it’s always a good idea to have a large HDD to install your games and portable apps onto, so they’re not dependent on any particular OS.
If you need to pass files between host and guest, you have a few options: USB drive if you have two USB buses (which I don’t), normal shared folders exposed to the VM, normal network sharing, or another partition which you manually mount and unmount (my favourite since it’s the most flexible and performant).
Update: Got a separate SSD for the guest, so much easier!
Overall I’m really surprised by how well it all works. I feel like I got pretty lucky with my hardware though, since I built this PC back in early 2014 with no thought about this at all, but as it’s slightly older it’s also probably better supported.
As I’ve mentioned, there’s a few annoyances for me still: no second USB bus, awkward audio routing, manually switching displays, no access to dedicated GPU in host, manual bootloader OS selection. But the positive is that all of these are totally avoidable with some additional hardware!
It’s definitely not for the faint of heart, but I’m hopeful it’ll only continue to improve in the future. It took me most of a weekend to get to this point, and I work with Linux for a living. Virtually every component of the VM requires manual tweaks to optimise performance as well.
So in conclusion, with the right hardware and a bit of work, this is a completely viable solution today, and I’m very excited to not have to shoehorn development software into my Windows installs any more!
[TODO add benchmarks]
6 Months Later
Still using this setup!
I’ve switched from GNOME to i3, which requires explicitly starting a polkit agent, but otherwise makes no difference. At some point the VM started using 100% CPU for maybe 10 seconds while starting up, but I think it’s something messed up in my config.
I upgraded to an i7 4790 and 16GB of RAM, so I don’t have to worry at all now.
Displays and audio have been juggled around a bit. The guest is now attached via HDMI to my main monitor, with the guest using it as the default audio device. I then connect the headphone out from the monitor to my new Logitech Z337 speakers using a 3.5mm to RCA adapter. The host is connected to the same monitor via DisplayPort, and my second monitor via DVI, and directly to the speakers with 3.5mm. The speakers are amazing and mix the two inputs together, and even a third bluetooth input from my phone. They’re even smaller and more powerful than my old ones, 11/10 would buy again. I also bought a USB to headphone/microphone converter (Logitech again), so that I can plug my headset (also Logitech) into its microphone input and the speaker’s headphone output at the same time since they can be right next to each other. I still just run Discord on the host, haven’t needed to figure out in-game voice yet.
The only thing that still annoys me is USB and input control. I’m currently using a combination of Synergy with the host as the server for general usage, and the ssh script method to attach the devices for actual gaming, mostly because of the raw mouse input going crazy over Synergy. I tried adding a PCI USB controller but it was second hand and didn’t work, a new one might be fine though. USB hubs also can’t be passed through directly. The microphone converter also doesn’t seem to work, I just stop getting any input, otherwise that would be a good solution to in-game voice.
So yeah, still very happy. Windows 10 has only degraded more (the new feature update even bricks the install), glad to be rid of it.
9 Months Later
Still going strong.
I couldn’t resolve the issues with the Windows update, and booting directly into it only made things worse. My best guess is Windows doesn’t like the virtio drivers while in various “safe” modes, so this can probably be avoided by switching to less performant settings for the yearly “upgrade”.
I also realised I didn’t actually use UEFI mode for the VM. Turns out I’m using BIOS/legacy mode for my Linux host as well, and since they share an SSD I couldn’t be bothered switching it around.
I ended up wiping the Windows partition and expanding the Linux partition to take up the entire SSD. I reinstalled Windows inside a virtual disk instead. The disk performance isn’t as good, and I don’t have the free space to be copying games onto it willy-nilly like I used to. I’d definitely recommend having a dedicated SSD for Windows. I can’t justify buying one right now, but definitely will in the long run, which will resolve all these awkward partition problems I’ve had.
As Saren Arterius mentioned in the comments, there is a solution for the mouse issues I was having with Synergy. Just add relativeMouseMoves = true to your synergy.conf, like so:
relativeMouseMoves = true
screenSaverSync = true
You can also hit the scroll lock key to lock the cursor to a screen, if a game doesn’t already do this. This does indeed make it work fine, but I can personally notice the lag so I’ll probably stick with my existing convoluted detaching and reattaching USB devices via ssh solution. I still use Synergy for convenience for everything except in game though.
I’ve found a fun solution here. I’m still using Discord etc. in the host, but the problem is my mouse and keyboard are attached to the guest while playing, so I can’t detect input for push-to-talk. I bought a cheap USB foot pedal from eBay, and with a bit of tweaking to a vim-clutchify script I found, you can actually map it to a keyboard key. Since the pedal stays attached to the host, it can still be used for PTT. Also, it’s one less thing you need your hands for. And it’s just cool anyway. Obviously I still can’t use in-game VoIP in the guest, but text chat is more than enough for me to troll people anyway.
Essentially you want to grab https://github.com/twitchard/vim-clutchify and follow its instructions. In my case the device shows up as 413d:2107, so I had to hard-code that into the script instead. I then changed the keys pressed to only be a down or up on F13, not a down and up together, i.e.:
import re
import evdev
from evdev import UInput, ecodes as e

# Find the pedal by its hard-coded ID, then mirror its presses as F13.
devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()]
device = next(x for x in devices if re.search("413d:2107", x.name))
with UInput() as ui:
    for event in device.read_loop():
        if event.type == evdev.ecodes.EV_KEY:
            if event.value == 1:
                ui.write(e.EV_KEY, e.KEY_F13, 1)
            elif event.value == 0:
                ui.write(e.EV_KEY, e.KEY_F13, 0)
            ui.syn()
Still very happy. Would like to get that second SSD eventually. Also still need to try out solutions for using the dedicated GPU in the host while the guest isn’t running. All of the other annoyances I’ve gotten used to and don’t really need solving.
18 Months Later
Got that second SSD, made life a lot easier.
Got a new wireless headset. My old one was wired 3.5mm which I used via a USB dongle. The new one also uses a USB dongle but it’s wireless, and most importantly surprisingly has a 3.5mm input which I can pass the mixed output into from the speakers. Microphone input comes via USB though, which is alright. Routing audio is such a nightmare, but it works, see the audio section above for way too much detail on different combinations I use with this.
Looking Glass - Quickstart Guide
This is a pretty cool tool which captures your video output on the guest and lets you view it in a window on the host with minimal latency. It works pretty well, again check the video section above for details. Requires a little setup as detailed in the link but nothing too hard.
So one minor annoyance is that when booting the VM it spikes to 100% CPU and takes a long time. Turns out this is because Arch’s default kernel has full preemption enabled. I’ve started compiling my own kernels with CONFIG_PREEMPT_VOLUNTARY=y and without CONFIG_PREEMPT=y in the config, which makes the boot much smoother.