Virtualizing your personal computer

I built a desktop this summer because I wanted to buy a Vive and start developing in virtual reality, but my MacBook’s GT 750M just wasn’t going to cut it. Below, I detail how I game, play in VR, and do my day-to-day work in a virtual machine, and why I chose to make a VM my daily driver.

Here’s the PCPartPicker link. I’ll put the highlights below, but the hardware is definitely not the interesting part.

Xeon E3-1240 v5 @ 3.50GHz / 16GB DDR4 RAM / MSI GTX 1070 Aero 8G / 256GB Samsung 950 Pro NVMe SSD

The weirdest thing about my computer is the software. It runs Windows in a virtual machine on top of Slackware Linux. Windows is installed on the 950 Pro SSD; Slackware boots off a flash drive plugged into an old USB PCI card my summer job’s boss’s husband (inhale) gave me. I game and do homework in the Windows virtual machine, while Slackware acts as a hypervisor and NAS and runs several Docker containers. I have two Fedora VMs set to use the same virtual disk, so when I need a Unix environment for stuff like web development, I can run Fedora alongside Windows or in lieu of it. Sometimes, when I’m feeling really adventurous, I’ll even spin up a Gentoo VM.


Why do this?

When my mom asked me this question, I told her about the data protection of my storage pool’s parity system and how serving all my files as SMB shares makes it easier to work fluently across my laptop and my desktop. To be honest, though, I just like to tinker. This stuff is fun for me, and if you’re reading this, I bet it’s fun for you too.


I’d been toying with the idea of virtualizing a primary machine for years; I even did a bunch of research into it back in my sophomore year of high school, when I built my brother a computer for his birthday. I had a flash drive with ESXi ready to go and everything. The issue holding me back was all the uncertainty. This is the sort of complex configuration that can look fine on paper but cause hell when you run into problems you couldn’t possibly have known about beforehand. I knew it was theoretically feasible to pass my GPU to Windows and play games, but I was worried about the performance lost to hypervisor overhead and the extra complexity I’d be saddling my brother with.

I finally decided to go for it on my own machine when a popular YouTube channel called LinusTechTips released a video showing two people gaming on one CPU by running two copies of Windows in virtual machines. They also released another video that employed a similar technique to use a gaming computer as a file server on the side, which is what I was looking for all along. Their video is okay, and I did end up using the same software they use, but I think I’ve improved upon their concept in a few very meaningful ways.


Why Unraid?

Unraid is a commercial product based almost entirely on open-source tools. It serves files with Samba and manages virtual machines with KVM, QEMU, and libvirt. It runs on a modified form of Slackware Linux and boots from, and is licensed per, a flash drive.

I could have configured all the tools I needed to achieve the same end result myself, but Unraid provides an extremely helpful web UI to manage it all, as well as a forum with a vibrant, helpful community and regular visits from the developers themselves. Don’t underestimate the value of a large community when working on projects like this; it can mean the difference between succeeding and getting stuck and having to give up. The LTT videos got hundreds of thousands of views, which was awesome for me because it meant an influx of people trying to do the exact same thing I was doing, with the exact same software.

Configuration, Challenges, and Solutions

Single GPU Passthrough

My CPU does not have onboard graphics, and my motherboard does not have a video output. The only graphics device in my computer is my GTX 1070, which the Linux host grabs when the computer first boots. Normally, that means you can’t use the card in a VM because it’s already in use. There’s a fantastic thread on the Unraid forums, but the basic gist is that you need to borrow a second graphics card so the card you want to pass through isn’t claimed by Linux at boot. Install the borrowed card in your first PCIe slot and your card in the second, then boot the computer. Dump your card’s ROM to a file, then add a line to the virtual machine’s configuration telling it to load that specific ROM. If you’re having trouble dumping the ROM, try starting a VM with the video card attached, shutting the VM down, then pulling the ROM.
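As a sketch of those last two steps, the ROM can be dumped through sysfs and then referenced from the VM’s libvirt XML. The PCI address, file path, and bus numbers below are assumptions for illustration; find your card’s address with `lspci`.

```shell
# Dump the GPU's video BIOS through sysfs (needs root and real hardware;
# 0000:02:00.0 is an assumed PCI address):
#   echo 1 > /sys/bus/pci/devices/0000:02:00.0/rom
#   cat /sys/bus/pci/devices/0000:02:00.0/rom > /root/vbios/gtx1070.rom
#   echo 0 > /sys/bus/pci/devices/0000:02:00.0/rom

# Then point the VM's hostdev entry at that file. This writes the XML
# fragment you would splice into the VM definition via `virsh edit <vm>`:
cat > gpu-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <rom file='/root/vbios/gtx1070.rom'/>
</hostdev>
EOF
```

The `<rom file='...'/>` element is what tells QEMU to hand the guest your dumped ROM instead of trying to read it from the in-use card.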

Per-device USB Passthrough Sucks

The method LinusTechTips uses in their video to attach peripherals to the virtual machine involves opening the web UI, modifying the VM configuration, then restarting the VM. Not only is that annoying and extremely cumbersome, but as far as I can tell it doesn’t even work for iPhones or for devices that present themselves as multiple devices, like my Vive. My solution was to pass my motherboard’s onboard USB controller through to the virtual machine. Your motherboard’s rear and front panel USB ports hang off a controller that shows up as a PCI Express device, so as long as you have another USB card for your boot flash drive, you can pass the onboard USB controller straight through to your VM like any other PCI device. The only hiccup is that Linux claims your onboard USB by default, so you may need to keep Linux off it by stubbing the device (follow these instructions to stub, but for the USB controller).
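Stubbing comes down to finding the controller’s vendor:device ID and handing it to the `vfio-pci` driver on the kernel command line so the host never binds its own USB driver. A minimal sketch; the `lspci -nn` line is a hardcoded sample, and your controller’s name and IDs will differ.

```shell
# Sample `lspci -nn` output line for an onboard xHCI controller
# (hardcoded here for illustration; run `lspci -nn` yourself).
line='00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f]'

# Pull the trailing [vendor:device] pair off the line.
id=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]$' | tr -d '[]')
echo "$id"  # -> 8086:a12f

# That ID then goes on the kernel command line (e.g. in syslinux.cfg on
# the boot flash drive) so vfio-pci claims the controller at boot:
#   append vfio-pci.ids=8086:a12f ...
```

Once the host boots with the controller stubbed, it appears as a free PCI device you can attach to the VM, and everything plugged into those ports hotplugs into Windows like on a bare-metal machine.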

Front Panel Sound

Same deal as the USB ports: pass the front panel audio controller through as a PCI device. This time, though, Intel Skylake chips group the front panel sound and the SMBus into the same IOMMU group. It’s an easy fix: just stub the SMBus.
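You can see which devices share a group by walking `/sys/kernel/iommu_groups`; everything in a group must be passed through, or stubbed, together. A tiny sketch, with the sysfs path hardcoded as an example since the real listing depends on your hardware (`0000:00:1f.4` is a typical address for the Intel SMBus, but yours may differ):

```shell
# Helper: extract the IOMMU group number from a sysfs device path.
iommu_group_of() {
  g=${1%/devices/*}         # strip the trailing /devices/<address>
  printf '%s\n' "${g##*/}"  # keep only the group number
}

# On a real host you would enumerate every device in every group:
#   for dev in /sys/kernel/iommu_groups/*/devices/*; do
#     echo "group $(iommu_group_of "$dev"): ${dev##*/}"
#   done

# Example with an assumed path for the SMBus:
iommu_group_of /sys/kernel/iommu_groups/13/devices/0000:00:1f.4  # -> 13
```

If the audio controller and the SMBus print the same group number, that is the Skylake grouping described above, and stubbing the SMBus frees the group for passthrough.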

The Final Product

Overall, I don’t know that I would do this again. The front panel sound issue, for example, was a huge unexpected headache that put me out of commission for nearly an entire day. When I first set up my computer, passing through the front panel sound didn’t require stubbing. At one point, Unraid upgraded the Linux kernel it ships, and after I downloaded the upgrade, my Windows VM would no longer boot at all. As it turns out, the only reason it had worked in the first place was a bug in the Linux kernel that prevented Linux from correctly identifying or using the SMBus, effectively stubbing it for me. It makes sense now why the update broke my VM, but when it happened I wasted a lot of time diagnosing it, and not being able to use my computer all day was incredibly stressful. It’s just one example of the many random things you run into, and could not have foreseen or prepared for, when you run your computer in a VM.

The performance is great. I only pass through six of the eight threads on my CPU, so multithreaded benchmarks like Cinebench scale pretty much like you’d expect. Single-threaded benchmarks measure about a 5–10% decrease in per-thread performance versus bare metal, but honestly I don’t notice the difference. I get the performance I expect out of the games I play, as GPU performance doesn’t appear to be affected at all.
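Giving the guest six of eight threads is done in the VM’s libvirt XML. A sketch of what that allocation can look like; the pinning layout is an assumption, based on the common advice for a 4-core/8-thread Intel chip of pinning hyperthread sibling pairs (core N’s siblings are threads N and N+4) and leaving one full core to the host.

```shell
# Generate a fragment pinning 6 vCPUs to host threads 1-3 and 5-7,
# leaving core 0 (threads 0 and 4) for the Linux host. Splice into the
# VM definition with `virsh edit <vm>`.
cat > cputune.xml <<'EOF'
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
</cputune>
EOF
grep -c vcpupin cputune.xml  # -> 6
```

Keeping a whole core free is what lets the host handle the NAS and Docker work without stealing time from the guest mid-game.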

I do rely on the NAS functionality and Docker appliances in my daily workflow; I don’t think I’d be able to use two computers as daily drivers so fluidly without the NAS. I’ll be detailing my multi-OS, multi-computer workflow in an upcoming series, so stay tuned.

That being said, if I had the money, I’d build a separate machine to house the NAS and Docker stuff, not because I need the extra ~10% performance of bare metal, but because it would eliminate a lot of the complexity of the setup.
