[2/2] From Amazon to Windows 10 on a KVM

Part two — from a running computer to Windows 10 on a KVM

Romain Mondon-Cancel
30 min read · Aug 29, 2017

A new computer for a fresh start

This is the second part of the story behind my new computer. I described how I built my computer and installed Ubuntu on it in a previous post. As a reminder, here are the different features I want for my computer:

  • Having a 24/7 Linux-based machine running on my local network, offering different services to the other machines, notably a Synergy server for mouse and keyboard sharing and a Samba server for file sharing,
  • Setting up remote access to my Linux system through VPN on my router using OpenVPN, with SSH access and VNC through SSH tunneling,
  • Setting up Wake On LAN for both my Linux and Windows systems,
  • Using NVidia GameStream to play on my TV from the Windows system (and ideally, from anywhere in the world; talk about portability).
My desktop, up and running!

Chapter 4 — Setting up Ubuntu

First things first, let’s update our system; open a “Terminal” (using the top-left icon to open a search field) window and run:

sudo apt-get update
sudo apt-get upgrade

apt-get is the package manager used by Ubuntu to manage the software installed on the machine. After a quick reboot, everything should be up to date.

Firewall, SSH, Vim and git

For security reasons, even though the system is supposedly behind my router's firewall and hence not directly accessible from outside the local network, it is better to also set up a firewall on the machine itself; we can do that with the easy-to-configure ufw on Ubuntu:

sudo ufw enable

Now, let's install a couple of basic things (which I am always surprised are not installed by default): OpenSSH, a service that will run on our computer and allow us to connect to it remotely over the network using the SSH protocol; Vim, a console-based text editor; and Git, a version control system, which we will mostly use to download the latest versions of different projects.

Vim is of course a personal choice, feel free to use whichever text editor you prefer; nano or gedit (if you’d rather have a GUI) are for example great alternatives.

sudo apt-get install openssh-server
sudo apt-get install vim
sudo apt-get install git

We definitely want to allow SSH access to our system from the rest of the network; SSH uses port 22 by default:

sudo ufw allow 22

Generally speaking, when setting up networking features, it is a good idea to temporarily disable ufw to make sure things work as expected before setting up the permissions. To do so:

  • use the sudo ufw disable command to disable ufw,
  • set up and test your new feature until it works,
  • use the sudo ufw enable command to re-enable ufw,
  • add the relevant rules to ufw to allow traffic for this new feature.

It will save you precious minutes trying to tweak a .conf file before realizing it was just the firewall denying the traffic.
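
As a sanity check, the rules currently in place can be listed at any time with ufw's standard status command:

sudo ufw status verbose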

After a reboot to make sure the services start as intended, I configured my router to statically allocate the IP address of my computer: my-desktop -> 192.168.0.80.

Overclocking

As the i7–7700K is supposedly specifically designed to be overclocked, I reckoned it would be a shame not to overclock it. Nevertheless, I wanted to be pretty conservative when overclocking it; I want my system to be reliable rather than powerful.

First things first, if we want to overclock, we need to install a couple things to stress test our CPU and monitor its temperature. On Ubuntu, I used:

  • prime95, software for distributed prime searching; it's excellent for pushing the CPU to its limits. Once the tar.gz file is downloaded, simply extract it to some folder, then go to that folder in the command line (cd name_of_folder to move to a specific folder, ls to list files in a folder) and run ./mprime. It will then ask a couple of things; answer that we just want to torture the CPU.
  • To check the status of our CPU, we are going to install lm-sensors, a console-based application that displays the readings of our computer's sensors. Let's install it with sudo apt-get install lm-sensors. We then need to detect our sensors with sudo sensors-detect; this will ask us a couple of questions, let's answer "yes" to all of them. Finally, running sensors will display information about our desktop, including the CPU core temperatures.

We will then typically run mprime for some time while running sensors in another terminal to regularly check the status of the CPU cores.
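
In practice, the two terminals look something like this (a minimal sketch, assuming the prime95 archive was extracted to ~/prime95; the -t flag starts the torture test directly):

# Terminal 1: stress the CPU
cd ~/prime95
./mprime -t

# Terminal 2: refresh the sensor readings every 2 seconds
watch -n 2 sensors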

I partly followed this great guide to overclock the i7–7700K, which is an in-depth guide to overclocking. I had a pretty lazy approach, because I did not want to reach the optimum, but rather a good and stable configuration.

Let’s restart our computer and access our BIOS. Here, there should be a menu to access the clock setup of the motherboard; on mine, it was the “OC” menu (for overclocking; duh). The most important thing is the “CPU Ratio”, which is an integer; it is the value by which the “CPU Base Clock” frequency is multiplied to determine the CPU frequency. By default, the base clock is set to 100 MHz, which means a CPU Ratio of 42 (my default value) will bring the CPU frequency up to 4,200 MHz, or 4.2 GHz. I started by setting the “CPU Ratio” to 48, bringing the frequency to 4.8 GHz, before saving and restarting the computer.

Back on the Ubuntu system, let's stress test this setup: open a terminal, navigate to and run the mprime application, asking it to torture our CPU; once it's running, open a second terminal to monitor the CPU core temperatures with sensors. It should reach high temperatures pretty quickly, up to about 80 degrees (Celsius). After letting mprime run for some time (ideally, for about an hour, but I left it for about twenty minutes), if Ubuntu is still up and running, it's good news: the system is stable at this frequency. If the system crashed, well, tough luck, you'd better tune the CPU frequency down or increase the voltage.

It ran smoothly on my side, but the temperatures reached were slightly higher than what I wanted, up to 83/84 degrees; I hence decided to tune the CPU down to 4.7 GHz, and called it a day.

Synergy

Synergy is a paid software developed by Symless to share a keyboard and a mouse across multiple computers. It requires one server and multiple clients, one for each additional system to connect. It is a very convenient way to share a keyboard and a mouse between a supervisor and a KVM.

After buying Synergy Pro (for encryption; I don't like the idea of keystrokes being sent over the network unencrypted, even a local one), we can go to the download page to download the .deb file. A double-click on the file and it's installed on the system.

Once installed, let's start the application by searching for "synergy" in the start menu. In the window that just opened, check "Server" to run Synergy as a server on Ubuntu. We will then have to set up the other Synergy applications as "Client" on the other computers, specifying the IP address of this computer (192.168.0.80 in our case) to connect them to the server.

For a basic configuration of the server, click on the “Configure Server…” button; then, on the “Screens and links” tab:

  • drag and drop the screen on the upper-right corner somewhere next to the central screen (my-desktop with my specific configuration),
  • double-click on it to edit the screen settings: we want to give the screen the name of our future Windows KVM, which for me will be my-windows.

For a more dynamic setup, I also ticked the "Check clients every 5000ms" option in the "Advanced server settings". I also ticked "Use relative mouse moves", which helps full-screen games work properly with Synergy. Finally, I also set up a couple of hotkeys to switch more easily between my screens:

  • I set up the Ctrl+Shift+F1 hotkey to switchToScreen(my-windows),
  • the Ctrl+Shift+F12 hotkey to switchToScreen(my-desktop),
  • and the Ctrl+Shift+ScrollLock hotkey to lockCursorToScreen(toggle), to prevent my cursor from mistakenly leaving the current screen when I don’t want it to.

To make sure other computers will be able to connect to Synergy, we have to add a rule to ufw, with the following command, to allow connections on port 24800 (the port used by Synergy):

sudo ufw allow 24800

Finally, I wanted Synergy to start on boot as a daemon, without the GUI visible, for a smoother experience. For that, we first have to save our settings as a configuration file, with the "File > Save Configuration as…" menu. I saved it under ~/.synergy/synergy.conf.
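
For reference, the saved file is plain text; here is roughly what the relevant parts look like with the settings above (a sketch only: the screens and links sections depend on where you dragged each screen, and the exact key names may differ slightly from the GUI labels):

section: screens
    my-desktop:
    my-windows:
end
section: links
    my-desktop:
        right = my-windows
    my-windows:
        left = my-desktop
end
section: options
    relativeMouseMoves = true
    keystroke(Control+Shift+F1) = switchToScreen(my-windows)
    keystroke(Control+Shift+F12) = switchToScreen(my-desktop)
    keystroke(Control+Shift+ScrollLock) = lockCursorToScreen(toggle)
end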

Now that we have our configuration file, we want to ask Ubuntu to start the Synergy server as a daemon on boot. For that, we have to search for "Startup Applications" in the start menu. Here, we simply add a new application to start on boot, with the following setup:

  • Name: whatever we want; I called it “Synergy”,
  • Command: synergys --config /home/[username]/.synergy/synergy.conf --enable-crypto (the s at the end of synergys is for server; don't forget to replace [username] by your Ubuntu username),
  • Comment: whatever we want; I left it empty.

Here we go! If we restart our computer and run the command ps -e | grep synergy, it should display a line containing synergys, which means the service is correctly configured and the Synergy daemon starts properly.

Samba

Samba is a software suite that implements a network protocol to share files across computers, notably with Windows systems. Setting up a Samba server on Ubuntu allows us to mount an Ubuntu folder on the Windows KVM as a network drive.

To install samba, simply run the following in a Terminal:

sudo apt-get install samba

This will automatically set up our system to run the samba service on startup, so there is not much more to do apart from configuring the server the way we want. First, let's configure ufw to open access to the samba daemon from other computers on our local network:

sudo ufw allow from 192.168.0.0/24 to any app Samba

Where 192.168.0.0/24 means any IP address of the form 192.168.0.XXX, which are the addresses assigned by my router.

The configuration file of samba is located at /etc/samba/smb.conf; I would recommend writing a backup first with the cp /etc/samba/smb.conf /etc/samba/smb.conf.bak command, but if there is any problem there always remains a default configuration file at /usr/share/samba/smb.conf. For my setup, I added the following lines to my smb.conf file:

First, in the [global] section of the file:

[global]
...
# States that the server is part of the LOCALGROUP workgroup
workgroup = LOCALGROUP
# Gives the name MY-DESKTOP to this server
netbios name = MY-DESKTOP
# Makes sure the server will follow our symlink to ~
allow insecure wide links = yes

The lines starting with a # are comments ignored by samba and only there to describe what each line does.

Finally, at the end of the file, after the [print$] group, let’s create a new shared folder on the Samba server called share:

[share]
# Just a comment to describe the shared folder
comment = Desktop Share Folder
# The path on the Ubuntu system to share
path = /srv/samba/share
# Typical configuration for a folder shared with Windows
available = yes
# Replace username by the actual username on Ubuntu
valid users = [username]
read only = no
browseable = yes
public = yes
writable = yes
# Makes sure the server will follow our symlink to ~
follow symlinks = yes
wide links = yes

The specific configuration to allow Samba to follow symlinks comes from this answer; because I have a symlink to ~, these lines are mandatory in my setup.
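
Depending on your setup, a couple more steps may be needed before the share is reachable; here is a minimal sketch, assuming the share directory does not exist yet and that the valid user still needs a Samba password (set with smbpasswd, separately from the Ubuntu password):

sudo mkdir -p /srv/samba/share
sudo chown [username]:[username] /srv/samba/share
sudo smbpasswd -a [username]
sudo systemctl restart smbd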

VNC

VNC, for Virtual Network Computing, is a convenient protocol for remote desktop access: it allows me to connect to my computer, viewing my screen and interacting with it remotely, which is great when I'm not home (in particular, I can do that from my smartphone; cool stuff). Problem is, native VNC is not really good with encryption, so we are going to require SSH tunneling to connect to our VNC server.

Ubuntu 16.04 LTS comes with a pre-installed VNC server called Vino. It's a pretty straightforward application with basic features. However, it is not easy to configure for a specific feature I do want: the ability to connect to my computer while it is still on the login screen. We are hence going to use x11vnc, which is only slightly more complicated to configure. First, let's install it:

sudo apt-get install x11vnc

Once installed, we first want to set up a password for a more secure handshake, with the following command:

sudo x11vnc -storepasswd /etc/x11vnc.pass

This will ask us for a password (twice), encrypt it and save it in the /etc/x11vnc.pass file.

Finally, we want to create a systemd service to automatically start x11vnc when the system boots. systemd is the software that manages the different services started when Ubuntu boots. For that, let's create a service file for x11vnc with the sudo vim /lib/systemd/system/x11vnc.service command. Here, press i to enter "Insert" mode and write the following configuration file:

[Unit]
Description=Starts x11vnc server at startup
After=multi-user.target
[Install]
WantedBy=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/x11vnc -auth guess -forever -loop -noxdamage -repeat -rfbauth /etc/x11vnc.pass -rfbport 5900 -shared

This script comes from this blog post. It will automatically execute the command written after ExecStart= at each startup, which will run the x11vnc server with authentication using the /etc/x11vnc.pass file and listening to port 5900.

Press "Escape" to leave "Insert" mode, then type :wq to write the file and quit vim. We now have our service script; all that is left is to enable it. For that, let's run:

sudo systemctl enable x11vnc.service
sudo systemctl daemon-reload

This will enable our script and reload the configuration to be properly loaded next time. After a quick reboot, the x11vnc server should start properly. We can check that the service is running with the sudo systemctl status x11vnc.service command.

To deny direct connections and make sure we use SSH tunneling, there is really nothing to do, apart from making sure ufw is enabled. By default, it denies connections on port 5900, which is the default port for the VNC protocol.

On Android, to connect to our VNC server through SSH tunneling, I use the free application bVNC Free:

  • in Connection Type, choose Secure VNC over SSH,
  • in SSH Server, enter the IP address of the system, in our case 192.168.0.80,
  • in SSH Username, enter our Ubuntu username,
  • in SSH Password, enter our Ubuntu password (we can also generate an SSH public/private key pair instead),
  • in VNC Server, enter localhost,
  • in Port, enter 5900,
  • Finally, type the VNC password chosen after installing x11vnc in the appropriate field.

We have to make sure the smartphone is connected to the same network as the computer for the connection to work (it must be connected to the router via WiFi). If the connection is successful, congrats! Everything works as intended.

For some explanation, we are here establishing an SSH connection to our computer (192.168.0.80) using our account credentials, then we create a tunnel through this SSH connection to the VNC server, which is available on port 5900 on the localhost address of the machine (localhost points to the local machine). ufw denies connections on port 5900 from outside the machine, but as we are already connected to the machine through SSH on port 22, which we allowed earlier, we can connect to port 5900 locally, from inside the machine.
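
The same tunnel can be built by hand from another Linux machine with ssh, before pointing any VNC viewer at the local end of the tunnel (a sketch; 5901 is an arbitrary local port):

ssh -L 5901:localhost:5900 [username]@192.168.0.80
# then connect the VNC viewer to localhost:5901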

Wake On LAN

Wake On LAN is a convenient feature that allows waking up a computer remotely, by sending a specific network packet called a "Magic Packet" to an Ethernet controller, which then wakes up the system. We can therefore switch on our computer remotely. This requires compatible hardware, but most recent motherboards with an integrated Ethernet controller support the Magic Packet.

We already set the Wake On LAN feature in the BIOS; now we have to do a little bit more configuration on Ubuntu to make sure everything works:

sudo ethtool -s eth0 wol g

Replace eth0 by the name of the network interface; we can list the different interfaces with the ifconfig command to identify the name of the Ethernet controller. Once done, if we run the sudo ethtool eth0 command, we should see a line with Wake-on: g; if so, the command worked.

This was enough on my system to ensure Wake On LAN works consistently; however, this command may need to be run again after each boot. In that case, this Ubuntu help page describes how to ensure it is executed on every boot.

My router allows me to send Magic Packets directly from its interface, which is how I checked that Wake On LAN was working; if that is not the case for you, you might want to use another piece of software to test this feature.
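
For instance, from another Linux machine on the local network, the wakeonlan package can send the Magic Packet for us (a sketch; replace the MAC address by the one of your Ethernet controller, as reported by ifconfig):

sudo apt-get install wakeonlan
wakeonlan AA:BB:CC:DD:EE:FF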

Chapter 5 — Installing Windows

Here comes the tasty stuff. Running Windows 10 on a KVM. That sounds crazy. Is it?

Let’s start by installing Virtual Machine Manager (virt-manager). It’s a nice GUI tool to manage virtual machines:

sudo apt-get install virt-manager

Setting up OVMF

OVMF is a virtual firmware designed to provide UEFI support in virtual machines; UEFI is the modern replacement of the BIOS. To ensure maximum compatibility with Windows 10 and our GPU, this step is strongly advised, albeit not strictly necessary.

For that, we first need to tell qemu (the emulator behind virt-manager) where to find the OVMF files; to do that, open the qemu configuration file with sudo vim /etc/libvirt/qemu.conf and add the following lines to the file:

nvram = [
"/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd"
]
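
These firmware files come from the ovmf package; if /usr/share/OVMF does not exist on your system, it probably needs installing, followed by a restart of the libvirt service so the change to qemu.conf is picked up (a sketch; libvirt-bin is the service name on Ubuntu 16.04):

sudo apt-get install ovmf
sudo systemctl restart libvirt-bin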

Installing Windows 10

After buying a Windows 10 license, we can download an .iso file for the installer. Once we have it, start “Virtual Machine Manager” and click on “File > New Virtual Machine”:

  • Choose “Local install media (ISO image or CDROM)” and click “Forward”,
  • Click “Use ISO image:”, then “Browse…”; it will open the Virtual Manager Storage Volume management window, which we’ll see later on. Here, simply click “Browse Local” on the bottom of the window to locate the .iso file we downloaded, then click “Forward”,
  • Here, we have to choose how much RAM and how many cores we want to allocate to our KVM; I chose 16 GiB of RAM and 6 CPUs out of the 8 virtual cores,
  • Here we have to select the primary storage where we will install Windows 10; select “Select or create custom storage” and click on “Manage…”. Here, on the bottom-left corner, click the “+” sign. It will open a new window. Give a name to the storage (I chose windows-ssd) and select type “disk”, then click “Forward”. Here, in “Target path” select /dev, and in “Source Path” select the device name of our SSD, of the form /dev/sd#. In my case, it was /dev/sdb. I also had to check “Build pool”, as I did not use it at all when installing Ubuntu. It will add a new entry on the left (windows-ssd for me), click on it. Finally, next to “Volumes” on the right, click on the “+” sign, give the new volume a name and as much capacity as possible in the “Max Capacity” section. This will create a new partition on the disk ( /dev/sdb1 for me); double click on it to choose it then click “Forward”,
  • Finally, give a name to the KVM (I gave it my-windows); tick “Customize configuration before install” then click “Finish”,
  • This will be the virtual hardware configuration for the KVM; we want to use OVMF instead of the legacy SeaBIOS. For that, in “Overview”, set “Firmware” to “UEFI x86_64: /usr/share/OVMF/OVMF_CODE.fd”, then click “Apply” to save the changes and finally “Begin installation” in the top-left corner to start the KVM and install Windows 10.

This will configure the environment as described, and start the installation of Windows using the .iso file we specified. Simply follow the instructions; make sure to give the Windows computer the same name as we did when setting up Synergy (even though it's quite easy to rename it in Synergy).

Bridged network interface

By default, libvirt (the software behind Virtual Machine Manager) creates a virtual network on the machine to which the guests connect; however, that means our Windows system does not have access to our local network and cannot communicate with other devices. To overcome this, we must set up a bridged network interface, which in practice joins the virtual network and the local network so that all machines on both can communicate seamlessly with each other.

To do that, in “Virtual Machine Manager”:

  • Open “Edit > Connection Details”,
  • Go to the “Network Interfaces” tab,
  • Click the “+” sign on the bottom-left corner,
  • In the new window, start by choosing “Bridge” in the “Interface type”, then press “Forward”,
  • In the “Name” field, give the interface a name (I gave it br0), then tick your Ethernet connection interface (eth0), then press “Finish”,
  • Finally, check that the “Start mode” for our newly created interface is set to “onboot”.
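
To double-check that the bridge was created, brctl (if the bridge-utils package is installed) lists it together with the physical interface it encloses; a sketch of the expected output (a vnetX entry will also appear once the KVM is running):

brctl show
# bridge name    bridge id            STP enabled    interfaces
# br0            8000.xxxxxxxxxxxx    yes            eth0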

With that, our bridged network interface is ready. We now have to configure our KVM to use it instead of the default virtual network; for that, in “Virtual Machine Manager”, double click on my-windows (or whatever name you gave to your KVM), then click on the information icon at the top of the window to display the virtual hardware configuration of the KVM. This is where we will come back whenever we want to tweak our setup. Don’t forget to turn off the KVM before editing the configuration.

We want to change our Network Interface Controller, or NIC. Start by removing the existing one (right click > “Remove Hardware”, no hard feelings), then click “Add Hardware” on the bottom-left corner. We want a “Network” hardware, with “Network source” set to “Bridge br0”. Check the “MAC address” and leave “Device model” on “Hypervisor default”; it works fine. Finally, press “Finish” to install the virtual hardware (if only it was that simple when I built my desktop).

Now let’s start the KVM to make sure the network is working as intended. After logging in to our desktop and waiting for the connection to be established, we can check whether the computer is on the correct network by trying to ping our supervisor: open the command prompt in Windows (or a PowerShell) and run the following command:

ping 192.168.0.80

If the ping receives replies, we’re all set for the next steps!

One last thing we should do before going any further is to set a static IP address to this new system on our local network. As we did for Ubuntu, we can set a specific IP address to the Windows system by configuring the router; I chose to allocate the IP address 192.168.0.90 to my-windows.

Connecting to Synergy

Once the bridged network is working properly, connecting a Synergy client to our Synergy server is easy: from the Symless website, we simply have to go to the download page to download the Windows 64-bit installer, install it, select “Client” and fill the address field with the local IP address of our machine (in our case, 192.168.0.80), before pressing the “Start” button (or “Apply” if the client is already started). If the log shows a “NOTE: connected to server” line, it means the client successfully connected to the Ubuntu server. On Windows, Synergy will start automatically at startup, so there is no additional configuration to do.

We should now be able to switch from Ubuntu to the Windows KVM simply by moving the cursor off the edge of the screen, following the configuration we chose when setting up the server. Be careful though, Synergy has a disturbing behavior with the KVM’s emulated window: if you click on the window, your cursor moves into the Windows system, but since we did not enter it the Synergy way, Synergy cannot take us back out; we then have to use the “Ctrl+Alt” key bind to release the cursor from the KVM window before entering it the Synergy way.

At the time of writing, there was one significant setback to using Synergy; as I am not a native English speaker, I have two keyboard layouts set up on my systems, and I use the “Alt+Shift” key bind to switch between them. The problem is, Synergy does not support switching between keyboard layouts from a client; it will always follow the layout of the server, so switching language requires me to move the cursor to the server screen before pressing “Alt+Shift”.

Hardware performance optimization

Before going any further, let’s optimize our disk a bit. If we run a disk read/write performance test, the results should be a bit disappointing for an SSD.

If we have a look at our virtual hardware list, we should see an “IDE Disk 1”. As you might know, IDE is an old and slow (by today’s standards) interface for connecting hard drives. Let’s trash that obsolete hardware and install a brand new SATA (the nice, not-so-recent, blazing-fast interface your drive probably uses) disk: right click > “Remove Hardware” on the IDE disk (don’t worry, we’re not losing Windows), then “Add Hardware”, select “Storage”, tick “Select or create custom storage”, in “Manage” select the partition on which we installed Windows, make sure “Device type” is set to “Disk device” and finally choose “SATA” for “Bus type”. In “Advanced options”, make sure “Cache mode” is set to “none” before pressing “Finish”.

With an IDE disk, I had read and write speeds of about 350 MB/s; after switching to SATA, they skyrocketed to 500–550 MB/s, which is similar to bare-metal performance.

I also changed my CPU configuration a little: in “CPUs”, under “Topology”, I changed the default configuration of 6 sockets, 1 core, 1 thread to 1 socket, 6 cores and 1 thread; this way, instead of seeing 6 CPUs with 1 core each, Windows will now see 1 CPU with 6 cores. It should be very similar in practice, but I felt more comfortable with that topology.
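
For the curious, this is roughly what the change looks like in the domain XML (viewable with sudo virsh dumpxml my-windows); a sketch only, the surrounding <cpu> element depends on your configuration:

<vcpu placement='static'>6</vcpu>
<cpu>
  <topology sockets='1' cores='6' threads='1'/>
</cpu>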

Connecting to the Samba server

Our KVM will not see our Samba server as is; the reason is that we set up the server to use the workgroup LOCALGROUP, which is not the default one on a fresh Windows install. Let’s start by updating that:

  • In the “Control Panel”, go to “System”,
  • Here, in the “Computer name, domain, and workgroup settings” section, click “Change settings” on the right,
  • Then click the “Change…” button, as if we wanted to rename our computer,
  • In “Workgroup”, enter the name of the desired workgroup (LOCALGROUP for me).

After a reboot to apply the changes, we should now see our MY-DESKTOP computer in the network (MY-DESKTOP being the name we gave in the netbios name of our /etc/samba/smb.conf file). We can also try to access it directly by typing \\MY-DESKTOP in the address bar of the Windows file explorer. If the Samba server is working correctly, we should see a share folder.

We are now going to mount it as a Windows drive; this way, it will be automatically mounted and integrated to our KVM’s filesystem. To do that:

  • First, let’s go to “This PC” and right click anywhere > “Add a network location”,
  • Press “Next” on the first screen,
  • Select “Choose a custom network location”,
  • Enter \\MY-DESKTOP\share\ or browse to the share location,
  • Give it a custom name and/or letter (I went for H: as in “home”).

As I wanted to integrate my Windows environment into the Ubuntu one, I moved the different “My …” folders to this newly connected H: drive; here is how to do that:

  • For each Windows location we want to customize (we are going to start with “Downloads” for the sake of the example), right click on it > “Properties”,
  • In the “Properties” window, go to the “Location” tab,
  • The location should be the default Windows one; click the “Move…” button to change it,
  • Browse to the folder where you want to move the “Downloads” folder; I went for H:\[username]\Downloads.

This setup requires us to wait a little bit when we start the Windows KVM, for the connection to the bridged network to be established, before it can reach the Samba server and hence reconnect the H: drive.

Fixing the audio

Okay, we should now have a pretty nice setup. But if you try to listen to some music in the KVM, you will notice something really annoying: the sound is crackling, because the default virtualization settings are poorly suited to sound processing.

The solution I found for this issue is anything but straightforward; after a lot of trial and error (and a couple BSOD), I finally managed to reach a setup that works with almost crystal-clear sound quality:

First, let’s download and extract the AC97 audio drivers (the drivers we will need for the virtual sound card we are going to use) to any folder accessible by the Windows KVM. Now, shut down the KVM.

In a terminal, run the command:

sudo virsh edit my-windows

Where my-windows is the name of our Windows KVM; this opens the configuration file of our KVM, in a not-so-friendly XML format, where you can find all the parameters we set for it. We need to change two things. First, we should replace the first line:

<domain type='kvm'>

by

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

Then, we need to add a couple lines at the end of the file, just before the </domain> tag:

...
<qemu:commandline>
<qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
<qemu:env name='QEMU_PA_SAMPLES' value='128'/>
<qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
</qemu:commandline>
</domain>

This will use pulseaudio as our audio driver and, more specifically, connect it to the server run by our user, which by default has uid 1000. To check the uid of a specific user, we can run the command id -u [username]. Once done, type :wq to save the configuration file.

Finally, we are going to change the sound controller of our KVM:

  • Right click > “Remove Hardware” on the existing sound device,
  • Click “Add Hardware”,
  • Select “Sound” then “ac97” for the model.

We can now start our machine with this brand new configuration. Sadly, the issue is not over yet; Windows 10 does not automatically recognize the “ac97” audio controller, so we have to install the drivers manually. To do that, we first need to disable driver signature enforcement in Windows so these drivers can be installed:

  • Open the settings and go to “Update & security”,
  • In the “Recovery” tab, click the “Restart now” button below “Advanced startup”,
  • After the reboot, click “Troubleshoot”, “Advanced options” and “Startup Settings”,
  • Click the “Restart” button to restart once more,
  • Press “F7” to “Disable driver signature enforcement” then “Enter” to boot Windows 10.

Finally, we can now install the “AC97” drivers:

  • In the start menu, open “Device Manager”,
  • In “Other Devices”, there should be a “Multimedia Audio Controller”, which is our unrecognized audio controller; right click > “Properties”,
  • In the “Driver” tab, click “Update Driver”,
  • Click on “Browse my computer for driver software”,
  • Click “Browse” and browse to the folder where the drivers were extracted, then click “Next”,
  • If everything goes as planned, this should properly install the “AC97” driver for the audio controller.

We should now have a good sound quality with almost no distortion.

Fixing the clock

While we’re at it, let us fix the clock of our Windows KVM; by default, the KVM’s hardware clock is set to UTC, which Windows interprets as local time, so the time is wrong again after every reboot, which is inconvenient.

To make the KVM use local time instead, shut down the KVM, run the sudo virsh edit my-windows command again and replace the line:

<clock offset='utc'>

by

<clock offset='localtime'>

Type :wq to save the configuration file. At the next boot, Windows should have the same time as the Ubuntu system.

Wake On LAN, part 2

As we did for our Ubuntu system, it would be great if we could also wake up our Windows KVM remotely with a Magic Packet, without having to go through the gruesome process of connecting to the Ubuntu machine and starting it manually. The problem is, our KVM does not have an actual NIC, so we cannot configure it with the Wake On LAN technology; fortunately, there is a workaround.

Here comes the libvirt-wakeonlan script, available on GitHub here. This is a Python script that listens to the packets received by the system, and if a packet is a Magic Packet, it starts the KVM the packet is trying to wake up. To install it, follow the instructions in the README.md file: first, open a Terminal, go to the folder of your choice (I went with ~/Documents) and run:

git clone git://github.com/simoncadman/libvirt-wakeonlan.git

This will create a new folder called libvirt-wakeonlan and copy the content of the remote repository in it. We then have to install it with:

cd libvirt-wakeonlan
./configure
sudo make install

Unfortunately, this will not work out of the box; this script is rather old, and no longer maintained. It is configured to install itself as a service with the upstart init system, which has been replaced since Ubuntu 15.04 by systemd. We hence have to write the .service file ourselves to tell systemd to automatically run the script in the background.

For that, create a service file with the following command:

sudo vim /lib/systemd/system/libvirt-wakeonlan.service

Once the file is opened, write the following service configuration:

[Unit]
Description=Starts KVM instances from wake on lan packets
Requires=libvirtd.service
After=libvirtd.service
[Service]
EnvironmentFile=/etc/systemd/libvirt-wakeonlan.conf
WorkingDirectory=/usr/local/share/libvirt-wakeonlan
ExecStart=/usr/local/share/libvirt-wakeonlan/libvirtwol.py $LIBVIRTWOL_INTERFACE
Type=simple
[Install]
WantedBy=multi-user.target

As before, :wq to save the file. We also need to create a configuration file to set up the environment variable that tells the script which interface to listen on:

sudo vim /etc/systemd/libvirt-wakeonlan.conf

In the configuration file, write the following:

LIBVIRTWOL_INTERFACE='br0'

Where br0 should be replaced by the name of the bridged interface we chose before.

We now have to enable this script in systemd for it to start properly, with the following commands:

sudo systemctl enable libvirt-wakeonlan.service
sudo systemctl daemon-reload

After a reboot, the script should start as expected; we can check that with the command

sudo systemctl status libvirt-wakeonlan.service

Finally, we can check if Wake On LAN is working as expected, using the same method as we did for our Ubuntu system.
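
The MAC address to target is the one of the KVM's virtual NIC, which libvirt generated for us; a quick sketch of how to look it up and send the packet with the wakeonlan tool mentioned earlier (the MAC below is a placeholder):

sudo virsh dumpxml my-windows | grep 'mac address'
wakeonlan 52:54:00:xx:xx:xx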

Setting up the GPU

This part is definitely the most complex of this setup, for multiple reasons:

  • Modern GPUs are designed to be usable by the UEFI (the modern version of the BIOS), which means they must be usable very early in the boot process of a computer,
  • Unlike most other components of a computer, a GPU cannot easily be shared across multiple systems; it is mostly bound to one system once claimed by it,
  • In addition, NVidia cards require specific configuration to bypass the infamous Error 43, which happens whenever the driver detects that it has been booted in a virtual environment.

I hope you came armed with some time and patience. Let’s get on our way! First, let us activate IOMMU on our system. For that, open the grub configuration file with sudo vim /etc/default/grub, and replace the line GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" by:

  • GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on" with an Intel CPU,
  • GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on" with an AMD CPU.

Grub is the bootloader for Linux systems: it is the piece of software that knows what to load from our hard disk, and when, in order to start the operating system (in my case, Ubuntu). Even though the BIOS allows the use of IOMMU (it’s the VT-d / AMD-V option we enabled in the first part of this post), we still have to tell grub that Ubuntu can use it. Now, run sudo update-grub to update it with this new configuration, then reboot.

We can check that it worked correctly with the following commands:

  • dmesg | grep -e "Directed I/O" with an Intel CPU, which should return DMAR: Intel(R) Virtualization Technology for Directed I/O,
  • dmesg | grep -e "AMD-Vi" with an AMD CPU, which should return something like:
AMD-Vi: Enabling IOMMU at XXXX:XX:XX.X cap XxXX
AMD-Vi: Lazy IO/TLB flushing enabled
AMD-Vi: Initialized for Passthrough Mode

Now, we want to prevent Ubuntu from claiming our GPU, because we want to hold it for our Windows KVM. For that, we have to grab it with the vfio-pci driver, a specific driver designed to pass PCI devices through a KVM. This is necessary for GPUs, as they cannot usually be shared between multiple systems.

To do that, we first need to get the ID of our GPU; let’s run the command lspci -nn | grep VGA. This should return something like:

00:03.0 VGA compatible controller [0300]: Intel Corporation Device [8086:ZZZZ] (rev 04)
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:XXXX] (rev a1)

The first line is the integrated graphics; the second line, starting with 02:, is the NVidia GPU. Because modern outputs carry both audio and video, our GPU actually has two controllers, one for the video and one for the audio. To get them both, run the lspci -nn | grep 02: command to get:

02:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:XXXX] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:YYYY] (rev a1)

Note down the 02:00.0 ID (which represents the physical location of the GPU), as well as both the 10de:XXXX and 10de:YYYY IDs (which represent the GPU model).

Please note that the following procedure will only work if the GPU used by Ubuntu and by Windows are different.

We now want to tell the vfio-pci driver to grab this specific hardware. To do that, we need to modify two files:

  • First, open sudo vim /etc/initramfs-tools/modules and add a line at the end of the file containing vfio-pci ids=10de:XXXX,10de:YYYY, replacing XXXX and YYYY by the IDs of the GPU,
  • Second, open sudo vim /etc/modules and add at the end of the file the following lines:
# VFIO settings
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
kvm
kvm_intel

Replace the last line by kvm_amd with an AMD CPU.

This tells the computer to load the vfio modules at boot time, and to bind the devices to vfio-pci via the initramfs. Let’s update the latter before going further with the sudo update-initramfs -u command, then reboot.

Let’s check that vfio is now correctly grabbing the GPU with the dmesg | grep vfio command:

vfio-pci 0000:02:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
vfio_pci: add [10de:XXXX[ffff:ffff]] class 0x000000/00000000
vfio_pci: add [10de:YYYY[ffff:ffff]] class 0x000000/00000000
vfio-pci 0000:02:00.0: enabling device (0000 -> 0003)

If you have this last line, it means vfio-pci is correctly taking control of the GPU.

If this does not work, it may be because the NVidia driver (the nouveau driver by default on an Ubuntu system) grabs the GPU before vfio-pci. To check that, we can run the lspci -k command, which should return something like this if the nouveau driver is in use:

02:00.0 VGA compatible controller: NVIDIA Corporation Device XXXX
Kernel driver in use: nouveau

To force the system to use the vfio-pci driver, we can blacklist the NVidia driver in the grub configuration to ensure it does not grab the GPU before vfio-pci; to do that, open once again the grub configuration file with sudo vim /etc/default/grub and replace:

GRUB_CMDLINE_DEFAULT="quiet splash intel_iommu=on"

by

GRUB_CMDLINE_DEFAULT="modprobe.blacklist=nouveau quiet splash intel_iommu=on"

Don’t forget to replace intel_iommu=on by amd_iommu=on for an AMD CPU; I do not know the equivalent of modprobe.blacklist=nouveau for an AMD GPU, so you will have to find it yourself if need be.

Once done, update the grub configuration with sudo update-grub, then reboot and check if vfio-pci now correctly grabs the GPU.

Unfortunately, it’s far from over! We need to tweak our KVM configuration file a little bit for the GPU to work properly. Run sudo virsh edit my-windows and, between the <qemu:commandline> tags we added earlier for the audio, add the following lines:

<qemu:commandline>
...
<qemu:args value='-cpu'/>
<qemu:args value='host,kvm=off,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_time,hv_vendor_id=1234567890ab'/>
</qemu:commandline>

This adds specific parameters to our KVM to ensure maximum compatibility with modern GPUs. I do not know exactly what each one of them does, nor which ones are strictly required. It comes from this Reddit post, as well as multiple other resources online about VGA pass-through.

We can now add the GPU to our KVM. In Virtual Machine Manager, open our “my-windows” KVM virtual hardware details, click on “Add hardware”, go to “PCI Host Device” and select the video part of our card (0000:02:00.0 in our case). Don’t add the audio part for now; we still have some workarounds to do.

With a recent NVidia GPU (1000 series), if we try to bind the GPU to our KVM, it will effectively be recognized as our actual GPU (NVidia GTX 1060 6 GB for me), but it will throw an Error 43. That’s because when the GPU boots normally, it loads a modified version of its own vBIOS (video BIOS) before becoming accessible; as we did not boot it normally, it does not find the expected vBIOS and returns this error.

If the GPU already works for you at this point, ignore the following steps and directly add the audio part to the KVM, as we did for the video part above.

To overcome this problem, we have to extract this vBIOS from our GPU. To do that, follow these steps:

  • Run echo "0000:02:00.0" | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind, where 0000:02:00.0 is the physical ID of our GPU, to unbind it from the vfio-pci driver,
  • Run:
echo 1 | sudo tee /sys/bus/pci/devices/0000:02:00.0/rom
sudo cat /sys/bus/pci/devices/0000:02:00.0/rom > ~/gpu_vbios.rom
echo 0 | sudo tee /sys/bus/pci/devices/0000:02:00.0/rom

This will make the ROM of our GPU readable, extract it to the ~/gpu_vbios.rom file, and then turn it back to normal.

  • Finally, rebind the GPU to vfio-pci with the echo "0000:02:00.0" | sudo tee /sys/bus/pci/drivers/vfio-pci/bind command.

This solution was suggested in this Reddit post and this Medium post. Now that we have the ROM, we can once again modify the configuration file of our KVM to point to that file, using the sudo virsh edit my-windows command. In the file, there should be a section looking like this:

<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</hostdev>

We need to add a new line between the <hostdev> tags to tell libvirt where to find our ROM, as follows:

<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</source>
<rom file='/home/[username]/gpu_vbios.rom'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</hostdev>

Don’t forget to replace [username] by the Ubuntu username.

Unfortunately, this does not work yet; qemu cannot access the ROM because it does not have the right access permissions. We therefore need to modify the qemu configuration file with sudo vim /etc/libvirt/qemu.conf, and add the following lines:

user = "+0"
group = "+0"
cgroup_device_acl = [
"/dev/null", "/dev/full", "/dev/zero",
"/dev/random", "/dev/urandom",
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",
"/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
"/dev/vfio/1"
]
security_driver = "none"
security_default_confined = 0
relaxed_acs_check = 1

This basically grants root permissions to qemu so that it can read the ROM file. This solution comes from this post on the Arch Linux forum. Save and reboot.

Now we can finally add the audio part of our GPU to our KVM via the “Add hardware” button, and run the KVM to check if it now works correctly!

Hooray!

We can now enjoy Ultra settings in our KVM for a smooth gaming experience.

Setting up USB hot-swap

There is one last thing to set up for our Windows KVM to be as user-friendly as can be: USB hot-swapping. Whenever we start the KVM, only preset hardware is passed to the virtual machine, which means we have to restart the VM if we want to attach a new device. This is annoying, because we are now used to hot-swapping, notably for USB devices: we can plug something into our computer and it is recognized and usable immediately.

There is a way to simulate that behavior on our KVM. Whenever a USB device is plugged into a port we want to pass through, a daemon called udev notices that the hardware configuration has changed, and it is possible to register handlers that act upon specific events. In our case, whenever a device is plugged in or out, we want to dynamically add it to or remove it from the XML configuration of our KVM.

Fortunately, there is already a script written to do just that: usb-libvirt-hotplug. To make it work, we first have to clone the repository containing the shell script onto our system; I downloaded it myself to

/home/[username]/usb-libvirt-hotplug/usb-libvirt-hotplug.sh

Then, to make udev behave as expected, we need to add rules to its configuration. But first, we need to identify the device path of the ports we want to pass through. To do that, we run the command:

udevadm monitor --property --udev --subsystem-match=usb/usb_device

This will display an event whenever a USB device is plugged in or out. Now, plug any USB device into one of the ports you want to pass through. An event will appear, containing the following lines:

UDEV  [XXXXXX.XXXXXX] bind     /devices/pci0000:00/0000:00:aa.b/usb1/c-dd (usb)
ACTION=bind
BUSNUM=00c
...

The format might be slightly different, notably if you are plugging the device into a USB hub rather than directly into the computer. Then, we need to add rules to our udev daemon to automatically call the usb-libvirt-hotplug.sh script whenever a device is plugged in or out. To do that, let’s edit the rules file:

sudo vim /etc/udev/rules.d/usb-libvirt-hotplug.rules

And add the following lines for each port we want to handle:

SUBSYSTEM=="usb",DEVPATH=="/devices/pci0000:00/0000:00:aa.b/usb1/c-dd",RUN+="/home/[username]/usb-libvirt-hotplug/usb-libvirt-hotplug.sh my-windows"
SUBSYSTEM=="usb",DEVPATH=="/devices/pci0000:00/0000:00:aa.b/usb1/c-dd/*",RUN+="/home/[username]/usb-libvirt-hotplug/usb-libvirt-hotplug.sh my-windows"

Now, every time a USB device is plugged into one of the passed-through ports, the script will run and add the device to the KVM configuration of my-windows (and remove it again when the device is unplugged).
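
If the new rules do not seem to be taken into account, udev can be asked to reload them without a reboot:

sudo udevadm control --reload-rules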

Conclusion

The road to a KVM is most definitely a long one, filled with hurdles; but if you plan on setting one up for yourself, do not give up! It does work. If you are stuck on the way, don’t forget that Google Is Your Friend. You can also ask away in the comments if you have questions, even though there are much more skilled people on the internet who could probably give you better answers than I could (skilled people without whom I would never have managed to set it all up myself).

I also wanted to credit my amazingly patient girlfriend, who supported me throughout the long process of trial and error that led me to this now working desktop.
