QEMU/KVM networking in Ubuntu

Dmitry Shnayder
9 min read · Feb 23, 2023


I am a technology enthusiast. My house is stuffed with a whole zoo of electronics, from smart switches to smartphones. Like many other people, I have a WiFi network at home. But I'm not running consumer-grade routers: my home WiFi is built on enterprise-grade access points.

During a recent upgrade of the network, I decided to deploy a wireless controller locally.

The controller is available as a KVM disk image and requires direct access to the wired network. I'm used to deploying KVM images behind NATed interfaces, which are easy to configure with QEMU. But NATed interfaces will not serve my needs here, as access points need to connect to the controller directly. Let me visualize what I need. I have a virtualized server that expects three network interfaces: an e1000 for out-of-band management, and two virtio for managing access points and tunneling wireless traffic:

Controller VM

While the admin interface does not require any external connectivity, at least one of the data interfaces needs a direct connection to the same L2 broadcast domain as all the devices on the network. For my needs, a single data interface is enough.
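For contrast, the NATed setup mentioned above needs no host-side configuration at all: QEMU's user-mode networking is a single netdev/device pair (a generic example, not this controller's configuration):

-netdev user,id=n0 \
-device virtio-net-pci,netdev=n0

The rest of this article is about replacing that user-mode NAT with bridged networking.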

First, I need to choose hardware. The software I was planning to use successfully runs large-scale networks. It supports operations of large retailers across the world. It manages thousands of access points, passing fan traffic during NFL and NASCAR events.

Amazingly, the same software can scale down to consumer-grade hardware without any code changes. My VM requires at least 4 CPU cores and at least 8 GB of RAM. I'm not expecting high CPU usage, so CPU cores can be shared between the VM and the base OS. But the RAM is really needed. I was looking for boxes with at least 12 GB of RAM to run the VM and still have comfortable spare RAM for the base OS. I bought a fanless mini PC on Amazon: https://a.co/d/05X5tHL

It comes with a 4-core CPU, four NICs, and no RAM or SSD. I don't really need four NICs now, but who knows what I'll invent in the future. I bought RAM and an SSD separately to satisfy my view of what a good and cheap server should look like: 32 GB of RAM and a 1 TB SSD. I don't need that much, but I'm planning to use the remaining resources for something else in the future, so better to get the right specs now.

Installation of Ubuntu 22.04 is straightforward on this box. I chose the minimal desktop installation. This may sound counter-intuitive for a server platform, but I want to have a GUI out of the box, in case some configuration is easier to perform via GUI.

Before playing with QEMU, let's summarize what I have and what I need in terms of networking. By default, Ubuntu configures the IP address on the ethernet card directly. The NIC is used exclusively by the kernel IP stack.

Default Linux network

To share access to the ethernet NIC, a bridge needs to be configured, and the ethernet NIC is then attached to it. The bridge acts as a virtual switch inside the host OS, forwarding L2 packets between the interfaces attached to it. The bridge is assigned an IP address either manually or via DHCP, giving the host OS access to the network. I use DHCP to assign the IP address, and configured a static MAC/IP mapping on the DHCP server.
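For illustration only, the same wiring can be built by hand with iproute2 (br0 is a throwaway name here; the persistent setup later in this article uses netplan instead):

ip link add br0 type bridge       # create the virtual switch
ip link set enp2s0 master br0     # attach the physical NIC to it
ip link set br0 up
dhclient br0                      # the bridge, not the NIC, now gets the IP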

Linux with bridge

When QEMU starts a virtual machine, it creates a TAP interface for each network interface configured for the VM. One end of the TAP is represented as a network interface inside the VM; the other end is attached to the bridge, giving the VM direct connectivity to the ethernet network. A TAP interface acts like a virtual ethernet cable, connecting the virtual switch (bridge interface) to the virtual network card in the VM.
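QEMU creates and attaches these TAP interfaces automatically when the VM starts; the equivalent manual steps, shown purely to illustrate the mechanics, would be:

ip tuntap add dev tap0 mode tap   # create the virtual cable
ip link set tap0 master iqcbr0    # plug one end into the bridge
ip link set tap0 up               # QEMU connects the other end to the VM's NIC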

VM network

Now let's express this setup in configuration files. The desktop install of Ubuntu uses NetworkManager by default to configure network interfaces. If you navigate to the directory "/etc/netplan", you'll find one or more files there. The number of files and their names may differ between versions of Ubuntu. I have one called "01-network-manager-all.yaml". All it does is delegate management of network interfaces to NetworkManager:

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager

Let's create a custom configuration file where all the bridges will be defined. I need two bridges:

  • Management interface bridge. It is not connected to the external network; I will use it to access the out-of-band management interface of the controller from the host machine.
  • Data bridge. It needs access to the network.

First, remove all the original configuration files from /etc/netplan. You may want to move them to a safe place instead, in case they are needed in the future.

rm *.yaml

Then create a new config file, 01-iqc.yaml. The file name can be anything, as long as such a file did not already exist. The content of the configuration file is:
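(A minimal sketch, consistent with the bridge names and addresses used in the rest of this article; the renderer line is an assumption and depends on whether networkd or NetworkManager manages your interfaces.)

network:
  version: 2
  renderer: NetworkManager
  ethernets:
    enp2s0:
      dhcp4: false
  bridges:
    iqceth0:
      # host-only bridge for out-of-band management; no physical ports attached
      addresses: [192.168.10.100/24]
    iqcbr0:
      # data bridge: encloses the physical NIC and gets its address via DHCP
      interfaces: [enp2s0]
      dhcp4: true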

I set a static IP address for the bridge named iqceth0, because by default the controller assigns the IP address 192.168.10.1/24 to its admin interface. This way I can ssh admin@192.168.10.1 from the host machine to the controller. The second bridge expects to get an IP address from the DHCP server and attaches to the ethernet interface enp2s0 on the host machine. Ubuntu names interfaces automatically, based on their location on the PCI bus. Other machines may have different names, but the idea is the same: attach the ethernet interface to the bridge, so the bridge can be connected to the network.

Then apply the netplan configuration to the system:

netplan apply

Now comes the scary part: reboot the box and hope it gets a network connection. This is why I selected the desktop installation of Ubuntu; if anything goes wrong, I have nice graphical tools to fix the files. Since the MAC address of the bridge interface will be different from the MAC address of the NIC, the Linux box will get a different IP address from the DHCP server. Again, I created a static MAC/IP mapping for the bridge MAC address to be able to log in to my host machine on a predictable IP address.
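A less scary way to test such changes, if you have console access, is netplan try: it applies the configuration and rolls it back automatically unless you confirm it within a timeout:

netplan try --timeout 120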

Assuming the box rebooted and still has network access, let's check that the interfaces are properly connected and ready to provide connectivity for the virtual machine. Run the ip addr command:

# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master iqcbr0 state UP group default qlen 1000
link/ether 7c:83:34:b7:c2:6f brd ff:ff:ff:ff:ff:ff
3: enp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether 7c:83:34:b7:c2:6e brd ff:ff:ff:ff:ff:ff
4: wlp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 2c:0d:a7:d4:97:6b brd ff:ff:ff:ff:ff:ff
5: iqceth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 2a:ef:36:db:77:c8 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.100/24 brd 192.168.10.255 scope global iqceth0
valid_lft forever preferred_lft forever
inet6 fe80::28ef:36ff:fedb:77c8/64 scope link
valid_lft forever preferred_lft forever
6: iqcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0a:1c:b7:fc:8a:c5 brd ff:ff:ff:ff:ff:ff
inet 192.168.8.100/24 brd 192.168.8.255 scope global dynamic iqcbr0
valid_lft 83976sec preferred_lft 83976sec
inet6 fe80::81c:b7ff:fefc:8ac5/64 scope link
valid_lft forever preferred_lft forever

Based on the output above:

  • Interface enp2s0 has no IP address and has iqcbr0 as its master
  • Interface iqceth0 has the static IP address 192.168.10.100/24
  • Interface iqcbr0 has the IP address 192.168.8.100/24, obtained from the DHCP server (note the dynamic flag)

The network setup looks good now. Let's use it to connect the VM to the network.

The desktop installation of Ubuntu 22.04 doesn't install QEMU/KVM by default. Let's install it:

apt install qemu-kvm
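One caveat: the -netdev bridge option used below delegates TAP creation to qemu-bridge-helper, which only attaches to bridges whitelisted in its ACL file (on Ubuntu, /etc/qemu/bridge.conf; create it if it doesn't exist). If QEMU later complains about the bridge helper, add both bridges there:

# /etc/qemu/bridge.conf
allow iqceth0
allow iqcbr0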

I copied the raw disk image to /storage/images. The VM I want to run has strict requirements on PCI IDs for all its peripherals, so the command line is complicated:

/usr/bin/qemu-system-x86_64 \
-name VE6120K \
-chardev 'socket,id=qmp,path=/var/run/qemu/iqc.qmp,server=on,wait=off' \
-mon 'chardev=qmp,mode=control' \
-chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' \
-mon 'chardev=qmp-event,mode=control' \
-pidfile /var/run/qemu/iqc.pid \
-daemonize \
-smbios 'type=1,uuid=c8cfd099-e370-4905-bf83-afa1e628c3ba' \
-smp '4,sockets=1,cores=4,maxcpus=4' \
-nodefaults \
-boot 'menu=on,strict=on,reboot-timeout=1000' \
-no-shutdown \
-vga none \
-nographic \
-accel kvm \
-cpu host,+kvm_pv_eoi,+kvm_pv_unhalt \
-m 8192 \
-device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' \
-device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' \
-device 'vmgenid,guid=965377a1-862e-4bd4-94f2-96362dd522a7' \
-device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' \
-device 'usb-tablet,id=tablet,bus=uhci.0,port=1' \
-device 'i6300esb,bus=pci.0,addr=0x7' \
-iscsi 'initiator-name=iqn.1993-08.org.debian:01:c2a1f23f7fc1' \
-device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' \
-drive 'file=/storage/images/iqc-disk-0.raw,if=none,id=drive-scsi0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' \
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=100' \
-machine 'type=pc' \
-chardev 'socket,id=serial0,port=56031,host=0.0.0.0,server=on,wait=off,telnet=on' \
-device 'isa-serial,chardev=serial0' \
-netdev bridge,id=eth0,br=iqceth0 \
-device e1000,netdev=eth0,mac=52:54:00:05:06:10,bus=pci.0,addr=0x2,id=net0 \
-netdev bridge,id=esa0,br=iqcbr0 \
-device virtio-net-pci,netdev=esa0,mq=on,vectors=6,bus=pci.0,addr=0x3,id=net1 \
-netdev bridge,id=esa1,br=iqcbr0 \
-device virtio-net-pci,netdev=esa1,mq=on,vectors=6,bus=pci.0,addr=0x4,id=net2

There are a lot of parameters passed to QEMU to run the virtual machine; today I will focus on networking. The network connections are defined in the last few lines. The definition of each network interface in the VM consists of two command-line parameters. The admin port is defined as:

  • netdev: defines the network link (TAP interface). The link type is bridge, it connects to the Linux bridge named iqceth0, and its name is eth0: -netdev bridge,id=eth0,br=iqceth0
  • device: defines the device inside the VM. The device hardware type is e1000 and it uses the netdev named eth0. Some additional parameters are set, which are required for this particular controller (PCI address, number of queues). I set the interface MAC address manually via the mac= parameter to have predictable identifiers for the box. If the MAC address is not specified, QEMU assigns one in the range 52:54:00:XX:XX:XX. The addresses are assigned sequentially, so they are unique within one host machine, but will be duplicated if the same network has more than one host machine. In the latter case it's better to specify the MAC address manually for each network interface: -device e1000,netdev=eth0,mac=52:54:00:05:06:10,bus=pci.0,addr=0x2,id=net0

The data ports are defined as:

  • netdev: defines the network link (TAP interface). The link type is bridge, it connects to the Linux bridge named iqcbr0, and its name is esa0: -netdev bridge,id=esa0,br=iqcbr0
  • device: defines the device inside the VM. The device hardware type is virtio-net-pci and it uses the netdev named esa0. Some additional parameters are set, which are required for this particular controller (PCI address, number of queues). I do not specify MAC addresses for these interfaces and let QEMU assign them. I don't have other VMs on the same network, so I'm safe with automatic addresses for now: -device virtio-net-pci,netdev=esa0,mq=on,vectors=6,bus=pci.0,addr=0x3,id=net1
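Stripped of the controller-specific constraints, the general pattern for one bridged interface is just this netdev/device pair (a generic sketch with a made-up MAC, not this controller's required layout):

-netdev bridge,id=n0,br=iqcbr0 \
-device virtio-net-pci,netdev=n0,mac=52:54:00:12:34:56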

These three network interfaces create the following topology:

Network topology

The admin interface has no access to the external network and can be used from the host machine only. The two data interfaces attach to the main wired network via the common bridge iqcbr0.

The VM starts in the background, since the command line requested -daemonize and the GUI is disabled. Fingers crossed, run ps -efwww | grep qemu and see the VM up and running. I check that the TAP interfaces were created using the ip link command. They are here: tap0 is attached to my bridge iqceth0, tap1 and tap2 are attached to the bridge iqcbr0:

# ip link
……
25: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master iqceth0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:a6:cd:a9:54:e2 brd ff:ff:ff:ff:ff:ff
26: tap1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master iqcbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether ba:b5:a7:15:33:4d brd ff:ff:ff:ff:ff:ff
27: tap2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master iqcbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:e3:19:31:bb:31 brd ff:ff:ff:ff:ff:ff
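The bridge utility from iproute2 shows the same attachments more compactly, listing each port together with its master bridge:

# bridge link show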

Let's log in to the admin interface. It is connected to the bridge iqceth0, which has the static IP address 192.168.10.100/24. The controller configures a static IP address on the same subnet: 192.168.10.1/24. Try SSH to it:

# ssh admin@192.168.10.1

Enter the default, highly secure password abc123 and here we go, a healthy CLI:

CLI

I configured the controller to use the first data interface with a static IP address from my network range:

Data interfaces

Now, the moment of truth. Pinging the controller from a laptop on the network:

Ping

It just works. The fancy VM is running on a plain, cheap, and simple Ubuntu box, it has direct access to the ethernet network, and the packets are going back and forth.
