Network packet flow in KVM/QEMU

Shashank Jain
Aug 19, 2018

This post continues the previous blog on how KVM and QEMU work together to provide a hypervisor on Linux (https://medium.com/@jain.sm/kvm-and-qemu-as-linux-hypervisor-18271376449).

In this blog we explain the roles of the different entities that orchestrate a packet's flow from a guest VM to the hypervisor and back to the guest. We discuss this in the context of the vhost-net device driver.

As alluded to in the previous blog, when we use the vhost mechanism, QEMU is out of the data plane and the guest and host communicate directly over virtqueues. QEMU remains in the control plane, where it sets up the vhost device in the kernel via ioctl calls to the /dev/vhost-net device.
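To make the control-plane role concrete, here is a minimal sketch of the kind of ioctl sequence a VMM such as QEMU issues against /dev/vhost-net. It is simplified: feature negotiation, the guest memory table, vring addresses and all error handling are omitted, and tap_fd is assumed to be an already-opened TAP device.

```c
/* Simplified sketch of vhost-net control-plane setup by a VMM.
 * Feature negotiation, VHOST_SET_MEM_TABLE, vring sizing/addresses
 * and error handling are omitted for brevity. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/eventfd.h>
#include <linux/vhost.h>

static void vhost_net_setup(int tap_fd)
{
    int vhost_fd = open("/dev/vhost-net", O_RDWR);

    /* Claim this vhost device for the calling process; the kernel
     * creates the vhost worker thread on behalf of this owner. */
    ioctl(vhost_fd, VHOST_SET_OWNER, 0);

    /* eventfds used as the "kick" (guest -> host notification) and
     * "call" (host -> guest interrupt) channels for virtqueue 0. */
    int kick_fd = eventfd(0, EFD_NONBLOCK);
    int call_fd = eventfd(0, EFD_NONBLOCK);

    struct vhost_vring_file kick = { .index = 0, .fd = kick_fd };
    struct vhost_vring_file call = { .index = 0, .fd = call_fd };
    ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick);
    ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call);

    /* Attach the TAP device as the backend the worker thread will
     * read from / write to when servicing the virtqueues. */
    struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };
    ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend);
}
```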

When the device is initialized, a kernel thread is created for the specific QEMU process. This thread handles the I/O for that guest: it listens for events on the host side of the virtqueues. When an event arrives asking it to drain data (in virtio terminology this is called a kick), the I/O thread drains the packets from the guest's tx (transmission) queue. The thread then writes this data to the TAP device, which makes it available to the underlying bridge/switch for transmission downstream to any overlay or routing mechanism.
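The following is not the actual kernel code, only a userspace-style sketch of what the vhost worker conceptually does on the tx path: wait for a kick on the eventfd, pull buffers off the tx virtqueue, and push each frame into the TAP device. The helper tx_queue_pop and the frame struct are hypothetical stand-ins for the real virtqueue descriptor handling.

```c
/* Conceptual sketch (not kernel code) of the vhost worker's tx path:
 * wait for a kick, drain the guest's tx virtqueue, forward each frame
 * to the TAP device. tx_queue_pop() is a hypothetical stand-in for
 * the real virtqueue descriptor handling. */
#include <stdint.h>
#include <unistd.h>
#include <poll.h>

struct frame { const void *data; size_t len; };

/* Hypothetical helper: returns 0 when the tx queue is empty. */
int tx_queue_pop(struct frame *f);

void vhost_tx_loop(int kick_fd, int tap_fd)
{
    struct pollfd pfd = { .fd = kick_fd, .events = POLLIN };
    struct frame f;

    for (;;) {
        /* Block until the guest signals ("kicks") the tx queue. */
        poll(&pfd, 1, -1);

        uint64_t kicks;
        read(kick_fd, &kicks, sizeof(kicks));   /* consume the event */

        /* Drain every pending buffer and hand it to the TAP device,
         * from where the host bridge/switch takes over. */
        while (tx_queue_pop(&f))
            write(tap_fd, f.data, f.len);
    }
}
```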

The KVM kernel module allows an eventfd to be registered for the guest. This is a file descriptor that QEMU registers with the KVM kernel module on behalf of the guest, tied to a guest I/O exit event.
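As a sketch (with the VM file descriptor, doorbell address and write width assumed to be known from the virtio device setup), registering such an eventfd as an ioeventfd looks roughly like this; a guest write to that address then signals the eventfd entirely inside the kernel, without an exit back to QEMU.

```c
/* Sketch of registering an ioeventfd with KVM: a guest write to the
 * given doorbell address signals kick_fd inside the kernel, so the
 * notification reaches the vhost worker without a round trip through
 * QEMU. vm_fd and doorbell_addr are assumed to come from elsewhere. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/eventfd.h>
#include <linux/kvm.h>

int register_kick(int vm_fd, uint64_t doorbell_addr)
{
    int kick_fd = eventfd(0, 0);

    struct kvm_ioeventfd ioev;
    memset(&ioev, 0, sizeof(ioev));
    ioev.addr = doorbell_addr;   /* address the guest writes to        */
    ioev.len  = 2;               /* width of the guest write, in bytes */
    ioev.fd   = kick_fd;         /* eventfd signalled on that write    */

    ioctl(vm_fd, KVM_IOEVENTFD, &ioev);
    return kick_fd;   /* this fd is then handed to VHOST_SET_VRING_KICK */
}
```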
