Xen on Arm64 and Qemu

Denis Obrezkov
Jun 12, 2019

In this post I want to show you how to set up Xen for Arm with the Qemu emulator. We are mainly going to follow this guide https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/qemu-system-aarch64 with some notes and fixes. In order to accomplish this we need to take a few steps:

  1. Download and install Qemu for arm64
  2. Choose Linux Dom0 and Linux DomU configurations
  3. Download and install the arm64 toolchain
  4. Cross-compile Linux Dom0 image
  5. Cross-compile Linux DomU image
  6. Build Xen and Xen-tools
  7. Run Dom0 image and start DomU image from Dom0

Download and install Qemu for arm64

Firstly, we should get Qemu. If you don’t need the latest stable version, you can install Qemu from your distribution’s repository (on Debian):

# apt install qemu-system-aarch64

The following text applies to the Linux version of Qemu. User documentation can be found here: https://qemu.weilnetz.de/doc/qemu-doc.html

The wiki-page for Linux is here: https://wiki.qemu.org/Hosts/Linux

In order to get the latest stable version, we should download Qemu from the official site qemu.org. Now we can unpack it:

tar -Jxf qemu-3.1.0.tar.xz

And build it:

cd qemu-3.1.0
./configure --target-list=aarch64-softmmu
make -j4

Now, we can check how it works:

./aarch64-softmmu/qemu-system-aarch64 -version

We should see the version of the Qemu we have just built.

Choose Linux Dom0 and Linux DomU configurations

We are going to use the vanilla Linux kernel and Busybox (and, if needed, uClibc) for both the Dom0 and DomU machines. Their configurations will differ.

Download and install the arm64 toolchain

The easiest way to get the arm64 toolchain is to install it from your repo; on Debian do:

# apt install gcc-aarch64-linux-gnu

After that, you can cross-compile everything.
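Before cross-compiling anything, it is worth checking that the cross tools are actually on your PATH. A minimal sketch (the tool names match the Debian package installed above):

```shell
# Report whether each cross tool can be found on PATH.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "missing: $1"
    fi
}

for tool in aarch64-linux-gnu-gcc aarch64-linux-gnu-ld aarch64-linux-gnu-objcopy; do
    check_tool "$tool"
done
```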

Cross-compile Linux Dom0 image

Let’s create our working directory:

cd ~
export WORK_DIR=~/Projects/xenonarm
mkdir -pv $WORK_DIR
cd $WORK_DIR

First of all, we should get the Busybox, Linux and Xen sources:

wget -c https://busybox.net/downloads/busybox-1.30.1.tar.bz2
tar xf busybox-1.30.1.tar.bz2
wget -c https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.20.11.tar.xz
tar xf linux-4.20.11.tar.xz
wget -c https://downloads.xenproject.org/release/xen/4.12.0/xen-4.12.0.tar.gz
tar xf xen-4.12.0.tar.gz

Building Busybox

Let’s create a folder for temporary files:

export BUILD_DIR=~/Projects/xenonarm/build
mkdir -pv $BUILD_DIR/busybox_arm64

and build busybox:

cd $WORK_DIR/busybox-1.30.1
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- menuconfig

Don’t forget to save the created config, e.g. as new_config. Open new_config and add the line CONFIG_STATIC=y so that Busybox is compiled as a static binary. Save the file and copy it to .config:

cp new_config .config
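If you prefer to script this instead of editing by hand, the CONFIG_STATIC flip can be automated. A sketch, assuming the usual Kconfig “# … is not set” comment form:

```shell
# Enable CONFIG_STATIC in a Busybox/Kconfig-style config file.
enable_static() {
    cfg="$1"
    if grep -q '^# CONFIG_STATIC is not set' "$cfg"; then
        # Flip the "not set" comment into an explicit =y.
        sed -i 's/^# CONFIG_STATIC is not set/CONFIG_STATIC=y/' "$cfg"
    elif ! grep -q '^CONFIG_STATIC=y' "$cfg"; then
        # Option absent entirely: append it.
        echo 'CONFIG_STATIC=y' >> "$cfg"
    fi
}

# Usage: enable_static .config
```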

and make it:

make -j2 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
make install ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-

Now we can create a root file system:

cd _install/
mkdir proc sys dev etc etc/init.d
cd ..
vim _install/etc/init.d/rcS

Add the following to the created file:

#! /bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
/sbin/mdev -s

and make it executable:

chmod +x _install/etc/init.d/rcS
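The rcS steps above can also be done non-interactively with a here-document (same content as the vim session):

```shell
# Create the init script without an editor.
mkdir -p _install/etc/init.d
cat > _install/etc/init.d/rcS << 'EOF'
#!/bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
/sbin/mdev -s
EOF
chmod +x _install/etc/init.d/rcS
```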

Let’s create the rootfs image itself:

cd _install
find . | cpio -o --format=newc > ../rootfs.img
cd ..
gzip -c rootfs.img > rootfs.img.gz

Now, we can copy it to a separate folder:

cp rootfs.img.gz $BUILD_DIR/busybox_arm64/
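A quick integrity check of the compressed archive now can save a confusing boot failure later; gzip -t verifies the compressed stream without unpacking it:

```shell
# Verify that a gzip file is intact.
check_gz() {
    if gzip -t "$1" 2>/dev/null; then
        echo "ok: $1"
    else
        echo "corrupt: $1"
    fi
}

# Usage: check_gz $BUILD_DIR/busybox_arm64/rootfs.img.gz
```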

Building Linux kernel

Now, we can configure the kernel:

cd $WORK_DIR/linux-4.20.11/
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig

Now, we should check that a few lines are present in .config (they should be there by default):

CONFIG_XEN_DOM0=y
CONFIG_XEN=y

Now, we can build the kernel:

make -j4 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-

Let’s copy the created images (we will need the uncompressed Image later for U-Boot and the dom0less setup):

cp ./arch/arm64/boot/Image $BUILD_DIR/busybox_arm64/
cp ./arch/arm64/boot/Image.gz $BUILD_DIR/busybox_arm64/

Running Busybox, Linux and Qemu

Change to the folder with the copied files and boot Linux with our rootfs:

cd $BUILD_DIR/busybox_arm64/
qemu-system-aarch64  -machine virt,gic_version=3 -machine virtualization=true -cpu cortex-a57 -machine type=virt -m 4096 -smp 4 -kernel Image.gz -nographic -no-reboot -initrd rootfs.img.gz -append "rw root=/dev/ram rdinit=/sbin/init  earlyprintk=serial,ttyAMA0 console=ttyAMA0"

U-Boot

Let’s download U-Boot, a bootloader widely used on embedded Linux systems:

wget -c ftp://ftp.denx.de/pub/u-boot/u-boot-2019.01.tar.bz2
tar xf u-boot-2019.01.tar.bz2

Now, we can configure it:

cd u-boot-2019.01
make CROSS_COMPILE=aarch64-linux-gnu- qemu_arm64_defconfig

Make sure the following lines are present in the U-Boot .config file:

CONFIG_ARCH_QEMU=y
CONFIG_TARGET_QEMU_ARM_64BIT=y

And build U-Boot:

make CROSS_COMPILE=aarch64-linux-gnu- -j4

Let’s copy the binary:

cp u-boot.bin $BUILD_DIR/busybox_arm64/
cd $BUILD_DIR/busybox_arm64/

You should be able to run U-Boot with Linux (note that this loads the uncompressed Image):

qemu-system-aarch64  -machine virt,gic_version=3 -machine virtualization=true -cpu cortex-a57 -machine type=virt -m 512M -bios u-boot.bin -device loader,file=Image,addr=0x45000000 -nographic -no-reboot -chardev socket,id=qemu-monitor,host=localhost,port=7777,server,nowait,telnet -mon qemu-monitor,mode=readline

and we can start Linux with this command at the U-Boot prompt (booti boots a raw arm64 Image):

booti 0x45000000 - 0x40000000

where 0x45000000 is the address of our kernel image and 0x40000000 is the default address of Qemu’s device tree blob.

After executing the command, the kernel should boot and then fail when trying to mount a rootfs (since we haven’t provided one).

Building Xen

Let’s build Xen:

cd $WORK_DIR/xen-4.12.0
make dist-xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-

and copy the raw image:

cp xen/xen $BUILD_DIR/busybox_arm64/

Running everything on Qemu

Let’s generate a device tree blob:

qemu-system-aarch64  -machine virt,gic_version=3 -machine virtualization=true -cpu cortex-a57 -machine type=virt -m 4096 -smp 4 -display none -machine dumpdtb=virt-gicv3.dtb

Now we can run Xen with Linux as Dom0:

qemu-system-aarch64  -machine virt,gic_version=3 -machine virtualization=true -cpu cortex-a57 -machine type=virt -m 4096 -smp 4 -bios u-boot.bin -device loader,file=xen,force-raw=on,addr=0x49000000 -device loader,file=Image.gz,addr=0x47000000 -device loader,file=virt-gicv3.dtb,addr=0x44000000 -nographic -no-reboot -chardev socket,id=qemu-monitor,host=localhost,port=7777,server,nowait,telnet -mon qemu-monitor,mode=readline

In order to start Dom0 we should modify the device tree blob by entering the following commands in U-Boot (substitute the right value for 0x7F31CE in /chosen/module@0, which is the size of Image.gz):

fdt addr 0x44000000
fdt resize
fdt set /chosen \#address-cells <1>
fdt set /chosen \#size-cells <1>
fdt mknod /chosen module@0
fdt set /chosen/module@0 compatible "xen,linux-zimage" "xen,multiboot-module"
fdt set /chosen/module@0 reg <0x47000000 0x7F31CE>
fdt set /chosen/module@0 bootargs "earlyprintk=serial,ttyAMA0 console=ttyAMA0,115200n8 earlycon=xenboot"
booti 0x49000000 - 0x44000000

The size of Image.gz can be calculated this way:

printf "0x%x\n" $(stat -c %s Image.gz)
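Since every boot module needs its size in an fdt reg property, a tiny helper that prints the hex size of several files at once can save repetition (a sketch; GNU stat assumed):

```shell
# Print the hex size of each file, for use in 'fdt set ... reg' commands.
mod_size() {
    for f in "$@"; do
        printf "%s: 0x%x\n" "$f" "$(stat -c %s "$f")"
    done
}

# Usage: mod_size Image.gz rootfs.img.gz
```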

This time you should see Xen boot, then the Linux kernel boot and fail when trying to mount a rootfs.

And now we can boot Xen, Linux and Busybox on Qemu:

qemu-system-aarch64  -machine virt,gic_version=3 -machine virtualization=true -cpu cortex-a57 -machine type=virt -m 4096 -smp 4 -bios u-boot.bin -device loader,file=xen,force-raw=on,addr=0x49000000 -device loader,file=Image.gz,addr=0x47000000 -device loader,file=virt-gicv3.dtb,addr=0x44000000 -device loader,file=rootfs.img.gz,addr=0x42000000 -nographic -no-reboot -chardev socket,id=qemu-monitor,host=localhost,port=7777,server,nowait,telnet -mon qemu-monitor,mode=readline

and run in U-Boot (substitute the right value for 0x121e65 in /chosen/module@1, which is the size of rootfs.img.gz):

fdt addr 0x44000000
fdt resize
fdt set /chosen \#address-cells <1>
fdt set /chosen \#size-cells <1>
fdt mknod /chosen module@0
fdt set /chosen/module@0 compatible "xen,linux-zimage" "xen,multiboot-module"
fdt set /chosen/module@0 reg <0x47000000 0x7F31CE>
fdt set /chosen/module@0 bootargs "rw root=/dev/ram rdinit=/sbin/init earlyprintk=serial,ttyAMA0 console=hvc0 earlycon=xenboot"
fdt mknod /chosen module@1
fdt set /chosen/module@1 compatible "xen,linux-initrd" "xen,multiboot-module"
fdt set /chosen/module@1 reg <0x42000000 0x121e65>
booti 0x49000000 - 0x44000000

After we get an input prompt, we can check that the system works:

# uname
Linux

If it works it works!

Running Dom0 and DomU simultaneously on Qemu-arm64

In order to run Dom0 and DomU from a device tree, we should use Xen’s dom0less feature.

Let’s run Qemu. Note that due to some restrictions we should use uncompressed Linux images here (don’t forget to measure their sizes and update them in the fdt commands, and make sure that your images don’t overlap in memory):

qemu-system-aarch64  -machine virt,gic_version=3 -machine virtualization=true -cpu cortex-a57 -machine type=virt -m 4096 -smp 4 -bios u-boot.bin -device loader,file=xen,force-raw=on,addr=0x49000000 -device loader,file=Image,addr=0x47000000 -device loader,file=Image,addr=0x53000000 -device loader,file=virt-gicv3.dtb,addr=0x44000000 -device loader,file=rootfs.img.gz,addr=0x42000000  -device loader,file=rootfs.img.gz,addr=0x58000000 -nographic -no-reboot -chardev socket,id=qemu-monitor,host=localhost,port=7777,server,nowait,telnet -mon qemu-monitor,mode=readline
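The “don’t overlap” requirement can be checked mechanically rather than by eye. A sketch using shell arithmetic on a load address plus file size per region:

```shell
# Check whether two memory regions [start, start+size) overlap.
overlaps() {
    s1=$(( $1 )); e1=$(( $1 + $2 ))
    s2=$(( $3 )); e2=$(( $3 + $4 ))
    if [ "$s1" -lt "$e2" ] && [ "$s2" -lt "$e1" ]; then
        echo yes
    else
        echo no
    fi
}

# e.g. dom0 kernel at 0x47000000 vs domU kernel at 0x53000000:
# overlaps 0x47000000 0x1281a00 0x53000000 0x1281a00
```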

Now, we should use the modified device tree:

setenv xen_bootargs 'dom0_mem=512M'
fdt addr 0x44000000
fdt resize
fdt set /chosen \#address-cells <1>
fdt set /chosen \#size-cells <1>
fdt set /chosen xen,xen-bootargs \"$xen_bootargs\"
fdt mknod /chosen module@0
fdt set /chosen/module@0 compatible "xen,linux-zimage" "xen,multiboot-module"
fdt set /chosen/module@0 reg <0x47000000 0x1281a00>
fdt set /chosen/module@0 bootargs "rw root=/dev/ram rdinit=/sbin/init earlyprintk=serial,ttyAMA0 console=hvc0 earlycon=xenboot"
fdt mknod /chosen module@1
fdt set /chosen/module@1 compatible "xen,linux-initrd" "xen,multiboot-module"
fdt set /chosen/module@1 reg <0x42000000 0x121e65>
fdt mknod /chosen domU1
fdt set /chosen/domU1 compatible "xen,domain"
fdt set /chosen/domU1 \#address-cells <1>
fdt set /chosen/domU1 \#size-cells <1>
fdt set /chosen/domU1 cpus <1>
fdt set /chosen/domU1 memory <0 548576>
fdt set /chosen/domU1 vpl011
fdt mknod /chosen/domU1 module@0
fdt set /chosen/domU1/module@0 compatible "multiboot,kernel" "multiboot,module"
fdt set /chosen/domU1/module@0 reg <0x53000000 0x1281a00>
fdt set /chosen/domU1/module@0 bootargs "rw root=/dev/ram rdinit=/sbin/init console=ttyAMA0"
fdt mknod /chosen/domU1 module@1
fdt set /chosen/domU1/module@1 compatible "multiboot,ramdisk" "multiboot,module"
fdt set /chosen/domU1/module@1 reg <0x58000000 0x121e65>
booti 0x49000000 - 0x44000000

You can see that we added a node for domU; in this node we also asked Xen to create a virtual console, vpl011. The two machines reside in different parts of memory and do not interfere.

To switch between your machines, press Ctrl+A three times. To check whether dom0 and domU work, you can run dmesg on both machines:

dmesg

or export variables with the same name on the different machines; for dom0:

export DOM='dom0'

switch to domU (press Ctrl+A three times to cycle between dom0, domU and the Xen console) and type:

export DOM='domU'

Now, you can switch between machines and check the variables:

echo $DOM

If the output for dom0 is dom0 and for domU is domU, then Xen with dom0 and domU is working!
