Virtualization using KVM

Important

This feature might not be applicable to all platforms. Please check the Supported Features section of the individual platform pages to confirm whether this feature is listed as supported.

What is KVM?

Kernel-based Virtual Machine (KVM) is a virtualization module built into the Linux kernel that lets the user turn Linux into a hypervisor capable of hosting one or more isolated guests, or virtual machines. In brief, KVM is a type-2 hypervisor that requires a host OS to boot first, with the KVM module running on top of it.

KVM requires a processor with hardware virtualization extensions. Some of the architectural features in the Armv8-A profile that support hardware virtualization are:

  • A dedicated Exception level (EL2) for hypervisor code.

  • Support for trapping exceptions that change the core context or state.

  • Support for routing exceptions and virtual interrupts.

  • Two-stage memory translation.

  • A dedicated exception for Hypervisor Call (HVC).
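
Whether the host kernel has been able to make use of these extensions can be verified from the host shell once Linux has booted. The commands below are a minimal sketch, assuming a typical Linux setup in which KVM exposes the /dev/kvm device node and logs its initialization to the kernel ring buffer:

    # Confirm that the KVM device node is present on the host
    ls -l /dev/kvm

    # Look for KVM initialization messages in the kernel log
    dmesg | grep -i kvm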

Currently, KVM is part of the Linux kernel. Some of the features of KVM are:

  • Over-committing: KVM allows more virtualized CPUs or memory to be allocated to virtual machines than are physically available on the host.

  • Thin provisioning: KVM allows flexible storage allocation for the virtual machines and optimizes the space that is actually used.

  • Disk throttling: KVM allows limits to be set on disk I/O requests.

  • Virtual CPU hot plug: KVM allows the CPU count of a virtual machine to be increased at run time.

Virtualization on Neoverse Reference Design Platforms

Virtualization using the KVM hypervisor is supported on the Neoverse reference design platforms. The subsequent sections provide detailed instructions on booting two or more instances of guest OSs (or Virtual Machines, VMs) using the lkvm tool. Each of these guests can support up to NR_CPUS vCPUs, where NR_CPUS is the number of CPUs booted up by the host OS. There are also instructions on using the hardware virtualization features of the platform and enabling the use of virtualized devices, such as console, net, and disk.
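
The value of NR_CPUS referred to above can be checked on the host before launching any guests, for example with standard Linux utilities (nothing platform-specific is assumed):

    # Number of CPUs brought up by the host kernel
    nproc

    # Range of host CPUs that are currently online
    cat /sys/devices/system/cpu/online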

Overview of Native Linux KVM tool

kvmtool is a lightweight tool for hosting KVM guests. As a pure virtualization tool, it only supports guests that use the same architecture as the host, although it can run 32-bit guests on those 64-bit architectures that allow this.

kvmtool supports a range of arm64 architectural features, including GICv2, GICv3, and the ITS. It also supports device virtualization using emulated devices, such as virtio console, net, and disk devices, and can use VFIO for PCI pass-through (direct device assignment).
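
For reference, the device virtualization features listed above map onto ‘lkvm run’ command-line options. The commands below are an illustrative sketch only: the image paths and PCI device address are placeholders, and the option names should be confirmed against the help output of the kvmtool version in use (the location of the lkvm binary on the target is described in the next section).

    # Attach a virtio block device backed by a disk image (path is a placeholder)
    lkvm run -k <path-to-linux-image> -d <path-to-disk-image> -c 4 -m 512 --console virtio --params "console=hvc0 root=/dev/vda"

    # Pass a host PCI device through to the guest using VFIO
    # (the BDF is a placeholder; the device must be bound to the vfio-pci driver on the host first)
    lkvm run -k <path-to-linux-image> -i <path-to-ramdisk-image> --vfio-pci <segment:bus:device.function>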

Booting multiple guests

Virtualization using the KVM hypervisor requires a root filesystem from which kvmtool can be launched. The Buildroot root filesystem supports the kvmtool package; it fetches the mainline kvmtool source and builds the kvmtool binary from it. A detailed description of Buildroot-based booting is available in the Buildroot guide. Follow all the instructions in that document for building the platform software stack and booting up to Buildroot before proceeding with the next steps.

To boot two or more virtual machines on the host kernel with a kernel image and an initrd or a disk image, the kvmtool virtual machine manager (VMM), also called the lkvm tool, is used. Check the help of the ‘lkvm run’ command for the options available to launch guests, as shown below.
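
For example, assuming the lkvm binary is available at the path used later in this page, the supported options can be listed with:

    /mnt/kvmtool/lkvm run --help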

Launching multiple guests using lkvm:

  • Mount the grub disk image: The buildroot filesystem required to perform the KVM test is packaged such that the kernel image and the buildroot ramdisk image are copied to the second partition of the grub disk image, which is probed as /dev/vda2 in the host kernel. After booting the platform, this partition can be mounted as shown below:

    mount /dev/vda2 /mnt
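
    After mounting, the contents of the partition can be listed to confirm that the kernel image, ramdisk image, and kvmtool binary used in the following steps are present (the examples below assume they are named Image, ramdisk-buildroot.img, and kvmtool/lkvm respectively):

    ls /mnt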
    
  • Launch VMs using lkvm: To launch multiple VMs, the ‘screen’ tool can be used to multiplex console output so that it is possible to switch between multiple workspaces. The tool provides a separate console output pane for each guest. Use the following command to launch guests using kvmtool with the available kernel and ramdisk images:

    screen -md -S "<screen_name>" /mnt/kvmtool/lkvm run -k <path-to-linux-image> -i <path-to-ramdisk-image> --irqchip gicv3-its -c <nr-cpus> -m <allowed-mem> --console serial --params "console=ttyS0 earlycon=uart,mmio,0x1000000 root=/dev/vda"
    

    For example, to run the kernel available in the disk mounted at /mnt as above, use the following command:

    screen -md -S "virt1" /mnt/kvmtool/lkvm run -k /mnt/Image -i /mnt/ramdisk-buildroot.img --irqchip gicv3-its -c 4 -m 512 --console serial --params "console=ttyS0 earlycon=uart,mmio,0x1000000 root=/dev/vda"
    

    The above command uses an emulated UART device, selected by the ‘--console serial’ argument. To use a virtio-based console instead (which prints somewhat faster than the emulated UART device), use the command below:

    screen -md -S "virt1" /mnt/kvmtool/lkvm run -k /mnt/Image -i /mnt/ramdisk-buildroot.img --irqchip gicv3-its -c 4 -m 512 --console virtio --params "earlyprintk=shm console=hvc0 root=/dev/vda"
    
  • Launch a few more guests by repeating the above command with an updated screen_name.

    The launched screens can be viewed from the target by using the following command:

    screen -ls
    
  • Select and switch to the desired screen to view the boot-up logs from a guest. Use the following command to go to a specific screen:

    screen -r <screen_name>
    
    • For example, the list of screens is shown below:

    # screen -ls
    There are screens on:
        214.virt1       (Detached)
        200.virt2       (Detached)
    
    • Jump to the screen using:

    screen -r virt1
    
    • Detach from the current guest using ‘Ctrl-a d’ and attach to another screen to switch between the running guests and view their boot-up logs.

  • Perform a simple CPU hotplug test to validate that the guest kernel is functional. Use the following commands to do that:

    echo 0 > /sys/devices/system/cpu/cpu1/online
    echo 0 > /sys/devices/system/cpu/cpu2/online
    
    echo 1 > /sys/devices/system/cpu/cpu1/online
    echo 1 > /sys/devices/system/cpu/cpu2/online
    

    The CPUs should go offline and come back online with the above set of commands.
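
    The current set of online CPUs in the guest can also be confirmed from sysfs; the reported range shrinks and grows as the commands above are executed:

    cat /sys/devices/system/cpu/online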

  • Jump back to the host by detaching from the screen using ‘Ctrl-a d’, and use the following command to see how many guests are managed by the lkvm tool:

    # /mnt/kvmtool/lkvm list
    PID NAME                 STATE
    ------------------------------------
    309 guest-309            running
    276 guest-276            running
    
  • Power off the guests by jumping to the appropriate screen and executing the command:

    poweroff
    
  • The guests shut down and the following message is displayed on the console:

    # KVM session ended normally.
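
    Guests can also be stopped from the host. Depending on the kvmtool version, the ‘lkvm stop’ sub-command accepts a guest name as reported by ‘lkvm list’, or an option to stop all running instances; the option names below are indicative and should be checked against the tool's help output:

    /mnt/kvmtool/lkvm stop --name guest-309
    /mnt/kvmtool/lkvm stop --all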
    

This completes the procedure to launch multiple VMs and terminate them.