.. _vlpi_vsgi_label:

Virtual Interrupts And VGIC
===========================

.. important::
   This feature might not be applicable to all platforms. Please check the
   **Supported Features** section of the individual platform pages to confirm
   whether this feature is listed as supported.

Overview of Directly Injected vLPIs
-----------------------------------

Locality-specific Peripheral Interrupts (LPIs) are message-based interrupts
which are raised only on particular targeted processing elements (PEs). These
interrupts do not use any physical interrupt lines, hence they need
additional hardware (H/W) support for raising an event. The Arm Generic
Interrupt Controller (GIC) Interrupt Translation Service (GIC-ITS) hardware
provides such support by accepting an MMIO write and raising an interrupt on
the target PE. With the advancement of the GIC-ITS and the rising need for
LPIs in virtualization, support for directly injected virtual LPIs (vLPIs)
was added in GICv4.

With GICv3 and GICv3-ITS (GIC version 3 with support for ITS hardware),
virtual interrupts are injected into the guest VM by writing into the GIC
List Registers (LRs), which are part of the virtualized GIC CPU interface.
However, using LRs to inject virtual interrupts calls for hypervisor
intervention every time a physical interrupt is triggered. With the KVM
hypervisor, the LRs are updated only at the next scheduled run of the guest
on a physical PE. This introduces further delay in interrupt handling in a
guest environment.

In the GICv4 ITS, a new set of redistributor registers is added to hold the
addresses of the LPI configuration and LPI pending tables of the running VM.
These registers are banked per redistributor, corresponding to each PE.
Similarly, a new ITS table called the virtual PE (vPE) table is added. This
table is the equivalent of the collection table used for physical LPIs. A new
set of ITS commands is also added to update the ITS device table, the
interrupt translation table and the vPE table, along with the redistributor's
configuration and pending tables. With these additions, the KVM hypervisor
now has to configure these ITS tables only once at the beginning. Thereafter,
whenever a message-based physical LPI is raised by a peripheral, the GIC-ITS
H/W looks up the tables to find any corresponding virtual LPI entry and
forwards it to the redistributor of the target vPE. From there on, the
redistributor is responsible for triggering it on the PE. This avoids any
requirement for software (KVM) intervention and makes the triggering of
vLPIs almost immediate.
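On a booted host, one quick way to confirm that the GIC-ITS hardware and the
kernel advertise this capability is to search the kernel boot log for
GICv4-related messages. This is an illustrative check only; the exact message
strings vary across kernel versions. ::

    sudo dmesg | grep -i -e "GICv4" -e "ITS:"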
Overview of Directly Injected vSGIs
-----------------------------------

Software Generated Interrupts (SGIs) are typically used for inter-processor
communication among the PEs. As the name suggests, an SGI is generated by
software writing to the GIC CPU interface registers. Software running on one
PE writes to one of the per-PE banked SGI registers of the GIC CPU interface,
providing the interrupt ID and the target PE the interrupt is meant for. With
the older GICv3 and GICv3-ITS, the only way for KVM to handle this is to trap
the write to the SGI register from the sender and update the List Registers
(LRs) to inject the interrupt into the guest VM, which is deferred until the
VM is rescheduled on the target PE.

This problem of deferred interrupts was solved by the support for direct vSGI
injection using the GIC-ITS H/W, as offered in GICv4.1. A new GIC-ITS command
was added to hold the vSGI configuration entries of the sending vPE, and a
new GIC-ITS register was introduced which can be used to raise a vSGI by
simply writing to it. Extra redistributor registers to poll the state of
vSGIs on target vPEs were also added. With direct vSGI injection, whenever
the sender PE writes to the SGI register of the GIC CPU interface to raise an
interrupt on a target PE, the write is trapped by KVM, which then writes to
one of the GIC-ITS registers. This immediately raises the interrupt on the
target vPE, skipping the need to wait until the guest VM is rescheduled and
thus avoiding any delays.

Build & Install
---------------

.. note::
   This section assumes the user has completed the chapter
   :doc:`Getting Started ` and has a functional working environment.

Build the platform software
^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section describes the procedure to build the software stack required to
perform KVM unit testing. The following software packages from the Neoverse
reference platform software stack are needed for the testing:

- the software stack for distro boot, as given in the :ref:`Distro Boot `
  guide,
- the refinfra Linux kernel and the smmu-test-engine tools,
- kvm-unit-tests built for the kvmtool target,
- the kvmtool VMM.

All the above packages can be compiled together by the buildroot build.
Proceed by running the appropriate script from the software stack: ::

    ./build-scripts/rdinfra/build-test-buildroot.sh -p 

Supported command line options are listed below:

- ``-p <platform name>`` - look up the platform name in
  :ref:`Platform Names `.
- ``<command>`` - supported commands are:

  - ``clean``
  - ``build``
  - ``package``
  - ``all`` (all of the three above)

Examples of the build command:

- Command to clean, build and package the software stack for the RD-N2-Cfg1
  platform: ::

    ./build-scripts/rdinfra/build-test-buildroot.sh -p rdn2cfg1 all

Setup Satadisk Images
^^^^^^^^^^^^^^^^^^^^^

Direct injection of vLPIs and vSGIs can be validated on a Linux distribution
running as the host OS. Create disk images by following the guidelines on the
:ref:`Distro Boot ` page.

.. note::
   For simplicity, the setup instructions, where specific, are given for an
   Ubuntu distro host OS.

- Boot the host satadisk image on the FVP with networking enabled, as
  mentioned in :ref:`Distro Boot `. For example, to boot Ubuntu as the host
  OS, give the following command to begin the distro boot from the
  ``ubuntu.satadisk`` image: ::

    ./distro.sh -p rdn2cfg1 -d /absolute/path/to/ubuntu.satadisk -n true

- Once the host OS is booted up, ensure that KVM and virtualization support
  are enabled. After booting, enable networking support as well. Follow the
  :ref:`UEFI supported virtualization guide ` for details on preparing the
  setup with a Linux distribution running as the host OS with networking
  enabled. For example, one might need to run the following commands: ::

    sudo dhclient -v
    sudo apt update
    sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils net-tools libfdt-dev -y

.. note::
   The step below can be skipped if the host Ubuntu distro version is v22.04
   or above, because it uses Linux version 5.15.0, which already has support
   for GICv4.

- For the direct vSGI injection test, GICv4 driver support is required in the
  Linux kernel. This is achieved by installing the ``refinfra`` Linux kernel
  on the host OS distribution, which is temporarily realized by copying the
  kernel to the host ``/boot/`` directory as shown below: ::

    sudo rsync -Wa --progress user@server:TOP_DIR/output//components/linux/Image /boot/vmlinuz-refinfra

.. note::
   This is not a recommended way to install a new kernel on Ubuntu. This
   approach is chosen only for quick KVM testing and doesn't guarantee a
   stable Ubuntu after the installation.

- Under the default kernel setup, direct injection of vLPIs and vSGIs is not
  activated in KVM. It is activated by enabling the kernel boot parameter
  ``kvm-arm.vgic_v4_enable``. Also, to enable display of the GRUB menu during
  boot, make the necessary changes to the specific variables in the user GRUB
  config file ``/etc/default/grub`` as shown below: ::

    # Before change:
    GRUB_TIMEOUT_STYLE=hidden
    GRUB_TIMEOUT=0
    GRUB_CMDLINE_LINUX_DEFAULT="..."
    #GRUB_TERMINAL=console

    # After change:
    GRUB_TIMEOUT_STYLE=menu
    GRUB_TIMEOUT=10
    GRUB_CMDLINE_LINUX_DEFAULT="... kvm-arm.vgic_v4_enable=1"
    GRUB_TERMINAL=console

- To reflect all the changes related to the GRUB config and to create a GRUB
  menu entry for the new ``refinfra`` kernel, do a GRUB update and shut down
  the host (a quick way to confirm the parameter after the next boot is
  sketched below): ::

    sudo update-grub
    sudo poweroff
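After the host is next booted (for example, at the start of either test
below), one can confirm that the boot parameter took effect. This is an
illustrative check only; the sysfs path assumes KVM is built in as
``kvm_arm`` and can vary with kernel version and configuration. ::

    # The parameter should appear on the kernel command line
    grep -o "kvm-arm.vgic_v4_enable=1" /proc/cmdline

    # If exposed by the kernel, the KVM module parameter should read Y
    cat /sys/module/kvm_arm/parameters/vgic_v4_enable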
Running The Test
----------------

vSGI Test
^^^^^^^^^

- It is necessary to choose the right kernel version from the ``GRUB`` boot
  menu while booting the host satadisk image for this test. Go ahead and boot
  the host satadisk image on the FVP as mentioned in :ref:`Distro Boot `. For
  a host Ubuntu distro version below v22.04, ensure that the menu entry
  **"Ubuntu, with Linux refinfra"** is selected from the sub-menu entry
  **"Advanced options for Ubuntu"**. Command to begin the Ubuntu distro boot
  from the ``ubuntu.satadisk`` image: ::

    ./distro.sh -p rdn2cfg1 -d /absolute/path/to/ubuntu.satadisk -n true

- Executing the testcase requires the ``kvm-unit-tests`` directory and the
  ``kvmtool`` binary, which were built in the section
  `Build the platform software`_. Copy these to the host OS over the network
  and run the test (an optional variation with more vCPUs is sketched at the
  end of this subsection): ::

    rsync -Wa --progress user@server:TOP_DIR/output//components/kvm-ut .
    cd kvm-ut/
    rsync -Wa --progress user@server:TOP_DIR/output//components/rdn2/lkvm .
    sudo ./lkvm run -m 2048 -f arm/gic.flat --irqchip gicv3-its -p "ipi"

  If all the tests pass, the logs should conclude with the successful
  completion of the vSGI testing: ::

    PASS: gicv3: ipi: self: Interrupts received
    PASS: gicv3: ipi: target-list: Interrupts received
    PASS: gicv3: ipi: broadcast: Interrupts received
    SUMMARY: 3 tests

- Shut down the running host OS and move on to the next test: ::

    sudo poweroff
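The same unit test can also be run with more guest vCPUs to exercise SGI
delivery across several targets at once. This is an optional, illustrative
variation; ``-c`` is kvmtool's option for the number of vCPUs. ::

    # Run the ipi tests with 4 vCPUs
    sudo ./lkvm run -c 4 -m 2048 -f arm/gic.flat --irqchip gicv3-its -p "ipi"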
vLPI Test
^^^^^^^^^

- It is necessary to choose the right kernel version from the ``GRUB`` boot
  menu while booting the host satadisk image for this test. It is essential
  to avoid booting with the ``refinfra`` kernel; rather, use any other kernel
  version. Go ahead and boot the host satadisk image on the FVP as mentioned
  in :ref:`Distro Boot `. For a host Ubuntu distro version below v22.04,
  ensure that any menu entry other than **"Ubuntu, with Linux refinfra"** is
  selected from the sub-menu entry **"Advanced options for Ubuntu"**. Command
  to begin the Ubuntu distro boot from the ``ubuntu.satadisk`` image: ::

    ./distro.sh -p rdn2cfg1 -d /absolute/path/to/ubuntu.satadisk -n true

- Neoverse reference platforms have a few smmu-test-engine devices, which are
  PCIe endpoint devices that can be used to demonstrate this feature. For
  this test, one of the smmu-test-engines (smmute) from the I/O macro block
  is used to generate vLPIs, and the generated vLPI is received by a guest
  virtual machine (VM) running the ``refinfra`` Linux kernel with support for
  the smmute driver. The KVM hypervisor is employed here to set up the guest
  virtual machine. To learn about KVM and virtualization in more detail, read
  through :ref:`Virtualization using KVM ` and the
  :ref:`UEFI supported virtualization guide `.

- Running the KVM session requires the ``refinfra`` Linux kernel image, the
  ``ramdisk-buildroot.img`` initrd image and the ``kvmtool`` binary. The vLPI
  test requires the smmute test app ``smmute`` to be executed from the guest.
  Create a test workspace and download all the built binaries and images: ::

    mkdir -p ~/vlpi-test; cd ~/vlpi-test
    rsync -Wa --progress user@server:TOP_DIR/output//ramdisk-buildroot.img .
    rsync -Wa --progress user@server:TOP_DIR/output//components/linux/Image .
    rsync -Wa --progress user@server:TOP_DIR/output//components/linux/tools/iommu/smmute/smmute .
    rsync -Wa --progress user@server:TOP_DIR/output//components/rdn2/lkvm .

- Run the commands below to attach the smmute device to the ``vfio-pci``
  driver on the host. This is required to allow PCIe endpoint device
  passthrough to the guest OS. Follow the commands below to quickly set up
  the device; to learn about it in more detail, read through
  `Linux vfio`_. ::

    sudo modprobe vfio-pci
    echo "vfio-pci" | sudo tee /sys/bus/pci/devices/0000\:08\:00.1/driver_override
    echo "0000:08:00.1" | sudo tee /sys/bus/pci/drivers_probe

- Make sure that the device is attached to the vfio-pci driver: ::

    $ lspci -vv -s 0000:08:00.1 | grep vfio-pci
         Kernel driver in use: vfio-pci

- Launch the virtual machine with a kernel image and an initrd image as the
  guest OS. Run the command below from the ``vlpi-test`` workspace directory
  to start a KVM session with the kernel image ``Image``, the initrd image
  ``ramdisk-buildroot.img`` and the PCI device with requester ID (BDF)
  ``0000:08:00.1`` used for direct device assignment: ::

    screen -md -S "virt0" sudo ./lkvm run -m 2048 -k Image -i ramdisk-buildroot.img --irqchip gicv3-its --9p $(pwd),hostshare --console serial -p "console=ttyS0 --earlycon=uart,mmio,0x1000000 ip=dhcp" --vfio-pci 0000:08:00.1 --disable-mte; screen -r virt0;

- Enter the sudo password if prompted for one.

- After the guest boots up, mount the 9p filesystem with the mount tag
  ``hostshare`` to discover the ``smmute`` test app in the guest, and finally
  run the smmute test app as shown below: ::

    mount -t 9p -o trans=virtio hostshare /tmp/
    cd /tmp
    ./smmute -s 0x100 -n 10

  Running the test outputs a log similar to the one shown below for 10
  transactions. If all the transactions have status 0 (success), without any
  kernel log popping up about a missed MSI-X transaction, it is safe to say
  that direct injection of vLPIs has been tested (a quick way to inspect the
  guest's MSI vectors is sketched at the end of this section): ::

    Result:
    - transaction = 2
    - status = 0 Success
    - value = 0x0
    - duration = 2 us
    Output buffer:
    000: f1 f2 f3 f4 f5 f6 f7 f8 f9 fa fb fc fd fe ff 00
    010: 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 10
    020: 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f 20
    030: 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f 30
    ...

- At last, shut down the guest: ::

    poweroff

  On completion of the guest shutdown, ``kvmtool`` prints a message denoting
  an error-free closing of the KVM session: ::

    # KVM session ended normally.

.. _Linux vfio: https://www.kernel.org/doc/Documentation/driver-api/vfio.rst
.. _Ubuntu KVM Installation guide: https://help.ubuntu.com/community/KVM/Installation
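If a transaction reports a non-zero status, one place to start debugging is
the guest's interrupt accounting. The check below is illustrative only; the
exact interrupt names and counters depend on the kernel version and the
device. ::

    # Inside the guest: list MSI vectors and their delivery counts.
    # Vectors for the passed-through device typically show up as ITS-MSI
    # entries whose counters increase as transactions complete.
    grep -i msi /proc/interrupts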