Neoverse Reference Design Platform Software

Compute Express Link

Important

This feature might not be applicable to all platforms. Check the Supported Features section of the individual platform pages to confirm that this feature is listed as supported.

Compute Express Link (CXL) is an open standard interconnect for high-speed CPU-to-device and CPU-to-memory communication, designed to accelerate next-generation data center performance. CXL is built on the PCI Express (PCIe) physical and electrical interface, with protocols in three key areas: input/output (I/O), memory, and cache coherence.


Fig. 3 CXL Type-3 device modeled on Neoverse N2 reference design platform.

This document explains the handling of a CXL 2.0 Type-3 device (memory expander) on the Neoverse N2 reference design platform. At present, CXL support has been verified on the ‘rdn2cfg1’ platform. A CXL Type-3 device supports the CXL.io and CXL.mem protocols and acts as a memory expander for the host SoC.

CXL Software Overview

System Control Processor (SCP) firmware

  1. In the host address space, an 8GB region starting at 0x3FE_0000_0000 is reserved for CXL memory. This region is part of the SCG and is configured as a Normal cacheable memory region.

  2. CMN-700 is the main interconnect, and it is configured as part of PCIe enumeration and topology discovery.

  3. The pcie_enumeration module performs PCIe enumeration. As part of the enumeration process, it also checks whether each detected PCIe device supports the CXL Extended Capability, invoking a CXL module API to make this determination.

  4. The CXL module also determines whether the CXL device has DOE capability. Once found, it executes DOE operations to fetch the CDAT structure and learn the memory range supported by the CXL device. The DOE operation sequence is implemented following DOE-ECN 12Mar-2020.

    The module checks the CXL object’s DOE busy bit and initiates DOE operations accordingly to fetch the CXL CDAT structures (DSMAS is supported in the latest FVP model). It reads the CXL device DPA base and DPA length from the DSMAS structures and saves them into an internal remote-memory software data structure.

  5. After completing the enumeration process, the pcie_enumeration module invokes a CXL module API to map the remote CXL memory region into the host address space and perform the necessary CMN configuration.

    The software data structure for remote memory holds the CXL Type-3 device physical memory address, size, and memory attributes. The CXL module calls a CMN module API to perform the necessary interconnect configuration.

  6. The CMN module configures an HN-F Hashed Target Group (HTG) with the address region reserved for remote CXL memory use, based on the discovered remote device memory size. The HN-F CCG SA node IDs and the CXL.Mem region in the HNF-SAM HTG are configured in the following order:

HNF_SAM_CCG_SA_NODEID_REG
HNF_SAM_HTG_CFG3_MEMREGION
HNF_SAM_HTG_CFG2_MEMREGION
HNF_SAM_HTG_CFG1_MEMREGION

Program por_ccg_ra_sam_addr_region_reg with the target HAID, host memory base address, and size for accessing the remote CXL memory.
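
As a quick sanity check, the window reserved in step 1 can be verified with simple address arithmetic; the base and size values below are taken from this page (an 8GB region at 0x3FE_0000_0000), and the same values reappear later in the decoder sysfs entries:

```shell
# CXL memory window reserved by SCP firmware: 8 GiB at 0x3FE_0000_0000.
base=$((0x3FE00000000))
size=$((0x200000000))        # 8 GiB
end=$((base + size))
printf 'CXL window: base=0x%X size=0x%X end=0x%X\n' "$base" "$size" "$end"
```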

EDK2 Platform

  1. A new CXL.Dxe driver is introduced that looks for PCIe devices with CXL and DOE capabilities. This discovery process begins on receiving notification of the installation of gEfiPciEnumerationCompleteProtocolGuid.

  2. It first looks for PCIe devices with extended capabilities and then checks whether the device supports DOE. If DOE operation is supported, it sends DOE commands and obtains the remote memory details in the form of CDAT tables (DSMAS). The operation is similar to that performed in SCP firmware, explained above.

  3. After the complete PCIe topology is enumerated, all remote memory node details are stored in a local data structure and the CXLPlatformProtocol interface is installed.

  4. The ACPITableGenerator module dynamically prepares the ACPI tables. It uses the CXLPlatformProtocol interfaces to get the previously discovered remote CXL memory details, and prepares the SRAT table with both local memory and remote CXL memory nodes, along with other necessary details.

    It also prepares the HMAT table with the required proximity and latency information.

  5. The remote CXL memory is presented to the kernel as a memory-only NUMA node.

  6. The CEDT structures, CHBS and CFMWS, are also created and passed to the kernel. In the CFMWS structure, the interleave target count is set to 1, demonstrating a reference solution with CEDT structures in the absence of interleaving capability in the current FVP model. There are no real interleaved address windows across multiple ports with this configuration; it is equivalent to a single-port CXL host bridge.

  7. ACPI0016 and ACPI0017 objects are created at runtime by PcieAcpiTableGenerator.Dxe and passed to the kernel. ACPI0016 indicates the presence of a CXL host bridge, and ACPI0017 corresponds to the CFMWS and CHBS structures.
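
Because the CFMWS window has a single interleave target (interleave ways = 1), host-physical to device-physical translation is a plain offset from the window base, with no interleave math involved. A minimal sketch of that arithmetic, where the example HPA is hypothetical:

```shell
# 1-way interleave: DPA = HPA - CFMWS base (no interleave decoding applies).
cfmws_base=$((0x3FE00000000))    # window base from the CEDT/CFMWS
hpa=$((0x3FE00001000))           # hypothetical HPA inside the window
dpa=$((hpa - cfmws_base))
printf 'HPA 0x%X -> DPA 0x%X\n' "$hpa" "$dpa"
```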

Kernel

  1. All the firmware work is validated using the CXL framework present in the kernel.

CXL with CEDT and Decoder configuration

../_images/cxl-with-decoder-config.png

Download and build the required platform software

To download and build the platform firmware, refer to Buildroot Boot or Busybox Boot. Any other boot mechanism, such as Distro Boot, is also fine for the CXL capability test.

Ensure that the model parameter “-C pcie_group_0.pciex16.pcie_rc.add_cxl_type3_device_to_default_hierarchy=true” is present in “rdinfra/platforms/<rd platform>/run_model.sh”.
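
The presence of this parameter can be checked with a simple grep before launching the model. The snippet below exercises the check against a temporary sample file so it is self-contained; in practice, point PARAMS_FILE at the actual run_model.sh for your platform:

```shell
# Sketch: confirm the CXL Type-3 model parameter is present in the launch
# script. PARAMS_FILE here is a temporary stand-in for run_model.sh.
PARAMS_FILE=$(mktemp)
cat > "$PARAMS_FILE" <<'EOF'
-C pcie_group_0.pciex16.pcie_rc.add_cxl_type3_device_to_default_hierarchy=true
EOF
if grep -q 'add_cxl_type3_device_to_default_hierarchy=true' "$PARAMS_FILE"; then
    result="present"
else
    result="missing"
fi
echo "CXL Type-3 parameter: $result"
rm -f "$PARAMS_FILE"
```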

Validating CXL capabilities in Kernel

In the following explanation, ‘buildroot’ boot is taken as an example, since buildroot provides more utility options.

  1. Boot the platform to the buildroot command-line prompt.

  2. Run the command ‘lspci -k’, which lists all the PCIe devices and their associated kernel drivers. The output for the CXL device is shown below. Note that the BDF position of the CXL device may vary based on the PCIe topology of the model.

    00:18.0 Memory controller [0502]: ARM Device ff82 (rev 0f)
    Subsystem: ARM Device 000f
    Kernel driver in use: cxl_pci
    

    One point to note: ensure that CXL is enabled in the kernel ‘defconfig’.

    CONFIG_CXL_BUS=y
    CONFIG_CXL_MEM_RAW_COMMANDS=y
    
  3. Next, to check the capabilities of the CXL device, execute ‘lspci -vv -s 00:18.0’, which displays the following output.

    00:18.0 Memory controller [0502]: ARM Device ff82 (rev 0f) (prog-if 10)
      Subsystem: ARM Device 000f
      Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
      Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
      IOMMU group: 10
      Region 0: Memory at 60800000 (32-bit, non-prefetchable) [size=64K]
      Capabilities: [40] Power Management version 1
              Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot+,D3cold-)
              Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    
      ....
    
      Capabilities: [118 v1] Extended Capability ID 0x2e
      Capabilities: [130 v1] Designated Vendor-Specific: Vendor=1e98 ID=0000 Rev=1 Len=40: CXL
              CXLCap: Cache- IO+ Mem+ Mem HW Init- HDMCount 1 Viral-
              CXLCtl: Cache- IO+ Mem- Cache SF Cov 0 Cache SF Gran 0 Cache Clean- Viral-
              CXLSta: Viral-
      Capabilities: [158 v1] Designated Vendor-Specific: Vendor=1e98 ID=0008 Rev=0 Len=20 <?>
      Kernel driver in use: cxl_pci
    
  4. NUMA utilities can be used to check the CXL device memory capabilities. Enable the NUMACTL package in the buildroot ‘defconfig’.

    For example, in 'configs/rdn2cfg1/buildroot/aarch64_rdinfra_defconfig' enable 'BR2_PACKAGE_NUMACTL=y'
    

    With the NUMA utilities available in buildroot, execute the command ‘numactl -H’, which shows all the available NUMA nodes and their capacities.

    numactl -H
    available: 2 nodes (0-1)
    node 0 cpus: 0 1 2 3 4 5 6 7
    node 0 size: 7930 MB
    node 0 free: 7824 MB
    node 1 cpus:
    node 1 size: 8031 MB
    node 1 free: 8010 MB
    node distances:
    node   0   1
      0:  10  20
      1:  20  10
    

    Here it shows that Node 1 (the CXL device) has a memory capacity of 8031MB, which adds to the total memory available to the system. This extended memory region is available for kernel use, which can be verified using the NUMA utilities ‘numademo’ and ‘numastat’.

# numastat -n

 Per-node numastat info (in MBs):
                            Node 0          Node 1           Total
                       --------------- --------------- ---------------
 Numa_Hit                  215.21           84.72          299.93
 Numa_Miss                   0.00            0.00            0.00
 Numa_Foreign                0.00            0.00            0.00
 Interleave_Hit             25.98           26.68           52.66
 Local_Node                215.21            0.00          215.21
 Other_Node                  0.00           84.72           84.72
  5. If the NUMA utilities are not present, the CXL device memory information can be verified using the NUMA node1 sysfs entries.

[ceoss@localhost ~]$ cat /sys/devices/system/node/node1/meminfo
Node 1 MemTotal:        8224032 kB
Node 1 MemFree:         8203836 kB
Node 1 MemUsed:           20196 kB
Node 1 Active:                0 kB
Node 1 Inactive:              0 kB
...
Node 1 KReclaimable:       2180 kB
Node 1 Slab:               6060 kB
Node 1 SReclaimable:       2180 kB
Node 1 SUnreclaim:         3880 kB
Node 1 HugePages_Total:     0
Node 1 HugePages_Free:      0
Node 1 HugePages_Surp:      0

The above examples demonstrate how a CXL Type-3 device is used as a memory expander and how the device memory region can be utilized by the kernel.
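
As a quick consistency check on the node1 meminfo output above, MemUsed is simply MemTotal minus MemFree:

```shell
# Values from the sysfs meminfo output for NUMA node 1 (in kB).
total_kb=8224032
free_kb=8203836
used_kb=$((total_kb - free_kb))
echo "Node 1 MemUsed = ${used_kb} kB"
```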

CEDT and CXL ACPI configuration in Kernel sysfs

  1. Check the CXL mem device size through the CXL sysfs interface (showing the CXL.Mem device size of 8GB).

# cat /sys/bus/cxl/devices/mem0/ram/size
  0x200000000
  2. The CXL mem device at the root device downstream port.

# cat /sys/bus/cxl/devices/root0/dport0/physical_node/0000\:00\:18.0/mem0/ram/size
  0x200000000
  3. Decoder configuration passed through CFMWS, as seen by the kernel.

# cat /sys/bus/cxl/devices/root0/decoder0.0/start
  0x3fe00000000

# cat /sys/bus/cxl/devices/root0/decoder0.0/size
  0x200000000

# cat /sys/bus/cxl/devices/root0/decoder0.0/target_list
  0

# cat /sys/bus/cxl/devices/root0/decoder0.0/interleave_ways
  1
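
These decoder values tie back to the earlier sections: the start 0x3fe00000000 matches the window reserved by SCP firmware, and the size 0x200000000 is the 8 GiB CXL.mem capacity. A one-line conversion of the size read from sysfs:

```shell
# Convert the decoder size reported in sysfs to GiB.
size=$((0x200000000))
gib=$((size >> 30))
echo "decoder0.0 size: ${gib} GiB"
```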

© Copyright 2020-2024, Arm Limited. All rights reserved.