Set up the Neoverse Reference Design software stack workspace

Introduction

This page describes the procedure to sync (download) Arm’s Neoverse Reference Design (RD) platform software stack.

Note

An AArch64 or x86-64 host machine with Ubuntu 22.04, 64GB of free disk space, and 32GB of RAM is the minimum requirement to sync and build the platform software stack. 48GB of RAM is recommended, though.
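
To check the available disk space and free memory on the host before starting, commands such as the following can be used:

df -h .
free -g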

Git and Repo tool setup

The Neoverse RD software stack is spread across multiple git repositories. To simplify downloading the software stack, the repo tool is used. This section explains the procedure to set up git and the repo tool.

  • Install Git by using the following command

sudo apt install git
  • Git installation can be confirmed by checking the version

git --version

This should return the git version in a format such as ``git version 2.7.4``
  • Configure name and email address

git config --global user.name "<your-name>"
git config --global user.email "<[email protected]>"
  • Install the repo tool by following these instructions; a typical installation is sketched below.
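
A typical standalone installation of the repo tool (assuming ~/bin exists and is on your PATH; consult the linked instructions for the authoritative steps) looks like this:

mkdir -p ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
chmod a+x ~/bin/repo
export PATH="${HOME}/bin:${PATH}"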

This completes the setup of git and the repo tool.

Platform Manifest Names

The repo tool uses a manifest file to download the source code. The manifest file lists the locations of the various repositories and the branches in those repositories from which the code has to be downloaded. Each Neoverse RD platform has a unique manifest that is supplied to the repo tool to download the corresponding platform software. The following table lists the platform names and the corresponding manifest file names. Make a note of the manifest file name for the platform of your choice, as it is required in the subsequent instructions.

Reference Platform           Manifest File Name
RD-Fremont                   pinned-rdfremont.xml
RD-Fremont-Cfg1              pinned-rdfremontcfg1.xml
RD-Fremont-Cfg2              pinned-rdfremontcfg2.xml
RD-V2                        pinned-rdv2.xml
RD-N2                        pinned-rdn2.xml
RD-N2-Cfg1                   pinned-rdn2cfg1.xml
RD-N2-Cfg2                   pinned-rdn2cfg2.xml
RD-N2-Cfg3                   pinned-rdn2cfg3.xml
RD-V1 (Single Chip)          pinned-rdv1.xml
RD-V1 (Quad Chip)            pinned-rdv1mc.xml
RD-N1-Edge (Single Chip)     pinned-rdn1edge.xml
RD-N1-Edge (Dual Chip)       pinned-rdn1edgex2.xml
SGI-575                      pinned-sgi575.xml

Downloading the software stack

The manifest files, which list the locations of all the git repositories of the RD platform software stack, are available in the infra-refdesign-manifests repository (used in the repo init command below). This section explains the procedure to sync the software stack.

  • Create and switch to a new empty folder

mkdir rd-infra
cd rd-infra
  • To obtain the latest stable software stack, use the commands listed below.

Note

To reduce the size of the commit history that is downloaded (and the time taken to download the platform software stack), append --depth=1 to the repo init command.

repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m <manifest-file-name> -b refs/tags/<RELEASE_TAG>
repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle

Note

The manifest file name for the required platform can be found in the Platform Manifest Names section. The RELEASE_TAG can be found in the Release Tags section of the corresponding platform’s user guide or in the release notes, if available.
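
For example, a shallow repo init for the RD-N2 platform would look like this (with <RELEASE_TAG> substituted appropriately), followed by the same repo sync command as above:

repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m pinned-rdn2.xml -b refs/tags/<RELEASE_TAG> --depth=1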

Note

The repo tool requires at least Python 3.6 to be installed on the development machine. On machines where python3 is not the default, the repo init command will fail to complete. Refer to the troubleshooting guide for resolving this issue.
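
To confirm the Python version available on the host, execute:

python3 --version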

Setting up the Build Environment

There are two methods to build the reference stack: host based and container based. The host-based build is the traditional one, in which a script is executed to install all the build dependencies on the host machine. The container-based build is an alternative method, in which a container image is built from a container configuration file and has all the build dependencies satisfied and isolated from the host machine.

Host Based

To set up the build environment with this method, execute the following command before building the software stack. This script installs all the build dependencies.

sudo ./build-scripts/rdinfra/install_prerequisites.sh

Note

This command installs additional packages on the host machine, so the user is expected to have sufficient privileges on the host machine.

Container Based

The supported container engine is Docker, and this setup is verified using Ubuntu 22.04 LTS as the host OS.

The container image is designed to let the user keep the sources directory on the host machine and offload the build stage to the container. To achieve this, a user is created inside the container with the same username, user ID, and user group as the user on the Linux host machine.

This means that if you have already followed the Downloading the software stack section, you can mount that folder inside the running container.

Install Container Engine

Please refer to Docker's official instructions, as there are several installation methods available, ensuring that you install the docker-engine and buildx-plugin.

After installation is complete, refer to the post-installation steps on how to manage docker as a non-root user. The container file, wrapper, and utility scripts are located in this repository.
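
The docker engine and the buildx plugin can be verified with:

docker --version
docker buildx version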

Note

Do not execute the wrapper script with root permissions.

The wrapper script container.sh defines default values for the container file and image name; these can be changed either with the -f and -i flags or by editing the file itself. To see all available options, change to the directory where you cloned the repository above and execute

cd container-scripts
./container.sh -h

Build Container Image

Note

Do not execute the wrapper script with root permissions.

To build the container image, execute

./container.sh build
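
Once the build completes, the new image should be visible in the local image list, for example:

docker image ls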

Run Container Image

Note

Do not execute the wrapper script with root permissions.

Mount the directory in which the software stack has been downloaded inside the container. This is achieved by using the -v flag. The mount point inside the container is /home/$USER/workspace.

Note

The path needs to be an absolute path.

To run the container image, execute

./container.sh -v /absolute/path/to/rd-infra run

You are now inside the container, and the prompt should look like

USER:HOSTNAME:~/workspace$
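
As a quick sanity check inside the container, the user mapping and the workspace mount can be confirmed (assuming the software stack was synced into the mounted directory):

id
ls ~/workspace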

This completes the procedure to setup the container-based build environment.

Setting up the TAP interface (optional)

The platform FVP supports a virtual Ethernet interface to provide networking support for the software executed by the FVP. If networking support is required, the host TAP interface has to be set up before the FVP is launched. To set up the TAP interface, execute the following commands on the host machine.

  • Install libvirt and other packages

sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils
  • Ensure that the libvirtd service is active

sudo systemctl start libvirtd
  • Use the ifconfig command and ensure that a virtual bridge interface named ‘virbrX’ (where X is a number 0, 1, 2, ...) is created. If no instance of the virtual bridge is available, use the following command to create it.

sudo brctl addbr virbr0
  • Create a tap interface named ‘tap0’

sudo ip tuntap add dev tap0 mode tap user $(whoami)
sudo ifconfig tap0 0.0.0.0 promisc up
sudo brctl addif virbr0 tap0
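
To confirm the setup, the bridge and the TAP interface can be inspected; the exact output will vary by host:

brctl show virbr0
ip addr show tap0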

This completes the procedure to download the platform software stack, set up the GCC toolchain binaries, and install the other prerequisites. Refer to the troubleshooting guide for solutions to known issues that might arise during use of the platform software stack.


Copyright (c) 2020-2023, Arm Limited. All rights reserved.