***************
Getting Started
***************

Prerequisites
=============

.. important::

   - Neoverse software stack builds are only supported on Linux operating systems.
   - The operating system used to validate these instructions is Ubuntu 22.04 (although any modern Linux distribution should work).
   - The following sections and chapters assume the commands are executed in a bash shell environment.

.. _host-requirements:

Host machine recommended hardware configuration:

- AArch64 or x86-64 architecture host.
- 64GB of free disk space.
- 48GB of RAM (32GB minimum).

The host machine needs the following packages installed.

.. code-block:: shell

   sudo apt update
   sudo apt install curl git

Configure git as follows.

.. code-block:: shell

   git config --global user.name "<user name>"
   git config --global user.email "<user email>"

Install the repo tool via the *'manual method'*. Refer to the `repo install`_ official documentation as this might change. Instructions are provided here for convenience.

.. code-block:: shell

   export REPO=$(mktemp /tmp/repo.XXXXXXXXX)
   curl -o ${REPO} https://storage.googleapis.com/git-repo-downloads/repo
   gpg --recv-keys 8BB9AD793E8E6153AF0F9A4416530D5E920F5C65
   curl -s https://storage.googleapis.com/git-repo-downloads/repo.asc | gpg --verify - ${REPO} && install -m 755 ${REPO} ~/bin/repo

.. _python-version-required:

.. warning::

   The repo tool requires at least Python 3.6 to be installed on the development machine. On machines where python3 is not the default, the repo init command will fail to complete. Refer to the :ref:`troubleshooting guide `.
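Since the repo tool requires Python 3.6 or later, a quick way to confirm the default ``python3`` interpreter on the host is shown below; the exact version reported will vary between distributions.

.. code-block:: shell

   # Confirm that python3 is available and reports version 3.6 or newer.
   python3 --version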
.. _download-sources:

Download Sources
================

In the previous section, the host machine was configured with the minimum set of tools that allow the user to prepare and *sync* a workspace. This workspace is then used to configure a build environment, as described in the next section. The workspace is a folder on the host machine that contains all of the software sources, as well as the build products once a build completes successfully. This guide refers to this folder as ``<workspace>``, but the user is encouraged to provide a meaningful name.

Create a folder and change directory into it.

.. code-block:: shell

   mkdir <workspace>
   cd <workspace>

Initialise and sync (download) the sources. The command below is the generic form and requires ``<manifest file name>`` and ``<release tag>`` to be replaced with valid arguments.

- Manifest file names can be found :ref:`here `.
- Release tags are located in the *Release Tags* section of each supported platform user guide or in the release notes.

.. code-block:: shell

   repo init -u https://git.gitlab.arm.com/infra-solutions/reference-design/infra-refdesign-manifests.git -m <manifest file name> -b refs/tags/<release tag> --depth=1
   repo sync -c -j $(nproc) --fetch-submodules --force-sync --no-clone-bundle

.. hint::

   To reduce the size of the commit history that is downloaded (thus reducing the time taken to download the platform software stack), the repo init command above is appended with ``--depth=1``. If the user requires more commit history, the argument can be removed before executing the command.

Build Environment
=================

There are two methods to build the reference stack: host based and container based. The host-based build is the traditional one, in which a script is executed to install all the build dependencies on the host machine. The container-based build is an alternative method, in which a container image is built from a container configuration file and provides all the build dependencies, isolated from the host machine. Both methods assume the user has completed the section :ref:`Download Sources <download-sources>`.

Host Based
----------

To set up the build environment with this method, execute the following command before building the software stack. The execution of this script installs all the build dependencies.

.. note::

   This command installs additional packages on the host machine, so the user is expected to have sufficient privileges on the host machine.

.. code-block:: shell

   sudo ./build-scripts/rdinfra/install_prerequisites.sh

Container Based
---------------

.. note::

   The supported container engine is Docker.

The container image is designed to allow a user to keep the sources directory (``<workspace>``) on the host machine and offload the build stage to the container. To achieve this, a user is created inside the container with the same username, user-id and user-group as the user on the Linux host machine. This approach allows the user to take the binaries built by the container and use IDEs such as Arm DS to execute debug sessions, as paths and permissions are the same whether inside or outside the container.

Install Container Engine
^^^^^^^^^^^^^^^^^^^^^^^^

Please refer to the `docker install`_ instructions, as there are several methods available, ensuring that *docker-engine* is installed and, optionally, the *buildx-plugin*. After installation is complete, refer to the `post-installation steps`_ on how to manage Docker as a non-root user.
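Because the wrapper script described below must not be run as root, it can be worth confirming that Docker commands work without ``sudo`` before building the container image. A quick check, for example:

.. code-block:: shell

   # Should complete without sudo; a permission error on the Docker socket
   # indicates the post-installation steps are not yet complete.
   docker run hello-world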
Build Container Image
^^^^^^^^^^^^^^^^^^^^^

.. warning::

   Do **not** execute the wrapper script with root permissions. Doing so interferes with permissions and will lead to errors when building and executing software.

The wrapper script *container.sh* sets the container file and image name by default; these can be changed with the options *-f* and *-i* respectively, or by editing the file itself. To see all available options, execute the script with the help flag.

.. code-block:: shell

   cd <workspace>/container-scripts
   ./container.sh -h

To build the container image, execute:

.. code-block:: shell

   ./container.sh build

Run Container Image
^^^^^^^^^^^^^^^^^^^

Mount the ``<workspace>`` directory in the container by using the option **-v** followed by the absolute path to ``<workspace>``. The mount point inside the container is the exact same path as on the host system. To run the container image, execute the following:

.. code-block:: shell

   ./container.sh -v /absolute/path/to/rd-infra run

The container shall be running and the shell prompt will display:

.. code-block:: shell

   $USER:$HOSTNAME:/$

As the container is designed to have the same user and hostname as the host, it is not straightforward to see that the container is executing. One way to verify it is to confirm the current working directory:

.. code-block:: shell

   pwd

The output shall be ``/``, meaning the root folder of the container file system. This completes the procedure to set up the container-based build environment.

Enable Network for FVPs (optional)
==================================

If networking is required, the platform FVPs support a virtual Ethernet interface that can be configured in TAP mode. This mode allows the FVP to be directly connected to the network via a bridge. All ports are forwarded to the FVP networking interface as if it were connected to the network.

Host Dependencies
-----------------

.. note::

   This command installs additional packages on the host machine, so the user is expected to have sufficient privileges on the host machine.

.. code-block:: shell

   sudo apt update
   sudo apt install qemu-kvm libvirt-daemon-system iproute2

Configure TAP Interface
-----------------------

Ensure that the ``libvirtd`` service is running.

.. code-block:: shell

   sudo systemctl start libvirtd

Create a network bridge and bring it up. This step is only required once, so the user can skip it if a bridge already exists. This example uses ``virbr0`` for the bridge name.

.. code-block:: shell

   sudo ip link add name virbr0 type bridge
   sudo ip link set dev virbr0 up

Finally, the TAP interface is created, configured and attached to ``virbr0``. An optional check of the resulting interfaces is sketched at the end of this page.

.. code-block:: shell

   sudo ip tuntap add dev tap0 mode tap user $(whoami)
   sudo ip link set tap0 promisc on
   sudo ip addr add 0.0.0.0 dev tap0
   sudo ip link set tap0 up
   sudo ip link set tap0 master virbr0

This completes the environment setup. The user now has a working workspace and can proceed to build and experiment with the Neoverse reference design features. Refer to the :ref:`Troubleshooting Section ` for solutions to known issues that might arise during use of the platform software stack.

.. _repo install: https://source.android.com/setup/develop#installing-repo
.. _docker install: https://docs.docker.com/engine/install/
.. _post-installation steps: https://docs.docker.com/engine/install/linux-postinstall/
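As an optional sanity check before launching an FVP with networking enabled, the bridge and TAP interface created above can be inspected. A minimal check, assuming the ``virbr0`` and ``tap0`` names used in this example:

.. code-block:: shell

   # Show the bridge and TAP devices; once attached, the tap0 entry
   # should list 'master virbr0'.
   ip link show virbr0
   ip link show tap0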