Setting up your Manager Instance

Launching a “Manager Instance”


These instructions refer to fields in EC2’s new launch instance wizard. Refer to version 1.13.4 of the documentation for references to the old wizard, keeping in mind that specifics in that version, such as the AMI ID selection, may be out of date.

Now, we need to launch a “Manager Instance” that acts as a “head” node that we will ssh or mosh into to work from. Since we will deploy the heavy lifting to separate z1d.2xlarge and f1 instances later, the Manager Instance can be a relatively cheap instance. In this guide, however, we will use a c5.4xlarge, running the AWS FPGA Developer AMI. (Be sure to subscribe to the AMI if you have not done so. See Subscribe to the AWS FPGA Developer AMI. Note that it might take a few minutes after subscribing to the AMI to be able to launch instances using it.)

Head to the EC2 Management Console. In the top right corner, ensure that the correct region is selected.

To launch a manager instance, follow these steps:

  1. From the main page of the EC2 Management Console, click the Launch Instance ▼ button and click Launch Instance in the dropdown that appears. We use an on-demand instance here, so that your data is preserved when you stop/start the instance, and your data is not lost when pricing spikes on the spot market.

  2. In the Name field, give the instance a recognizable name, for example firesim-manager-1. This is purely for your own convenience and can also be left blank.

  3. In the Application and OS Images search box, search for FPGA Developer AMI - 1.12.2-40257ab5-6688-4c95-97d1-e251a40fd1fc and select the AMI that appears under the Community AMIs tab (there should be only one).

    • If you find that there are no results for this search, you can try incrementing the last part of the version number (Z in X.Y.Z) in the search string, e.g., 1.12.2 -> 1.12.3. Other parts of the search string should be unchanged.

    • Do not use FPGA Developer AMI from the AWS Marketplace AMIs tab, as you will likely get an incorrect version of the AMI.

  4. In the Instance Type drop-down, select the instance type of your choosing. A good choice is a c5.4xlarge (16 cores, 32 GiB DRAM) or a z1d.2xlarge (8 cores, 64 GiB DRAM).

  5. In the Key pair (login) drop-down, select the firesim key pair we set up earlier.

  6. In the Network settings drop-down, click Edit and modify the following settings:

    1. Under VPC - required, select the firesim VPC. Any subnet within the firesim VPC is fine.

    2. Under Firewall (security groups), click Select existing security group and in the Common security groups dropdown that appears, select the firesim security group that was automatically created for you earlier. Do NOT select the for-farms-only-firesim security group that might also be in the list (it is also fine if this group does not appear in your list).

  7. In the Configure storage section, increase the size of the root volume to at least 300GB. The default of 120GB can quickly become too small as you accumulate large Vivado reports/outputs, large waveforms, XSim outputs, and large root filesystems for simulations. You should remove the small (5-8GB) secondary volume that is added by default.

  8. In the Advanced details drop-down, change the following:

    1. Under Termination protection, select Enable. This adds a layer of protection to prevent your manager instance from being terminated by accident. You will need to disable this setting before being able to terminate the instance using usual methods.

    2. Under User data, paste the following into the provided textbox:

      export HOME="${HOME:-/root}"
      CONDA_CMD="conda" # some installers install mamba or micromamba
          echo "Usage: $0 [options]"
          echo "Options:"
          echo "[--help]                  List this help"
          echo "[--prefix <prefix>]       Install prefix for conda. Defaults to $CONDA_INSTALL_PREFIX."
          echo "                          If <prefix>/bin/conda already exists, it will be used and install is skipped."
          echo "[--env <name>]            Name of environment to create for conda. Defaults to '$CONDA_ENV_NAME'."
          echo "[--dry-run]               Pass-through to all conda commands and only print other commands."
          echo "                          NOTE: --dry-run will still install conda to --prefix"
          echo "[--reinstall-conda]       Repairs a broken base environment by reinstalling."
          echo "                          NOTE: will only reinstall conda and exit without modifying the --env"
          echo "[--shell]                 Run initialization for a specific shell. Defaults to $CONDA_SHELL_TYPE."
          echo "Examples:"
          echo "  % $0"
          echo "     Install into default system-wide prefix (using sudo if needed) and add install to system-wide /etc/profile.d"
          echo "  % $0 --prefix ~/conda --env my_custom_env"
          echo "     Install into $HOME/conda and add install to $CONDA_SHELL_TYPE init files (i.e. ~/.*rc)"
          echo "  % $0 --prefix \${CONDA_EXE%/bin/conda} --env my_custom_env"
          echo "     Create my_custom_env in existing conda install"
          echo "     NOTES:"
          echo "       * CONDA_EXE is set in your environment when you activate a conda env"
          echo "       * my_custom_env will not be activated by default at login see /etc/profile.d/ & $CONDA_SHELL_TYPE init files (i.e. ~/.*rc)"
      while [ $# -gt 0 ]; do
          case "$1" in
                  exit 1
                  if [[ "$CONDA_ENV_NAME" == "base" ]]; then
                      echo "::ERROR:: best practice is to install into a named environment, not base. Aborting."
                      exit 1
                  DRY_RUN_ECHO=(echo "Would Run:")
                  echo "Invalid Argument: $1"
                  exit 1
      if [[ $REINSTALL_CONDA -eq 1 && -n "$DRY_RUN_OPTION" ]]; then
          echo "::ERROR:: --dry-run and --reinstall-conda are mutually exclusive.  Pick one or the other."
      set -ex
      set -o pipefail
          # uname options are not portable so do what
          # suggests and iteratively probe the system type
          if ! type uname >&/dev/null; then
              echo "::ERROR:: need 'uname' command available to determine if we support this system"
              exit 1
          if [[ "$(uname)" != "Linux" ]]; then
              echo "::ERROR:: $0 only supports 'Linux' not '$(uname)'"
              exit 1
          if [[ "$(uname -mo)" != "x86_64 GNU/Linux" ]]; then
              echo "::ERROR:: $0 only supports 'x86_64 GNU/Linux' not '$(uname -mo)'"
              exit 1
          if [[ ! -r /etc/os-release ]]; then
              echo "::ERROR:: $0 depends on /etc/os-release for distro-specific setup and it doesn't exist here"
              exit 1
          OS_FLAVOR=$(grep '^ID=' /etc/os-release | awk -F= '{print $2}' | tr -d '"')
          OS_VERSION=$(grep '^VERSION_ID=' /etc/os-release | awk -F= '{print $2}' | tr -d '"')
          echo "machine launch script started" > "$MACHINE_LAUNCH_DIR/machine-launchstatus"
          chmod ugo+r "$MACHINE_LAUNCH_DIR/machine-launchstatus"
          # platform-specific setup (pre-conda install)
          case "$OS_FLAVOR" in
                  echo "::ERROR:: Unknown OS flavor '$OS_FLAVOR'. Unable to do platform-specific setup."
                  exit 1
          # everything else is platform-agnostic and could easily be expanded to Windows and/or OSX
          prefix_parent=$(dirname "$CONDA_INSTALL_PREFIX")
          if [[ ! -e "$prefix_parent" ]]; then
              mkdir -p "$prefix_parent" || SUDO=sudo
          elif [[ ! -w "$prefix_parent" ]]; then
          if [[ -n "$SUDO" ]]; then
              echo "::INFO:: using 'sudo' to install conda"
              # ensure files are read-execute for everyone
              umask 022
          if [[ -n "$SUDO"  || "$(id -u)" == 0 ]]; then
          # to enable use of sudo and avoid modifying 'secure_path' in /etc/sudoers, we specify the full path to conda
          if [[ -x "$CONDA_EXE" && $REINSTALL_CONDA -eq 0 ]]; then
              echo "::INFO:: '$CONDA_EXE' already exists, skipping conda install"
              wget -O "$CONDA_INSTALLER"  || curl -fsSLo "$CONDA_INSTALLER"
              if [[ $REINSTALL_CONDA -eq 1 ]]; then
                  echo "::INFO:: RE-installing conda to '$CONDA_INSTALL_PREFIX'"
                  echo "::INFO:: installing conda to '$CONDA_INSTALL_PREFIX'"
              # -b for non-interactive install
              $SUDO bash ./ -b -p "$CONDA_INSTALL_PREFIX" $conda_install_extra
              rm ./
              # get most up-to-date conda version
              "${DRY_RUN_ECHO[@]}" $SUDO "$CONDA_EXE" update $DRY_RUN_OPTION -y -n base -c conda-forge conda
              # see
              # for more information on strict channel_priority
              "${DRY_RUN_ECHO[@]}" $SUDO "$CONDA_EXE" config --system --set channel_priority flexible
              # by default, don't mess with people's PS1, I personally find it annoying
              "${DRY_RUN_ECHO[@]}" $SUDO "$CONDA_EXE" config --system --set changeps1 false
              # don't automatically activate the 'base' environment when initializing shells
              "${DRY_RUN_ECHO[@]}" $SUDO "$CONDA_EXE" config --system --set auto_activate_base false
              # automatically use the ucb-bar channel for specific packages
              "${DRY_RUN_ECHO[@]}" $SUDO "$CONDA_EXE" config --system --add channels ucb-bar
              # conda-build is a special case and must always be installed into the base environment
              $SUDO "$CONDA_EXE" install $DRY_RUN_OPTION -y -n base conda-build
              # conda-libmamba-solver is a special case and must always be installed into the base environment
              # see
              $SUDO "$CONDA_EXE" install $DRY_RUN_OPTION -y -n base conda-libmamba-solver
              # Use the fast solver by default
              "${DRY_RUN_ECHO[@]}" $SUDO "$CONDA_EXE" config --system --set solver libmamba
              if [[ "$INSTALL_TYPE" == system ]]; then
                  # if we're installing into a root-owned directory using sudo, or we're already root
                  # initialize conda in the system-wide rcfiles
                  conda_init_extra_args=(--no-user --system)
              # run conda-init and look at its output to insert 'conda activate $CONDA_ENV_NAME' into the
              # block that conda-init will update if ever conda is installed to a different prefix and
              # this is rerun.
              $SUDO "${CONDA_EXE}" init $DRY_RUN_OPTION "${conda_init_extra_args[@]}" $CONDA_SHELL_TYPE 2>&1 | \
                  tee >(grep '^modified' | grep -v "$CONDA_INSTALL_PREFIX" | awk '{print $NF}' | \
                  "${DRY_RUN_ECHO[@]}" $SUDO xargs -r sed -i -e "/<<< conda initialize <<</iconda activate $CONDA_ENV_NAME")
              if [[ $REINSTALL_CONDA -eq 1 ]]; then
                  echo "::INFO:: Done reinstalling conda. Exiting"
                  exit 0
          #   filterable list of all conda-forge packages
          #   instructions on adding a recipe
          #   documentation on package_spec syntax for constraining versions
          # minimal specs to allow cloning of firesim repo and access to the manager
              bash-completion \
              ca-certificates \
              mosh \
              vim \
              git \
              screen \
              argcomplete \
              "conda-lock=1.4" \
              expect \
              "python>=3.8" \
              boto3 \
              pytz \
              mypy-boto3-s3 \
              mypy_boto3_ec2 \
              "azure-mgmt-resource>=18" \
              azure-identity \
              azure-mgmt-compute \
              azure-mgmt-network \
              fsspec \
              "s3fs==0.4.2" \
              "cryptography<41" \
          if [[ "$CONDA_ENV_NAME" == "base" ]]; then
              # NOTE: arg parsing disallows installing to base but this logic is correct if we ever change
              if [[ -d "${CONDA_INSTALL_PREFIX}/envs/${CONDA_ENV_NAME}" ]]; then
                  # 'create' clobbers the existing environment and doesn't leave a revision entry in
                  # `conda list --revisions`, so use install instead
          # to enable use of sudo and avoid modifying 'secure_path' in /etc/sudoers, we specify the full path to conda
          # to enable use of sudo and avoid modifying 'secure_path' in /etc/sudoers, we specify the full path to pip
          # Install python packages using pip that are not available from conda
          # Installing things with pip is possible.  However, to get
          # the most complete solution to all dependencies, you should
          # prefer creating the environment with a single invocation of
          # conda
          PIP_PKGS=( \
              "fab-classic>=1.19.2" \
              azure-mgmt-resourcegraph \
          if [[ -n "${PIP_PKGS[*]}" ]]; then
              "${DRY_RUN_ECHO[@]}" $SUDO "${CONDA_PIP_EXE}" install "${PIP_PKGS[@]}"
          if [[ "$INSTALL_TYPE" == system ]]; then
              "${DRY_RUN_ECHO[@]}" $SUDO mkdir -p "${BASH_COMPLETION_COMPAT_DIR}"
              argcomplete_extra_args=( --dest "${BASH_COMPLETION_COMPAT_DIR}" )
              # if we aren't installing into a system directory, then initialize argcomplete
              # with --user so that it goes into the home directory
              argcomplete_extra_args=( --user )
          set +o pipefail
          "${DRY_RUN_ECHO[@]}" yes | $SUDO "${CONDA_ENV_BIN}/activate-global-python-argcomplete" "${argcomplete_extra_args[@]}"
          set -o pipefail
          # emergency fix for buildroot open files limit issue:
          if [[ "$INSTALL_TYPE" == system ]]; then
              "${DRY_RUN_ECHO[@]}" echo "* hard nofile 16384" | $SUDO tee --append /etc/security/limits.conf
              "${DRY_RUN_ECHO[@]}" echo "::WARN:: Unable to set open files limit without sudo."
          # final platform-specific setup
          case "$OS_FLAVOR" in
                  echo "::INFO:: using 'sudo' to install NICE DCV"
                  chmod +x
                  sudo ./
                  echo "firesim" | sudo passwd ec2-user --stdin # default password is 'firesim'
                  echo "::ERROR:: Unknown OS flavor '$OS_FLAVOR'. Unable to do platform-specific setup."
                  exit 1
      } 2>&1 | tee "$MACHINE_LAUNCH_DIR/machine-launchstatus.log"
      chmod ugo+r "$MACHINE_LAUNCH_DIR/machine-launchstatus.log"
      echo "machine launch script completed" >> "$MACHINE_LAUNCH_DIR/machine-launchstatus"

    When your instance boots, this will install a compatible set of all the dependencies needed to run FireSim on your instance using Conda.

  9. Double check your configuration. The most common misconfigurations that may require repeating this process include:

    1. Not selecting the firesim VPC.

    2. Not selecting the firesim security group.

    3. Not selecting the firesim key pair.

    4. Selecting the wrong AMI.

  10. Click the orange Launch Instance button.
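For users who script their infrastructure, the wizard steps above roughly correspond to a single AWS CLI call. The following is a hedged sketch, not the official FireSim flow: every ID (AMI, subnet, security group) is a placeholder you must replace with the values selected in the console, and the user-data filename assumes you saved the script above locally. By default the helper only prints the command so you can inspect it before launching.

```shell
#!/usr/bin/env bash
# Hedged sketch: launch the manager instance via the AWS CLI instead of
# the wizard. All *-REPLACE_ME IDs are placeholders.
launch_manager() {
    local cmd=(aws ec2 run-instances
        --image-id ami-REPLACE_ME                    # the FPGA Developer AMI
        --instance-type c5.4xlarge
        --key-name firesim
        --subnet-id subnet-REPLACE_ME                # any subnet in the firesim VPC
        --security-group-ids sg-REPLACE_ME           # the firesim security group
        --disable-api-termination                    # termination protection (step 8.1)
        --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=300}'
        --user-data file://machine-launch-script.sh  # the script pasted above, saved locally
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=firesim-manager-1}]')
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "${cmd[*]}"    # print the command instead of launching
    else
        "${cmd[@]}"
    fi
}
```

Run `DRY_RUN=0 launch_manager` only after double-checking the printed command against the checklist below.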


Recently, some AWS users have been having issues with the launch process (after you click Launch Instance) getting stuck trying to “Subscribe” to the AMI, even when the account is already subscribed. We have been able to bypass this issue by going to the FPGA Developer AMI page on the AWS Marketplace, clicking Subscribe (even if already subscribed), then clicking “Continue to Configuration”, verifying that the correct AMI version and region are selected, and clicking “Continue to Launch”. Finally, change the dropdown that says “Launch from Website” to “Launch through EC2” and click “Launch”. At this point, you will be brought back to the usual launch instance page, but the AMI will be pre-selected and you will be able to launch successfully at the end, after updating the rest of the options as noted above.

Access your instance

We HIGHLY recommend using mosh instead of ssh, or running ssh with a screen/tmux session on your manager instance, so that long-running jobs are not killed by a bad network connection to your manager instance. On this instance, the mosh server is installed as part of the setup script we pasted before, so we first need to ssh into the instance and make sure the setup is complete.

In either case, ssh into your instance (e.g. ssh -i firesim.pem centos@YOUR_INSTANCE_IP) and wait until the /tmp/machine-launchstatus file contains all the following text:

$ cat /tmp/machine-launchstatus
machine launch script started
machine launch script completed

You can also view the live output of the installation process by running tail -f /tmp/machine-launchstatus.log.
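If you would rather script the wait than re-run cat by hand, a small poll loop works; this is a sketch of our own, relying only on the status file path and completion message from the script above.

```shell
#!/usr/bin/env bash
# Sketch: block until the user-data script reports completion by polling
# the status file it writes (/tmp/machine-launchstatus by default).
wait_for_launch() {
    local status_file="${1:-/tmp/machine-launchstatus}"
    until grep -q "machine launch script completed" "$status_file" 2>/dev/null; do
        sleep 10
    done
    echo "setup complete"
}
```

Call `wait_for_launch` after ssh-ing in; it returns once the setup script finishes.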

Once machine launch script completed appears in /tmp/machine-launchstatus, exit and re-ssh into the system. If you want to use mosh, mosh back into the system.

Key Setup, Part 2

Now that our manager instance is started, copy the private key that you downloaded from AWS earlier (firesim.pem) to ~/firesim.pem on your manager instance. This step is required to give the manager access to the instances it launches for you.
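From your local machine, this copy can be done with scp. The helper below is a hedged sketch: the function name is our own, YOUR_INSTANCE_IP is a placeholder for the manager's public IP, and centos is the default login user on the FPGA Developer AMI.

```shell
#!/usr/bin/env bash
# Sketch: run on your LOCAL machine to copy the key to the manager and
# lock down its permissions so ssh will accept it.
copy_manager_key() {
    local ip="$1"
    if [ -z "$ip" ]; then
        echo "usage: copy_manager_key <manager-ip>" >&2
        return 1
    fi
    scp -i firesim.pem firesim.pem "centos@${ip}:~/firesim.pem" &&
        ssh -i firesim.pem "centos@${ip}" 'chmod 600 ~/firesim.pem'
}
```

For example: `copy_manager_key YOUR_INSTANCE_IP`, run from the directory containing firesim.pem.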

Setting up the FireSim Repo

We’re finally ready to fetch FireSim’s sources. Run:

git clone
cd firesim
# checkout latest official firesim release
# note: this may not be the latest release if the documentation version != "stable"
git checkout |overall_version|

The script will validate that you are on a tagged branch; otherwise, it will prompt for confirmation. When it finishes, it will have initialized submodules and installed the RISC-V tools and other dependencies.

Next, run:


This will have initialized the AWS shell, added the RISC-V tools to your path, and started an ssh-agent that supplies ~/firesim.pem automatically when you use ssh to access other nodes. Sourcing this the first time will take some time, but each subsequent invocation should be nearly instantaneous. If your firesim.pem key requires a passphrase, you will be asked for it here and ssh-agent should cache it.

Every time you login to your manager instance to use FireSim, you should cd into your firesim directory and source this file again.

Completing Setup Using the Manager

The FireSim manager contains a command that will interactively guide you through the rest of the FireSim setup process. To run it, do the following:

firesim managerinit --platform f1

This will first prompt you to set up AWS credentials on the instance, which allows the manager to automatically manage build/simulation nodes. You can use the same AWS access key you created when running setup commands on the t2.nano instance earlier (in Run scripts from the t2.nano). When prompted, you should specify the same region that you’ve been selecting thus far (one of us-east-1, us-west-2, or eu-west-1) and set the default output format to json.
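Under the hood, these prompts produce the standard AWS CLI configuration files. The helper below is a hedged sketch of the equivalent manual setup; the key values are placeholders, and us-east-1 stands in for whichever of the supported regions you chose.

```shell
#!/usr/bin/env bash
# Sketch: write the AWS CLI credential/config files that the interactive
# prompts create. Replace the placeholder key values with your own, and
# use your chosen region (us-east-1, us-west-2, or eu-west-1).
write_aws_config() {
    local dir="${1:-$HOME/.aws}"
    mkdir -p "$dir"
    cat > "$dir/credentials" <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF
    cat > "$dir/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF
    chmod 600 "$dir/credentials"   # keep the secret key private
}
```

Running `write_aws_config` with no argument writes to ~/.aws, the location the manager and the AWS CLI both read.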

Next, it will prompt you for an email address, which is used to send email notifications upon FPGA build completion and optionally for workload completion. You can leave this blank if you do not wish to receive any notifications, but this is not recommended. Next, it will create initial configuration files, which we will edit in later sections.

Now you’re ready to launch FireSim simulations! Hit Next to learn how to run single-node simulations.