Warning

⚠️ We highly recommend using the XDMA-based U250 flow instead of this Vitis-based flow. You can find the XDMA-based flow here: Xilinx Alveo U250 XDMA-based Getting Started Guide. The Vitis-based flow does not support DMA-based FireSim bridges (e.g., TracerV, Synthesizable Printfs, etc.), while the XDMA-based flows support all FireSim features. If you’re unsure, use the XDMA-based U250 flow instead: Xilinx Alveo U250 XDMA-based Getting Started Guide.

Running a Single Node Simulation

Now that we’ve completed all the basic setup steps, it’s time to run a simulation! In this section, we will simulate a single target node, for which we will use a single Xilinx Vitis-enabled U250.

Make sure you have sourced sourceme-manager.sh --skip-ssh-setup before running any of these commands.
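
For reference, that setup sequence looks like the following (with YOUR_FIRESIM_REPO replaced by the path to your FireSim clone):

cd YOUR_FIRESIM_REPO
source sourceme-manager.sh --skip-ssh-setup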

Building target software

In this guide, we’ll boot Linux on our simulated node. To do so, we’ll need to build a Linux distribution compatible with our RISC-V SoC; here, we will use a simple buildroot-based distribution. We can build it like so:

# assumes you already cd'd into your firesim repo
# and sourced sourceme-manager.sh
#
# then:
cd sw/firesim-software
./init-submodules.sh
./marshal -v build br-base.json

Once this is completed, you’ll have the following files:

  • YOUR_FIRESIM_REPO/sw/firesim-software/images/firechip/br-base/br-base-bin - a bootloader + Linux kernel image for the RISC-V SoC we will simulate.

  • YOUR_FIRESIM_REPO/sw/firesim-software/images/firechip/br-base/br-base.img - a disk image for the RISC-V SoC we will simulate.

These files serve as base images: you can use them to build more complicated workloads (see the Defining Custom Workloads section) or boot them directly as a basic, interactive Linux distribution.
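
As a quick sanity check, you can confirm that both files were produced (run from the root of your FireSim clone):

# both br-base-bin and br-base.img should be listed
ls -lh sw/firesim-software/images/firechip/br-base/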

Setting up the manager configuration

All runtime configuration options for the manager are set in a file called firesim/deploy/config_runtime.yaml. In this guide, we will explain only the parts of this file necessary for our purposes. You can find full descriptions of all of the parameters in the Manager Configuration Files section.

If you open up this file, you will see the following default config (assuming you have not modified it):

# RUNTIME configuration for the FireSim Simulation Manager
# See https://docs.fires.im/en/stable/Advanced-Usage/Manager/Manager-Configuration-Files.html for documentation of all of these params.

run_farm:
  base_recipe: run-farm-recipes/externally_provisioned.yaml
  recipe_arg_overrides:
    # REQUIRED: default platform used for run farm hosts. this is a class specifying
    # how to run simulations on a run farm host.
    default_platform: VitisInstanceDeployManager

    # REQUIRED: default directory where simulations are run out of on the run farm hosts
    default_simulation_dir: /home/buildbot/FIRESIM_RUNS_DIR

    # REQUIRED: List of unique hostnames/IP addresses, each with their
    # corresponding specification that describes the properties of the host.
    #
    # Ex:
    # run_farm_hosts_to_use:
    #     # use localhost which is described by "four_fpgas_spec" below.
    #     - localhost: four_fpgas_spec
    #     # supply IP address, which points to a machine that is described
    #     # by "four_fpgas_spec" below.
    #     - "111.111.1.111": four_fpgas_spec
    run_farm_hosts_to_use:
        - localhost: one_fpga_spec

metasimulation:
  metasimulation_enabled: false
  # vcs or verilator. use vcs-debug or verilator-debug for waveform generation
  metasimulation_host_simulator: verilator
  # plusargs passed to the simulator for all metasimulations
  metasimulation_only_plusargs: "+fesvr-step-size=128 +max-cycles=100000000"
  # plusargs passed to the simulator ONLY FOR vcs metasimulations
  metasimulation_only_vcs_plusargs: "+vcs+initreg+0 +vcs+initmem+0"

target_config:
    topology: no_net_config
    no_net_num_nodes: 1
    link_latency: 6405
    switching_latency: 10
    net_bandwidth: 200
    profile_interval: -1

    # This references a section from config_hwdb.yaml for fpga-accelerated simulation
    # or from config_build_recipes.yaml for metasimulation
    # In homogeneous configurations, use this to set the hardware config deployed
    # for all simulators
    default_hw_config: firesim_rocket_quadcore_no_nic_l2_llc4mb_ddr3

    # Advanced: Specify any extra plusargs you would like to provide when
    # booting the simulator (in both FPGA-sim and metasim modes). This is
    # a string, with the contents formatted as if you were passing the plusargs
    # at command line, e.g. "+a=1 +b=2"
    plusarg_passthrough: ""

tracing:
    enable: no

    # Trace output formats. Only enabled if "enable" is set to "yes" above
    # 0 = human readable; 1 = binary (compressed raw data); 2 = flamegraph (stack
    # unwinding -> Flame Graph)
    output_format: 0

    # Trigger selector.
    # 0 = no trigger; 1 = cycle count trigger; 2 = program counter trigger; 3 =
    # instruction trigger
    selector: 1
    start: 0
    end: -1

autocounter:
    read_rate: 0

workload:
    workload_name: linux-uniform.json
    terminate_on_completion: no
    suffix_tag: null

host_debug:
    # When enabled (=yes), Zeros-out FPGA-attached DRAM before simulations
    # begin (takes 2-5 minutes).
    # In general, this is not required to produce deterministic simulations on
    # target machines running linux. Enable if you observe simulation non-determinism.
    zero_out_dram: no
    # If disable_synth_asserts: no, simulation will print assertion message and
    # terminate simulation if synthesized assertion fires.
    # If disable_synth_asserts: yes, simulation ignores assertion firing and
    # continues simulation.
    disable_synth_asserts: no

# DOCREF START: Synthesized Prints
synth_print:
    # Start and end cycles for outputting synthesized prints.
    # They are given in terms of the base clock and will be converted
    # for each clock domain.
    start: 0
    end: -1
    # When enabled (=yes), prefix print output with the target cycle at which the print was triggered
    cycle_prefix: yes
# DOCREF END: Synthesized Prints

We’ll need to modify a couple of these lines.

First, let’s tell the manager to use the single Xilinx Vitis-enabled U250 FPGA. The run_farm mapping describes and specifies the machines that will run simulations. Notice that base_recipe maps to run-farm-recipes/externally_provisioned.yaml. This tells the FireSim manager that the machines used to run simulations are provided by the user through their IP addresses, rather than launched and allocated automatically (e.g., by launching on-demand instances in AWS).

Within recipe_arg_overrides, verify that default_platform is set to VitisInstanceDeployManager so that simulations are launched using Xilinx XRT/Vitis. Next, change default_simulation_dir to a directory in which you want to store temporary simulation collateral; when running simulations, this directory holds any temporary files that the simulator creates (e.g., the uartlog emitted by a Linux simulation). Finally, look at the run_farm_hosts_to_use mapping, which maps each hostname or IP address (here, localhost) to a specification describing that simulation machine. Since we have a single Xilinx Vitis-enabled U250 FPGA, localhost should map to one_fpga_spec.

Now, let’s verify that the target_config mapping will model the correct target design. By default, it is set to model a single node with no network. It should look like the following:

target_config:
    topology: no_net_config
    no_net_num_nodes: 1
    link_latency: 6405
    switching_latency: 10
    net_bandwidth: 200
    profile_interval: -1

    # This references a section from config_hwdb.yaml
    # In homogeneous configurations, use this to set the hardware config deployed
    # for all simulators
    default_hw_config: firesim_rocket_quadcore_no_nic_l2_llc4mb_ddr3

Note that topology is set to no_net_config, indicating that we do not want a network, and that no_net_num_nodes is set to 1, indicating that we want to simulate a single node. Lastly, default_hw_config is set to firesim_rocket_quadcore_no_nic_l2_llc4mb_ddr3. Let’s change default_hw_config (the target design) to vitis_firesim_rocket_singlecore_no_nic. This hardware configuration models a single-core Rocket Chip SoC without a network interface card and is pre-built for the Xilinx Vitis-enabled U250 FPGA.

We will leave the workload mapping unchanged here, since we do want to run the buildroot-based Linux on our simulated system. The terminate_on_completion feature is an advanced feature that you can learn more about in the Manager Configuration Files section.

As a final sanity check, in the mappings we changed, the config_runtime.yaml file should now look like this (with PATH_TO_SIMULATION_AREA replaced with your simulation collateral temporary directory):

run_farm:
  base_recipe: run-farm-recipes/externally_provisioned.yaml
  recipe_arg_overrides:
    default_platform: VitisInstanceDeployManager
    default_simulation_dir: <PATH_TO_SIMULATION_AREA>
    run_farm_hosts_to_use:
        - localhost: one_fpga_spec

target_config:
    topology: no_net_config
    no_net_num_nodes: 1
    link_latency: 6405
    switching_latency: 10
    net_bandwidth: 200
    profile_interval: -1
    default_hw_config: vitis_firesim_rocket_singlecore_no_nic
    plusarg_passthrough: ""

workload:
    workload_name: linux-uniform.json
    terminate_on_completion: no
    suffix_tag: null

Building and Deploying simulation infrastructure to the Run Farm Machines

The manager automates the process of building and deploying all components necessary to run your simulation on the Run Farm, including programming FPGAs. To tell the manager to set up all of our simulation infrastructure, run the following:

firesim infrasetup

For a complete run, you should expect output like the following:

$ firesim infrasetup
FireSim Manager. Docs: https://docs.fires.im
Running: infrasetup

Building FPGA software driver.
...
[localhost] Checking if host instance is up...
[localhost] Copying FPGA simulation infrastructure for slot: 0.
[localhost] Clearing all FPGA Slots.
The full log of this run is:
.../firesim/deploy/logs/2023-03-06--01-22-46-infrasetup-35ZP4WUOX8KUYBF3.log

Many of these tasks will take several minutes, especially on a clean copy of the repo. The console output here contains the “user-friendly” version of the output. If you want to see detailed progress as it happens, tail -f the latest logfile in firesim/deploy/logs/.
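
For example, the following one-liner follows the most recent manager log as it is written (a shell sketch; run it from the directory containing your FireSim clone):

# follow the most recently created manager log
tail -f "$(ls -t firesim/deploy/logs/*.log | head -n 1)"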

At this point, our single Run Farm machine has all the infrastructure necessary to run a simulation, so let’s launch our simulation!

Running the simulation

Finally, let’s run our simulation! To do so, run:

firesim runworkload

This command boots up a simulation and prints out the live status of the simulated nodes every 10s. When you do this, you will initially see output like:

$ firesim runworkload
FireSim Manager. Docs: https://docs.fires.im
Running: runworkload

Creating the directory: .../firesim/deploy/results-workload/2023-03-06--01-25-34-linux-uniform/
[localhost] Checking if host instance is up...
[localhost] Starting FPGA simulation for slot: 0.

If you don’t look quickly, you might miss it, since it will get replaced with a live status page:

FireSim Simulation Status @ 2018-05-19 00:38:56.062737
--------------------------------------------------------------------------------
This workload's output is located in:
.../firesim/deploy/results-workload/2018-05-19--00-38-52-linux-uniform/
This run's log is located in:
.../firesim/deploy/logs/2018-05-19--00-38-52-runworkload-JS5IGTV166X169DZ.log
This status will update every 10s.
--------------------------------------------------------------------------------
Instances
--------------------------------------------------------------------------------
Hostname/IP:   localhost | Terminated: False
--------------------------------------------------------------------------------
Simulated Switches
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Simulated Nodes/Jobs
--------------------------------------------------------------------------------
Hostname/IP:   localhost | Job: linux-uniform0 | Sim running: True
--------------------------------------------------------------------------------
Summary
--------------------------------------------------------------------------------
1/1 instances are still running.
1/1 simulations are still running.
--------------------------------------------------------------------------------

This will only exit once all of the simulated nodes have powered off. So, let’s let it run and open another terminal on the manager machine. From there, cd into your FireSim directory again and source sourceme-manager.sh --skip-ssh-setup.
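
As before, that looks like the following in the new terminal (with YOUR_FIRESIM_REPO replaced by the path to your FireSim clone):

cd YOUR_FIRESIM_REPO
source sourceme-manager.sh --skip-ssh-setup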

Next, let’s ssh into the Run Farm machine. If your Run Farm and Manager Machines are the same, replace RUN_FARM_IP_OR_HOSTNAME with localhost, otherwise replace it with your Run Farm Machine’s IP or hostname.

source ~/.ssh/AGENT_VARS
ssh RUN_FARM_IP_OR_HOSTNAME

Next, we can attach directly to the console of the simulated system using screen. Run:

screen -r fsim0
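
(If you later want to detach from the console without stopping the simulation, press Ctrl-a followed by d; running screen -r fsim0 again will re-attach.)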

Voila! You should now see Linux booting on the simulated system and then be prompted with a Linux login prompt, like so:

[truncated Linux boot output]
[    0.020000] VFS: Mounted root (ext2 filesystem) on device 254:0.
[    0.020000] devtmpfs: mounted
[    0.020000] Freeing unused kernel memory: 140K
[    0.020000] This architecture does not have kernel memory protection.
mount: mounting sysfs on /sys failed: No such device
Starting logging: OK
Starting mdev...
mdev: /sys/dev: No such file or directory
modprobe: can't change directory to '/lib/modules': No such file or directory
Initializing random number generator... done.
Starting network: ip: SIOCGIFFLAGS: No such device
ip: can't find device 'eth0'
FAIL
Starting dropbear sshd: OK

Welcome to Buildroot
buildroot login:

You can ignore the messages about the network; they are expected because we are simulating a design without a NIC.

Now, you can log in to the system! The username is root and there is no password. At this point, you should be presented with a regular console, where you can type commands into the simulation and run programs. For example:

Welcome to Buildroot
buildroot login: root
Password:
# uname -a
Linux buildroot 4.15.0-rc6-31580-g9c3074b5c2cd #1 SMP Thu May 17 22:28:35 UTC 2018 riscv64 GNU/Linux
#

At this point, you can run workloads as you’d like. To finish off this guide, let’s power off the simulated system and see what the manager does. To do so, in the console of the simulated system, run poweroff -f:

Welcome to Buildroot
buildroot login: root
Password:
# uname -a
Linux buildroot 4.15.0-rc6-31580-g9c3074b5c2cd #1 SMP Thu May 17 22:28:35 UTC 2018 riscv64 GNU/Linux
# poweroff -f

You should see output like the following from the simulation console:

# poweroff -f
[   12.456000] reboot: Power down
Power off
time elapsed: 468.8 s, simulation speed = 88.50 MHz
*** PASSED *** after 41492621244 cycles
Runs 41492621244 cycles
[PASS] FireSim Test
SEED: 1526690334
Script done, file is uartlog

[screen is terminating]

You’ll also notice that the manager polling loop exited! You’ll see output like this from the manager:

FireSim Simulation Status @ 2018-05-19 00:46:50.075885
--------------------------------------------------------------------------------
This workload's output is located in:
.../firesim/deploy/results-workload/2018-05-19--00-38-52-linux-uniform/
This run's log is located in:
.../firesim/deploy/logs/2018-05-19--00-38-52-runworkload-JS5IGTV166X169DZ.log
This status will update every 10s.
--------------------------------------------------------------------------------
Instances
--------------------------------------------------------------------------------
Hostname/IP:   localhost | Terminated: False
--------------------------------------------------------------------------------
Simulated Switches
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Simulated Nodes/Jobs
--------------------------------------------------------------------------------
Hostname/IP:   localhost | Job: linux-uniform0 | Sim running: False
--------------------------------------------------------------------------------
Summary
--------------------------------------------------------------------------------
1/1 instances are still running.
0/1 simulations are still running.
--------------------------------------------------------------------------------
FireSim Simulation Exited Successfully. See results in:
.../firesim/deploy/results-workload/2018-05-19--00-38-52-linux-uniform/
The full log of this run is:
.../firesim/deploy/logs/2018-05-19--00-38-52-runworkload-JS5IGTV166X169DZ.log

If you take a look at the workload output directory given in the manager output (in this case, .../firesim/deploy/results-workload/2018-05-19--00-38-52-linux-uniform/), you’ll see the following:

$ ls -la firesim/deploy/results-workload/2018-05-19--00-38-52-linux-uniform/*/*
-rw-rw-r-- 1 centos centos  797 May 19 00:46 linux-uniform0/memory_stats.csv
-rw-rw-r-- 1 centos centos  125 May 19 00:46 linux-uniform0/os-release
-rw-rw-r-- 1 centos centos 7316 May 19 00:46 linux-uniform0/uartlog

What are these files? They are specified to the manager in a configuration file (deploy/workloads/linux-uniform.json) as files that should be copied back automatically from the Run Farm Machine into the results-workload directory on our manager machine, which is useful for running benchmarks. The Defining Custom Workloads section describes this process in detail.
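
For a sense of what such a workload definition contains, here is a sketch of the relevant fields; the field names follow the FireSim workload JSON schema, but consult deploy/workloads/linux-uniform.json in your repo for the authoritative contents:

{
  "benchmark_name": "linux-uniform",
  "common_bootbinary": "br-base-bin",
  "common_rootfs": "br-base.img",
  "common_outputs": ["/etc/os-release"],
  "common_simulation_outputs": ["uartlog", "memory_stats.csv"]
}

Here, common_simulation_outputs names files produced on the Run Farm host (like the uartlog), while common_outputs names files extracted from the target’s filesystem (like os-release); together they account for the files listed above.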

Congratulations on running your first FireSim simulation! At this point, you can check out some of the advanced features of FireSim in the sidebar to the left. For example, we expect that many people will be interested in the ability to automatically run the SPEC 2017 benchmarks: SPEC 2017.

Click Next if you’d like to continue on to building your own bitstreams.

Warning

In some cases, simulation may fail because you need to update the Xilinx Vitis-enabled U250 DRAM offset that is currently hardcoded in both the FireSim Xilinx XRT/Vitis driver code and the platform shim. To verify this, run xclbinutil --info --input <YOUR_XCL_BIN> and obtain the bank0 MEM_DDR4 offset. If it differs from the hardcoded 0x40000000 in the driver code (the u250_dram_expected_offset variable in sim/midas/src/main/cc/simif_vitis.cc) and the platform shim (the araddr/awaddr offset in sim/midas/src/main/scala/midas/platform/VitisShim.scala), replace both with the new offset reported by xclbinutil and regenerate the bitstream.
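
For example, the following shell sketch narrows the xclbinutil output down to the memory topology, assuming the memory section is labeled MEM_DDR4 in the output (as described above):

# print xclbin metadata and show only lines around the MEM_DDR4 section
xclbinutil --info --input <YOUR_XCL_BIN> | grep -i -A 2 "MEM_DDR4"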