This program is used to test process, thread, and GPU binding for job steps launched with Slurm's `srun` command. It prints the hardware thread IDs that each MPI rank and OpenMP thread runs on, as well as the GPU IDs that each rank/thread has access to.
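As a rough, hypothetical sketch (not the program's actual source), the CPU-side portion of such a report can be produced by combining MPI, OpenMP, and a couple of Linux calls; `sched_getcpu` and `gethostname` are assumptions here, standard on Linux clusters:

```cpp
// Hypothetical sketch: print the MPI rank, OpenMP thread, hardware
// thread, and node name for every rank/thread pair.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <mpi.h>
#include <omp.h>
#include <sched.h>   // sched_getcpu() (Linux-specific)
#include <unistd.h>  // gethostname()
#include <cstdio>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char hostname[64];
    gethostname(hostname, sizeof(hostname));

    // Each OpenMP thread reports the hardware thread it is currently
    // executing on. Lines from different ranks/threads may interleave.
    #pragma omp parallel
    {
        printf("MPI %03d - OMP %03d - HWT %03d - Node %s\n",
               rank, omp_get_thread_num(), sched_getcpu(), hostname);
    }

    MPI_Finalize();
    return 0;
}
```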
## Compiling
To compile, you'll need MPI, HIP, and an OpenMP-capable compiler. Modify the Makefile according to your needs.
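For reference, a compile line equivalent to what the Makefile might invoke could look like the following (a sketch only; wrapper names such as `mpicxx` and paths like `ROCM_PATH` vary by system):

```bash
# Hypothetical compile line: MPI wrapper + OpenMP flag + HIP runtime library.
# -D__HIP_PLATFORM_AMD__ lets a non-hipcc compiler consume the HIP headers.
mpicxx -fopenmp -D__HIP_PLATFORM_AMD__ -I${ROCM_PATH}/include \
       -L${ROCM_PATH}/lib -lamdhip64 hello_jobstep.cpp -o hello_jobstep
```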
## Output

The output contains the following fields for each MPI rank and OpenMP thread:

| Field | Description |
| :---- | :---------- |
| `MPI` | The MPI rank ID |
| `OMP` | The OpenMP thread ID |
| `HWT` | CPU hardware thread the MPI rank or OpenMP thread ran on |
| `Node` | Compute node the MPI rank or OpenMP thread ran on |
| `GPU_ID` | The node-level GPU ID the rank or thread had access to |
| `RT_GPU_ID` | The runtime GPU ID the rank or thread had access to |
| `Bus_ID` | The physical bus ID associated with a GPU |
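For illustration, a single line of output might look like the following (hypothetical values; actual node names, IDs, and bus addresses depend on the system and the `srun` options used):

```
MPI 000 - OMP 000 - HWT 001 - Node node01 - GPU_ID 0 - RT_GPU_ID 0 - Bus_ID c1
```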
#### Additional notes
* `GPU_ID` is the node-level (or global) GPU ID, read from `ROCR_VISIBLE_DEVICES`. If this environment variable is not set (either by the user or by Slurm), `GPU_ID` will be reported as `N/A`.
* `RT_GPU_ID` is the HIP runtime GPU ID (as reported by, e.g., `hipGetDevice`).
* `Bus_ID` is the physical bus ID associated with the GPU. Comparing bus IDs is meant to show definitively that different GPUs are being used (see the sketch after this list).
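As a sketch of how these three values can be queried (assuming the HIP runtime API; this is illustrative, not the program's actual source):

```cpp
// Hypothetical sketch of querying the GPU-related fields with HIP.
// Compile with, e.g., hipcc.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <cstdlib>

int main()
{
    // GPU_ID: node-level ID(s) taken from ROCR_VISIBLE_DEVICES, or N/A if unset.
    const char *gpu_id = std::getenv("ROCR_VISIBLE_DEVICES");
    printf("GPU_ID: %s\n", gpu_id ? gpu_id : "N/A");

    // RT_GPU_ID: the device ID as seen by the HIP runtime.
    int rt_gpu_id;
    hipGetDevice(&rt_gpu_id);
    printf("RT_GPU_ID: %d\n", rt_gpu_id);

    // Bus_ID: the physical PCI bus ID, which uniquely identifies the device.
    char bus_id[64];
    hipDeviceGetPCIBusId(bus_id, (int)sizeof(bus_id), rt_gpu_id);
    printf("Bus_ID: %s\n", bus_id);

    return 0;
}
```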
> NOTE: Although the two GPU IDs (`GPU_ID` and `RT_GPU_ID`) are the same in the example above, they do not have to be. See the Spock Quick-Start Guide for examples where they differ.
## Examples
For examples, please see the [GPU Mapping section](https://docs.olcf.ornl.gov/systems/frontier_user_guide.html#gpu-mapping) of the Frontier User Guide.
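As a hypothetical starting point (node counts, core counts, and binding options depend on the system and your allocation), a job step might be launched like this:

```bash
# Illustrative only: 1 node, 8 ranks, 7 cores per rank, 1 GPU per rank,
# with each rank bound to the GPU closest to its cores.
export OMP_NUM_THREADS=2
srun -N1 -n8 -c7 --gpus-per-task=1 --gpu-bind=closest ./hello_jobstep
```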