Commit 00eec477 authored by Parete-Koon, Suzanne

Update readme.md

parent 9a0dde11
@@ -112,18 +112,18 @@ Standard SBATCH directives:
#SBATCH -p gpu
#SBATCH --gres=gpu:2
```
You must include `--gres=gpu:2` so that SLURM allocates the node's GPUs to your job.
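For orientation, a minimal GPU job header might look like the sketch below; the account name and walltime are placeholders for illustration, not the values this exercise requires.
```
#!/bin/bash
# Minimal sketch only -- the account and walltime below are placeholders,
# not the values needed for the CADES exercise.
#SBATCH -A <your_project_account>
#SBATCH -p gpu
#SBATCH --gres=gpu:2
#SBATCH -N 1
#SBATCH -t 00:10:00
```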
We will also use nvprof, NVIDIA's command-line profiler. It will show you that your code is running on the GPU and also give you performance information about the code.
To use nvprof, issue:
```
mpirun nvprof ./vecAdd.o
```
### Challenge 2: Submit a GPU Job with SLURM
Below is a batch script to run the vecAdd.o that you compiled on the GPU in Challenge 1. Note that we are using nvprof. Fill in the blanks using your knowledge of the SLURM batch directives
and the CADES software environment.
**run_vecadd.sbatch**
@@ -149,3 +149,5 @@ module load pgi/19.4
mpirun nvprof ./vecAdd.o
```
When your code runs, open your output files and see if the code ran on the GPU.
How long did it spend copying data to the GPU?
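As a minimal sketch, assuming your filled-in script is saved as run_vecadd.sbatch and that SLURM writes its default slurm-<jobid>.out file (the name will differ if your script sets -o), you could submit the job and pull the host-to-device copy line out of nvprof's summary like this:
```
sbatch run_vecadd.sbatch      # submit the filled-in batch script
squeue -u $USER               # check that the job is queued or running
grep "HtoD" slurm-*.out       # nvprof summary line for host-to-device copies
```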