Commit 2b8bb7b8 authored by Liu, Hong

Update slurm

parent 091c9195
## What is Slurm?
Slurm is a job scheduler and resource management program. Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
There are a few key differences to be aware of between Moab/Torque and Slurm:
- Terminology
- This is largely the same with a few key differences
- Moab Queues are referred to as Partitions in Slurm
- PBS parameters in Moab job scripts are analogous to SBATCH parameters in Slurm
<br/>
- Scheduler Policy
- Commands
- Functions from multiple Moab/Torque commands are typically combined in Slurm commands
- `sbatch`
- `srun`
- `squeue`
- `sinfo`
- `scontrol`
<br/>
- Command Examples
- Here are a few examples of commonly used Slurm commands
- `sbatch test.sh`
- `squeue -u <uid>`
- `scontrol show job <job_id>`
- `sinfo -p gpu`
- `srun -A birthright -p gpu --pty /bin/bash`
<br/>
## Slurm Challenge 1
These modifications must be made to existing PBS job scripts to make them compatible with Slurm.
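Much of this conversion is a mechanical rename of directives. Below is a minimal sketch of such a rename (a hypothetical helper, not part of the training materials), covering only the simple one-to-one flags shown in the example scripts (`-N` → `-J` for the job name, `-q` → `-p` for queue/partition); resource lines such as `nodes=1:ppn=32` still need hand conversion to `-N`/`-n`:

```shell
# Hypothetical helper: rewrite one-to-one PBS directives to SBATCH form.
# Only simple renames are handled; resource requests need manual editing.
pbs_to_sbatch() {
  sed -e 's/^#PBS -N /#SBATCH -J /' \
      -e 's/^#PBS -A /#SBATCH -A /' \
      -e 's/^#PBS -q /#SBATCH -p /' \
      "$1"
}
```

Run as `pbs_to_sbatch ex1_job_script.pbs > ex1_job_script.sbatch`, then review the result by hand before submitting.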
1. Switch to your terminal that is logged in to or-slurm-login01.ornl.gov
2. Navigate to:
```
/lustre/or-scratch/cades-birthright/<user_id>/cades-spring-training-master/slurm/example1/
```
- This directory contains ex1_job_script.pbs, an example “Hello World” PBS job script
3. Make a copy of the example script and name it ex1_job_script.sbatch
## Slurm Challenge 2
1. Switch to your terminal that is logged in to or-slurm-login01.ornl.gov
2. Navigate to:
```
/lustre/or-scratch/cades-birthright/<user_id>/cades-spring-training-master/slurm/example2/
```
* This directory contains ex2_job_script.pbs, an example PBS job script for running Quantum Espresso
3. Make a copy of the example script and name it ex2_job_script.sbatch
## Slurm Challenge 3
1. Switch to your terminal that is logged in to or-slurm-login01.ornl.gov
2. Navigate to:
```
/lustre/or-scratch/cades-birthright/<user_id>/cades-spring-training-master/slurm/example3/
```
* This directory contains ex3_job_script.pbs, an example PBS job script for running Quantum Espresso with a job array
3. Make a copy of the example script and name it ex3_job_script.sbatch
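When converting the job array itself, Torque's `#PBS -t 0-2` style directive becomes `#SBATCH --array=0-2`, and the `$PBS_ARRAYID` variable becomes `$SLURM_ARRAY_TASK_ID`. A minimal sketch with hypothetical input file names (in a real job Slurm sets the task ID; here it is set by hand):

```shell
# Simulated array task: pick an input file by array index.
# The file names are hypothetical stand-ins for the example's inputs.
input_files=(run0.in run1.in run2.in)
SLURM_ARRAY_TASK_ID=1   # set by Slurm per task; hard-coded for this sketch
echo "../data/${input_files[$SLURM_ARRAY_TASK_ID]}"   # prints ../data/run1.in
```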
ex1_job_script.pbs:
```
#!/bin/bash
#PBS -N spring-training-ex1
#PBS -A birthright
#PBS -W group_list=cades-birthright
#PBS -q gpu
#PBS -l nodes=1:ppn=32

cd $PBS_O_WORKDIR
echo "Hello World"
```
ex1_job_script.sbatch:
```
#!/bin/bash
#SBATCH -J hello-world-example
#SBATCH -A birthright
#SBATCH -p burst
#SBATCH -N 1
#SBATCH -n 32
#SBATCH -c 1
#SBATCH --mem=0
#SBATCH -t 00:30:00

echo "Hello World"
```
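One mapping between the two scripts worth spelling out: the PBS request `nodes=1:ppn=32` (1 node, 32 processes per node) corresponds in the Slurm script to `-N 1` (nodes) plus `-n 32` (total tasks across all nodes), with `-c 1` giving each task a single CPU. A quick sanity check on the totals:

```shell
# PBS grants nodes*ppn processes in total; Slurm's -n is the total task
# count, so the two requests should agree.
nodes=1; ppn=32
echo $((nodes * ppn))   # prints 32, matching '#SBATCH -n 32'
```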