1. 1216 on -q batch (includes both hw32 and bw36)
2. 1152 on -q high_mem (these are all bw36)
* Unless stated otherwise, modules are optimized for hw32 but run just as well on bw36.
* If no feature code is specified, jobs default to std.
* high_mem and GPU nodes are now on separate queues; omit the feature code and use the correct queue instead.
** MOAB Torque Cluster **
# CNMS CADES resources have moved to the Slurm scheduler -- read below!
Using the old PBS headnode will only waste your time.
** Slurm Cluster **
## Job Submission
### Partitions
There are now two **partitions** for CNMS jobs.
* batch
* high_mem
To use high_mem, replace `-p batch` with `-p high_mem` in your submission script.
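A minimal Slurm header for a high_mem job, sketched as an example (the job name, node count, and walltime are placeholders, and site defaults may differ):

```shell
#!/bin/bash
#SBATCH -J nameofjob            # job name (placeholder)
#SBATCH -p high_mem             # partition: swap in "batch" for ordinary jobs
#SBATCH -N 2                    # number of nodes
#SBATCH --ntasks-per-node=36    # high_mem nodes are bw36, i.e. 36 cores each
#SBATCH -t 01:00:00             # walltime
```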
### Quality of service (QOS)
* std - generally this is what you want
* devel - short debug, build and experimental runs
* burst - preemptable jobs that run on unused CONDO resources (you must request access from @epd)
If you need to run wide, relatively short jobs, are experiencing long waits on std, and can tolerate occasional preemption (i.e. your job being killed), you can request access to the **burst** QOS via [XCAMS](https://xcams.ornl.gov/xcams/groups/cades-cnms-burst).
Unfortunately, there is more to it than this if you expect to launch an MPI job interactively.
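As a rough sketch (the partition, node counts, and binary name here are placeholders, not site defaults): a burst job is submitted with `--qos=burst`, and an interactive MPI run should go through an allocation so that Slurm handles task placement:

```shell
# Submit a preemptable job against the burst QOS:
sbatch -p batch --qos=burst -N 4 -t 00:30:00 job.sh

# Interactive MPI: allocate nodes first, then launch with srun
# inside the allocation rather than calling mpirun from the login node.
salloc -p batch -N 2 -t 01:00:00
srun -n 64 ./my_mpi_app         # ./my_mpi_app is a placeholder binary
```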
## CODES
These are the codes that have been installed so far. You can request additional codes.
Instructions for codes:
Please read these: you can waste a great deal of resources if you do not understand how to run even familiar codes optimally in this hardware environment.
These are all being revised due to the Slurm migration.
[**ESPRESSO**](ESPRESSO) -- Performs well on CADES; consider it as an alternative to VASP.
[**VASP**](VASP) -- Be careful with this one: the vanilla optimized build can experience matrix issues, a known problem with high optimization levels and the Intel compiler. If you hit them, try the 5.4.1.2 build or the debug build. With Slurm, much greater care is needed to get a proper distribution of tasks; recompilation should eventually ease this.
In theory there are two QOS levels usable on CADES:
```shell
#!/bin/bash
#PBS -S /bin/bash
#PBS -m be
#PBS -N nameofjob
#PBS -q batch
#PBS -l nodes=2:ppn=32
#PBS -l walltime=01:00:00
#PBS -A cnms-burst
#PBS -W group_list=cades-user
#PBS -l qos=burst
#PBS -l naccesspolicy=singlejob
export OMP_NUM_THREADS=1
cd $PBS_O_WORKDIR
module load env/cades-cnms
```
Burst QOS works somewhat differently with Slurm; see the CADES docs.
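For comparison, a rough Slurm translation of the PBS script above. This is a sketch, not a verified site template: the account name is carried over from the PBS example, and `--exclusive` is assumed to be the closest analogue of `naccesspolicy=singlejob`.

```shell
#!/bin/bash
#SBATCH --mail-type=BEGIN,END     # replaces "#PBS -m be"
#SBATCH -J nameofjob
#SBATCH -p batch
#SBATCH -N 2
#SBATCH --ntasks-per-node=32
#SBATCH -t 01:00:00
#SBATCH -A cnms-burst             # account carried over from the PBS example
#SBATCH --qos=burst
#SBATCH --exclusive               # assumed analogue of naccesspolicy=singlejob

export OMP_NUM_THREADS=1
cd "$SLURM_SUBMIT_DIR"            # Slurm equivalent of $PBS_O_WORKDIR
module load env/cades-cnms
```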
The default action when this occurs is to resubmit the job. If your code cannot recover from a dirty halt, this method should not be used. In the near future it will be possible to alter this behavior.
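Under Slurm, requeueing is typically controlled per job; a sketch, assuming the site honors the standard flag:

```shell
# Inside the job script: never requeue this job after preemption.
#SBATCH --no-requeue

# Or at submission time:
sbatch --no-requeue --qos=burst job.sh
```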