Note: if you are batching jobs you need to look at the example for that; it does not work the way it does at NERSC.
## Multiple mpiruns within a Job
Running several MPI programs at once inside one job does not work the way it does with srun or aprun on the "real" Crays at NERSC or OLCF.
The most robust option is to give each mpirun in your job its own PBS nodefile to work with, and to pay attention to the binding and mapping you are using; there is an example of this in the repo.
See: [Multiple Runs in a Job](VASP_OneJob_Parallel_Runs/vasp_batched.pbs)
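A minimal sketch of that pattern is below; it is not the repo script. The resource-request line, the two-way split, the vasp_std binary name, and the Open MPI-style --hostfile/--map-by/--bind-to flags are all assumptions to adapt to your site and MPI.

```bash
#!/bin/bash
#PBS -l select=2:ncpus=64:mpiprocs=64
#PBS -l walltime=01:00:00
# The resource request above is hypothetical; ask for what your queue expects.

cd "$PBS_O_WORKDIR"

# $PBS_NODEFILE has one line per slot per node; split it into two
# disjoint pieces so the two mpiruns cannot land on the same slots.
NSLOTS=$(wc -l < "$PBS_NODEFILE")
HALF=$((NSLOTS / 2))
head -n "$HALF"          "$PBS_NODEFILE" > nodes_a
tail -n +"$((HALF + 1))" "$PBS_NODEFILE" > nodes_b

# Each run gets its own nodefile (and its own directory), is put in the
# background, and the job waits for both to finish before it exits.
( cd calc_a && mpirun --hostfile ../nodes_a -np "$HALF" \
      --map-by core --bind-to core vasp_std > vasp.out 2>&1 ) &
( cd calc_b && mpirun --hostfile ../nodes_b -np "$((NSLOTS - HALF))" \
      --map-by core --bind-to core vasp_std > vasp.out 2>&1 ) &
wait
```

With an MPICH/Hydra-based mpirun the equivalent flags are roughly -f <file> and -bind-to core; check mpirun --help on the target system before relying on any of them.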
More simply, the colon-separated mpirun idiom should also work, but be careful and check it in an interactive job first. Up to the colon it is just as you would always write an mpirun; after the colon(s), list the options specific to the next program and its invocation.
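A hedged sketch of that idiom, with placeholder process counts and program names (and again Open MPI-style binding options):

```bash
# One mpirun, several programs (MPMD): everything before the first ':' is a
# normal invocation; each ':' introduces the options for the next program.
mpirun -np 64 --map-by core --bind-to core ./program_a input_a \
     : -np 64 ./program_b input_b
```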
Whichever route you take, bear in mind that one instance of mpirun within a PBS job does not know what the other mpiruns are doing, so each needs to be given a different segment of the allocation to deal with; otherwise they will attempt to run on top of each other. Within a job the allocation is available in the file named by $PBS_NODEFILE, with one line per slot per node.
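For example, inside an interactive job you can see how the allocation is laid out with something like the following (node names and counts are site-specific):

```bash
wc -l < "$PBS_NODEFILE"   # total number of slots handed to the job
uniq -c "$PBS_NODEFILE"   # how many slots (lines) each node contributes
sort -u "$PBS_NODEFILE"   # the distinct node names
```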
In practice, though, the colon syntax for Multiple Program, Multiple Data isn't worth the trouble (perhaps because of MPI_COMM_WORLD clashes?), which is another reason to prefer the separate-nodefile approach.