```bash
$ ssh or-condo-login.ornl.gov
```

## Environment

**32+ core Haswell/Broadwell based**

| Type            | Memory | Feature code       |
| --------------- | ------ | ------------------ |
| Haswell 2x16    | 125G   | hw32, std          |
| Broadwell 2x18  | 125G   | bw36, std          |
| High mem (bw36) | 251G   | high_mem           |
| GPU Haswell     | 251G   | k80_hw32, gpu, std |
| GPU Broadwell   | 251G   | k80_bw36, gpu, std |

The most plentiful node type is Haswell 2x16, i.e. hw32.

* Unless stated otherwise, modules are optimized for hw32.
* If no feature code is given, a std node is used.
* Mixing node types in a single job can cause crashes.

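As a sketch of how a feature code is attached to a Torque resource request (the node and core counts here are illustrative, and `myjob.pbs` is a placeholder script name):

```shell
# Request two Haswell 2x16 nodes by their feature code (hw32).
# Counts and script name are illustrative.
qsub -l nodes=2:ppn=32:hw32 myjob.pbs
```
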
**MOAB Torque Cluster**

## Job Submission

There is only one non-experimental queue: batch.

See this GitLab repo's examples for the PBS commands; there are more of them than on most clusters, and they matter.

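A minimal batch script for the batch queue might look like the following sketch. The job name and walltime are placeholders, and the repo's examples list the full set of PBS directives this cluster actually requires:

```shell
#!/bin/bash
#PBS -q batch                  # the only non-experimental queue
#PBS -l nodes=1:ppn=32:hw32    # one Haswell 2x16 node via its feature code
#PBS -l walltime=01:00:00      # placeholder walltime
#PBS -N example-job            # placeholder job name

cd $PBS_O_WORKDIR              # run from the directory the job was submitted from
echo "Running on $(hostname)"
```

Submitted with `qsub example-job.pbs`; check the repo's examples before relying on this skeleton.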
There are examples for most of the installed codes in the repo.

**You can contribute to the examples.**

## File System

Run your jobs from

```
/lustre/or-hydra/cades-cnms/you
```

If your directory is missing, ask @nathangrod or @michael.galloway in #general.

The old Lustre file system *pfs1* will be decommissioned and all data cleared in the near future. You must migrate your old data soon.

**Use a PBS job or an interactive job; do not use the login nodes.**

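One way to follow the rule above for the pfs1 migration is to do the copy inside an interactive job; the source path below is hypothetical, so substitute your actual old-data location:

```shell
# Start an interactive job so the transfer does not run on a login node.
# Walltime is a placeholder; size it to your data.
qsub -I -q batch -l nodes=1:ppn=32,walltime=04:00:00

# Inside the job, copy old data to the new file system
# (the pfs1 source path is hypothetical).
rsync -avP /lustre/pfs1/cades-cnms/you/ /lustre/or-hydra/cades-cnms/you/
```
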
## CODES

These are the codes that have been installed so far. You can request additional codes.

Instructions for codes:

Use them by adding:

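The code-specific instructions generally reduce to loading the right environment module before the run; the module name below is hypothetical, so check what is actually installed first:

```shell
# See which modules this cluster provides, then load one.
module avail
# The module name here is hypothetical.
module load example-code
```
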
[Slack to uid](Slack to uid)