## Available Queues
The cluster provides several queues ("running areas"), each of which is optimized for a different purpose.
- short.q:
  - Maximum runtime: 30 minutes
  - Process priority: 10 (medium)
  - Availability: all compute nodes
  - Quota: 100 (queued or active) jobs per user (all users)
  - Purpose: Low-latency needs, e.g. pipeline prototyping and quick turn-around analysis
- long.q:
  - Maximum runtime: 2 weeks (336 hours)
  - Process priority: 19 (lowest)
  - Availability: all compute nodes
  - Quota: Unlimited (all users)
  - Purpose: General needs
- member.q:
  - Maximum runtime: 2 weeks (336 hours)
  - Process priority: 0 (highest)
  - Availability: all compute nodes except GPU and institutionally purchased nodes
  - Compute power: 20186 processing units
  - Number of slots: 7023
  - Quota: Proportional to your lab's contributed share of the cluster. When a lab has exhausted all its available member.q slots, additional jobs scheduled by lab members spill over to the long.q queue
  - Purpose: Research groups that need more computational resources than the communal queues above provide can contribute resources to the Wynton HPC cluster and gain priority access corresponding to their contribution
- gpu.q:
  - Maximum runtime on communal GPU nodes: 2 weeks (336 hours)
  - Maximum runtime on contributed GPU nodes: 2 weeks (336 hours) if you are the contributor, otherwise 2 hours
  - Process priority: 0 (highest)
  - Availability: 235 GPUs on 61 GPU nodes (88 GPUs on 22 nodes are communal and 147 GPUs on 39 nodes are contributed)
  - Number of GPU slots: 235
  - Quota: Unlimited (all users)
  - Purpose: For software that utilizes Graphics Processing Units (GPUs)
- 4gpu.q:
  - Maximum runtime on contributed "All-4-GPU" nodes: 2 weeks (336 hours) if you are the contributor, otherwise 2 hours
  - Process priority: 0 (highest)
  - Availability: 28 GPUs on 7 GPU nodes (all are contributed nodes)
  - Number of GPU slots: 7
  - Quota: Unlimited (all users)
  - Purpose: For software that needs exclusive use of all four Graphics Processing Units (GPUs) on a node
  - Comment: Only MSG members are contributors as of May 2022
- ondemand.q:
  - Maximum runtime: 2 weeks (336 hours)
  - Process priority: 0 (highest)
  - Availability: Institutionally purchased nodes only
  - Quota: Available upon application and approval by the Wynton Steering Committee
  - Purpose: Intended for scheduled, high-priority computing needs and/or temporary paid priority access
Comment: Here “runtime” means “walltime”, i.e. the runtime of a job is how long it runs according to the clock on the wall, not the amount of CPU time.
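
To inspect how these queues are configured on the cluster, you can query the scheduler directly. Below is a minimal sketch using standard Grid Engine commands, assuming they are available on the node you are logged in to:

```sh
# List the names of all cluster queues
qconf -sql

# Show a cluster-wide summary of each queue (slots used/reserved/available)
qstat -g c

# Show the full configuration of a specific queue, including its
# runtime limit (h_rt) and scheduling priority
qconf -sq long.q
```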
## Usage
Except for the gpu.q and 4gpu.q queues, there is often no need to explicitly specify which queue your job should be submitted to. Instead, it is sufficient to specify the resources that your job needs, e.g. the maximum processing time (e.g. `-l h_rt=00:10:00` for ten minutes), the maximum memory usage (e.g. `-l mem_free=1G` for 1 GiB of RAM), and the number of cores (e.g. `-pe smp 2` for two cores). When the scheduler knows about your job's resource needs, it can allocate your job to a compute node that better fits those needs, and your job is likely to finish sooner.
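
As a minimal sketch, a job script combining the resource requests above could look like the following; the script contents and the `-cwd` option (run the job from the current working directory) are illustrative assumptions, whereas `h_rt`, `mem_free`, and `smp` are the specifications described above:

```sh
#!/bin/bash
#$ -cwd              # run in the current working directory (assumed preference)
#$ -l h_rt=00:10:00  # maximum runtime: ten minutes
#$ -l mem_free=1G    # maximum memory usage: 1 GiB of RAM
#$ -pe smp 2         # number of cores: two

# Replace with your actual analysis; $NSLOTS is set by the scheduler
echo "Running on $(hostname) using $NSLOTS cores"
```

Submit it with `qsub my_script.sh`; the scheduler then picks a suitable queue and compute node based on the requested resources.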
Only in rare cases should there be a need to specify which queue your job runs in. To do this, use the `-q <name>` option of `qsub`, e.g. `qsub -q long.q my_script`.
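
In contrast, GPU jobs must name their queue explicitly, as noted above. For example (the script name is a hypothetical placeholder):

```sh
# Request the GPU queue; the conservative runtime request also fits the
# 2-hour limit that non-contributors face on contributed GPU nodes
qsub -q gpu.q -l h_rt=02:00:00 my_gpu_script.sh
```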