# Slurm

To solve problems 1 and 2, we use a program called Slurm on our cluster. Slurm is a "job scheduler": it receives "jobs" and dispatches them to compute nodes in a way that uses the cluster's resources as efficiently as possible.

**Never run resource-intensive programs on the head node. Always submit resource-intensive jobs to Slurm so that they run on a compute node. This benefits you and every other user: your job gets more processing power, and the head node stays free for Slurm to schedule everyone's jobs.**
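As a sketch of what submitting a job looks like, a minimal batch script might resemble the following. The job name, resource amounts, and command are all hypothetical placeholders; the `#SBATCH` directives shown are standard Slurm options, but appropriate values depend on your workload:

```shell
#!/bin/bash
#SBATCH --job-name=my_job       # hypothetical job name
#SBATCH --ntasks=1              # run a single task
#SBATCH --cpus-per-task=4      # request 4 CPU cores (placeholder value)
#SBATCH --mem=8G               # request 8 GB of memory (placeholder value)
#SBATCH --time=01:00:00        # one-hour wall-clock limit (placeholder value)

# Everything below runs on a compute node, not the head node
echo "Running on $(hostname)"
```

Saved as (for example) `my_job.sh`, this would be submitted with `sbatch my_job.sh`; Slurm prints a job ID and, by default, writes the job's output to `slurm-<jobid>.out` in the submission directory.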

Our Luria cluster has the following nodes:

| Nodes | CPU Cores | CPU Model                                                        |
| ----- | --------- | ---------------------------------------------------------------- |
| c1-4  | 16 cores  | Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz                        |
| c5-40 | 8 cores   | Intel(R) Xeon(R) CPU E5620 @ 2.40GHz                             |
| b1-16 | 48 cores  | Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz or 5220R CPU @ 2.20GHz |
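You can inspect these nodes yourself with Slurm's standard query commands; the exact output columns depend on the cluster's configuration, and `c5` below is just an example node name from the table above:

```shell
# List every node with its partition, CPU count, memory, and state
sinfo -N -l

# Show detailed information for a single node, e.g. c5
scontrol show node c5
```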

These nodes are organized into the following partitions:

| kellis | bcc          | normal |
| ------ | ------------ | ------ |
| b1-12  | b13-17, c1-4 | c5-40  |

The `kellis` and `bcc` partitions are reserved for their respective labs. The `normal` partition is the default and can be used by any lab. Never use the `kellis` or `bcc` partitions unless you have been given express permission to do so.
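If you do need to target a specific partition, pass Slurm's standard `--partition` (or `-p`) option to `sbatch` or `srun`, or set it as a directive inside the batch script. Since `normal` is the default here, you can usually omit it entirely; this sketch just makes the choice explicit:

```shell
# Inside a batch script: explicitly request the normal partition
#SBATCH --partition=normal

# Or on the command line when submitting
sbatch --partition=normal my_job.sh   # my_job.sh is a placeholder script name
```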
