Other Resources

A brief summary of local and campus-wide computing resources

  • The summary below is based on information collected as of October 2022. Per-node data reflect typical configurations of publicly accessible nodes/resources.

| Cluster Name | # of Nodes | CPUs/Node | Memory/Node | Job Time Limit | OS | GPU Computing | Contact | # of User Accounts |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Luria | 58 | 16 - 96 | 96 GB - 768 GB | 14 days | CentOS 7.2/7.8 | No | | 400 |
| Engaging | 292 | 16 - 20 | 64 GB | 12 hours | CentOS 7.7 | Yes | | 2600 |
| Satori | 64 | 80 | 1 TB | 12 hours | RedHat 8.3 | Yes | | 1300 |
| C3DDB | 123 | 40 - 64 | 250 GB - 1 TB | 5 days | CentOS 6.6 | No | | 270 |
| SuperCloud | 844 | 32 - 48 | 120 GB - 386 GB | Unlimited | GridOS/Ubuntu 18.04.6 | Yes | | 2400 |

MIT Office of Research Computing and Data (ORCD)

  • Launched in May 2022 (MIT News article)

  • All resources listed above provide high-performance computing

  • ORCD is now working to provide a standard mode of access to them.

  • Mailing list: email orcd-admin with a request to join the mailing list

  • Leadership:

    • Peter Fisher - Head, ORCD

    • Heather Williams - Assistant Provost for Strategic Projects

    • Chris Hill - Principal Research Scientist, EAPS, and Director of the Research Computing Project

Campus-wide Resources

  • You can purchase additional data storage for your own lab at a very reasonable cost

  • Private computing nodes can be purchased and added to a cluster as your own partition, giving your lab's jobs higher scheduling priority (a job targeting such a partition is sketched below)
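
A minimal sketch of how a job would target a lab-owned partition on a Slurm-based cluster (Engaging and Satori both run Slurm). The partition name my_lab_partition is hypothetical; use the name assigned when your nodes are added.

```bash
#!/bin/bash
# Sketch only: submit to a hypothetical lab-owned partition for higher priority.
#SBATCH --job-name=lab_partition_test
#SBATCH --partition=my_lab_partition   # placeholder; use your lab's partition name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G

hostname    # trivial payload; replace with your actual workload
```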

Engaging

  • To get an account, please visit Engaging OnDemand and Usage Instructions

  • 292 nodes. Most nodes provide 16 to 20 CPUs (threads) and 64 GB of memory; nodes with 256 GB of RAM are also available.

  • Useful features: MATLAB GUI, interactive Jupyter Notebook sessions, and interactive RStudio Server sessions

  • Limitation: Slurm jobs are limited to 12 hours in the normal partition (a sample submission script is sketched below).
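
As a rough illustration of working within the 12-hour limit, here is a minimal Slurm batch script sized for a typical Engaging node. The partition name "normal", the module name, and the script name are placeholders taken from the text above or invented for the example; check `sinfo` and `module avail` on the cluster for the real names.

```bash
#!/bin/bash
# Sketch of a batch job that fits Engaging's 12-hour limit and typical node size.
#SBATCH --job-name=engaging_example
#SBATCH --partition=normal       # placeholder based on the "normal partition" above
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16       # a typical node offers 16 - 20 CPUs
#SBATCH --mem=60G                # stay under the 64 GB per-node memory
#SBATCH --time=12:00:00          # hard limit on the normal partition

module load python               # hypothetical module name; check `module avail`
python my_analysis.py            # hypothetical user script
```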

Satori

  • To get an account, please visit Satori Portal and usage instructions

  • 64 nodes; each node provides 80 CPUs (threads) and 1 TB of memory

  • Useful features: large memory, GPU computing, and interactive Jupyter notebook sessions.

  • Limitation: Slurm jobs are limited to 12 hours by default (a GPU job sketch follows this list)
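
A sketch of a single-GPU job within the default 12-hour limit, using Slurm's generic-resource syntax. The CPU, memory, and GPU counts are illustrative assumptions; confirm what each Satori node offers in the Satori documentation.

```bash
#!/bin/bash
# Sketch of a one-GPU Slurm job on a large-memory node within the 12-hour default.
#SBATCH --job-name=satori_gpu_example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:1             # request one GPU
#SBATCH --mem=100G               # nodes have up to 1 TB; request only what you need
#SBATCH --time=12:00:00          # default upper limit noted above

nvidia-smi                       # report the GPU assigned to this job
```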

C3DDB

  • Request an account by filling out the web form

  • Logins are managed with SSH keys, which are provided upon account approval (see the login example after this list)

  • About 120 nodes; most nodes provide 40 - 64 CPUs and 250 GB - 1 TB of memory

  • Jobs can run up to 5 days
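
A sketch of logging in with the issued key, assuming an OpenSSH client. The key path and hostname below are placeholders, not actual C3DDB values; use the details provided with your approved account.

```bash
# Placeholders throughout: substitute the key file and login host you were given.
chmod 600 ~/.ssh/c3ddb_key            # SSH refuses private keys with loose permissions
ssh -i ~/.ssh/c3ddb_key username@c3ddb-login.example.edu
```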

SuperCloud

  • Runs Slurm as the scheduler. The commands for submitting jobs and monitoring system/job status have been modified by Lincoln Laboratory (see the sketch after this list).

  • New accounts get very limited resources (2 CPU nodes and 1 GPU node).

  • You need to take the Practical HPC training course to get the standard resource allocation (16 CPU nodes and 12 GPU nodes).
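
A rough sketch of the Lincoln Laboratory wrapper commands: submission and monitoring go through LL-prefixed tools rather than plain sbatch/squeue. The script name below is a placeholder, and exact options should be taken from the SuperCloud documentation.

```bash
# Submit a batch script through the Lincoln Laboratory wrapper (instead of sbatch).
LLsub my_job.sh      # my_job.sh is a hypothetical user script

# Check the status of your jobs and of the system (instead of squeue/sinfo).
LLstat
```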
