System Announcements


June 18, 2019:
We experienced issues with the Slurm database today, causing Slurm to be down from 08:59 until 17:30. We are now back in full production. If you experience any issues, please open an SN ticket by sending an email to hpcshelp@lbl.gov.

Now in Production:

LR6 partition: 72 nodes with Intel Xeon Skylake processors. Each node has 32 cores and FDR InfiniBand; 48 of the nodes have 96 GB of memory and the other 24 nodes have 192 GB.

CM1 partition: 14 nodes with AMD EPYC processors. Each node has 48 cores, 256 GB of memory, and FDR InfiniBand.

ES1 GPU partition: 24 nodes in total: 12 nodes each equipped with 4 NVIDIA GTX 1080 Ti cards, and another 12 nodes each equipped with 2 NVIDIA V100 cards with NVLink. A minimal example batch script for requesting one of these new partitions is sketched below.
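
For reference, the following is a minimal sketch of a Slurm batch script that requests GPUs on the new ES1 partition. The lowercase partition name, the account, and the QOS shown here are assumptions for illustration only; please confirm the actual names with sinfo and your project allocation before submitting.

    #!/bin/bash
    # Minimal sketch of a Slurm job script for the ES1 GPU partition.
    # Partition, account, and QOS names below are placeholders/assumptions.
    #SBATCH --job-name=gpu_test
    #SBATCH --partition=es1          # assumed lowercase partition name
    #SBATCH --account=your_account   # placeholder; replace with your project account
    #SBATCH --qos=normal             # placeholder QOS; confirm for your allocation
    #SBATCH --nodes=1
    #SBATCH --gres=gpu:2             # e.g., both V100 cards on a V100 node
    #SBATCH --time=01:00:00

    nvidia-smi    # report the GPUs visible to the job

Submit the script with "sbatch" and check its status with "squeue -u $USER"; requesting the LR6 or CM1 partitions would follow the same pattern without the --gres line.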

Scheduled Shutdowns and Downtimes


May 21, 2019:

The Lawrencium Supercluster maintenance was completed as of 17:00 on 5/21/2019.


May 13, 2019:

The Mako cluster has been shut down and decommissioned.

Lawrencium Clusters

LR6
LR5
LR4
LR3
LR2
ES1
CM1