LABORATORY RESEARCH COMPUTING
Berkeley Lab provides Lawrencium, a 524-node (5376 computational cores) Linux cluster equipped with a high-performance, low-latency InfiniBand interconnect, to Berkeley Lab researchers who need access to scientific computing resources. The system, which consists of shared core nodes and PI-contributed Condo nodes, has a theoretical peak performance of 79 teraflops and delivers over 33 million processor hours to researchers every year.
SCIENTIFIC CLUSTER SUPPORT
With over 35 clusters in production, HPC Services also offers comprehensive
Linux cluster support, including data center resource planning, pre-purchase
consulting, procurement assistance, installation, and ongoing support,
for PI-owned clusters. Our HPC User Services consultants can help you get your application running well so that you make the best use of your new cluster. UC Berkeley PIs can also make use of our services through the Cal HPC program, available through IST.
Mar 21, 2013 - Supporting Science with HPC
HPC Services Manager Gary Jung gave a presentation on "Supporting Science with HPC" at the "Enabling Discovery and Production Innovation with Dell HPC Solutions" workshop held in Santa Clara, where he described how Berkeley Lab researchers from EETD, ESD, NSD, and Physics are using HPC data pipelines to accomplish their science.
Mar 18, 2013 - GPU Accelerated Synchrotron Radiation Calculation
Today, HPC Services staffer Yong Qin will be presenting his work during a poster session at this week's GPU Technology Conference, GTC 2013, in San Jose, California. Yong's work demonstrates how data parallelism can be applied to spectrum calculation of undulator radiation, which is widely used at synchrotron light facilities across the world. More
Mar 13, 2013 - The Science of Clouds computed on Lawrencium
The climate models that scientists use to understand and project climate change are constantly improving; however, the largest source of uncertainty in today's climate models is clouds. As the source of rain and wind, clouds are important in modeling climate. Berkeley Lab scientist David Romps discusses his work to develop better cloud-resolving models. More
Nov 13, 2012 - Node Health Check
HPCS staffers Jackie Scoggins and Michael Jennings gave a
well-attended presentation on Jennings's Node Health Check (NHC) software
today at the Adaptive Computing booth at SC12. NHC works in conjunction
with the job scheduler and resource manager to ensure clean job runs on
large HPC systems.
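For a sense of how NHC is configured, here is a minimal sketch of an /etc/nhc/nhc.conf using checks from NHC's standard check library (exact check names, arguments, and the memory values shown are illustrative and vary by NHC version and hardware):

    # Run on all nodes (*): verify /tmp is mounted read-write,
    # sshd is running as root, and physical memory matches spec.
    * || check_fs_mount_rw /tmp
    * || check_ps_service -u root -S sshd
    * || check_hw_physmem 24gb 24gb

The resource manager invokes NHC as its node health-check script; any failing check marks the node offline so the scheduler stops placing new jobs on it.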
Nov 12, 2012 - Warewulf wins Intel Cluster Ready "Explorer" award
The Berkeley Lab Warewulf Cluster Toolkit development team has been
honored with the 'Explorer Award' from the Intel(R) Cluster Ready
team at Intel, which recognizes organizations that continue
to explore and implement Intel Cluster Ready (ICR) certified systems. The
award was presented to lead developer Greg Kurtzer and co-developers
Michael Jennings and Bernard Li of the IT Division's HPC Services Group
at the annual Intel Partners meeting.
Oct 24, 2012 - HPCS at 2012 Data Center Efficiency Summit
HPCS staff member Yong Qin will be part of a panel, along with other Berkeley Lab scientists, at the 2012 Data Center Efficiency Summit today in San Jose, discussing Berkeley Lab's recently released study on the feasibility of implementing Demand Response and control strategies in data centers. Yong will discuss the issues and our experiences in reducing computational workload, or geographically shifting it to a remote data center, in response to a request to lower electrical usage.
Sept 18, 2012 - Warewulf featured in HPC Admin Magazine
This month's issue of HPC Admin Magazine features the last in a four-part series on how best to use the latest version of the Warewulf Cluster Toolkit. Warewulf, developed by LBNL's Greg Kurtzer and recently certified as Intel Cluster Ready, is a zero-cost, open-source solution that guarantees integration and compatibility with Intel products as well as third-party hardware and software solutions. Read the article to learn how to use it.
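As a taste of the workflow the series covers, here is a sketch of typical Warewulf 3 provisioning steps (the node name, chroot path, addresses, and image name are illustrative, and exact flags vary by release):

    # Build a VNFS image from an existing chroot, and a bootstrap from the running kernel
    wwvnfs --chroot /var/chroots/centos6
    wwbootstrap `uname -r`
    # Register a compute node and assign it the image and bootstrap
    wwsh node new n0000 --netdev=eth0 --ipaddr=10.0.1.1 --hwaddr=00:25:90:aa:bb:cc
    wwsh provision set n0000 --vnfs=centos6 --bootstrap=`uname -r`

The node then PXE boots, pulls its bootstrap and VNFS image from the Warewulf master, and comes up stateless.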
May 21, 2012 - Cloud Bursting for Particle Tracking
Physicists Changchun Sun and Hiroshi Nishimura, along with HPCS staff Kai
Song, Susan James, Krishna Muriki, Gary Jung, Bernard Li, and Yong Qin, recently
explored the use of Amazon's VPC service to transparently extend the ALS compute cluster and software environment into the public cloud, providing on-demand
compute resources for particle tracking and NGLS APEX development.
Their work was presented during the poster session at the International Particle Accelerator Conference (IPAC12) in New Orleans this week.
Jan 24, 2012 - Bootstrapping Institutional Capability
HPC Services Manager Gary Jung talks about the issues institutions may encounter when developing new or enhancing existing infrastructure to support data intensive science at the Winter 2012 ESCC/Internet2 Joint Techs Conference in Baton Rouge, LA this week.
Nov 3, 2011 - Supercomputers Accelerate Development of Advanced Materials
Researchers from Berkeley Lab and MIT have teamed up to develop a new tool, as part of the Materials Project, to speed up the development of new materials. The project incorporates the use of supercomputing resources, including
Lawrencium, to characterize the properties of inorganic compounds.
Oct 25, 2011 - Supercomputing As A Service
LBL CIO Rosio Alvarez and HPC Services Manager Gary Jung present
their experiences using cloud services for HPC at InformationWeek's
GovCloud 2011 conference in Washington DC. More
Sep 19, 2011 - Lawrencium LR1 relocated to SDSC
Did you know that the Lawrencium LR1 compute nodes have been relocated to the
San Diego Supercomputer Center? This was done as part of the IT Division's effort to optimize data center space. The LR1 nodes are connected via a dedicated 10-gigabit ESnet virtual circuit to the Lab's HPC infrastructure here in Berkeley. The impact of having LR1 remote should be minimal for most users.
Apr 19, 2011 - UC Cloud Summit
HPC Services staffer Greg Kurtzer talks about the new Warewulf Version 3
Cluster Toolkit release, and Krishna Muriki and Kai Song present
their work using Amazon CCI and Amazon GPU instances, at the first annual UC Cloud Summit.
GPU Compute Nodes Now Available. We have a total of four second-generation (lr2) nodes, each equipped with two Nvidia C2050 Fermi GPU cards. Each lr2 node has a total of 12 cores and 24GB of memory. Use these PBS job submission arguments to request one or more of these nodes:
" -l nodes=X:ppn=Y:gpu -q lr_batch "
where X is the number of nodes (1 to 3) and Y is the number of cores per node (1 to 12). One of the four GPU nodes is reserved for quick 30-minute debug jobs; users can access this node by using:
"-l nodes=1:ppn=12:gpu -q lr_debug"
Lawrencium Large Memory Nodes. Three large-memory nodes are now available as part of the Lawrencium cluster. Each node has 48 AMD processor cores; two nodes have 256GB of memory and one has 512GB. Use these PBS job submission arguments to get one of the nodes:
" -l nodes=1:ppn=Y:bigmem_256 -q lr_batch "
" -l nodes=1:ppn=Y:bigmem_512 -q lr_batch "
" -l nodes=1:ppn=Y:bigmem -q lr_batch " # Use any bigmem node
HPC Services has formed a new general-interest HPC user mailing list for
users of Lawrencium and of clusters maintained by HPC Services. Members of the
Berkeley Lab community working with high-performance computing technologies are also invited to
participate in this user mailing list to discuss topics such as HPC applications, best practices, and
future technologies. If you would like to participate in the discussion, we encourage you to subscribe
to the mailing list here: https://lists.lbl.gov/sympa/info/hpc-users