Spring 2024 Purchase

We are excited to share significant updates to our High Performance Computing Facility aimed at enhancing its capabilities and aligning with the research community’s evolving needs.

Options for New Hardware


Based on the feedback and needs expressed by faculty, we have obtained and summarized quotes for the latest hardware configurations to add to our HPC cluster. The prices below are averaged across quotes from several vendors. Both options feature 2x 32-core CPUs and 2 TB of local 'scratch' storage, backed by a 5-year warranty:

  • 8 GB/core machines (512 GB total RAM): priced at an average of $14,850
  • 16 GB/core machines (1 TB total RAM): priced at an average of $18,700

Based on the needs researchers have communicated to us, these options offer both performance and value, ensuring that our HPC resources remain cutting-edge and competitively priced.

University Matching Funds


For each node a faculty member purchases, the university will purchase another node of the most common configuration (not to exceed $500K in total) and place it in the contrib partition.

Proposed SLURM Model


In response to the valuable and varied input we received regarding job scheduling and resource allocation, we propose the following change to our SLURM model, which the new Research Computing Governance Steering Committee will discuss and vote on at its mid-April meeting:

Dual-Partition Scheme

  • Pre-existing hardware will be placed in a general partition open to all users.
  • A contrib partition will offer preemptive access rights to each PI’s purchased nodes. This means the nodes purchased by a PI will always be available to that PI, and any other jobs running on those nodes will be preempted within a few minutes.
  • Contributors gain higher SLURM scheduling priority across the entire system—in both the general and contrib partitions—at a rate commensurate with twice the number of nodes purchased, reflecting the university matching funds referenced above.
  • After five years, nodes in the contrib partition will transition to the general partition, and their preemptive privileges will end.
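As a concrete illustration, the dual-partition scheme above could map onto a slurm.conf fragment along these lines. This is a minimal sketch, not the actual cluster configuration: node names, node counts, account names, priority tiers, and the grace time are all placeholders.

```
# Sketch of a possible slurm.conf fragment -- all names and numbers are placeholders.

# Preempt jobs based on partition priority; preempted jobs are requeued.
PreemptType=preempt/partition_prio
PreemptMode=REQUEUE

# General partition: pre-existing hardware plus contributed nodes, open to all users.
# Jobs landing on contributed nodes may be preempted, with a grace period
# ("within a few minutes") before requeue.
PartitionName=general Nodes=node[001-084],contrib[001-010] Default=YES PriorityTier=1 PreemptMode=REQUEUE GraceTime=300

# Contrib partition: the same contributed nodes at a higher priority tier,
# restricted to contributing PIs' accounts. Jobs submitted here preempt
# general-partition jobs running on those nodes.
PartitionName=contrib Nodes=contrib[001-010] AllowAccounts=contrib_accounts PriorityTier=10 PreemptMode=OFF
```

The priority boost commensurate with twice the purchased node count could then be expressed through fairshare (e.g., raising a contributing account's fairshare value with `sacctmgr modify account <pi_account> set fairshare=<value>`), and a contributing PI would target their nodes with `sbatch --partition=contrib job.sh`.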

Next Steps


We invite your feedback on these proposals and would be happy to discuss any specific needs or concerns you may have. To facilitate planning and investment in our HPC resources, we are asking interested faculty to send an email to research-computing@umbc.edu with (1) their preferred hardware options and (2) any potential contributions by April 19, 2024.

Your support and collaboration have been instrumental in the success of our HPC system, and we look forward to continuing this partnership to achieve our shared research goals.

Thank you for your patience with this process and dedication to the growth of UMBC’s Research Computing.


How We Got Here

20240119

HPCF User Meeting scheduled for 20240126

20240126

  • Many points are discussed, but the need for new equipment is highlighted
  • Faculty mention University of Hawai’i HPC workload manager setup
  • Faculty call for meeting to discuss potential new partition purchase

20240130

DoIT releases call for interest in discussion of potential new partition purchase

20240215

  • Meeting is held to discuss desired partition specifications
  • Faculty call for more detailed quotes and an update to the SLURM model

20240301

Meeting is held to review price ranges, university price matching, and SLURM model proposals

20240410

DoIT sends email with price points for different hardware configurations, based on total RAM

20240412

This page is created to hold the email content and summarize the process



Written by Roy Prouty

Posted 20240412