As of April 2025, all of UMBC's on-premises, centrally managed advanced compute resources are available through the chip HPC cluster. The enterprise research storage connected to the cluster and made available to researchers is discussed on the Storage page, and the advanced networking that enables high-performance computing is discussed on the Networking page. For help using the cluster, including questions about access and accounts, see the "User Support" menu item above.
Compute Hardware
Year Purchased | CPU Cores | CPU Memory | GPU Cards | GPU Memory (per card) | Node Count | CPU Architecture |
2018 | 36 | 376 GB | 0 | N/A | 49 | Intel Skylake |
2020 | 48 | 384 GB | 8 (RTX 2080 Ti) | 11 GB | 4 | Intel Cascade Lake |
2020 | 48 | 384 GB | 8 (RTX 6000) | 24 GB | 7 | Intel Cascade Lake |
2020 | 48 | 768 GB | 8 (RTX 8000) | 48 GB | 2 | Intel Cascade Lake |
2021 | 48 | 187 GB | 0 | N/A | 18 | Intel Cascade Lake |
2024 | 64 | 1024 GB | 0 | N/A | 13 | Intel Emerald Rapids |
2024 | 64 | 512 GB | 0 | N/A | 38 | Intel Emerald Rapids |
2024 | 32 | 256 GB | 2 (H100) | 94 GB | 2 | Intel Emerald Rapids |
2024 | 32 | 256 GB | 4 (L40S) | 48 GB | 10 | Intel Emerald Rapids |
Table 1: Overview of hardware-level partitions configured in Slurm on chip
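As a rough sketch of how these hardware generations are used in practice, a batch job might be submitted with a script along the following lines. The partition name `2024-cpu` and the script body are illustrative assumptions, not chip's actual configuration; run `sinfo` on the cluster to see the real partition names.

```shell
#!/bin/bash
# Hypothetical Slurm batch script -- the partition name below is an
# assumption for illustration; check `sinfo` for chip's real partitions.
#SBATCH --job-name=my-simulation
#SBATCH --partition=2024-cpu     # one of the hardware-level partitions in Table 1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=64       # a full 2024 CPU node exposes 64 cores
#SBATCH --mem=500G               # stay below the node's 512 GB of RAM
#SBATCH --time=24:00:00
srun ./my_simulation
```

Submit with `sbatch job.slurm` and monitor with `squeue -u $USER`.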
The cluster's compute hardware is a mix of NSF MRI-funded, faculty-funded, and university-funded equipment, representing more than $2M of investment over the last decade, with further investments planned.
Contribution Model
Why contribute to the UMBC HPCF?
By contributing to the cluster, you gain access to the community of campus researchers guiding the growth of research computing, as well as additional resources from the DoIT Research Computing Team, namely:
- System Administration. DoIT orders, racks, installs, powers, cools, secures, patches, and performs any required hardware maintenance on these machines.
- User Support. DoIT provides resources to support you and your students. This website also provides an overview of what others have purchased and what is available.
- Dedicated Access. Machines may be used by others when idle, but contributors have explicit, dedicated access to their machine(s) within minutes of any request. At the same time, contributors gain access to the rest of the cluster hardware, affording opportunities for scaling up and testing workflows.
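In Slurm terms, dedicated contributor access is typically implemented as a contributor-specific partition or QOS with priority over idle general-use jobs. A hedged sketch, with the partition and QOS names (`contrib-yourlab`, `contrib`) purely illustrative:

```shell
#!/bin/bash
# Hypothetical contributor job script. The partition and QOS names are
# assumptions for illustration; actual names depend on chip's Slurm setup.
#SBATCH --partition=contrib-yourlab   # contributor's dedicated partition
#SBATCH --qos=contrib                 # priority access to the contributed node(s)
#SBATCH --ntasks=1
#SBATCH --time=12:00:00
srun ./analysis
```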
What machines can be contributed?
To maintain a quality software stack optimized for specific computer architectures, we limit the options to a few community-vetted choices. The DoIT Research Computing Team worked with cluster researchers and HPCF system administrators to identify four hardware options supporting CPU and GPU workflows. Depending on vendor availability and new technology releases, the machines discussed below will likely only be available through the start of the fall semester; ideally, we would like to place orders in June. For new faculty whose start-up funding is not available until the fiscal year in which their appointment begins, DoIT will pay for the machines and charge the department in the next fiscal year.
Once equipment is ordered, delivery typically takes 2-8 weeks, depending on the vendor and outside pressures. When the machines arrive, the DoIT Research Computing Team will rack, power, network, provision, and otherwise bring them online. DoIT will cover all costs associated with the continued delivery of power, cooling, physical and software security, and any hardware maintenance within the vendor warranty. Once online, the machines will be accessible only via the UMBC HPCF chip cluster. A contributed machine may be used by others when idle, but contributors will have dedicated access within a few minutes of any request via Slurm. Before the end of the vendor warranty period, the DoIT Research Computing Team will work with contributors either to extend the warranty or to migrate the machine to the general-use hardware partitions of the cluster; DoIT will not be responsible for paying for an extended warranty.
CPU Hardware
Geared toward workflows requiring fast memory access, symbolic computation, complex control flows, and many types of simulations.
$14,500: “Low Memory” option is a Dell R660 server with:
- 2x Intel Xeon Gold 6548Y+ 2.5G CPUs, each with 32 cores
- 512 GB of CPU RAM
- 1.75 TB NVMe drive for local storage (called "scratch" within the UMBC HPCF)
$18,000: “High Memory” option is a Dell R660 server with:
- 2x Intel Xeon Gold 6548Y+ 2.5G CPUs, each with 32 cores
- 1 TB of CPU RAM
- 1.75TB NVMe Drive for local storage (called “scratch” within the UMBC HPCF)
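Both CPU options include a local NVMe drive for scratch. A job can stage data onto that fast local storage and copy results back before it ends; this sketch assumes a scratch path of the form `/scratch/$SLURM_JOB_ID`, which may differ on chip:

```shell
#!/bin/bash
# Hypothetical job using node-local NVMe scratch. The scratch path is an
# assumption -- consult chip's documentation for the real location.
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32
#SBATCH --mem=250G
#SBATCH --time=08:00:00
SCRATCH="/scratch/${SLURM_JOB_ID}"
cp input.dat "$SCRATCH"/               # stage input onto the fast NVMe drive
cd "$SCRATCH"
srun ./my_solver input.dat
cp results.dat "$SLURM_SUBMIT_DIR"/    # copy results back before the job ends
```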
GPU Hardware
Geared toward machine learning, artificial intelligence, and certain types of simulation well-suited for tensor math.
$60,000: “H100” option is a Dell R760XA server with:
- 2x Intel Xeon Gold 6526Y 2.8G CPUs, each with 16 cores
- 256 GB of CPU RAM
- 7 TB NVMe Drive for local storage (called “scratch” within the UMBC HPCF)
- 2x NVIDIA H100 NVL, PCIe, with 94 GB of GPU memory
$35,000: “L40S” option is a Dell R760XA server with:
- 2x Intel Xeon Gold 6526Y 2.8G CPUs, each with 16 cores
- 256 GB of CPU RAM
- 7 TB NVMe Drive for local storage (called “scratch” within the UMBC HPCF)
- 4x NVIDIA L40S, PCIe, with 48 GB of GPU memory
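A GPU job on either option would request cards through Slurm's GRES mechanism. The partition and GRES names below (`gpu`, `h100`) are assumptions for illustration; the real names depend on how chip's Slurm is configured:

```shell
#!/bin/bash
# Hypothetical GPU job script; partition and GRES names are assumptions.
#SBATCH --partition=gpu
#SBATCH --gres=gpu:h100:2     # both H100 NVL cards on a single node
#SBATCH --cpus-per-task=32
#SBATCH --mem=250G
#SBATCH --time=24:00:00
srun python train.py
```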
Any questions about what type of hardware might best suit your research workflows, how to fund these purchases, or the cluster in general can be directed to the DoIT Research Computing Team via email at research-computing@umbc.edu. We're happy to help and to learn about your research computing goals.