Facilities, Equipment, and Other Resources
The UMBC High Performance Computing Facility (HPCF) is the community-based, interdisciplinary core facility for scientific computing and research on parallel algorithms at UMBC. Started in 2008 by more than 20 researchers from ten academic departments and research centers across all academic colleges at UMBC, it is supported by faculty contributions, federal grants, and the UMBC administration.
Since its inception, over 900 users, including undergraduate and graduate students, across 200 research groups have benefited from its computing clusters. These users have generated over 500 publications, including 200 papers in peer-reviewed journals (among them Nature, Science, and other top-tier journals in their fields), 80 refereed conference papers, and 60 theses.
Administration and Access
The facility is open to UMBC researchers at no charge; researchers can also contribute funding for long-term priority access. Administration and maintenance are provided by the UMBC Division of Information Technology (DoIT), advised by the faculty-run Shared Infrastructure Governance committee. Users have access to consulting support from a dedicated team within DoIT's Research Computing & Data group.
Any UMBC staff member, faculty member, or student may request an account on the cluster to access its resources. Every account must belong to one or more cluster groups; groups are created for research groups, grant awards, or classes and must be requested by a faculty or staff member.
System Architecture: The “chip” Cluster
As of April 2025, the UMBC HPCF consists of a single high-performance cluster named chip. The cluster comprises 145 nodes connected to a central storage system of more than 2 PB.
The system architecture distinguishes between Shared Storage, Login Nodes, and Compute Nodes:
Shared Storage. The UMBC HPCF is connected to the Retriever Research Storage System (RRStor), a Ceph filesystem that is network-connected within the university's enterprise infrastructure, including the DoIT-administered computing clusters. Since 2025, each faculty- or staff-requested cluster group receives 10 TB of group storage on RRStor; temporary increases in group storage are available to researchers at cost. RRStor currently supports more than 2 PB of file and block storage and is readily expandable within the university infrastructure.
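For orientation, the sketch below shows how a user might check the space remaining under a group allocation from a login or compute node via the POSIX statvfs(3) interface. The mount point /umbc/rs/mygroup is a hypothetical placeholder, and whether the reported figures reflect the Ceph group quota or the underlying filesystem depends on the mount configuration.

```c
/* quota_check.c -- sketch: report free space under a group directory.
   The path below is a hypothetical placeholder, not a documented
   RRStor mount point. Build with: cc quota_check.c -o quota_check */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    const char *path = "/umbc/rs/mygroup";  /* hypothetical group directory */
    struct statvfs vfs;
    if (statvfs(path, &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    const double GiB = 1024.0 * 1024.0 * 1024.0;
    double total = (double)vfs.f_blocks * vfs.f_frsize / GiB;
    double avail = (double)vfs.f_bavail * vfs.f_frsize / GiB;
    printf("%s: %.1f GiB total, %.1f GiB available\n", path, total, avail);
    return 0;
}
```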
Login Nodes. The cluster utilizes two dedicated login nodes managed by a load balancer, intended for file editing and navigating the cluster filesystem.
Compute Nodes: CPU. The cluster includes CPU-only nodes designated for serial and parallel processing (a minimal usage sketch follows the list):
- 49 Nodes (Skylake, 2018): 2x 18-core Intel Xeon Gold 6140 CPU @ 2.30 GHz, 376 GB memory, 68 GB local storage.
- 18 Nodes (Cascade Lake, 2021): 2x 24-core Intel Xeon Gold 6240R CPU @ 2.40 GHz, 187 GB memory, 396 GB local storage.
- 13 Nodes (Emerald Rapids, 2024): 2x 32-core Intel Xeon Gold 6548Y+, 1024 GB memory, 1.8 TB local storage.
- 38 Nodes (Emerald Rapids, 2024): 2x 32-core Intel Xeon Gold 6548Y+, 512 GB memory, 1.8 TB local storage.
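As a usage sketch for these nodes, the following MPI program prints the rank-to-node mapping of a parallel job. It assumes an MPI implementation and its mpicc compiler wrapper are available through the cluster's software environment; that availability is an assumption here, not something stated in the description above.

```c
/* hello_mpi.c -- minimal sketch of a parallel job on the CPU nodes.
   Assumes an MPI library and mpicc are provided on chip.
   Build with: mpicc hello_mpi.c -o hello_mpi */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks in the job */

    char node[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(node, &len);     /* node hosting this rank */

    printf("rank %d of %d on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}
```

Launched across, say, two Skylake nodes, the output would show 72 ranks spread over two hostnames, matching the 2x 18-core configuration listed above.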
Compute Nodes: GPU. The cluster features extensive GPU acceleration for AI/ML, molecular dynamics, and rendering workflows (a device-query sketch follows the list):
- 2 Nodes (H100, 2024):
  - Processors: 2x 16-core Intel Xeon Gold 6548Y+ (Emerald Rapids), 256 GB memory.
  - Accelerators: 2x NVIDIA H100 (100 GB memory each) connected via NVLink.
  - Storage: 7 TB local storage.
- 10 Nodes (L40S, 2024):
  - Processors: 2x 16-core Intel Xeon Gold 6548Y+ (Emerald Rapids), 256 GB memory.
  - Accelerators: 4x NVIDIA L40S (48 GB memory each).
  - Storage: 7 TB local storage.
- 2 Nodes (RTX 8000, 2020):
  - Processors: 2x 24-core Intel Xeon Gold 6240R (Cascade Lake), 768 GB memory.
  - Accelerators: 8x NVIDIA RTX 8000 (48 GB memory each).
  - Storage: 1.9 TB local storage.
- 7 Nodes (RTX 6000, 2020):
  - Processors: 2x 24-core Intel Xeon Gold 6240R (Cascade Lake), 384 GB memory.
  - Accelerators: 8x NVIDIA RTX 6000 (24 GB memory each).
  - Storage: 1.9 TB local storage.
- 4 Nodes (RTX 2080Ti, 2020):
  - Processors: 2x 24-core Intel Xeon Gold 6240R (Cascade Lake), 384 GB memory.
  - Accelerators: 8x NVIDIA RTX 2080Ti (11 GB memory each).
  - Storage: 1.9 TB local storage.
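To show how a job might verify which of these accelerators it has been allocated, here is a minimal device-query sketch against the CUDA runtime API. It assumes the CUDA toolkit (nvcc) is available on the GPU nodes, which the description above does not state explicitly.

```c
/* devquery.cu -- sketch: list the GPUs visible to a job on a GPU node.
   Assumes the CUDA toolkit is installed; build with: nvcc devquery.cu */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);   /* name, memory, etc. */
        printf("GPU %d: %s, %.1f GB memory\n",
               i, prop.name, prop.totalGlobalMem / 1e9);
    }
    return 0;
}
```

On an L40S node, for example, this would list four devices with roughly 48 GB each; under a scheduler that restricts GPU visibility per job, only the allocated devices appear.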