Welcome
The UMBC High Performance Computing Facility (HPCF) is the community-based, interdisciplinary core facility for scientific computing and research on parallel algorithms at UMBC. Started in 2008 by more than 20 researchers from ten academic departments and research centers across all academic colleges at UMBC, it is supported by faculty contributions, federal grants, and the UMBC administration.
Since HPCF’s inception, over 400 users, including undergraduate and graduate students, have benefited from its computing clusters. These users have generated over 400 publications, among them 150 papers in peer-reviewed journals (including Nature, Science, and other top-tier journals in their fields), 50 refereed conference papers, and 50 theses. The facility is open to UMBC researchers at no charge.
Researchers can contribute funding for long-term priority access. Administration and maintenance are provided by the UMBC Division of Information Technology (DoIT), advised by the faculty-run Shared Infrastructure Group, and users have access to consulting support provided by a dedicated team organized under the Research Computing Group of DoIT. The purchases of the two current clusters, taki and ada, were supported by several NSF grants from the MRI program; see the About tab for precise information.
HPCF currently consists of two clusters, taki and ada, each comprising several types of machines. This structure is reflected in the tabs at the top of this page:
- The taki cluster consists of
  - 18 compute nodes with two 24-core Intel Cascade Lake CPUs and 192 GB of memory each
  - 51 compute nodes with two 18-core Intel Skylake CPUs and 384 GB of memory each; two of these are reserved for the “development” partition for testing workflows (see the sample batch script after this list)
  - 1 compute node with four NVIDIA Tesla V100 GPUs connected by NVLink
  - 1 compute node dedicated to interactive use
  - On their way: 13 compute nodes with two 32-core Intel Emerald Rapids CPUs and 512 GB of memory each
  - On their way: 38 compute nodes with two 32-core Intel Emerald Rapids CPUs and 1024 GB of memory each
- The ada cluster consists of 13 nodes, each with two 24-core Intel Cascade Lake CPUs and 384 GB of memory, equipped with GPUs of several architectures (a sample GPU job request follows below):
  - Four nodes with eight NVIDIA RTX 2080 Ti GPUs each
  - Seven nodes with eight NVIDIA RTX 6000 GPUs each
  - Two nodes with eight NVIDIA RTX 8000 GPUs each and an additional 384 GB of memory
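As an illustration of how the reserved “development” partition might be used for testing, here is a minimal sketch of a batch script, assuming the clusters are scheduled with SLURM; the partition name develop, the job name, and the executable are placeholders rather than names confirmed on this page (see the resources for users for exact instructions):

```bash
#!/bin/bash
# Minimal sketch of a short test job, assuming a SLURM scheduler and a
# partition named "develop"; all names below are placeholders.
#SBATCH --job-name=dev_test
#SBATCH --partition=develop
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:05:00
#SBATCH --mem=4G

srun ./hello_parallel
```

Submitted with sbatch, a short job like this lets a user verify that a workflow runs correctly before moving it to the full compute partitions.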
All nodes of both clusters are connected to the same central storage of more than 3 petabytes.
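For the GPU nodes on ada, a job additionally requests GPU devices from the scheduler. A minimal sketch, again assuming SLURM and its generic resource (GRES) mechanism, with placeholder partition, module, and executable names:

```bash
#!/bin/bash
# Minimal sketch of a single-GPU job, assuming SLURM with GPUs exposed
# as generic resources (GRES); all names below are placeholders.
#SBATCH --job-name=gpu_test
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00
#SBATCH --mem=16G

module load cuda    # assumed module name; check the user resource pages
srun ./my_gpu_program
```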
This website provides information about the facility, its systems, research projects, publications, and resources for users, as well as contact information.