The UMBC High Performance Computing Facility (HPCF) is the community-based, interdisciplinary core facility for scientific computing and research on parallel algorithms at UMBC. Started in 2008 by more than 20 researchers from ten academic departments and research centers across all academic colleges at UMBC, it is supported by faculty contributions, federal grants, and the UMBC administration. Since HPCF’s inception, over 400 users have benefited from its computing clusters, including undergraduate and graduate students. Its users have generated over 400 publications, including 150 papers in peer-reviewed journals (among them Nature, Science, and other top-tier journals in their fields), 50 refereed conference papers, and 50 theses. The facility is open to UMBC researchers at no charge. Researchers can contribute funding for long-term priority access. System administration is provided by the UMBC Division of Information Technology (DoIT), and users have access to consulting support provided by dedicated full-time graduate assistants. The purchase of the two current clusters, taki and ada, was supported by several NSF grants from the MRI program; see the About tab for precise information.

HPCF currently consists of two machines, taki and ada, each comprising several types of nodes. This structure is reflected in the tabs on top of this page:

  • The taki cluster consists of
    • 18 compute nodes with two 24-core Intel Cascade Lake CPUs and 196 GB of memory each
    • 51 compute nodes with two 18-core Intel Skylake CPUs and 384 GB of memory each; two of these are reserved for the “development” partition for testing workflows
    • 1 compute node with four NVIDIA Tesla V100 GPUs connected by NVLink
    • 1 compute node dedicated to interactive use
  • The ada cluster consists of 13 nodes with two 24-core Intel Cascade Lake CPUs and 384 GB of memory each, as well as GPUs of various architectures:
    • Four nodes with eight NVIDIA RTX 2080 Ti GPUs each
    • Seven nodes with eight NVIDIA RTX 6000 GPUs each
    • Two nodes with eight NVIDIA RTX 8000 GPUs and an additional 384 GB of memory each
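As a concrete illustration of how the Skylake nodes reserved for testing workflows might be used: below is a minimal batch script sketch, assuming the clusters are managed by a Slurm-style scheduler. The partition name, resource requests, and executable are illustrative placeholders, not verified settings; consult the Resources for Users tab for the actual submission procedure.

```shell
#!/bin/bash
# Minimal sketch of a test-workflow job script (assumes a Slurm scheduler;
# partition name, limits, and executable below are illustrative only).
#SBATCH --job-name=test_workflow
#SBATCH --partition=develop        # hypothetical name for the "development" partition
#SBATCH --nodes=1                  # one of the two reserved Skylake nodes
#SBATCH --ntasks-per-node=4       # small core count, appropriate for testing
#SBATCH --time=00:05:00            # short wall time for a quick workflow check

srun ./my_program                  # hypothetical executable
```

A script like this would be submitted with `sbatch`; short, small jobs on a dedicated testing partition let users verify a workflow without waiting in the main production queue.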

The nodes are connected to each other by an EDR (enhanced data rate) InfiniBand interconnect. All nodes of both machines are connected to the same central storage of more than 3 petabytes.

This webpage provides information about the facility, its systems, research projects, publications, resources for users, and contact information.