High Performance Computing @ UMBC

The following high performance computers have been operated by the UMBC High Performance Computing Facility (HPCF):

  • maya.rs.umbc.edu was a heterogeneous cluster composed of equipment purchased from 2009 through 2013 and released to the user community in Spring 2014. It contained several groups of nodes connected by an InfiniBand interconnect:
    • 34 Dell PowerEdge R620 CPU-only compute nodes. Each node had two eight-core Intel Ivy Bridge 2650v2 processors (2.6 GHz, 20 MB cache), for a total of 16 cores per node.
    • 19 Dell PowerEdge R720 CPU/GPU nodes, each with the same two CPUs plus two NVIDIA Tesla K20 GPUs.
    • 19 Dell PowerEdge R720 CPU/Phi nodes, each with the same two CPUs plus two Intel Xeon Phi 5110P coprocessors.
    • 168 IBM iDataPlex CPU-only nodes, each with two quad-core Intel Nehalem X5560 processors (2.8 GHz, 8 MB cache) and 24 GB of memory.
    • 86 IBM iDataPlex CPU-only nodes, each with two quad-core Intel Nehalem X5550 processors (2.66 GHz, 8 MB cache) and 24 GB of memory.

    The purchase of maya equipment was supported by faculty contributions, by grants from the National Science Foundation, and by the UMBC administration. Researchers can purchase nodes for long-term priority access. If you are interested in joining the effort, see the contact page. A list of resources pertaining to maya follows.

  • tara.rs.umbc.edu was an 86-node distributed-memory cluster installed in Fall 2009, with two quad-core Intel Nehalem processors and 24 GB of memory per node, an InfiniBand interconnect, and 160 TB of central storage. For a detailed description and more information, see the 2013 resources for tara users or the 2010 resources for tara users. The purchase of tara was supported by faculty contributions, by grants from the National Science Foundation, and by the UMBC administration.
  • hpc.rs.umbc.edu was a distributed-memory cluster with 33 compute nodes plus one development node and one combined user/management node, each equipped with two dual-core AMD Opteron processors and at least 13 GB of memory, connected by an InfiniBand network with an InfiniBand-accessible 14 TB parallel file system. The purchase in 2008 pooled funds from several researchers with seed funding from UMBC. This machine was replaced by tara in 2009. For the historical record of hpc and the projects that used it, see the archived hpc webpage.
  • kali.math.umbc.edu was a distributed-memory cluster with 32 compute nodes (including a storage node serving the 0.5 TB central storage), each with two 2.0 GHz Intel Xeon processors and 1 GB of memory, connected by a Myrinet interconnect, plus one combined user/management node. This machine was purchased in 2003 by researchers from the Department of Mathematics and Statistics using a grant from the National Science Foundation. It became part of HPCF in 2008 and was shut down in early 2009 after the cluster hpc had become operational. For the historical record of kali and the projects that used it, see the kali webpage.

Other high performance computers available on the UMBC campus are:

  • bluegrit.cs.umbc.edu is a distributed-memory cluster with 47 blades using IBM PowerPC chips (33 blades with 0.5 GB of memory and 14 dual-processor blades with 2 GB of memory) and 12 blades with two IBM Cell processors and 0.5 GB of memory each. See the bluegrit webpage for more information. This machine is part of the Multicore Computational Center (MC2) at UMBC.

Other systems of note:

  • XSEDE, the Extreme Science and Engineering Discovery Environment, is a collection of integrated advanced digital resources and services. It is a single virtual system that scientists can use to interactively share computing resources, data, and expertise. Our How to use XSEDE page provides an overview for faculty and students, with pointers to web pages on using XSEDE; it also includes a sample run on a particular XSEDE resource.