Overview

As of April 2025, all on-premises, centrally managed advanced compute resources are available through the chip HPC cluster. The enterprise research storage connected to the cluster and made available to researchers is discussed on the Storage page, and the advanced networking that enables high-performance computing is discussed on the Networking page. For help using the cluster, including questions about access and accounts, see the “User Support” menu item above.

Compute Hardware

Year Purchased | CPU Cores | CPU Memory | GPU Cards | GPU Memory | Node Count | CPU Architecture
2018 | 36 | 376 GB | 0 | N/A | 49 | Intel Skylake
2020 | 48 | 384 GB | 8 (RTX 2080 Ti) | 11 GB | 4 | Intel Cascade Lake
2020 | 48 | 384 GB | 8 (RTX 6000) | 24 GB | 7 | Intel Cascade Lake
2020 | 48 | 768 GB | 8 (RTX 8000) | 48 GB | 2 | Intel Cascade Lake
2021 | 48 | 187 GB | 0 | N/A | 18 | Intel Cascade Lake
2024 | 64 | 1024 GB | 0 | N/A | 13 | Intel Emerald Rapids
2024 | 64 | 512 GB | 0 | N/A | 38 | Intel Emerald Rapids
2024 | 32 | 256 GB | 2 (H100) | 100 GB | 2 | Intel Emerald Rapids
2024 | 32 | 256 GB | 4 (L40S) | 48 GB | 10 | Intel Emerald Rapids

Table 1: Overview of the hardware-level partitions configured in Slurm on chip
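Since the table describes Slurm partitions, a minimal batch script for requesting one of the GPU nodes might look like the sketch below. The partition name `l40s` and the resource amounts are assumptions for illustration only; check the actual partition names on chip with `sinfo` before submitting.

```shell
# Write a hypothetical Slurm batch script requesting one L40S GPU.
# The partition name "l40s" is assumed; verify with `sinfo` on chip.
cat > gpu_job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=l40s       # assumed partition name; verify with sinfo
#SBATCH --gres=gpu:1           # request one GPU card
#SBATCH --cpus-per-task=8      # within the 32 cores on these nodes
#SBATCH --mem=64G              # within the 256 GB on these nodes
#SBATCH --time=01:00:00

nvidia-smi                     # report the GPU assigned to the job
EOF

# Submit from a login node with: sbatch gpu_job.sh
```

The `--gres=gpu:1` line asks the scheduler for a single GPU on the node; requesting fewer CPUs and less memory than the node total leaves room for other jobs to share the node, if the partition is configured to allow that.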

The cluster's compute hardware is a mix of NSF MRI-funded, faculty-funded, and university-funded equipment, representing more than $2M of investment over the last decade.