Accessing the System
Users should connect remotely to chip.rs.umbc.edu with their preferred remote connection client (terminal, WSL, Command Prompt, PuTTY, or VS Code). Login credentials are identical to UMBC credentials. If you are not on the campus VPN or on the “eduroam” wifi, you will need to complete a Duo 2FA prompt to finish logging in.
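For example, to connect from a terminal with an OpenSSH client (replace username with your UMBC username):

    $ ssh username@chip.rs.umbc.edu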
User Environment
By default, the system displays the Message of the Day (motd) after a successful login and starts a bash prompt in the user’s home directory. Every user is associated with a UNIX group, normally beginning with “pi_”, e.g. pi_professor.
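To see which UNIX groups you belong to, you can run the standard groups command; the output below is illustrative, assuming a group named pi_professor:

    $ groups
    pi_professor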
Home Directory Storage
Every user’s home directory has a storage quota of 500MB. The amount of available space in your home directory can be found by running df -h /home/username.
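For example (illustrative output; username is a placeholder, and the numbers are invented to match the 500MB quota):

    $ df -h /home/username
    Filesystem            Size  Used Avail Use% Mounted on
    host:/home/username   500M   20M  480M   4% /home/username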
Research Storage
Any research storage volume associated with your UNIX group is located at /umbc/rs/, e.g., /umbc/rs/professor for the UNIX group pi_professor. This research storage volume is the same volume that is shared with the taki.rs.umbc.edu system.
An alias to these research volumes has been created for you. To access the common research volume, enter the command <professorName>_common; to access your user-specific research volume, enter <professorName>_user. You can view all aliases available to you by running the command alias from anywhere on the system.
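For example, for a professor with the hypothetical UNIX group pi_professor:

    $ professor_common    # hypothetical alias; moves you to the group's common research volume
    $ alias               # list every alias defined for your account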
Research Storage from ada
Any research storage volume associated with your UNIX group is located at /umbc/ada/, e.g., /umbc/ada/professor for the UNIX group pi_professor. This research storage volume is the same volume that is shared with the ada.rs.umbc.edu system.
An alias to these research volumes has been created for you. To access the root level of the ada research volume, enter the command <professorName>_ada. You can view all aliases available to you by running the command alias from anywhere on the system.
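Continuing the hypothetical pi_professor example:

    $ professor_ada       # hypothetical alias; moves you to /umbc/ada/professor
    $ alias | grep _ada   # show only the ada-related aliases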
Compute Hardware
Year Purchased | CPU Cores | CPU Memory | GPU Cards | GPU Memory | Nodes | CPU Architecture
2018           | 36        | 376GB      | 0         | N/A        | 0     | Intel Skylake
2021           | 48        | 187GB      | 0         | N/A        | 0     | Intel Cascade Lake
2024           | 64        | 1024GB     | 0         | N/A        | 13    | Intel Emerald Rapids
2024           | 64        | 512GB      | 0         | N/A        | 38    | Intel Emerald Rapids
2024           | 32        | 256GB      | 2 (H100)  | 100GB      | 2     | Intel Emerald Rapids
2024           | 32        | 256GB      | 4 (L40S)  | 48GB       | 8     | Intel Emerald Rapids
Table 1: Overview of Hardware-level partitions configured in slurm on chip
Workload Management (slurm)
Two slurm clusters are accessible within the chip cluster: the CPU hardware is dedicated to the chip-cpu slurm cluster, and the GPU hardware is dedicated to the chip-gpu slurm cluster. Each slurm cluster has its own set of usage rules; see the page dedicated to each cluster for more details.
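In a multi-cluster slurm setup, the -M (--clusters) flag selects which cluster a command targets; a minimal sketch, assuming that applies here (job.slurm is a placeholder script name):

    $ sinfo -M chip-cpu              # show partitions and nodes in the CPU cluster
    $ sinfo -M chip-gpu              # show partitions and nodes in the GPU cluster
    $ sbatch -M chip-gpu job.slurm   # submit a batch script to the GPU cluster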