Using your HPCF account

The following page gives a tour through a typical maya account. While it is a standard Unix account, there are several special features to note, including the location and intent of the different storage areas and the availability of software. If you’re having trouble with any of the material, or believe that your account may be missing something, contact user support.

Connecting to maya

The only nodes with a connection to the outside network are the user nodes. Internally, their full hostnames are maya-usr1.rs.umbc.edu and maya-usr2.rs.umbc.edu (note the “-usr1” and “-usr2”). From outside, use the hostname maya.rs.umbc.edu. To log in to the system, you must use a secure shell client such as SSH from Unix/Linux, PuTTY from Windows, or similar. For example, suppose we’re connecting to maya from the Linux machine “linux1.gl.umbc.edu”. We will use “araim1” as our example user throughout this page.

araim1@linux1.gl.umbc.edu[16]$ ssh araim1@maya.rs.umbc.edu
WARNING: UNAUTHORIZED ACCESS to this computer is in violation of Criminal
         Law Article section 8-606 and 7-302 of the Annotated Code of MD.

NOTICE:  This system is for the use of authorized users only. 
         Individuals using this computer system without authority, or in
         excess of their authority, are subject to having all of their
         activities on this system monitored and recorded by system
         personnel.


Last login: Mon Mar  3 14:17:05 2014 from ...

  UMBC High Performance Computing Facility	     http://www.umbc.edu/hpcf
  --------------------------------------------------------------------------
  If you have any questions or problems using this system please send mail to 
  hpc-support@lists.umbc.edu.  System technical issues should be reported
  via RT ticket to the "Research Computing" queue at https://rt.umbc.edu/

  Remember that the Division of Information Technology will never ask for
  your password. Do NOT give out this information under any circumstances.
  --------------------------------------------------------------------------

[araim1@maya-usr1 ~]$ 

Replace “araim1” with your UMBC username (that you use to log into myUMBC). You will be prompted for your password when connecting; your password is your myUMBC password. Notice that connecting to maya.rs.umbc.edu puts us on maya-usr1. We may connect to the other user node with the following.

[araim1@maya-usr1 ~]$ ssh maya-usr2
... same welcome message ...
[araim1@maya-usr2 ~]$

As another example, suppose we’re SSHing to maya from a Windows machine with PuTTY. When setting up a connection, use “maya.rs.umbc.edu” as the hostname. Once you connect, you will be prompted for your username and password, as mentioned above.

If you intend to do something requiring a graphical interface, such as view plots, then see running X Windows programs remotely.

Storage areas

The directory structure that DoIT will set up as part of your account creation is designed to facilitate the work of research groups consisting of several users and also reflects the fact that all HPCF accounts must be sponsored by a faculty member at UMBC. This sponsor will be referred to as PI (short for principal investigator) in the following. A user may be a member of one or several research groups on maya. Each research group has several storage areas on the system in the following specified locations. See System Description for a higher level overview of the storage and the cluster architecture.

Note that some special users, such as students in MATH 627, may not belong to a research group and therefore may not have any of the group storage areas set up.

User Home
Location: /home/username/
This is where the user starts after logging in to maya. Only accessible to the user by default. The default size is 100 MB, and the storage is located on the management node. This area is backed up nightly.

User Workspace
Symlink: /home/username/pi_name_user
Mount point: /umbc/xfs1/pi_name/users/username/
A central storage area for the user’s own data. It is accessible only to the user, with read permission for the PI, but is not accessible to other group members by default. Ideal for storing output of parallel programs, for example. This area is not backed up.

Group Workspace
Symlink: /home/username/pi_name_common
Mount point: /umbc/xfs1/pi_name/common/
The same functionality and intended use as User Workspace, except that this area is accessible with read and write permission to all members of the research group.

Scratch Space
Location: /scratch/NNNNN
Each compute node on the cluster has local /scratch storage. On nodes 1-69 the total scratch space available is 322 GB, on nodes 70-155 it is 132 GB, and on nodes 156-237 it is 361 GB. The space in this area is shared among current users of the node, so the total amount available will vary based on system usage. This storage is convenient temporary space to use while your job is running, but note that your files here persist only for the duration of the job. Use of this area is encouraged over /tmp, which is also needed by critical system processes. Note that a subdirectory NNNNN (e.g. 22704) is created for your job by the scheduler at runtime. For information on how to access scratch space from your job, see the how to run page.

Tmp Space
Location: /tmp/
Each machine on the cluster has its own local /tmp storage, as is customary on Unix systems. On all nodes the tmp space available is 25 GB. This scratch area is shared with other users and is purged periodically by the operating system, so it is only suitable for temporary scratch storage. Use of /scratch is encouraged over /tmp (see above).

AFS
Location: /afs/umbc.edu/users/u/s/username/
Your AFS storage is conveniently available on the cluster, but can only be accessed from the user node. The “/u/s” in the directory name should be replaced with the first two letters of your username (for example, user “straha1” would have directory /afs/umbc.edu/users/s/t/straha1).

“Mount point” indicates the actual location of the storage on maya’s filesystem. Many users prefer to have a link to the storage from their home directory for easier navigation; the “Symlink” field gives a suggested location for this link. For example, once the link is created, you can use the command “cd ~/pi_name_user” to get to the User Workspace for the given PI. These links may be created for you as part of the account creation process; if they do not yet exist, simple instructions are provided below to create them yourself.

The amount of space available in the PI-specific areas depends on the allocation given to your research group. Your AFS quota is determined by DoIT. The quota for everyone’s home directory is generally the same.

Some research groups have additional storage areas, or have storage organized in a different way than shown above. For more information, contact your PI or user support.

Note that listing the contents of /umbc/xfs1 may not show storage areas for all PIs. This is because PI storage is only loaded when it is in use. If you attempt to access a PI’s subdirectory in /umbc/xfs1 or /umbc/lustre, it should be loaded (seamlessly) if it was previously offline.

The tutorial below will walk you through your home directory, and the specialized storage areas on maya.

A brief tour of your account

This section assumes that you already have an account, and you’re a member of a research group. If you need to apply for an account, see the account request form. If you’re not a member of a research group, you won’t have access to the various group spaces.

Home directory

First, log in to maya from your local machine by SSH:

me@mymachine:~> ssh username@maya.rs.umbc.edu
Password: (type your password)
WARNING: UNAUTHORIZED ACCESS to this computer is in violation of Criminal
         Law Article section 8-606 and 7-302 of the Annotated Code of MD.

NOTICE:  This system is for the use of authorized users only. 
         Individuals using this computer system without authority, or in
         excess of their authority, are subject to having all of their
         activities on this system monitored and recorded by system
         personnel.


Last login: Sat Dec  5 01:39:23 2009 from hpc.rs.umbc.edu

  UMBC High Performance Computing Facility	     http://www.umbc.edu/hpcf
  --------------------------------------------------------------------------
  If you have any questions or problems regarding this system, please send
  mail to hpc-support@lists.umbc.edu.

  Remember that the Division of Information Technology will never ask for
  your password. Do NOT give out this information under any circumstances.
  --------------------------------------------------------------------------

[araim1@maya-usr1 ~]$

The Bash shell is the default shell for maya users, and it is the shell assumed in the documentation and examples on this page. Check your shell with the command “echo $SHELL”, or run “env” and look for SHELL in the resulting lines of output.

[araim1@maya-usr1 ~]$ echo $SHELL
/bin/bash
[araim1@maya-usr1 ~]$

At any given time, the directory that you are currently in is referred to as your current working directory. Since you just logged in, your home directory is your current working directory. The “~” symbol is shorthand for your home directory. The program “pwd” tells you the full path of the current working directory, so let’s run pwd to see where your home directory really is:

araim1@maya-usr1:~$ pwd
/home/araim1

Now let’s use ls to get more information about your home directory.

araim1@maya-usr1:~$ ls -ld ~
drwx------ 23 araim1 pi_nagaraj 4096 Oct 29 22:35 /home/araim1

There is quite a bit of information on this line. If you’re not sure what it means, this would be a good time to find a Linux/Unix reference; one example available on the web is The Linux Cookbook. What we want to emphasize is the string of permissions. The string “drwx------” indicates that only you have read, write, or execute access to this directory. (For a directory, “execute” access means the ability to browse inside it.) Therefore your home directory is private. The space in your home directory is limited, though: you are only allowed to create up to 10,000 files, taking up a total of 250,000 kB of storage space. That isn’t much space for high performance computing, so you should plan on using the special storage areas that have been set up for you.
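You can check how close you are to these limits with standard Unix tools. A quick sketch:

```shell
# Count regular files under the home directory (limit: 10,000 files)
find ~ -type f | wc -l

# Total size of the home directory in kB (limit: 250,000 kB)
du -sk ~
```

If either number is near its limit, consider moving data to your User or Group Workspace.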

Modules

Modules are a simple way of preparing your environment to use many of the major applications on maya. Modules are normally loaded for the duration of an SSH session. They can be unloaded as well, and can also be set to automatically load each time you log in. The following shows the modules which are loaded for you by default (version numbers will change as the cluster is upgraded).

[av02016@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot                  4) gcc/4.8.4          7) intel/compiler/64/15.0/full  10) quoter         13) tmux/2.1
  2) matlab/r2016b        5) hwloc/1.9.1        8) intel-mpi/64/5.0.3/048       11) monitor        14) default-environment
  3) comsol/5.1           6) slurm/14.11.11     9) texlive/2014                 12) git/2.0.4
[av02016@maya-usr1 ~]$
                                     

This means that SLURM, GCC, MATLAB, TeX Live, COMSOL, and the Intel compiler plus the Intel MPI implementation are usable by default as soon as you log in. If we wish to use other software, such as R (for statistical computing), we must first load the appropriate module.

[av02016@maya-usr1 ~]$ Rscript -e 'exp(1)'
-bash: Rscript: command not found
[av02016@maya-usr1 ~]$ module load R/3.2.2
[av02016@maya-usr1 ~]$ Rscript -e 'exp(1)'
[1] 2.718282
[av02016@maya-usr1 ~]$

To use compilers other than the default, you need to unload and load modules from time to time. If you lose track and want to get back to the default state, try the following commands:

[hu6@maya-usr1 ~]$ module purge
[hu6@maya-usr1 ~]$ module load default-environment

Complete documentation of the module commands and options can be found with:

[av02016@maya-usr1 ~]$ man module

We can list all available modules that have been defined by the system administrators. The output below shows only a small part of the full list available on maya. (Note: your listing may differ, depending on the current configuration.)

[av02016@maya-usr1 ~]$ module avail

-------------------------------------------- /cm/shared/modulefiles -----------------------------------------
acml/gcc/64/5.3.1                                           intel/gdb/32/7.8.0/2016.3.210
acml/gcc/fma4/5.3.1                                         intel/gdb/64/7.8.0/2016.3.210
acml/gcc/mp/64/5.3.1                                        intel/ipp/32/8.1/2013_sp1.3.174
acml/gcc/mp/fma4/5.3.1                                      intel/ipp/32/8.2/2015.5.223
acml/gcc-int64/64/5.3.1                                     intel/ipp/32/9.0.3/2016.3.210
acml/gcc-int64/fma4/5.3.1                                   intel/ipp/64/7.1/2013.5.192
acml/gcc-int64/mp/64/5.3.1                                  intel/ipp/64/8.1/2013_sp1.3.174
acml/gcc-int64/mp/fma4/5.3.1                                intel/ipp/64/8.2/2015.5.223
acml/intel/64/5.3.1                                         intel/ipp/64/9.0.3/2016.3.210
acml/intel/fma4/5.3.1                                       intel/mkl/32/11.1/2013_sp1.3.174
acml/intel/mp/64/5.3.1                                      intel/mkl/32/11.2/2015.5.223
acml/intel/mp/fma4/5.3.1                                    intel/mkl/32/11.3.3/2016.3.210
acml/intel-int64/64/5.3.1                                   intel/mkl/64/11.0/2013.5.192
acml/intel-int64/fma4/5.3.1                                 intel/mkl/64/11.1/2013_sp1.3.174
acml/intel-int64/mp/64/5.3.1                                intel/mkl/64/11.2/2015.5.223
acml/intel-int64/mp/fma4/5.3.1                              intel/mkl/64/11.3.3/2016.3.210
acml/open64/64/5.3.1                                        intel/mkl/mic/11.3.3/2016.3.210
acml/open64/fma4/5.3.1                                      intel/mpi/32/16.0.3/2016.3.210
acml/open64/mp/64/5.3.1                                     intel/mpi/64/5.1.3/2016.3.210
acml/open64/mp/fma4/5.3.1                                   intel/mpi/mic/5.1.3/2016.3.210

We can check the availability of a specific module with:

[av02016@maya-usr1 ~]$ module avail matlab

--------------------------------- /usr/cluster/modulefiles ------------------
matlab/r2013b          matlab/r2014b          matlab/r2015b          matlab/r2016b(default) matlab/r2017b
matlab/r2014a          matlab/r2015a          matlab/r2016a          matlab/r2017a

We can use the “show” command to see what a module does:

[av02016@maya-usr1 ~]$ module show matlab/r2013b
-------------------------------------------------------------------
/usr/cluster/modulefiles/matlab/r2013b:

prepend-path     PATH /usr/cluster/matlab/r2013b/bin
prepend-path     MLM_LICENSE_FILE 1701@license5.umbc.edu,1701@license6.umbc.edu,1701@license7.umbc.edu
setenv           MLROOT /usr/cluster/matlab/r2013b
-------------------------------------------------------------------

To unload a module, use:

[av02016@maya-usr1 ~]$ module unload R/3.2.2

Another useful module command is “swap”, which replaces one loaded module with another.

[av02016@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot             4) gcc/4.8.4        7) intel/compiler/64/15.0/full  10) quoter         13) tmux/2.1
  2) matlab/r2016b   5) hwloc/1.9.1      8) intel-mpi/64/5.0.3/048       11) monitor        14) default-environment
  3) comsol/5.1      6) slurm/14.11.11   9) texlive/2014                 12) git/2.0.4
[av02016@maya-usr1 ~]$ module swap gcc/5.5.0
[av02016@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot             4) gcc/5.5.0      7) intel/compiler/64/15.0/full    10) quoter         13) tmux/2.1
  2) matlab/r2016b   5) hwloc/1.9.1    8) intel-mpi/64/5.0.3/048         11) monitor        14) default-environment
  3) comsol/5.1      6) slurm/14.11.11 9) texlive/2014                   12) git/2.0.4

How to automatically load modules at login

The “initadd” command makes a module load on every login. This is useful for applications you use on a regular basis.

[av02016@maya-usr1 ~]$ module initadd R/3.2.2
[av02016@maya-usr1 ~]$ logout
Connection to maya.rs.umbc.edu closed.
[av02016@localhost ~]$ ssh -X av02016@maya.rs.umbc.edu
...
[av02016@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot            5) hwloc/1.9.1                   9) texlive/2014           13) tmux/2.1        17) R/3.1.2
  2) matlab/r2016b  6) slurm/14.11.11               10) quoter                 14) default-environment
  3) comsol/5.1     7) intel/compiler/64/15.0/full  11) monitor                15) R/3.2.2
  4) gcc/4.8.4      8) intel-mpi/64/5.0.3/048       12) git/2.0.4              16) openmpi/gcc/64/1.8.5

[av02016@maya-usr1 ~]$

To view your initial list:

[av02016@maya-usr1 ~]$ module initlist

bash initialization file $HOME/.bash_profile loads modules:
        openmpi/gcc/64 R/3.2.2
        R/3.1.2


bash initialization file $HOME/.bashrc loads modules:
        default-environment R R/3.2.2

To remove a module from your initial list:

[av02016@maya-usr1 ~]$ module initrm R/3.2.2
Removed R/3.2.2
[av02016@maya-usr1 ~]$

More information on modules is available here.

Compilers and MPI implementations

Our system runs on CentOS 6.8. We support only the Bash shell. The following explains how to access the available compiler suites and MPI implementations on maya.

We supply three compiler suites:

  • Intel compiler suite (default) with Composer XE – C, C++, Fortran 77, 90, 95, and 2003. This includes the Intel Math Kernel Library (LAPACK/BLAS).
  • GNU compiler suite – C, C++, Fortran 77, 90, and 95.
  • Portland Group (PGI) compiler suite – C, C++, Fortran 77, 90, and 95, plus limited Fortran 2003 support. This includes a commercial, optimized ACML (LAPACK/BLAS/FFT) math library.

Maya lets the user choose any combination of compiler suite and MPI implementation. The MPI implementations available on maya are listed below in the parallel compiling section.

Serial Compiling

The command used to compile code depends on the language and compiler used.

Language Intel GNU PGI
C icc gcc pgcc
C++ icpc g++ pgc++
Fortran ifort gfortran pgf77/pgf90/pgf95

Since the Intel compiler suite is the default on maya, we can use the commands in the Intel column of the table above without loading any extra modules. Suppose, however, that we want to compile a serial C program with the PGI compiler suite. Since the PGI compilers are not loaded by default, we need to load the required module with the “module load” command, as shown below.

[av02016@maya-usr1 ~]$ pgcc
-bash: pgcc: command not found
[av02016@maya-usr1 ~]$ module avail pgi

------------------------------------------------ /cm/shared/modulefiles --------------------------------------
pgi/64/16.5
[av02016@maya-usr1 ~]$ module load pgi/64/16.5
[av02016@maya-usr1 ~]$

Note that there are several versions of the GNU compiler suite (gcc) available on maya:

[av02016@maya-usr1 ~]$ module avail gcc

----------------------------------------------------- /cm/shared/modulefiles -------------------
gcc/4.8.4

---------------------------------------------------------- /cm/local/modulefiles --------------------
gcc/5.1.0

----------------------------------------------------- /usr/cluster/contrib/modulefiles --------------
gcc/5.5.0

Parallel Compiling

For parallel computing, the module utility is used to switch between the different MPI implementations available on maya. We provide three Infiniband-enabled MPI implementations: Intel MPI, MVAPICH2, and OpenMPI.

By default, your account is set up to use the Intel compiler with the Intel MPI implementation. To verify this, issue the following command.

[av02016@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot                           4) gcc/4.8.4                     7) intel/compiler/64/15.0/full  10) quoter                       13) tmux/2.1
  2) matlab/r2016b                 5) hwloc/1.9.1                   8) intel-mpi/64/5.0.3/048       11) monitor                      14) default-environment
  3) comsol/5.1                    6) slurm/14.11.11                9) texlive/2014                 12) git/2.0.4
[av02016@maya-usr1 ~]$

In order to load an MPI implementation, we first have to load the required compiler suite. The MPI implementations available under each compiler suite are given below.

Intel compiler suite

For the Intel compiler, Intel MPI, OpenMPI version 1.8.5, and MVAPICH2 version 2.1 are available on maya.

Intel MPI

From the “module list” output above, you can see that the Intel compiler suite and the Intel MPI implementation are available by default, so you can use this compiler and MPI implementation directly. A table with the commands needed to compile MPI code in C, C++, and Fortran for each MPI implementation is provided at the end of this section. An example of using Intel MPI is available here.

MVAPICH2

To load MVAPICH2 version 2.1 under the Intel compiler suite:

[av02016@maya-usr1 ~]$ module load mvapich2/intel/64/2.1
[av02016@maya-usr1 ~]$

Notice that the format of the module path is “MPI implementation/compiler/architecture/version”.

OpenMPI

We can load the OpenMPI implementation under the Intel compiler suite as follows:

[av02016@maya-usr1 ~]$ module load openmpi/intel/64/1.8.5
[av02016@maya-usr1 ~]$

IMPORTANT: Be aware of how each MPI implementation interacts with SLURM, as some require particular commands and command syntax to work. Please check out this page, Lawrence Livermore National Laboratory’s official document on how to get certain MPI implementations to work with SLURM.

IMPORTANT:  If you have multiple MPI modules loaded, the last one loaded will be first on your path. For example, say we have the intel-mpi module loaded and afterwards we load mvapich2. The version of “mpirun” we then get is from mvapich2, not intel-mpi. When we unload the mvapich2 module, we again get mpirun from intel-mpi.

The module files set a number of environment variables, and where they conflict, the last loaded module wins.
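This “last loaded wins” behavior follows from how module files prepend their directories to PATH. A minimal illustration in plain shell (the /opt paths are hypothetical stand-ins for two MPI modules):

```shell
# Each "module load" effectively prepends a bin directory to PATH,
# so the most recently prepended directory is searched first.
export PATH=/opt/a/bin:$PATH   # like loading intel-mpi (hypothetical path)
export PATH=/opt/b/bin:$PATH   # like loading mvapich2 afterwards
echo "$PATH"                   # /opt/b/bin now comes before /opt/a/bin
```

Unloading a module removes its entry, restoring the previous search order.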

GNU compiler suite

For the gcc compiler, OpenMPI version 1.8.5 and MVAPICH2 version 2.1 are available on maya.

MVAPICH2

To load the combination (MVAPICH2 + gcc), first load the MPI implementation and then the compiler suite.

[av02016@maya-usr1 ~]$ module load mvapich2/gcc/64/2.1
[av02016@maya-usr1 ~]$
[av02016@maya-usr1 ~]$ module load gcc
[av02016@maya-usr1 ~]$

OpenMPI

To load OpenMPI with the gcc compiler, use the commands below:

[av02016@maya-usr1 ~]$ module load openmpi/gcc/64/1.8.5
[av02016@maya-usr1 ~]$
[av02016@maya-usr1 ~]$ module load gcc
[av02016@maya-usr1 ~]$

Portland Group compiler suite

For the PGI compiler, OpenMPI version 1.8.5 and MVAPICH2 version 2.1 are available on maya.

MVAPICH2

To load MVAPICH2 with the PGI compiler, use:

[av02016@maya-usr1 ~]$ module load mvapich2/pgi/64/2.1
[av02016@maya-usr1 ~]$
[av02016@maya-usr1 ~]$ module load pgi
[av02016@maya-usr1 ~]$

OpenMPI

To load OpenMPI with the PGI compiler suite, use:

[av02016@maya-usr1 ~]$ module load openmpi/pgi/64/1.8.5
[av02016@maya-usr1 ~]$
[av02016@maya-usr1 ~]$ module load pgi
[av02016@maya-usr1 ~]$

The following table provides the command necessary to compile MPI code for each implementation of MPI in C, C++, and Fortran.

Language Intel MPI MVAPICH2 OpenMPI
C mpiicc mpicc mpicc
C++ mpiicpc mpic++ mpiCC
Fortran mpiifort mpif77/mpif90 mpif77/mpif90

To access modern architectures we supply:

  • CUDA for GPU programming
  • Intel compiler suite and Intel MPI for Phi programming

We use the SLURM cluster management and job scheduling system.

See resources for maya for a more complete list of the available software, along with tutorials to help you get started. For more details, see the manual offered by Bright Computing.

Group membership

Your account has membership in one or more Unix groups. On maya, groups are usually (but not always) organized by research group and named after the PI. The primary purpose of these groups is to facilitate sharing of files with other users, through the Unix permissions system. To see your Unix groups, try the following command:

[araim1@maya-usr1 ~]$ groups
pi_nagaraj contrib alloc_node_ssh hpcreu pi_gobbert
[araim1@maya-usr1 ~]$ 

In the example above, the user is a member of five groups – two of them correspond to research groups.

Special storage areas

A typical account on maya has access to several central storage areas. These areas are not backed up, and can be classified as “user” or “group” storage. See above for the complete descriptions. For each of your research groups, you should have access to the following areas:

[jongraf1@maya-usr1 ~]$ ls -d /umbc/xfs1/gobbert/users/
/umbc/xfs1/gobbert/users/
[jongraf1@maya-usr1 ~]$ ls -d /umbc/xfs1/gobbert/common/
/umbc/xfs1/gobbert/common/
[jongraf1@maya-usr1 ~]$ 

We recommend creating the following symlinks to your home directory for easier navigation.

jongraf1@maya-usr1:~$ ls -l ~/gobbert_common ~/gobbert_user 
lrwxrwxrwx 1 jongraf1 pi_gobbert 33 Jan 18 15:48 gobbert_common -> /umbc/xfs1/gobbert/common
lrwxrwxrwx 1 jongraf1 pi_gobbert 33 Jan 18 15:48 gobbert_user -> /umbc/xfs1/gobbert/users/jongraf1

If any of these do not exist, you may create them using the following commands. You only need to do this once; repeat it for each PI if you are a member of multiple research groups.

[jongraf1@maya-usr1 ~]$ ln -s /umbc/xfs1/gobbert/common ~/gobbert_common
[jongraf1@maya-usr1 ~]$ ln -s /umbc/xfs1/gobbert/users/jongraf1 ~/gobbert_user

In the “ls” output above, we see that these are symbolic links rather than normal directories. Whenever you access “/home/jongraf1/gobbert_common”, you are actually redirected to “/umbc/xfs1/gobbert/common”. If the link “/home/jongraf1/gobbert_common” is removed, the actual directory “/umbc/xfs1/gobbert/common” is not affected. Note that certain research groups may need different links than the standard ones shown here; check with your PI.
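This behavior is easy to verify yourself. A sketch using a throwaway directory in /tmp rather than the real workspace (the names are our own):

```shell
mkdir -p /tmp/demo_target              # stand-in for the real mount point
ln -s /tmp/demo_target ~/demo_link     # create a symbolic link to it
readlink ~/demo_link                   # shows the link's target path
rm ~/demo_link                         # removes only the link itself
ls -d /tmp/demo_target                 # the target directory is unaffected
```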

Group Workspace

The intention of Group Workspace is to store reasonably large volumes of data, such as large datasets from computations, which can be accessed by everyone in your group. By default, the permissions of Group Workspace are set as follows to enable sharing among your group:

jongraf1@maya-usr1:~$ ls -ld /umbc/xfs1/gobbert/common
drwxrws--- 2 pi_gobbert pi_gobbert 2 Jan 18 14:56 /umbc/xfs1/gobbert/common/

The string “drwxrws---” indicates that the PI, who is the owner of the directory, has read, write, and execute permissions, and that other members of the group also have read, write, and execute permissions. The “s” indicates that all directories created under this directory inherit the same group ownership. (If this attribute were set but execute permission were not enabled for the group, it would be displayed as a capital letter “S”.)
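You can reproduce this setup on any directory you own. A minimal sketch (the directory name is our own example):

```shell
# Create a directory and give the group rwx plus the setgid bit.
# Octal 2770 = setgid (2) + rwx for owner and group (770).
mkdir shared_area
chmod 2770 shared_area
ls -ld shared_area   # the permission string begins with "drwxrws---"
```

A PI could additionally run “chgrp” with the research group’s name on such a directory so that files created inside belong to that group.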

User Workspace

Where Group Workspace is intended as an area for collaboration, User Workspace is intended for individual work. Again, it is intended to store reasonably large volumes of data. Your PI and other group members can see your work in this area, but cannot edit it.

jongraf1@maya-usr1:~$ ls -ld /umbc/xfs1/gobbert/users/jongraf1
drwxr-sr-x 3 jongraf1 pi_gobbert 3 Jan 18 21:59 /umbc/xfs1/gobbert/users/jongraf1

The string “drwxr-sr-x” means that only you may make changes inside this directory, but anyone in your group can list or read the contents. Other users appear to also have this access, but they are blocked further up the directory tree from accessing your PI’s storage:

jongraf1@maya-usr1:~$ ls -ld /umbc/xfs1/gobbert/
drwxrws--- 3 pi_gobbert pi_gobbert 3 Jan 18 21:59 /umbc/xfs1/gobbert/

Checking disk usage vs. storage limits

There are two types of storage limits to be aware of: quotas, and the physical limits of the filesystems where your space is hosted. The following command checks the space on User Workspace and Group Workspace:

[jongraf1@maya-usr1 ~]$ df -h ~/gobbert_user ~/gobbert_common/
Filesystem            Size  Used Avail Use% Mounted on
xfs1:/export/gobbert   10T  9.1T 1004G  91% /umbc/xfs1/gobbert
xfs1:/export/gobbert   10T  9.1T 1004G  91% /umbc/xfs1/gobbert

Of course your output will depend on which research group(s) you are a member of. The first column indicates where the data is physically stored. The last column, “/umbc/xfs1/gobbert”, indicates where this storage is mounted on the local machine. To check the overall usage of the /umbc/xfs1 storage, follow the example below:

[jongraf1@maya-usr1 ~]$ df -h | grep gobbert
xfs1:/export/gobbert   10T  9.1T 1004G  91% /umbc/xfs1/gobbert

For tips on managing your disk space usage, see How to check disk usage.

More about permissions

Standard Unix permissions are used on maya to control which users have access to your files. We’ve already seen some examples of this. It’s important to emphasize that this is the mechanism that determines the degree of sharing, and conversely of privacy, of your work on this system. In setting up your account, we’ve taken a few steps to simplify things, assuming you use the storage areas for the basic purposes for which they were designed. This should be sufficient for many users, but you can also customize your use of the permissions system if you need additional privacy, want to share with additional users, etc.

Changing a file’s permissions

For existing files and directories, you can modify permissions with the “chmod” command. As a very basic example:

[araim1@maya-usr1 ~]$ touch tmpfile
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@maya-usr1 ~]$ chmod 664 tmpfile 
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rw-rw-r-- 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@maya-usr1 ~]$ 

See “man chmod” for more information, or the Wikipedia page for chmod.
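Besides octal modes like 664, chmod also accepts symbolic modes, which are often easier to read. A short sketch (the filename is our own example):

```shell
touch tmpfile
chmod u=rw,g=rw,o=r tmpfile   # equivalent to octal 664
ls -l tmpfile                 # -rw-rw-r--
chmod o-r tmpfile             # revoke read permission from other users
ls -l tmpfile                 # -rw-rw----
```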

Changing a file’s group

For users in multiple groups, you may find the need to change a file’s ownership to a different group. This can be accomplished on a file-by-file basis with the “chgrp” command:

[araim1@maya-usr1 ~]$ touch tmpfile 
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rw-rw---- 1 araim1 pi_nagaraj 0 Jun 14 18:00 tmpfile
[araim1@maya-usr1 ~]$ chgrp pi_gobbert tmpfile 
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:00 tmpfile
[araim1@maya-usr1 ~]$ 

You may also change your “currently active” group using the “newgrp” command:

[araim1@maya-usr1 ~]$ id
uid=28398(araim1) gid=1057(pi_nagaraj) groups=1057(pi_nagaraj),32296(pi_gobbert)
[araim1@maya-usr1 ~]$ newgrp pi_gobbert
[araim1@maya-usr1 ~]$ id
uid=28398(araim1) gid=32296(pi_gobbert) groups=1057(pi_nagaraj),32296(pi_gobbert)

Now any new files created in this session will belong to the group pi_gobbert

[araim1@maya-usr1 ~]$ touch tmpfile2
[araim1@maya-usr1 ~]$ ls -la tmpfile2 
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:05 tmpfile2
[araim1@maya-usr1 ~]$ 

Umask

By default, your account will have a line in ~/.bashrc which sets your “umask”

umask 007

The umask is traditionally set to 022 on Unix systems, so this is a customization on maya. The umask helps to determine the permissions for new files and directories you create. Usually when you create a file, you don’t specify what its permissions will be. Instead some defaults are used, but they may be too liberal. For example, suppose we created a file that got the following default permissions.

[araim1@maya-usr1 ~]$ ls -la secret-research.txt
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:02 secret-research.txt

All users on the system could read this file if they had access to its directory. The umask allows us to turn off specific permissions for newly created files. Suppose we want all new files to have “rwx” turned off for anyone who isn’t us (araim1) or in our group (pi_nagaraj). A umask setting of “007” accomplishes this. To illustrate what this means, notice that 007 is a three-digit number in octal (base 8), which we can also write as the nine-digit binary number 000000111. Similarly, “rwxrwxr-x” (from our file above) corresponds to the nine-digit binary number 111111101; dashes correspond to 0’s and letters correspond to 1’s. The umask is applied in the following way to set the permissions for our new file

        111111101    <-- proposed permissions for our new file
AND NOT(000000111)   <-- the mask
------------------
=       111111000
=       rwxrwx---    <-- permissions for our new file

In other words, umask 007 ensures that outside users have no access to your new files. See the Wikipedia entry for umask for more explanation and examples. On maya, the storage areas’ permissions are already set up to enforce specific styles of collaboration. We’ve selected 007 as the default umask so that sharing with your group remains possible, while sharing with outside users is prevented. If you generally want to prevent even your group from modifying your files, including in the shared storage areas, you may want to use a more restrictive umask.
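To see the umask in action, compare the permissions of files created under two different settings (a quick sketch you can run in any scratch directory):

```shell
# With umask 007, "other" permissions are masked off entirely
umask 007
touch file_group.txt
ls -l file_group.txt     # -rw-rw---- : group can read/write, others cannot

# With the traditional 022, group and others lose only write permission
umask 022
touch file_world.txt
ls -l file_world.txt     # -rw-r--r-- : everyone can read
```

Files are requested with mode 666 (rw for everyone) by commands like touch, and the umask then subtracts bits; that is why neither file ends up executable.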

If you have any need to change your umask, you can do so permanently by editing ~/.bashrc, or temporarily for the current SSH session by using the umask command directly.

[araim1@maya-usr1 ~]$ umask
0007
[araim1@maya-usr1 ~]$ umask 022
[araim1@maya-usr1 ~]$ umask
0022
[araim1@maya-usr1 ~]$ 

Notice that typing “umask” with no arguments reports your current umask setting.

Configuring permissions of Group storage areas (PI only)

If you are a PI, you can add or remove the group write permissions (the w in r-s/rws) by using the chmod command. You may want to do this if you intend to place materials here for your group to read, but not to edit. To add group write permissions, letting all members of your group create or delete files and directories in a directory called restricted_permission in your Group Workspace area:

araim1@maya-usr1:~$ chmod g+w ~/nagaraj_common/restricted_permission

To remove group write permissions so that only araim1 and the PI nagaraj can create or delete files in the directory:

araim1@maya-usr1:~$ chmod g-w ~/nagaraj_common/restricted_permission

AFS Storage

Your AFS partition is the directory where your personal files are stored when you use the DoIT computer labs or the gl.umbc.edu systems. You can access this partition from maya. In order to access AFS, you need an AFS token. You can see whether you currently have an AFS token

straha1@maya-usr1:~> tokens

Tokens held by the Cache Manager:

Tokens for afs@umbc.edu [Expires Oct 25 00:16]
   --End of list--

The “Tokens for afs@umbc.edu” line tells us that we currently have tokens that let us access UMBC’s AFS storage. The expiration date (“Expires Oct 25 00:16”) tells us when our tokens will expire. When your tokens expire, an empty list will be returned

straha1@maya-usr1:~> tokens

Tokens held by the Cache Manager:

   --End of list--

We can renew our tokens using the “kinit” and “aklog” commands as follows. Note that kinit asks for your MyUMBC password.

[araim1@maya-usr1 ~]$ kinit
Password for araim1@UMBC.EDU: 
[araim1@maya-usr1 ~]$ aklog
[araim1@maya-usr1 ~]$ tokens

Tokens held by the Cache Manager:

User's (AFS ID 28398) tokens for afs@umbc.edu [Expires Apr  4 05:57]
   --End of list--
[araim1@maya-usr1 ~]$

The “kinit” command may only be necessary for SSH sessions using a public/private key pair, where typing the password is bypassed at login time.

How to create simple files and directories

Now let’s try creating some files and directories. First, let’s make a directory named “testdir” in your home directory.

araim1@maya-usr1:~$ mkdir testdir
araim1@maya-usr1:~$ ls -ld testdir
drwxr-x--- 2 araim1 nagaraj 4096 Oct 30 00:12 testdir
araim1@maya-usr1:~$ cd testdir
araim1@maya-usr1:~/testdir$

The mkdir command created the directory testdir. Since your current working directory was ~ when you ran that command, testdir is inside your home directory; it is thus said to be a subdirectory of ~. The cd command changed your working directory to ~/testdir, which is reflected by the new prompt: araim1@maya-usr1:~/testdir$. Now let’s create a file in testdir:

araim1@maya-usr1:~/testdir$ echo HELLO WORLD > testfile
araim1@maya-usr1:~/testdir$ ls -l testfile
-rw-r----- 1 araim1 pi_groupname 12 Oct 30 00:16 testfile
araim1@maya-usr1:~/testdir$ cat testfile
HELLO WORLD
araim1@maya-usr1:~/testdir$ cat ~/testdir/testfile
HELLO WORLD
araim1@maya-usr1:~/testdir$

The echo command simply prints out its arguments (“HELLO WORLD”). The “>” tells your shell to send the output of echo into the file testfile. Since your current working directory is ~/testdir, testfile was created in testdir, and its full path is therefore ~/testdir/testfile. The cat program prints out (and can concatenate) the contents of files; its argument (testfile or ~/testdir/testfile) names the file to print. As you can see, testfile does indeed contain “HELLO WORLD”. Now let’s delete testdir and testfile. To remove our directory with the “rmdir” command, we must first ensure that it is empty:

araim1@maya-usr1:~/testdir$ rm -i testfile
rm: remove regular file `testfile'? y

Now we delete the testdir directory with rmdir:

araim1@maya-usr1:~/testdir$ cd ~
araim1@maya-usr1:~$ rmdir testdir
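Note that rmdir refuses to remove a directory that still contains files. For non-empty directories, rm with the -r (recursive) flag removes the directory and everything beneath it; use it carefully, since there is no undo. A small sketch (the directory names are just examples):

```shell
# Build a small directory tree (mkdir -p creates parent directories as needed)
mkdir -p project/data/raw
echo sample > project/data/raw/obs.txt

# rmdir would fail here because project is not empty;
# rm -r deletes the whole tree in one step
rm -r project
ls -d project 2>/dev/null || echo "project is gone"
```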

How to copy files to and from maya

Probably the most general way to transfer files between machines is Secure Copy (scp). Because some remote filesystems may be mounted on maya, it may also be possible to transfer files using “local” file operations like cp, mv, etc.

Method 1: Secure Copy (scp)

The maya cluster only allows secure connections from the outside. Secure Copy is the file copying program that is part of Secure Shell (SSH). To transfer files to and from maya, you must use scp or compatible software (such as WinSCP or SSHFS). On Unix machines such as Linux or Mac OS X, you can execute scp from a terminal window. Let’s explain the use of scp by the following example: user “araim1” has a file hello.c in the subdirectory math627/hw1 of his home directory on maya. To copy the file to the current directory on another Unix/Linux system with scp, use

[araim1@maya-usr1 ~]$ scp araim1@maya.rs.umbc.edu:~/math627/hw1/hello.c . 

Notice carefully the period “.” at the end of the above sample command; it signifies that you want the file copied to your current directory (without changing the name of the file). You can send data in the other direction too. Let’s say you have a file /home/bob/myfile on your machine and you want to send it to a subdirectory “work” of your maya home directory:

[araim1@maya-usr1 ~]$ scp /home/bob/myfile araim1@maya.rs.umbc.edu:~/work/

The “/” after “work” ensures that scp will fail if the directory “work” does not exist. If you leave out the “/” and “work” was not a directory already, then scp would create a file “work” that contains the contents of /home/bob/myfile (which is not what we want). You may also specify a different name for the file at its remote destination.

[araim1@maya-usr1 ~]$ scp /home/bob/myfile araim1@maya.rs.umbc.edu:~/work/myfile2

As with SSH, you can leave out the “araim1@” if your username is the same on both machines. That is the case on the GL login servers and the general lab Mac OS X and Linux machines. If you issue the command from within UMBC, you can also abbreviate the machine name to maya.rs. See the scp manual page for more information. You can access the scp manual page (referred to as a “man page”) on a Unix machine by running the command:

man scp

Method 2: AFS

Another way to copy data is to use the UMBC-wide AFS filesystem. The AFS filesystem is where your UMBC GL data is stored. That includes your UMBC email, your home directory on the gl.umbc.edu login servers and general lab Linux and Mac OS X machines, your UMBC webpage (if you have one), and your S: and some other drives on the general lab Windows machines. Any data you put in your AFS partition will be available on maya in the directory /afs/umbc.edu/users/a/r/araim1/ where “araim1” should be replaced with your username, and “a” and “r” should be replaced with the first and second letters of your username, respectively. As an example, suppose you’re using a Mac OS X machine in a UMBC computer lab and you’ve SSHed into maya in a terminal window. Then, in that window you can type:

[araim1@maya-usr1 ~]$ cp ~/myfile /afs/umbc.edu/users/a/r/araim1/home/

and your file myfile in your maya home directory will be copied to myfile in your AFS home directory. Then, you can access that copy of the file on the Mac you’re using, via ~/myfile. Note that it’s only a copy of the file; ~/myfile on your Mac is not the same file as ~/myfile on maya. However, ~/myfile on your Mac is the same as /afs/umbc.edu/users/a/r/araim1/home/myfile on both your Mac and maya.
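Since the AFS path is built from the first two letters of your username, you can construct it in the shell rather than typing it by hand. This sketch uses bash substring expansion, with araim1 as the example user:

```shell
# Build the AFS home path from a username:
# ${u:0:1} is the first letter, ${u:1:1} is the second
u=araim1
afs_home="/afs/umbc.edu/users/${u:0:1}/${u:1:1}/${u}/home"
echo "$afs_home"    # /afs/umbc.edu/users/a/r/araim1/home

# In practice you could substitute $USER for the literal username:
# afs_home="/afs/umbc.edu/users/${USER:0:1}/${USER:1:1}/${USER}/home"
```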

Make sure you’ve noted the section on AFS tokens above if you plan on using the AFS mount.

How to use the queuing system

See our How to compile C programs tutorial to learn how to run both serial and parallel programs on the cluster.

Things to check on your new maya account

Please run the following command to check your bashrc file:

[hu6@maya-usr1 ~]$ more .bashrc

You should have output to your screen like this:

# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific aliases and functions
umask 007


# Load modules needed for the maya system
if [ -e /cm ]; then
module load default-environment
fi
export SQUEUE_FORMAT="%.7i %.9P %.8j %.8u %.2t %.10M %.6D %7q %R"

If you are running large jobs on hpcf2013, tests have shown that an environment using the ‘Tag Matching Interface’ will yield the best performance. The underlying InfiniBand network environment is set through the ‘I_MPI_FABRICS’ environment variable. To change the variable, use the following commands:

[slet1@maya-usr1 ~]$ env | grep I_MPI_FABRICS #check to see what the variable is set to
I_MPI_FABRICS=shm:ofa
[slet1@maya-usr1 ~]$ export I_MPI_FABRICS=shm:tmi #set the variable to use the Tag Matching Interface
[slet1@maya-usr1 ~]$ env | grep I_MPI_FABRICS #check to see if it worked
I_MPI_FABRICS=shm:tmi
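Setting the variable with export only affects the current session. To make the choice persistent across logins, you could append it to your ~/.bashrc, as in this sketch (adjust the value to your needs):

```shell
# Add the export to ~/.bashrc so future sessions pick it up,
# skipping the append if the line is already present
grep -q 'I_MPI_FABRICS=shm:tmi' ~/.bashrc 2>/dev/null || \
    echo 'export I_MPI_FABRICS=shm:tmi' >> ~/.bashrc

# Show the tail of the file to confirm the line was added
tail -n 2 ~/.bashrc
```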

Please check that your umask is 007 and that you have the default-environment loaded. With the default environment, the following modules are ready for use:

[jongraf1@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot                                 7) intel-mpi/64/4.1.3/049
  2) matlab/r2015a                       8) texlive/2014
  3) comsol/4.4                          9) quoter
  4) gcc/4.8.2                          10) git/2.0.4
  5) slurm/14.03.6                      11) default-environment
  6) intel/compiler/64/15.0/2015.1.133

Commands for HPC system monitoring

The following commands can be used for monitoring various aspects of the cluster. Descriptions and examples for the commands are given below.

The ‘sacct’ utility can be used to find accounting information on SLURM jobs that are currently running or previously submitted. The following command uses the ‘sacct’ tool to display a comprehensive set of information on a specific job:

[slet1@maya-usr1 ~]$ sacct -j JOB-ID --format=jobid,jobname,partition,account,elapsed,Timelimit,submit,start,state,nodelist

The command hpc_jobs displays a network map of the number of jobs running on the nodes.

[schou@maya-usr1 ~]$ hpc_jobs
UMBC HPC Job Count at Sat Feb 28 08:47:23 EST 2015
   n1-3  [   0   0   1 ]     n70-73  [   0   2   1   1 ]   n154-157  [   0   0   0   0 ]
   n4-6  [   1   1   0 ]     n74-77  [   4   3   4   1 ]   n158-161  [   0   0   0   0 ]
   n7-9  [   0   0   0 ]     n78-81  [   0   4   5   3 ]   n162-165  [   0   0   0   0 ]
 n10-12  [   0   1   0 ]     n82-85  [   2   1   0   1 ]   n166-169  [   0   0   0   0 ]
 n13-15  [   0   0   0 ]     n86-89  [   0   0   1   0 ]   n170-173  [   0   0   2   0 ]
 n16-18  [   0   0   1 ]     n90-93  [   1   8   1   3 ]   n174-177  [   0   0   0   0 ]
 n19-21  [   0   0   0 ]     n94-97  [   3   2   2   1 ]   n178-181  [   0   0   0   0 ]
 n22-24  [   0   0   0 ]    n98-101  [   5   2   2   1 ]   n182-185  [   0   0   0   0 ]
 n25-27  [   0   0   0 ]   n102-105  [   4   4   5   5 ]   n186-189  [   0   0   0   2 ]
 n28-30  [   0   4   1 ]   n106-109  [   1   0   6   6 ]   n190-193  [   0   0   0   0 ]
 n31-33  [   1   1   2 ]   n110-113  [   3   3   0   0 ]   n194-197  [   0   0   0   0 ]
 n34-36  [   0   2   2 ]   n114-117  [   6   1   1   0 ]   n198-201  [   0   0   0   0 ]
 n37-39  [   0   2   2 ]   n118-121  [   1   0   1   0 ]   n202-205  [   2   0   0   0 ]
 n40-42  [   0   0   0 ]   n122-125  [   0   0   1   0 ]   n206-209  [   0   0   0   0 ]
 n43-45  [   0   0   0 ]   n126-129  [   0   0   0   0 ]   n210-213  [   0   0   0   0 ]
 n46-48  [   0   0   0 ]   n130-133  [   0   0   0   0 ]   n214-217  [   0   0   0   0 ]
 n49-51  [   0   0   0 ]   n134-137  [   0   0   0   0 ]   n218-221  [   0   2   1   0 ]
 n52-54  [   0   0   0 ]   n138-141  [   0   0   0   0 ]   n222-225  [   0   0   0   0 ]
 n55-57  [   0   0   0 ]   n142-145  [   0   0   0   0 ]   n226-229  [   0   0   0   0 ]
 n58-60  [   2   2   0 ]   n146-149  [   0   0   0   0 ]   n230-233  [   0   0   0   0 ]
 n61-63  [   1   1   1 ]   n150-153  [   0   0   0   0 ]   n234-237  [   0   0   0   0 ]
 n64-66  [   1   1   1 ]
 n67-69  [   1   1   1 ]      usr1-2 [   0   0 ]                 mgt [   0 ]
Load1   TFlops  Occupancy
16.0    6.6     29.6

The command hpc_load displays a ‘heat map’ showing where the highest-load systems are in maya.

[schou@maya-usr1 ~]$ hpc_load
UMBC HPC Load1 (%) at Sat Feb 28 08:45:33 EST 2015
   n1-3  [   0   0   0 ]     n70-73  [   0  25  12  13 ]   n154-157  [   0   0   0   0 ]
   n4-6  [   6   6   0 ]     n74-77  [  50  38  50  12 ]   n158-161  [   0   0   0   0 ]
   n7-9  [   1   0   1 ]     n78-81  [  12  50  62  38 ]   n162-165  [   0   0   0   0 ]
 n10-12  [   0   6   0 ]     n82-85  [  25  13   0  12 ]   n166-169  [   0   0   0   0 ]
 n13-15  [   1   1   0 ]     n86-89  [   0   0  12   1 ]   n170-173  [   0   0  50   0 ]
 n16-18  [   0   0  26 ]     n90-93  [  12 100   0  25 ]   n174-177  [   0   0   0   0 ]
 n19-21  [   0   0   0 ]     n94-97  [  25  25  25  12 ]   n178-181  [   0   0   0   0 ]
 n22-24  [   0   0   0 ]    n98-101  [  62  25  25  25 ]   n182-185  [   0   0   0   0 ]
 n25-27  [   0   0   1 ]   n102-105  [  50  50  62  62 ]   n186-189  [   0   0   0  12 ]
 n28-30  [   0  10 100 ]   n106-109  [   0   0  75  75 ]   n190-193  [   0   0   0   0 ]
 n31-33  [ 100 100  35 ]   n110-113  [  38  38   0   0 ]   n194-197  [   0   0   0   0 ]
 n34-36  [   0  20  29 ]   n114-117  [  75   0   0   0 ]   n198-201  [   0   0   0   0 ]
 n37-39  [   0  20  41 ]   n118-121  [   0   0 815   0 ]   n202-205  [  25   0   0   0 ]
 n40-42  [   0   0   0 ]   n122-125  [   0   0   0   0 ]   n206-209  [   0   0   0   0 ]
 n43-45  [   0   0   0 ]   n126-129  [   0   0   0   0 ]   n210-213  [   0   0   0   0 ]
 n46-48  [   1   0   0 ]   n130-133  [   0   0   0   0 ]   n214-217  [   0   0   0   0 ]
 n49-51  [   1   0   1 ]   n134-137  [   0   0   0   0 ]   n218-221  [   0  12  12   0 ]
 n52-54  [   0   0   0 ]   n138-141  [   0   0   0   1 ]   n222-225  [   0   0   0   0 ]
 n55-57  [   0   1   1 ]   n142-145  [   0   0   0   0 ]   n226-229  [   0   0   0   0 ]
 n58-60  [  19  41   0 ]   n146-149  [   0   0   0   0 ]   n230-233  [   0   0   0   0 ]
 n61-63  [ 100 100 100 ]   n150-153  [   0   0   0   0 ]   n234-237  [   0   0   0   0 ]
 n64-66  [ 100 100 100 ]
 n67-69  [ 101 100 100 ]      usr1-2 [   1   1 ]                 mgt [ 128 ]
Load1   TFlops  Occupancy
16.1    6.6     30.0

The command hpc_mem maps the memory usage on maya.

[schou@maya-usr1 ~]$ hpc_mem
UMBC HPC Memory Use (%) at Sat Feb 28 08:48:09 EST 2015
   n1-3  [   0   0   2 ]     n70-73  [   3  17  16   5 ]   n154-157  [   2   2   3  16 ]
   n4-6  [  26  57  17 ]     n74-77  [   9   2  25   7 ]   n158-161  [  15  14  38   1 ]
   n7-9  [  16  15  16 ]     n78-81  [   5   7   5  27 ]   n162-165  [   1   1   1   1 ]
 n10-12  [  10  15   9 ]     n82-85  [  11   4   1  14 ]   n166-169  [   1   1   1   1 ]
 n13-15  [   2   3  18 ]     n86-89  [   4   5   5   5 ]   n170-173  [   1   1  16   1 ]
 n16-18  [   2   9   4 ]     n90-93  [   5  33   9   4 ]   n174-177  [   1   1   1   1 ]
 n19-21  [   3   3  30 ]     n94-97  [  12  16   5   4 ]   n178-181  [   1   1   1   1 ]
 n22-24  [   1   2   3 ]    n98-101  [   7  23  13   2 ]   n182-185  [   1   1   1   1 ]
 n25-27  [   2   2   2 ]   n102-105  [   3  19  12   6 ]   n186-189  [   1   1   1  21 ]
 n28-30  [   2  17   8 ]   n106-109  [   1   1   4  12 ]   n190-193  [   1   1   1   1 ]
 n31-33  [   8   8  11 ]   n110-113  [   6   5   3   0 ]   n194-197  [   1   1   3   1 ]
 n34-36  [   1  13  11 ]   n114-117  [  18  16   4   5 ]   n198-201  [   1   1   1   1 ]
 n37-39  [   7  14  10 ]   n118-121  [   3   3  57   1 ]   n202-205  [  27   1   1   1 ]
 n40-42  [   7   1   1 ]   n122-125  [   1   1  18   1 ]   n206-209  [   1   1   1   1 ]
 n43-45  [   1   1   1 ]   n126-129  [   1   1   1   4 ]   n210-213  [   1   1   1   1 ]
 n46-48  [   1   1   1 ]   n130-133  [  16   7   4   5 ]   n214-217  [   1   1   1   1 ]
 n49-51  [   1   1   1 ]   n134-137  [   4   3   1   4 ]   n218-221  [   1  11  11   1 ]
 n52-54  [   1   1   2 ]   n138-141  [   3   1   8   5 ]   n222-225  [   1   1   1   1 ]
 n55-57  [   3   3   3 ]   n142-145  [   1   4   3  10 ]   n226-229  [   1   1   1   1 ]
 n58-60  [  14  11   7 ]   n146-149  [   5  13   2  10 ]   n230-233  [   1   1   1   1 ]
 n61-63  [   8   8   8 ]   n150-153  [   2   2   6   2 ]   n234-237  [   1   3   3   1 ]
 n64-66  [   8   8   8 ]
 n67-69  [   8   8   8 ]      usr1-2 [   6   2 ]                 mgt [  14 ]
TotalTB Active  Use%
8.42    0.53    5.92

The command hpc_ping displays the inter-connect round-trip IP latency time in microseconds.

[schou@maya-usr1 ~]$ hpc_ping
UMBC HPC IB Ping Time to Master.ib (μs) at Mon Mar  2 11:41:22 EST 2015
   n1-3  [ 140 186  92 ]     n70-73  [ 152 145 187 145 ]   n154-157  [ 446 160 139 132 ]
   n4-6  [ 122 115 120 ]     n74-77  [ 163 612 144 157 ]   n158-161  [ 257 220 143 159 ]
   n7-9  [ 198 128  93 ]     n78-81  [ 141 168 173 175 ]   n162-165  [ 152 160 129 618 ]
 n10-12  [ 117 111 129 ]     n82-85  [ 146 132 146 170 ]   n166-169  [ 149 149 170 153 ]
 n13-15  [ 129 112  89 ]     n86-89  [ 142  79 139 174 ]   n170-173  [ 377 467 146 140 ]
 n16-18  [  94 500 193 ]     n90-93  [ 147 150 115 379 ]   n174-177  [ 143 140 139 152 ]
 n19-21  [ 127 128 130 ]     n94-97  [ 150 153 177 152 ]   n178-181  [ 141 127 166 157 ]
 n22-24  [ 150  99 121 ]    n98-101  [ 225 664 174 365 ]   n182-185  [ 167 183 128 179 ]
 n25-27  [ 123 160 112 ]   n102-105  [ 535 220 184 180 ]   n186-189  [ 188 170 146 109 ]
 n28-30  [ 114 134 117 ]   n106-109  [ 160 106 649 179 ]   n190-193  [ 178 151 157 173 ]
 n31-33  [ 117 120 227 ]   n110-113  [ 187 155 143 236 ]   n194-197  [ 131 180 183 407 ]
 n34-36  [  89 101 106 ]   n114-117  [ 152 161 159 107 ]   n198-201  [  88 386 100  97 ]
 n37-39  [ 101  95 748 ]   n118-121  [ 151  93 164 148 ]   n202-205  [ 500 132 199 133 ]
 n40-42  [  88 124  98 ]   n122-125  [ 153 178 166 160 ]   n206-209  [ 136 646 154 132 ]
 n43-45  [ 161 106  87 ]   n126-129  [ 677 147 621 160 ]   n210-213  [ 154 145 157 129 ]
 n46-48  [ 107 126  93 ]   n130-133  [  88 356 120 167 ]   n214-217  [ 172 160 127 190 ]
 n49-51  [ 116 395 110 ]   n134-137  [ 155 159  85 242 ]   n218-221  [ 175 128 158 663 ]
 n52-54  [ 107 482 118 ]   n138-141  [ 121 142 109  91 ]   n222-225  [ 597 132 187 170 ]
 n55-57  [  98 117 100 ]   n142-145  [ 138 163 104 156 ]   n226-229  [ 220 163 133 157 ]
 n58-60  [ 108 101 161 ]   n146-149  [  92 114 142 134 ]   n230-233  [ 160 160 141 124 ]
 n61-63  [ 121  98  92 ]   n150-153  [ 132 113  95 152 ]   n234-237  [  91  88 119  96 ]
 n64-66  [  99  95 251 ]
 n67-69  [ 132 127  98 ]      usr1-2 [  99 115 ]                 mgt [  53 ]

The command hpc_ping_lustre displays the inter-connect round-trip IP latency time in microseconds for the Lustre file system.

[schou@maya-usr1 ~]$ hpc_ping_lustre
UMBC HPC Lustre Ping Stats (μs) at Sat Feb 28 08:49:30 EST 2015
   n1-3  [  1k  71  65 ]     n70-73  [  40  41  39  53 ]   n154-157  [  49  46  53  45 ]
   n4-6  [  62  68  46 ]     n74-77  [  57  55  47  43 ]   n158-161  [  42  42  50  53 ]
   n7-9  [  62  55  55 ]     n78-81  [  50  38  29  61 ]   n162-165  [  45  43  44  42 ]
 n10-12  [  48  51  51 ]     n82-85  [  79  49  43  61 ]   n166-169  [  49  43  43  42 ]
 n13-15  [  73  54  48 ]     n86-89  [  50  47  41  45 ]   n170-173  [  52  55  47  45 ]
 n16-18  [  56  59  57 ]     n90-93  [  55  46  47  27 ]   n174-177  [  39  46  50  40 ]
 n19-21  [  77  51  46 ]     n94-97  [  55  47  47  37 ]   n178-181  [  46  52  52  42 ]
 n22-24  [  62  49  71 ]    n98-101  [  37  44  52  43 ]   n182-185  [  49  40  42  44 ]
 n25-27  [  57  68  59 ]   n102-105  [  46  35  67  33 ]   n186-189  [  44  52  48  42 ]
 n28-30  [  80  78  61 ]   n106-109  [  52  45  54  53 ]   n190-193  [  48  47  58  46 ]
 n31-33  [  72  66  38 ]   n110-113  [  50  43  53  48 ]   n194-197  [  53  50  41  54 ]
 n34-36  [   0  54  36 ]   n114-117  [  40  47  48  51 ]   n198-201  [  49  45  48  54 ]
 n37-39  [  68  37  45 ]   n118-121  [  49  56  47  49 ]   n202-205  [  41  48  42  50 ]
 n40-42  [  70  73  66 ]   n122-125  [  42  44  44  41 ]   n206-209  [  44  41  49  44 ]
 n43-45  [  73  69  86 ]   n126-129  [  55  47  44  48 ]   n210-213  [  46  48  41  44 ]
 n46-48  [  69  74  82 ]   n130-133  [  43  50  52  48 ]   n214-217  [  44  51  43  46 ]
 n49-51  [  72  56  71 ]   n134-137  [  38  43  52  42 ]   n218-221  [  58  43  42  46 ]
 n52-54  [  65  63  49 ]   n138-141  [  45  47  47  48 ]   n222-225  [  44  46  43  48 ]
 n55-57  [  61  88  58 ]   n142-145  [  57  53  44  44 ]   n226-229  [  45  58  47  44 ]
 n58-60  [  57  34  58 ]   n146-149  [  36  45  47  43 ]   n230-233  [  53  55  62  44 ]
 n61-63  [  58  67  57 ]   n150-153  [  43  45  42  53 ]   n234-237  [  49  50  43  51 ]
 n64-66  [  84  60  57 ]
 n67-69  [  49 618  68 ]      usr1-2 [  69  57 ]                 mgt [  1k ]

The command hpc_net displays the IB network usage in bytes per second.

[schou@maya-usr1 ~]$ hpc_net
UMBC HPC IB Network Usage Bytes Per Second at Sat Feb 28 08:50:45 EST 2015
   n1-3  [   0   0   0 ]     n70-73  [   0   0   0   0 ]   n154-157  [   0   0   0   0 ]
   n4-6  [   0   0   0 ]     n74-77  [   0   0   0   0 ]   n158-161  [   0   0   0   0 ]
   n7-9  [   0   0   0 ]     n78-81  [   0   0   0   0 ]   n162-165  [   0   0   0   0 ]
 n10-12  [   0   0   0 ]     n82-85  [   0   0   0   0 ]   n166-169  [   0   0   0   0 ]
 n13-15  [   0   0   0 ]     n86-89  [   0   0   0   0 ]   n170-173  [   0   0   1   0 ]
 n16-18  [   0   0   0 ]     n90-93  [   0   0   0   0 ]   n174-177  [   0   0   0   0 ]
 n19-21  [   0   0   0 ]     n94-97  [   0   0   0   0 ]   n178-181  [   0   0   0   0 ]
 n22-24  [   0   0   0 ]    n98-101  [   0   0   0   0 ]   n182-185  [   0   0   0   0 ]
 n25-27  [   0   0   0 ]   n102-105  [   0   0   0   0 ]   n186-189  [   0   0   0   0 ]
 n28-30  [   0  11   1 ]   n106-109  [   0   0   0   0 ]   n190-193  [   0   0   0   0 ]
 n31-33  [   0   0  56 ]   n110-113  [   0   0   0   0 ]   n194-197  [   0   0   0   0 ]
 n34-36  [   0  56  68 ]   n114-117  [   0   0   0   0 ]   n198-201  [   0   0   0   0 ]
 n37-39  [   0  40  87 ]   n118-121  [   0   0   0   0 ]   n202-205  [   1   0   0   0 ]
 n40-42  [   0   0   0 ]   n122-125  [   0   0   0   0 ]   n206-209  [   0   0   0   0 ]
 n43-45  [   0   0   0 ]   n126-129  [   0   0   0   0 ]   n210-213  [   0   0   0   0 ]
 n46-48  [   0   0   0 ]   n130-133  [   0   0   0   0 ]   n214-217  [   0   0   0   0 ]
 n49-51  [   0   0   0 ]   n134-137  [   0   0   0   0 ]   n218-221  [   0   0   0   0 ]
 n52-54  [   0   0   0 ]   n138-141  [   0   0   0   0 ]   n222-225  [   0   0   0   0 ]
 n55-57  [   0   0   0 ]   n142-145  [   0   0   0   0 ]   n226-229  [   0   0   0   0 ]
 n58-60  [  31  71   0 ]   n146-149  [   0   0   0   0 ]   n230-233  [   0   0   0   0 ]
 n61-63  [   1   0   0 ]   n150-153  [   0   0   0   0 ]   n234-237  [   0   0   0   0 ]
 n64-66  [   1   0   0 ]
 n67-69  [   2   0   0 ]      usr1-2 [   4   0 ]                 mgt [201k ]

The command hpc_power maps out the power usage in Watts across the cluster.

[jongraf1@maya-usr1 ~]$ hpc_power
UMBC HPC Power Usage (Watts) at Mon Mar  2 11:27:09 EST 2015
   n1-3  [  67  71  96 ]     n70-73  [  80 304  84  68 ]   n154-157  [  68  68  96  68 ]
   n4-6  [ 117  97  101]     n74-77  [  60 152  60  76 ]   n158-161  [  76  56  72  68 ]
   n7-9  [ 108 106 107 ]     n78-81  [ 128  92  68  72 ]   n162-165  [  68  76  56  60 ]
 n10-12  [ 116  96 192 ]     n82-85  [  80 220 116  64 ]   n166-169  [  72  92  88  72 ]
 n13-15  [ 199 195 100 ]     n86-89  [ 160 148 132 196 ]   n170-173  [  80  72  84  80 ]
 n16-18  [ 111 109 179 ]     n90-93  [  64  68 128  68 ]   n174-177  [  72  68  84  84 ]
 n19-21  [ 198 201 214 ]     n94-97  [  68  64 160  64 ]   n178-181  [  80  84  84  68 ]
 n22-24  [ 223 209 210 ]    n98-101  [  68  60  76 208 ]   n182-185  [  88  76  72  76 ]
 n25-27  [ 212 221 204 ]   n102-105  [  72  56  68  72 ]   n186-189  [  76  64  72 140 ]
 n28-30  [ 212 186 182 ]   n106-109  [  68 176  60  68 ]   n190-193  [  84  72  88  68 ]
 n31-33  [ 188 180 198 ]   n110-113  [  84  68  60 172 ]   n194-197  [  64  72  84 256 ]
 n34-36  [ 338 337 324 ]   n114-117  [  76  80  68 240 ]   n198-201  [ 244 236 244 228 ]
 n37-39  [ 331 324 321 ]   n118-121  [  52 172  72  64 ]   n202-205  [ 156  76  80  72 ]
 n40-42  [ 322 335 321 ]   n122-125  [  76  76  80  56 ]   n206-209  [  80  64  80  80 ]
 n43-45  [ 341 334 340 ]   n126-129  [  68  72  72 148 ]   n210-213  [  80  80  80  88 ]
 n46-48  [ 337 324 319 ]   n130-133  [ 184 164 132 148 ]   n214-217  [  68  72  60  76 ]
 n49-51  [ 405 381 392 ]   n134-137  [ 132 124 172 164 ]   n218-221  [  76 148 144 100 ]
 n52-54  [ 168 163 170 ]   n138-141  [ 168 132 160 132 ]   n222-225  [  72 100  90 100 ]
 n55-57  [ 166 173 163 ]   n142-145  [ 164 144 176 162 ]   n226-229  [  90  92 100  90 ]
 n58-60  [ 166 157 163 ]   n146-149  [ 184 200 156 132 ]   n230-233  [  72  80  76 248 ]
 n61-63  [ 160 158 154 ]   n150-153  [  68 244 232  64 ]   n234-237  [ 236 236 244 204 ]
 n64-66  [ 156 169 154 ]
 n67-69  [ 152 162 156 ]      usr1-2 [ 192 342 ]                 mgt [ 145 ]
Min	Avg	Max	TotalKW
52	138	405	32.74

The command hpc_roomtemp displays the current temperature in degrees Celsius and Fahrenheit of the room in which the cluster is housed.

[jongraf1@maya-usr1 ~]$ hpc_roomtemp
C	F
17	63

The command hpc_temp displays a heat map of the air intake temperatures of the system in degrees Celsius.

[jongraf1@maya-usr1 ~]$ hpc_temp
UMBC HPC Inlet Temperature (Celcius) at Mon Mar  2 11:50:27 EST 2015
  user1  [  13 ]  user2  [  14 ]    mgt  [  22 ]
    n69  [  12 ]    n51  [  13 ]
    n68  [  11 ]    n50  [  13 ]    n33  [  17 ]
    n67  [  11 ]    n49  [  13 ] n31-32  [  17  17 ]
    n66  [  12 ]    n48  [  13 ] n29-30  [  16  16 ]
    n65  [  11 ]    n47  [  12 ] n27-28  [  16  16 ]
    n64  [  11 ]    n46  [  12 ] n25-26  [  16  16 ]
    n63  [  11 ]    n45  [  12 ] n23-24  [  15  16 ]
    n62  [  10 ]    n44  [  11 ] n21-22  [  15  15 ]
    n61  [  11 ]    n43  [  11 ] n19-20  [  14  14 ]
    n60  [  11 ]    n42  [  11 ] n17-18  [  14  13 ]
    n59  [  11 ]    n41  [  10 ] n15-16  [  13  13 ]
    n58  [  11 ]    n40  [  11 ] n13-14  [  13  13 ]
    n57  [  10 ]    n39  [  11 ] n11-12  [  13  13 ]
    n56  [  11 ]    n38  [  11 ]  n9-10  [  13  13 ]
    n55  [  11 ]    n37  [  11 ]   n7-8  [  13  13 ]
    n54  [  11 ]    n36  [  12 ]   n5-6  [  13  13 ]
    n53  [  12 ]    n35  [  13 ]   n3-4  [  14  13 ]
    n52  [  12 ]    n34  [  13 ]   n1-2  [  17  15 ]

The command hpc_uptime can be used to view the uptime of each node or the time since the last re-image.

[schou@maya-usr1 ~]$ hpc_uptime
UMBC HPC Uptime at Sat Feb 28 08:04:06 EST 2015
   n1-3  [  7h  7h  4d ]     n70-73  [ 18h  7d  7d  7d ]   n154-157  [  7d  7d 18h  7d ]
   n4-6  [ 2we 2we  4d ]     n74-77  [  7d 33h  7d  7d ]   n158-161  [  7d  7d  7d  7h ]
   n7-9  [  7d  7d  7d ]     n78-81  [  7d  4d  7d  7d ]   n162-165  [ 17h 17h 17h 17h ]
 n10-12  [  6d  6d  6d ]     n82-85  [  7d  7d  7d  7d ]   n166-169  [ 17h 17h 17h 17h ]
 n13-15  [  6d  6d  6d ]     n86-89  [  7d  7d  7d  7d ]   n170-173  [ 17h 17h  7d 17h ]
 n16-18  [  6d 2we  6d ]     n90-93  [  7d 17h  7d  7d ]   n174-177  [ 17h 17h 17h 17h ]
 n19-21  [  6d  6d  7d ]     n94-97  [  7d  7d  7d  7d ]   n178-181  [ 17h 17h 17h 17h ]
 n22-24  [ 20h  7d  7d ]    n98-101  [  7d  7d  7d  7d ]   n182-185  [ 17h 17h 17h 17h ]
 n25-27  [ 16h 15h 15h ]   n102-105  [  7d  7d  7d  7d ]   n186-189  [ 17h 17h 17h  7d ]
 n28-30  [ 15h 16h 16h ]   n106-109  [ 16h  7d  7d  7d ]   n190-193  [ 17h 17h 17h 17h ]
 n31-33  [ 16h  7h  7h ]   n110-113  [  7d  5d 18h  7d ]   n194-197  [ 17h 17h 18h  7h ]
 n34-36  [  7h 10h 10h ]   n114-117  [ 16h  7d  7d  7d ]   n198-201  [ 17h 17h  7h 17h ]
 n37-39  [ 10h 10h 10h ]   n118-121  [  7d 16h 16h 16h ]   n202-205  [  7d 17h 17h 17h ]
 n40-42  [ 10h 10h 10h ]   n122-125  [ 16h 16h 16h 16h ]   n206-209  [ 17h 17h 17h 17h ]
 n43-45  [ 10h 10h 10h ]   n126-129  [ 16h 16h 16h  7d ]   n210-213  [ 17h 17h 17h 17h ]
 n46-48  [ 10h 10h 10h ]   n130-133  [  7d  7d  7d  7d ]   n214-217  [ 17h 17h 17h 17h ]
 n49-51  [ 10h 10h 10h ]   n134-137  [  7d  7d  7d  7d ]   n218-221  [ 17h  7d  7d 17h ]
 n52-54  [  6d  7d  7d ]   n138-141  [  7d  7d  7d  7d ]   n222-225  [ 17h 17h 17h 17h ]
 n55-57  [  7d  7d  7d ]   n142-145  [  7d  7d  7d  7d ]   n226-229  [ 17h 17h 17h 17h ]
 n58-60  [ 16h 16h 16h ]   n146-149  [  7d  7d  7d  7d ]   n230-233  [ 17h 17h 17h 17h ]
 n61-63  [ 16h 16h 16h ]   n150-153  [  7d  6d  6d 17h ]   n234-237  [ 17h  7d  7d 17h ]
 n64-66  [ 16h 16h 16h ]
 n67-69  [ 17h 17h 17h ]      usr1-2 [ 4we 4we ]                 mgt [ 4we ]

The command hpc_qosstat can be used to view the current QOS usage, limitations, and partition breakdowns.

[jongraf1@maya-usr1 ~]$ hpc_qosstat
Current QOS usage:
QOS (Wait Reason)            Count
---------------------------- -----
long(None)                      36
medium(None)                   580
medium(Priority)                65
medium(QOSResourceLimit)       160
long(Priority)                   2
long_contrib(None)              67
support(Resources)            1664

QOS Limitations:
      Name  GrpCPUs MaxCPUsPU 
---------- -------- --------- 
    normal                    
      long      256        16 
long_cont+      768       256 
     short                    
    medium     1536       256 
   support                    

Partition     Active    Idle     N/A   Total  (CPUs)
------------ ------- ------- ------- ------- 
batch            699    1033     636    2368
develop*           0      64       0      64
prod             323     735     238    1296
mic                0    8640       0    8640
develop-mic        0       2       0       2