
Using your HPCF account


This tutorial walks you through your home directory, some useful Linux background, what to expect in your new account, the module utility for loading software, and the specialized storage areas on taki.

Connecting to taki

The only nodes with a connection to the outside network are the login nodes, also called user nodes. Within the system, their full hostnames are taki-usr1.rs.umbc.edu and taki-usr2.rs.umbc.edu (notice the “-usr1” and “-usr2”). From the outside, we must refer to the hostname taki.rs.umbc.edu. To log in to the system, you must use a secure shell client such as SSH from Unix/Linux, PuTTY from Windows, or similar, and connect to a user node, which is the only kind of node visible from the internet. For example, suppose we are connecting to taki from the Linux machine “linux1.gl.umbc.edu”. We will take user “gobbert” as our example throughout this page.

linux1 ~ 101% ssh gobbert@taki.rs.umbc.edu
WARNING: UNAUTHORIZED ACCESS to this computer is in violation of Criminal
         Law Article section 8-606 and 7-302 of the Annotated Code of MD.

NOTICE:  This system is for the use of authorized users only.
         Individuals using this computer system without authority, or in
         excess of their authority, are subject to having all of their
         activities on this system monitored and recorded by system
         personnel.

gobbert@taki.rs.umbc.edu's password: 
Last login: Wed Oct 31 08:55:30 2018 from pc23.math.umbc.edu

  UMBC High Performance Computing Facility              http://hpcf.umbc.edu
  --------------------------------------------------------------------------
  By using this system you agree with and will adhere to the HPCF usage
  policy.  If you have any questions or problems using this system please
  submit a help request via the "Help Request" link found on the HPCF
  website under "Forms".

  Remember that the Division of Information Technology will never ask for
  your password. Do NOT give out this information under any circumstances.
  --------------------------------------------------------------------------

Replace “gobbert” with your UMBC username (that you use to log into myUMBC). You will be prompted for your password when connecting; your password is your myUMBC password. Notice that connecting to taki.rs.umbc.edu puts us on taki-usr1. We may connect to the other user node with the following.

[gobbert@taki-usr1 ~]$ ssh taki-usr2
Warning: Permanently added 'taki-usr2,10.2.15.252' (ECDSA) to the list of known hosts.
... same welcome message ...
[gobbert@taki-usr2 ~]$

As another example, suppose we’re SSHing to taki from a Windows machine with PuTTY. When setting up a connection, use “taki.rs.umbc.edu” as the hostname. Once you connect, you will be prompted for your username and password, as mentioned above.

How to copy files to and from taki

Probably the most general way to transfer files between machines is by Secure Copy (scp). Because some remote filesystems may be mounted to taki, it may also be possible to transfer files using “local” file operations like cp, mv, etc.

The taki cluster only allows secure connections from the outside. Secure Copy is the file copying program that is part of Secure Shell (SSH). To transfer files to and from taki, you must use scp or compatible software (such as WinSCP or SSHFS). On Unix machines such as Linux or MacOS X, you can execute scp from a terminal window.
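
As a sketch, assuming the username “gobbert” and the symbolic links described later on this page (the filenames here are purely illustrative), typical transfers look like the following, run on your local machine:

```shell
# Copy a local file to the user workspace on taki (illustrative paths)
scp results.dat gobbert@taki.rs.umbc.edu:~/gobbert_user/

# Copy a file from taki back to the current directory on the local machine
scp gobbert@taki.rs.umbc.edu:~/gobbert_user/results.dat .

# Copy an entire directory recursively
scp -r project/ gobbert@taki.rs.umbc.edu:~/gobbert_user/
```

You will be prompted for your password for each transfer, just as when logging in with ssh.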

A brief tour of your account

This section assumes that you have logged in as described above and that you are a member of a research group; otherwise, you may not have all of the storage areas described here.

Home directory

At any given time, the directory that you are currently in is referred to as your current working directory. Since you just logged in, your home directory is your current working directory. The “~” symbol is shorthand for your home directory. The program “pwd” tells you the full path of the current working directory, so let us run pwd to see where your home directory really is:

[gobbert@taki-usr1 ~]$ pwd
/home/gobbert

Now let us use “ll” (an alias for “ls -l”) to get more information about your home directory.

[gobbert@taki-usr1 ~]$ ll
total 3
lrwxrwxrwx 1 gobbert pi_gobbert 25 Sep 21 09:06 gobbert_common -> /umbc/xfs1/gobbert/common/
lrwxrwxrwx 1 gobbert pi_gobbert 32 Sep 21 09:06 gobbert_user -> /umbc/xfs1/gobbert/users/gobbert/

The permissions “lrwxrwxrwx” starting with “l” (short for “link”) indicate that these two are symbolic links pointing to two storage areas that you have access to; a student account may have fewer areas. The purpose is to make changing directory to these areas easier, namely you would just write “cd gobbert_common” to change directory to the common area of your research group. The ‘common’ area is for all users in the research group to share files, while the ‘user’ area is for each user’s own research files. The storage space in your home directory is very limited, so you should use the common and user areas for your work on taki.

Common workspace

The intention of common workspace is to store large volumes of data, such as large datasets from computations, which can be accessed by everyone in your group. By default, the permissions of common workspace are set as follows to enable all members of your research group to work together on all sub-directories and files:

[gobbert@taki-usr1 ~]$ ll -d /umbc/xfs1/gobbert/common
drwxrws--- 12 gobbert pi_gobbert 14 Oct  7 14:19 /umbc/xfs1/gobbert/common/

The string “drwxrws---” indicates that the PI, who is the owner of the group, has read, write, and search permissions in this directory. In addition, other members of the group also have read, write, and search permissions. The “s” indicates that all directories created under this directory should inherit the same group permissions. (If this attribute were set but execute permissions were not enabled for the group, this would be displayed as a capital letter “S”.)

User workspace

Where common workspace is intended as an area for collaboration, user workspace is intended for individual work. Again, it is intended to store reasonably large volumes of data. Your PI and other group members can see your work in this area, but cannot edit it, as indicated by these permissions:

[gobbert@taki-usr1 ~]$ ll -d /umbc/xfs1/gobbert/users/gobbert
drwxr-s--- 7 gobbert pi_gobbert 12 Feb 25  2018 /umbc/xfs1/gobbert/users/gobbert/

The string “drwxr-s---” means that only you may make changes inside this directory, but anyone in your group can list or read the contents. Other users on taki do not have any access to this area, as indicated by the dashes.

More about permissions

Standard Unix permissions are used on taki to control which users have access to your files; we have already seen some examples of this. It is important to emphasize that this is the mechanism that determines the degree of sharing, and conversely the privacy, of your work on this system. In setting up your account, we have taken a few steps to simplify things, assuming you use the storage areas for the purposes they were designed for. This should be sufficient for many users, but you can also customize permissions if you need additional privacy, want to share with additional users, and so on.

Changing a file’s permissions

For existing files and directories, you can modify permissions with the “chmod” command. As a very basic example:

[gobbert@taki-usr1 ~]$ ll tmpfile 
-rwxrwxr-x 1 gobbert pi_gobbert 0 Jun 14 17:50 tmpfile
[gobbert@taki-usr1 ~]$ chmod o-rwx tmpfile 
[gobbert@taki-usr1 ~]$ chmod ug-x tmpfile
[gobbert@taki-usr1 ~]$ ll tmpfile 
-rw-rw---- 1 gobbert pi_gobbert 0 Jun 14 17:50 tmpfile

See “man chmod” for more information, or the Wikipedia page for chmod.
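
Besides the symbolic modes used above, chmod also accepts octal modes, which set all of the permission bits at once. A quick local sketch (the filename is illustrative):

```shell
# Create a scratch file and set its permissions with an octal mode
touch demo_perms.txt
chmod 664 demo_perms.txt        # rw for user and group, read-only for others
stat -c '%A' demo_perms.txt     # prints: -rw-rw-r--

# Symbolic modes adjust bits relative to the current permissions
chmod o-r demo_perms.txt        # remove read permission for others
stat -c '%A' demo_perms.txt     # prints: -rw-rw----
rm demo_perms.txt
```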

Changing a file’s group

For users in multiple groups, you may find the need to change a file’s ownership to a different group. This can be accomplished on a file-by-file basis with the “chgrp” command:

[gobbert@taki-usr1 ~]$ touch tmpfile 
[gobbert@taki-usr1 ~]$ ll tmpfile 
-rw-rw---- 1 gobbert pi_nagaraj 0 Jun 14 18:00 tmpfile
[gobbert@taki-usr1 ~]$ chgrp pi_gobbert tmpfile 
[gobbert@taki-usr1 ~]$ ll tmpfile 
-rw-rw---- 1 gobbert pi_gobbert 0 Jun 14 18:00 tmpfile

Checking disk usage vs. storage limits

You can check how much space you are using with standard Linux tools such as “du” (disk usage of a directory tree), “df” (free space on a filesystem), and “quota” (your per-user limits, where quotas are enabled).
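
A minimal sketch, assuming these standard tools are available (whether “quota” reports anything depends on how the filesystem is configured):

```shell
# Total size of your home directory, human-readable
du -sh "$HOME"

# Free space on the filesystem that holds your home directory
df -h "$HOME"

# Per-user quota report, if quotas are enabled on this filesystem
quota -s 2>/dev/null || true
```

Note that “du” on a large storage area can take a while, since it walks the entire directory tree.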

Things to check on your new taki account

The Bash shell is the default shell for taki users; this is the shell assumed in the documentation and examples on this webpage. Check your shell with the command “echo $SHELL”, or by running “env” and searching for SHELL in the resulting lines of output.

[gobbert@taki-usr1 ~]$ echo $SHELL
/bin/bash

Group membership

Your account has membership in one or more Unix groups. On taki, groups are usually (but not always) organized by research group and named after the PI; students in a class are in the ‘student’ group. The primary purpose of these groups is to facilitate sharing of files with other users, through the Unix permissions system. To see your Unix groups, use the groups command:

[gobbert@taki-usr1 ~]$ groups
pi_gobbert 

In the example above, the user is a member of the pi_gobbert group.

If any of the symbolic links to storage areas do not exist, you may create them using the following commands. You only need to do this once; repeat it for each PI if you are a member of multiple research groups.

[gobbert@taki-usr1 ~]$ ln -s /umbc/xfs1/gobbert/common ~/gobbert_common
[gobbert@taki-usr1 ~]$ ln -s /umbc/xfs1/gobbert/users/gobbert ~/gobbert_user

Umask

By default, your account will have a line in ~/.bashrc which sets your “umask”:

umask 007

The umask helps to determine the permissions for new files and directories you create. Usually when you create a file, you don’t specify what its permissions will be. umask 007 ensures that outside users have no access to your new files. See the Wikipedia entry for umask for more explanation and examples. On taki, the storage areas’ permissions are already set up to enforce specific styles of collaboration. We have selected 007 as the default umask so that sharing with your group remains possible while outside users are prevented from accessing your files. If you generally want to prevent even your group from modifying your files, including in the shared storage areas, you may want to use a more restrictive umask.
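
To see the effect, you can create a file and a directory in a shell where umask 007 is set (a local sketch; the names are illustrative):

```shell
# With umask 007, the "others" bits are masked off of every new file/directory
umask 007
touch masked_file.txt
stat -c '%A' masked_file.txt    # prints: -rw-rw----  (666 & ~007)
mkdir masked_dir
stat -c '%A' masked_dir         # prints: drwxrwx---  (777 & ~007)
rm -r masked_file.txt masked_dir
```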

Please run the following command to check your bashrc file:

[gobbert@taki-usr1 ~]$ more .bashrc
# .bashrc
# Set the permissions to limit the default read privs to only the user
umask 007

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature
:
# export SYSTEMD_PAGER=

# User specific aliases and functions
module load intel
module load slurm
module load default-environment

Modules

Modules are a simple way of preparing your environment to use many of the major applications on taki. Modules are normally loaded for the duration of an SSH session. They can be unloaded as well, and can also be set to automatically load each time you log in. The following shows the modules which are loaded for you by default; the version numbers will change as the software is upgraded over time.

[gobbert@taki-usr1 ~]$ module list

Currently Loaded Modules:
  1) shared
  2) DefaultModules
  3) GCCcore/7.3.0
  4) binutils/2.30-GCCcore-7.3.0
  5) icc/2018.3.222-GCC-7.3.0-2.30
  6) ifort/2018.3.222-GCC-7.3.0-2.30
  7) iccifort/2018.3.222-GCC-7.3.0-2.30
  8) impi/2018.3.222-iccifort-2018.3.222-GCC-7.3.0-2.30
  9) iimpi/2018b
 10) imkl/2018.3.222-iimpi-2018b
 11) dot
 12) intel/2018b
 13) slurm/17.11.8
 14) default-environment              

This means that, among others, SLURM, GCC, and the Intel compiler + Intel MPI implementation are usable by default as soon as you log in. If we wish to use other software, we must first load the appropriate module with “module load”.

To use compilers other than the default, you need to unload and load modules from time to time. If you lose track and want to get back to the default state, use the following commands:

[gobbert@taki-usr1 ~]$ module purge
[gobbert@taki-usr1 ~]$ module load shared default-environment

Complete documentation of module commands and options can be found with

man module

or

module help

We can list all available modules which have been defined by the system administrators with

module avail

Storage areas

The directory structure that DoIT will set up as part of your account creation is designed to facilitate the work of research groups consisting of several users and also reflects the fact that all HPCF accounts must be sponsored by a faculty member at UMBC. This sponsor will be referred to as PI (short for principal investigator) in the following. A user may be a member of one or several research groups on taki. Each research group has several storage areas on the system, in the locations specified below. See the System Description for a higher-level overview of the storage and the cluster architecture.

Note that some special users, such as students in a class, may not belong to a research group and therefore may not have all of the group storage areas set up.

User Home
Location: /home/username/
This is where the user starts after logging in to taki. Only accessible to the user by default. The default size is 100 MB, and the storage is located on the management node. This area is backed up nightly.

User Workspace
Symlink: /home/username/pi_name_user
Mount point: /umbc/xfs1/pi_name/users/username/
A central storage area for the user’s own data. Only the user may modify files here; the PI and other group members have read access, and outside users have no access. Ideal for storing output of parallel programs, for example. This area is not backed up.

Group Workspace
Symlink: /home/username/pi_name_common
Mount point: /umbc/xfs1/pi_name/common/
The same functionality and intent as user workspace, except that this area is accessible with read and write permission to all members of the research group.

Scratch Space
Location: /scratch/NNNNN
Each compute node on the cluster has local /scratch storage. The space in this area is shared among current users of the node, so the total amount available will vary based on system usage. This storage is convenient temporary space to use while your job is running, but note that your files here persist only for the duration of the job. Use of this area is encouraged over /tmp, which is also needed by critical system processes. Note that a subdirectory NNNNN (e.g., 22704) is created for your job by the scheduler at runtime.

Tmp Space
Location: /tmp/
Each machine on the cluster has its own local /tmp storage, as is customary on Unix systems. This scratch area is shared with other users and is purged periodically by the operating system, so it is only suitable for temporary scratch storage. Use of /scratch is encouraged over /tmp (see above).

AFS
Location: /afs/umbc.edu/users/u/s/username/
Your AFS storage is conveniently available on the cluster, but can only be accessed from the user nodes. The “/u/s” in the directory name should be replaced with the first two letters of your username (for example, user “straha1” would have directory /afs/umbc.edu/users/s/t/straha1).

“Mount point” indicates the actual location of the storage on the filesystem. Traditionally, many users prefer to have a link to the storage from their home directory for easier navigation. The field “symlink” gives a suggested location for this link. For example, once the link is created, you may use the command “cd ~/pi_name_user” to get to User Workspace for the given PI. These links may be created for users as part of the account creation process; however, if they do not yet exist, simple instructions are provided below to create them yourself.

The amount of space available in the PI-specific areas depends on the allocation given to your research group. Your AFS quota is determined by DoIT. The quota for everyone’s home directory is generally the same.

Note that listing the contents of /umbc/xfs1 may not show storage areas for all PIs. This is because PI storage is only loaded when it is in use. If you attempt to access a PI’s subdirectory in /umbc/xfs1, it should be loaded (seamlessly) if it was previously offline.