Table of Contents
- Introduction
- Serial Hello World
- Parallel Hello World
- Logging Which Nodes are Used
- Choosing a Compiler and MPI Implementation
Introduction
In this tutorial we will illustrate how to compile C source code and run the resulting executable on the CPU cluster in taki. Working on a distributed cluster like taki is fundamentally different from working on a standard server (like gl.umbc.edu) or a personal computer, so please make sure to read and understand this material. We will first start with a classical serial example, and work our way to compiling parallel code. We will assume that you know some basic programming concepts, so the code will not be explained in explicit detail. More details can be found in manual pages on the system that are available for Linux commands (e.g., try “man mkdir”, “man cd”, “man pwd”, “man ls”) as well as for C functions (e.g., try “man fprintf”).
We also want to demonstrate here that it is a good idea to collect the files for a project in a directory of their own. This first project is a serial version of the “Hello, world!” program. Therefore, use the mkdir (= “make directory”) command to create a directory “Hello_Serial” and cd (= “change directory”) into it.
[gobbert@taki-usr1 ~]$ mkdir Hello_Serial
[gobbert@taki-usr1 ~]$ cd Hello_Serial
[gobbert@taki-usr1 Hello_Serial]$ pwd
/home/gobbert/Hello_Serial
Notice that the command prompt indicates that I am in directory Hello_Serial now. Use the pwd (= “print working directory”) command any time to confirm where you are in your directory structure and ll (short for “ls -l”) to list the files that are there.
A convenient way to save the example code on this page directly into the current directory of your project is the wget command, used as follows. There is a “download” link under each code example. You can copy the link address in your browser and paste it after the wget command in your taki terminal session to download the file to the local directory, as shown here.
[gobbert@taki-usr1 Hello_Serial]$ wget http://hpcf-files.umbc.edu/code-2018/taki/Hello_Serial/hello_serial.c
--2019-01-28 09:43:08--  http://hpcf-files.umbc.edu/code-2018/taki/Hello_Serial/hello_serial.c
Resolving hpcf-files.umbc.edu (hpcf-files.umbc.edu)... 130.85.12.140
Connecting to hpcf-files.umbc.edu (hpcf-files.umbc.edu)|130.85.12.140|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 184 [text/plain]
Saving to: 'hello_serial.c'
100%[======================================>] 184         --.-K/s   in 0s
2019-01-28 09:43:08 (46.0 MB/s) - 'hello_serial.c' saved [184/184]
You can list all files to see that the file is present now:
[gobbert@taki-usr1 Hello_Serial]$ ll
total 5
-rw-rw---- 1 gobbert pi_gobbert 184 Feb  1  2014 hello_serial.c
We have shown the prompt in the examples above to emphasize that a command is being issued. When following the examples, your prompt may look a bit different (e.g., your own username will be there!), but be careful to only issue the command part, not the prompt or the example output.
Serial Hello World
This video recording on compiling and running code on taki provides a live demonstration of the steps that are written out in the following text, as well as of those on the run tutorial page.
We will now consider a simple “Hello, world!” program that prints the name of the host machine. Here is the code:
Download: ../code-2018/taki/Hello_Serial/hello_serial.c
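The actual file is available at the download link above. As a point of reference, a minimal sketch of such a program might look as follows; the exact contents of the downloadable file may differ, and the use of gethostname() from unistd.h here is an assumption.

#include <stdio.h>
#include <unistd.h>   /* for gethostname() */

int main (int argc, char *argv[])
{
  char hostname[1024];

  /* query the name of the machine this process runs on and print it */
  gethostname(hostname, sizeof(hostname));
  printf("Hello world from %s\n", hostname);

  return 0;
}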
Creating a directory for this project and downloading this code with wget were demonstrated in the example above on this page.
Once you have saved this code to your workspace, you have to compile it before you can execute it, since C is a compiled language. There are several C compilers on taki. We will demonstrate the Intel C compiler, which is the default on taki.
[gobbert@taki-usr1 Hello_Serial]$ icc hello_serial.c -o hello_serial
If successful, no errors or warnings will appear and an executable hello_serial will have been created, in addition to the source code in hello_serial.c.
[gobbert@taki-usr1 Hello_Serial]$ ll
total 21
-rwxrwx--- 1 gobbert pi_gobbert 22488 Jan 28 09:45 hello_serial*
-rw-rw---- 1 gobbert pi_gobbert   184 Feb  1  2014 hello_serial.c
Notice that the “x” in the permissions “-rwxrwx---” indicates that hello_serial is an executable; this is also indicated by the asterisk “*” following its name (the “*” is not part of the filename, it is just an indication from the ls command). When a file is not an executable (or there is no permission to execute it), a dash “-” appears in place of the “x”; the dashes in “-rw-rw----” for hello_serial.c confirm that this C source code is not executable in its source code form.
To see how to run your serial executable on the cluster, jump to how to run serial programs.
Parallel Hello World
Now we will compile a “Hello, world!” program which can be run in parallel on multiple processors. You may want to create a new directory for this project using “mkdir Hello_Parallel”. Use wget again to save the following code to your directory.
Download: ../code-2018/taki/Hello_Parallel/hello_parallel.c
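The downloadable file is linked above. The following is a minimal sketch, not the file itself, illustrating the standard MPI calls described in the next paragraph; the exact output format of the original file may differ.

#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[])
{
  int id, np;
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int processor_name_len;

  MPI_Init(&argc, &argv);                        /* start up MPI */
  MPI_Comm_size(MPI_COMM_WORLD, &np);            /* number of processes */
  MPI_Comm_rank(MPI_COMM_WORLD, &id);            /* ID of this process */
  MPI_Get_processor_name(processor_name, &processor_name_len);

  printf("Hello world from process %03d out of %03d, processor name %s\n",
         id, np, processor_name);

  MPI_Finalize();                                /* shut down MPI */
  return 0;
}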
This version of the “Hello, world!” program collects several pieces of information at each MPI process: the MPI processor name (i.e., the hostname), the process ID, and the number of processes in our job. Notice that we needed a new header file mpi.h to get access to the MPI commands. We also need to call MPI_Init before using any other MPI commands, and MPI_Finalize is needed at the end to clean up. Compile the code with the following command.
[gobbert@taki-usr1 Hello_Parallel]$ mpiicc hello_parallel.c -o hello_parallel
After a successful compilation with no errors or warnings, an executable “hello_parallel” should have been created, which we confirm by “ll”.
[gobbert@taki-usr1 Hello_Parallel]$ ll
total 5
-rwxrwx--- 1 gobbert pi_gobbert 22664 Jan 27 18:57 hello_parallel*
-rw-rw---- 1 gobbert pi_gobbert   490 Feb  1  2014 hello_parallel.c
To see how to run your parallel executable on the cluster, jump to how to run parallel programs.
Logging Which Nodes are Used
For a parallel program, it is always a good idea to log which compute nodes you have used. We can extend our parallel “Hello, world!” program to accomplish this: in addition to printing the information to stdout, we will save it to file. First, the functionality is collected in a self-contained function nodesused() that you can also copy into other programs and then call from the main program, as shown in the code below. Second, we noticed that the processes reported back in a random order to stdout. This is difficult to read for large numbers of processes, so for the output to file, we have the process with ID 0 receive the greeting message from each other process, in order by process ID, and only Process 0 writes the messages to file. Third, the code below actually creates and writes to two files: (i) the file “nodes_used.log” contains only the process ID and hostname, which is the same information as already printed to stdout, but ordered; (ii) the file “nodesused_cpuid.log” additionally outputs the CPU ID, that is, the number of the computational core in the two CPUs on the node that the MPI process executed on.
Download: ../code-2018/taki/Nodesused/nodesused.c
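The downloadable file is linked above. The following is a minimal, hypothetical sketch of the structure described in the next paragraph, not the original file: the exact message formats are assumptions, the stdout printing of the original is omitted for brevity, and the use of the GNU function sched_getcpu() to obtain the CPU ID is an assumption.

#define _GNU_SOURCE  /* must precede the includes for sched_getcpu() */
#include <stdio.h>
#include <sched.h>   /* sched_getcpu(), a GNU extension on Linux */
#include <mpi.h>

/* Collect one greeting per MPI process on Process 0, ordered by
   process ID, and log the greetings to the two files described in
   the text. */
void nodesused (void)
{
  int id, np, cpuid, i;
  char name[MPI_MAX_PROCESSOR_NAME];
  int name_len;
  MPI_Status status;
  FILE *log, *logcpu;

  MPI_Comm_rank(MPI_COMM_WORLD, &id);
  MPI_Comm_size(MPI_COMM_WORLD, &np);
  MPI_Get_processor_name(name, &name_len);
  cpuid = sched_getcpu();

  if (id != 0) {
    /* every process other than 0 sends its hostname and CPU ID */
    MPI_Send(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    MPI_Send(&cpuid, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
  } else {
    log    = fopen("nodes_used.log", "w");
    logcpu = fopen("nodesused_cpuid.log", "w");
    /* write Process 0's own line first, then receive and write the
       others in order by process ID */
    for (i = 0; i < np; i++) {
      if (i > 0) {
        MPI_Recv(name, MPI_MAX_PROCESSOR_NAME, MPI_CHAR, i, 0,
                 MPI_COMM_WORLD, &status);
        MPI_Recv(&cpuid, 1, MPI_INT, i, 1, MPI_COMM_WORLD, &status);
      }
      fprintf(log,    "Process %04d of %04d on %s\n", i, np, name);
      fprintf(logcpu, "Process %04d of %04d on %s cpuid %02d\n",
              i, np, name, cpuid);
    }
    fclose(log);
    fclose(logcpu);
  }
}

int main (int argc, char *argv[])
{
  MPI_Init(&argc, &argv);
  nodesused();
  MPI_Finalize();
  return 0;
}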
Message sending is accomplished using the MPI_Send() function, and receiving with the MPI_Recv() function. Each process prepares its own message, and what happens next depends on the process ID: Process 0 writes its own message first, then receives and writes the others in order by process ID, while all other processes simply send their message to Process 0. The fprintf function is used to write one line for each MPI process to each of the two output files.
This program is compiled for use with MPI by
[gobbert@taki-usr1 Nodesused]$ mpiicc nodesused.c -o nodesused
To see how to run your parallel executable of the nodesused program on the cluster, jump to how to run parallel programs on the batch partition.
Choosing a Compiler and MPI Implementation
On taki, several compilers and several implementations of the MPI standard are installed. At this point, we supply three compiler suites:
- Intel compiler suite (Default) with Composer XE – C, C++, Fortran 77, 90, 95, and 2003. This includes the Intel Math Kernel Library (LAPACK/BLAS)
- GNU compiler suite – C, C++, Fortran 77, 90, and 95
- Clang compiler – C, C++, Objective C/C++, OpenCL, CUDA, and RenderScript.
Each of these compilers has an OpenMP compile flag and is available in combination with the Intel implementation of MPI. The Intel compiler suite and the Intel implementation of MPI are defaults on taki, as performance studies for the combinations of icc, gcc, and clang with Intel MPI show that the combination of the Intel compiler and Intel MPI implementation is optimal by a small margin over the combinations with gcc and clang. For more details on the other compiler suites and how to use them, refer to Technical Report HPCF-2019-1 on the HPCF Publications page.
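For example, the OpenMP flag is -qopenmp for the Intel compiler and -fopenmp for the GNU and Clang compilers; the file name omp_example.c below is hypothetical.

[gobbert@taki-usr1 ~]$ icc -qopenmp omp_example.c -o omp_example
[gobbert@taki-usr1 ~]$ gcc -fopenmp omp_example.c -o omp_example
[gobbert@taki-usr1 ~]$ clang -fopenmp omp_example.c -o omp_example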
Compiling with the Intel compiler suite
The Intel compiler icc and the Intel MPI implementation, currently version 18.0.3, are accessed on taki through the wrapper mpiicc. Since this compiler and MPI implementation are the defaults, they are available after the module load default-environment command in the .bashrc file in the user’s home directory that is automatically executed upon login to taki.
The command used to compile code depends on the language. Intel uses the icc compiler for C, icpc for C++, and ifort for Fortran. Their corresponding Intel MPI implementations can be accessed through the wrappers mpiicc, mpiicpc, and mpiifort, respectively.
Language | Serial | Parallel with Intel MPI | Example
---------|--------|-------------------------|--------
C | icc | mpiicc | mpiicc example.c -o example
C++ | icpc | mpiicpc | mpiicpc example.cpp -o example
FORTRAN | ifort | mpiifort | mpiifort example.f90 -o example
Compiling with the GNU compiler
The GNU C compiler is gcc, and Intel MPI can be used with it through the wrapper mpicc, which Intel advertises as the native way to use Intel MPI with gcc specifically. Both gcc and Intel MPI are loaded by default by the module load default-environment command in the .bashrc file in the user’s home directory that is automatically executed upon login to taki. For C++ and FORTRAN, the mpiicpc and mpiifort wrappers should be used. The GNU compilers for C++ and Fortran are g++ and gfortran; to use them with the Intel MPI libraries, invoke the wrappers as mpiicpc -cxx=g++ and mpiifort -fc=gfortran, respectively.
Language | Serial | Parallel with Intel MPI | Example
---------|--------|-------------------------|--------
C | gcc | mpicc | mpicc example.c -o example
C++ | g++ | mpiicpc | mpiicpc -cxx=g++ example.cpp -o example
FORTRAN | gfortran | mpiifort | mpiifort -fc=gfortran example.f90 -o example
Compiling with the clang compiler
Intel MPI can be used with the clang compiler through the wrapper mpiicc, which Intel advertises as the native way to use Intel MPI with alternate compilers. Since the Intel MPI implementation is loaded by default, it is available after the module load default-environment command in the .bashrc file in the user’s home directory that is automatically executed upon login to taki. However, clang itself is not loaded by default and must be loaded with module load Clang/7.0.0 at the command line prior to compiling. Then, to compile with clang through mpiicc, the flag -cc=clang must be supplied, as shown below.
Language | Serial | Parallel with Intel MPI | Example
---------|--------|-------------------------|--------
C | clang | mpiicc | mpiicc -cc=clang example.c -o example
C++ | clang++ | mpiicpc | mpiicpc -cxx=clang++ example.cpp -o example
FORTRAN | N/A | N/A | N/A
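For example, to compile the parallel “Hello, world!” program from earlier in this tutorial with clang, the complete sequence of commands would look like this:

[gobbert@taki-usr1 Hello_Parallel]$ module load Clang/7.0.0
[gobbert@taki-usr1 Hello_Parallel]$ mpiicc -cc=clang hello_parallel.c -o hello_parallel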