Hands-On Session Advanced Messaging #1:
Performance Profiling and Analysis

Objective: To gain experience in the profiling and performance analysis of parallel message-passing programs.
The code for this exercise can be found here.
Instructions on how to log into the remote machine and how to download the source code to your working directory (using wget) can be found here.

For this session, we will use gcc rather than the default compiler on Raijin (which is the Intel C compiler icc). You will need to swap the compiler module as follows:

module unload intel-cc; module load gcc

The program heat.c sets up and solves a 2-D heat diffusion problem.

This models a metal plate of size Rx by Ry for which the temperature at the edge is held at some constant value Tedge. The aim is to determine the temperature in the middle of the plate. This is done iteratively by dividing the domain of the metal plate into a grid of points. The new temperature at each grid point is calculated as the average of the current temperatures at the four adjacent grid points. Iteration continues until the maximum change in temperature for any grid point is less than some threshold. A diagrammatic illustration of the problem is shown below:

[Figure: Jacobi iteration for the heat problem. A grid of Rx by Ry points with the temperature of the edge points fixed at Tedge; the new temperature at each interior grid point is the average of the current temperatures at its four neighbouring grid points.]
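
In code, the update rule amounts to the following serial sweep (a minimal sketch only: the array names told and tnew match those used in heat.c, but everything else here is illustrative):

#include <math.h>

/* One Jacobi sweep (sketch only): every interior point becomes the average of
 * its four neighbours; the return value is the largest change seen, which the
 * caller compares against the convergence threshold. */
double jacobi_sweep(const double *told, double *tnew, int Nx, int Ny)
{
    double maxdiff = 0.0;
    for (int j = 1; j < Ny - 1; j++) {
        for (int i = 1; i < Nx - 1; i++) {
            tnew[j*Nx + i] = 0.25 * (told[j*Nx + i + 1] + told[j*Nx + i - 1] +
                                     told[(j+1)*Nx + i] + told[(j-1)*Nx + i]);
            double diff = fabs(tnew[j*Nx + i] - told[j*Nx + i]);
            if (diff > maxdiff) maxdiff = diff;
        }
    }
    return maxdiff;
}

The sweep is repeated, with tnew copied back into told, until the maximum change drops below converge or Max_iter iterations have been performed.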

The code requests as input:

  Nx         the number of data points along the x axis
  Ny         the number of data points along the y axis
  Tedge      the fixed temperature at the boundary
  Max_iter   the maximum number of iterations permitted
  converge   the convergence criterion

Except at the edges, each grid point is initially set to zero. For problem sizes with Nx and Ny less than 20 the resulting temperatures are printed. This is really just for interest.

The code has been parallelized using MPI.
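
The parallelization distributes contiguous blocks of grid rows across the MPI processes, with each process updating its own block and exchanging boundary rows with its neighbours (this is visible in the listing further below). As a reading aid, here is a minimal sketch of the presumed row-block decomposition; the variable names rank, size, chk, jst and jfin match the listing, but the definition of chk is not shown there, so the formula below is an assumption:

/* Sketch only: assumed row-block decomposition used by heat.c.
 * chk is taken to be the number of interior rows per process (a guess,
 * rounded up so that the last block is clipped at the boundary); jst and
 * jfin bracket the rows updated by this rank, as in the listing below. */
static void row_block(int rank, int size, int Ny, int *jst, int *jfin)
{
    int chk = (Ny - 2 + size - 1) / size;          /* ceil((Ny-2)/size), assumed */
    *jst  = rank * chk + 1;                        /* first row owned by rank    */
    *jfin = (*jst + chk > Ny - 1) ? Ny - 1         /* one past the last row,     */
                                  : *jst + chk;    /* clipped at the boundary    */
}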

Run the code mpirun -np 1 ./heat with input:

10 10
100.0
100
0.1

just to make sure it works.

Profiling General Jobs

To analyze the computational breakdown, we can use either hardware or software measurements.

We can obtain a rough estimate of the time spent at the basic block level using a coverage tool such as gcov. To do this, recompile the code with additional compiler options, run it, and post-process the output files as follows:

$ make heat_cov
$ mpirun -np 1 ./heat_cov < heat.input
$ gcov heat.c

This will give you a file heat.c.gcov. If you look in here you will see output of the form:

       -:   93:        // update local values
       10:   94:        jst = rank*chk+1;
       10:   95:        jfin = jst+chk > Ny-1 ? Ny-1 : jst+chk;
    49990:   96:        for (j = jst; j < jfin; j++) {
249850020:   97:            for (i = 1; i < Nx-1; i++) {
249800040:   98:                tnew[j*Nx+i]=0.25*(told[j*Nx+i+1]+told[j*Nx+i-1]+told[(j+1)*Nx+i]+told[(j-1)*Nx+i]);
        -:   99:            }
        -:  100:        }

where the large numbers on the left indicate the number of times each line of code was executed (##### in the count field marks an executable line that was never executed, and - marks a line with no executable code).
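
For reference, the heat_cov target presumably compiles heat.c with GCC's coverage instrumentation enabled; built by hand it would look something like the line below (the exact flags and libraries used by the supplied Makefile are an assumption, and --coverage is an equivalent shorthand for the two profiling flags):

$ mpicc -fprofile-arcs -ftest-coverage -o heat_cov heat.c -lm

Running the instrumented binary writes the .gcda count files that gcov then post-processes.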

Profiling MPI Jobs

There are a variety of tools available for analyzing MPI programs; see the NCI software developer guide under Debuggers and Profilers, and also the NCI MPI Performance Analysis Tools wiki page.

  1. Run the ipm profiler on heat with the 5000 by 5000 grid size and 8 processes, i.e. mpirun -np 8 ./heat < heat.input, and inspect both the text and graphical output. To do this you need to load the ipm module, run the code, and then run ipm_view ipmfile, where ipmfile is the name of the IPM-generated file (a long name ending in .ipm). You will also need to have logged in to Raijin with X forwarding, i.e. ssh -X MyID@raijin, so that a new window can be displayed.
  2. Now run the mpiP profiler. To do this, first type make clean, then module unload ipm, then module load mpiP. Now type make and run the program. It should produce a file called something like heat.XXX....mpiP, where XXX indicates the number of MPI processes used. Inspect this file. What information is provided? How does this profiler compare with ipm?
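
Spelled out as a command sequence, the two profiling runs above look roughly like the following (a sketch only: module, target and output file names on Raijin may differ slightly from what is shown here):

  # ipm (item 1)
  $ module load ipm
  $ mpirun -np 8 ./heat < heat.input
  $ ipm_view <ipmfile>              # <ipmfile> is the long generated name ending in .ipm

  # mpiP (item 2)
  $ make clean
  $ module unload ipm
  $ module load mpiP
  $ make
  $ mpirun -np 8 ./heat < heat.input
  $ less heat.8.*.mpiP              # the report produced by mpiP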

The iterative part of heat.c is given below:

   
do {
    iter++;
    // update local values
    jst = rank*chk+1;
    jfin = jst+chk > Ny-1 ? Ny-1 : jst+chk;
    for (j = jst; j < jfin; j++) {
        for (i = 1; i < Nx-1; i++) {
            tnew[j*Nx+i] = 0.25*(told[j*Nx+i+1]+told[j*Nx+i-1]+told[(j+1)*Nx+i]+told[(j-1)*Nx+i]);
        }
    }
    // Send to rank+1
    if (rank+1 < size) {
        jst = rank*chk+chk;
        MPI_Send(&tnew[jst*Nx],Nx, MPI_DOUBLE, rank+1, 
                            2, MPI_COMM_WORLD);
    }
    if (rank-1 >= 0) {
        jst = (rank-1)*chk+chk;
        MPI_Recv(&tnew[jst*Nx],Nx, MPI_DOUBLE,rank-1, 
                            2, MPI_COMM_WORLD, &status);
    }
    // Send to rank-1
    if (rank-1 >= 0) {
        jst = rank*chk+1;
        MPI_Send(&tnew[jst*Nx],Nx, MPI_DOUBLE, rank-1, 
                            1, MPI_COMM_WORLD);
    }
    if (rank+1 < size) {
        jst = (rank+1)*chk+1;
        MPI_Recv(&tnew[jst*Nx],Nx, MPI_DOUBLE, rank+1,
                            1, MPI_COMM_WORLD, &status);
    }
    // fix boundaries in tnew
    j=0;    for (i = 0; i < Nx; i++) tnew[j*Nx+i] = Tedge;
    j=Ny-1; for (i = 0; i < Nx; i++) tnew[j*Nx+i] = Tedge;
    i=0;    for (j = 0; j < Ny; j++) tnew[j*Nx+i] = Tedge;
    i=Nx-1; for (j = 0; j < Ny; j++) tnew[j*Nx+i] = Tedge;

    jst = rank*chk+1;
    lmxdiff = fabs((double) (told[jst*Nx+1] - tnew[jst*Nx+1]));
    jfin = jst+chk > Ny-1? Ny-1: jst+chk;
    for (j = jst; j < jfin; j++) {
        for (i = 1; i < Nx-1; i++) {
            tdiff = fabs( (double) (told[j*Nx+i] - tnew[j*Nx+i]));
            lmxdiff = (lmxdiff < tdiff) ? tdiff : lmxdiff;
        }
    }
    for (i = 0; i < Nx*Ny; i++) told[i] = tnew[i];

    //find global maximum, all processes holding the result
    if (rank == 0) {
      ...
    } else {
      ...
    }
    if (!rank) printf(" iteration %d convergence %lf\n",iter,mxdiff);
} while (mxdiff > converge && iter < Max_iter);

The work performed by any parallel program can be divided into the following categories: [P]arallel work, which is divided among the processes; [O]verhead, such as communication, which arises only because the program runs in parallel; and [R]eplicated work, which every process performs in full.

  3. Clearly annotate the iterative part of heat.c (i.e. the code shown above) to indicate whether a given line/section of code is [P]arallel, [O]verhead or [R]eplicated work.

Use coverage analysis (gcov) with 1, 2 and 4 processes to verify your conclusions from above. Try both the small test problem and the LARGE problem.

When running coverage analysis for multiple MPI processes, the counts you obtain are summed over all processes. Look at what happens to these counts as you increase the process count: you should expect the count for parallel work to stay (roughly) constant, while the count associated with replicated work doubles each time you double the number of processes. As a corollary, the percentage of the total execution time spent on the replicated lines will increase.
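
To make the expectation concrete using the loop bounds in the listing above (these are derived counts, not measurements): for an Nx = Ny = 5000 grid, the interior update statement tnew[j*Nx+i] = 0.25*(...) is executed about (Nx-2)*(Ny-2), i.e. roughly 2.5 x 10^7, times per iteration in total, shared across the processes, so its gcov count summed over all processes should stay roughly constant as the process count grows. The copy loop for (i = 0; i < Nx*Ny; i++) told[i] = tnew[i]; by contrast, is executed Nx*Ny = 2.5 x 10^7 times per iteration on every process, so its summed count should roughly double each time the number of processes doubles.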

  4. Cut out the relevant portions of the coverage profiles for 1, 2 and 4 MPI processes for the LARGE problem. From these data and your answer to Q3, demonstrate that the behaviour described in the paragraph above is indeed observed. What are the performance bottlenecks in this code?

HPCToolkit is another more elaborate profiler that is available at NCI. If you have time, you can try profiling the heat program running on 8 cores for the large problem size.
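
If you do, the overall HPCToolkit workflow looks something like the following (a sketch only: the module name, the need for a separate hpcstruct step, and the exact names of the measurement and database directories may differ, so check module avail hpctoolkit and the HPCToolkit documentation):

$ module load hpctoolkit
$ mpirun -np 8 hpcrun ./heat < heat.input                   # collect sample-based measurements
$ hpcstruct ./heat                                          # recover program structure from the binary
$ hpcprof -S heat.hpcstruct hpctoolkit-heat-measurements*   # correlate measurements with source
$ hpcviewer hpctoolkit-heat-database*                       # browse the resulting profile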