
HPF: Programming Linux Clusters the Easy Way

All about High Performance FORTRAN and how it can be used to write code that runs efficiently on parallel computers.

by L. M. Delves

Many programmers use Linux as their operating system of choice when developing applications software. Increasingly, they also run the application on the same PC once it has been developed. But what do they do if the application becomes too large or runs too long?

The obvious answer is to run it on a cluster of PCs, possibly linked by Ethernet. To do this, you need to write code for each processor: in principle, different code for each, but usually there is common code for all except a selected one or two. The code must pass data from processor to processor as required. This style of coding uses a ``message-passing paradigm'' and is in common use on multiprocessor mainframes and workstation clusters. Standard message-passing libraries, PVM (Parallel Virtual Machine) and MPI (Message Passing Interface), exist to ensure portability.

Writing message-passing code is much harder than writing serial code, and it is easy to show why. Most scientific and engineering programming makes heavy use of one and two dimensional arrays, so we add two vectors this way:

DO i = 1, 10000
  a(i) = b(i) + c(i)
END DO
This is clearly a parallel operation: all the elements of b and c can be added in parallel. For this to occur on a PC or workstation cluster, or indeed any distributed-memory parallel system, the elements of b, c and, most likely, a must be distributed over the available processors. The programmer must arrange this in his code by dealing with bits of each vector on each processor and keeping track of the bits.
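
To make the contrast concrete, here is a minimal sketch of what the programmer would otherwise arrange by hand with MPI; the program name and the assumption that the vector length divides evenly among the processes are purely illustrative:

      PROGRAM vecadd
!     A hedged message-passing sketch: each process owns and works on
!     its own slice of the vectors.
      INCLUDE 'mpif.h'
      INTEGER, PARAMETER :: n = 10000
      INTEGER :: ierr, rank, nprocs, nlocal
      REAL, ALLOCATABLE :: a(:), b(:), c(:)

      CALL MPI_INIT(ierr)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

!     Assume n divides evenly by nprocs; otherwise the bookkeeping
!     gets messier still.
      nlocal = n / nprocs
      ALLOCATE (a(nlocal), b(nlocal), c(nlocal))

!     ... each process uses its rank to fill in its own slices of b and c ...

      a = b + c          ! the addition itself is purely local, but any
                         ! operation that mixes slices needs explicit
                         ! MPI_SEND/MPI_RECV calls and index arithmetic

      CALL MPI_FINALIZE(ierr)
      END PROGRAM vecadd

Even in this trivial case, all of the distribution and bookkeeping has become the programmer's problem.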

A better solution would be for the compiler to do the arranging for you. Then your code could still refer to the complete vector or matrix object and thereby be easier to write, understand and maintain.

The advantages of compiler-aided parallel programming are widely accepted. In the scientific/engineering community, these advantages have led to the development of a standard parallel language called HPF, High Performance FORTRAN, and of HPF compilers for a widening range of architectures.

For good reasons, HPF is deliberately based on the latest, greatly upgraded, version of FORTRAN: FORTRAN95. Beginning with FORTRAN90, FORTRAN contains a rich variety of array facilities so that the loop above becomes the one line:

a = b + c
which is easier for a compiler, as well as a human, to understand.
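
FORTRAN90's array facilities go well beyond whole-array arithmetic. A few further examples, assuming the same vectors a, b and c as above plus a real scalar total, give the flavour:

      a(1:5000) = 2.0*b(5001:10000)   ! operate on array sections
      WHERE (c > 0.0) a = SQRT(c)     ! masked assignment
      total = SUM(a)                  ! array reduction intrinsic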

Compilers do have difficulty deciding how best to distribute arrays across the processors; thus, HPF gives the programmer a way of providing help. Given that help, the compiler can fully automate the production of a data parallel program from a single FORTRAN77/90/95 source.

HPF codes are portable across SIMD, MIMD shared-memory and MIMD distributed-memory architectures. In particular, you can use HPF on a Linux PC cluster.

The Flavour of the Language

Here is a concocted example to demonstrate the facilities provided by HPF:

      REAL a(1000), b(1000), c(1000), &
           x(500), y(0:501)
!HPF$ PROCESSORS procs(10)
!HPF$ DISTRIBUTE (BLOCK) ONTO procs :: a, b
!HPF$ DISTRIBUTE (CYCLIC) ONTO procs :: c
!HPF$ ALIGN x(i) WITH y(i+1)

      ...
      a(:) = b(:)            ! Statement 1
      x(1:500) = y(2:501)    ! Statement 2
      a(:) = c(:)            ! Statement 3
      ...
The lines starting with !HPF$ are HPF directives; the rest are standard FORTRAN90 and carry out three array operations. What are the directives doing?

The PROCESSORS directive specifies a linear arrangement of 10 virtual processors, which are mapped to the available physical processors in a manner not specified by the language. You might expect to need at least ten physical processors, but most HPF compilers will happily run the code on anything from one to ten physical processors. Grids of processors in any number of dimensions up to seven can be defined. They should match the problem being solved in some way--perhaps by helping to minimize communication costs. The PROCESSORS directive is optional; the number of processors to use can be specified at run time.
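
As an illustration (the names grid and m are not part of the example above), a two-dimensional arrangement and a matching distribution might read:

      REAL m(1000,1000)
!HPF$ PROCESSORS grid(5,2)                  ! a 5x2 grid of 10 virtual processors
!HPF$ DISTRIBUTE m(BLOCK,CYCLIC) ONTO grid  ! blocks of 200 rows; columns dealt
                                            ! round-robin over 2 processor columns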

The DISTRIBUTE directives tell (actually, recommend to) the compiler how to distribute the elements of the arrays. Arrays a and b are distributed with blocks of 100 contiguous elements per processor, while c is distributed so that, for example, c(1), c(11), c(21), ... are on processor procs(1), and so on.

Note that the distribution of the arrays x and y is not specified explicitly, while the way they are aligned to each other is specified. The ALIGN statement causes x(i) and y(i+1) to be stored on the same processor for all values of i, regardless of the actual distribution.

How the HPF Directives Work

In Statement 1, the identical distribution of a and b ensures that for all i, a(i) and b(i) are on the same processor; thus, the compiler does not generate any message passing.

In Statement 2, there is again no need for message passing. If the ALIGN statement had lined up x(i) with y(i) rather than y(i+1), communication would have been needed for some values of i.

Statement 3 looks very much like Statement 1, but the communication requirements are very different because of the different distributions of a and c. The array elements a(i) and c(i) are on the same processor for only 10% of the possible values of i; hence, for nearly all of the elements, communication of data between processors is needed. This is an unwise choice of distribution for c, if indeed this statement represents the bulk of the work.
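
If Statement 3 did represent the bulk of the work, one obvious remedy (a sketch only) would be to drop the CYCLIC distribution of c and instead co-locate it with a:

!HPF$ ALIGN c(i) WITH a(i)    ! c(i) now lives with a(i), so Statement 3
                              ! needs no communication at all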

A good choice of distribution and alignment can greatly help efficiency, and that is the point of having the directives. It is much easier to write FORTRAN90 code and embellish it with HPF directives than to write the equivalent message-passing code.

A Second Example

In practice, the steps taken in writing an HPF program are:

  1. Write FORTRAN90 code. Your existing FORTRAN77 code will do in a pinch, but you will get better efficiency by cleaning it up using the newer FORTRAN high-level constructs; tools exist to help this conversion.
  2. Decide how to configure the processors.
  3. Declare one or more templates to act as guides for distributing arrays.
  4. Decide how to distribute and align the arrays onto the template(s).

This process is illustrated in the code shown in Listing 1, which represents a subroutine to solve a set of linear equations. The subroutine is in standard FORTRAN90 and will run happily through any FORTRAN90 compiler, which will treat the HPF directives as comments. The code makes good use of the FORTRAN90 array facilities and has been parallelized by adding just four HPF directives. The resulting HPF code runs well on a Linux PC cluster, provided the size of the problem being solved is large enough to warrant the use of parallelism.
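
As a flavour of steps 2 to 4, a minimal sketch, with illustrative names grid and t that are not taken from Listing 1, might read:

      REAL a(1000,1000), b(1000), x(1000)
!HPF$ PROCESSORS grid(4)              ! step 2: configure the processors
!HPF$ TEMPLATE t(1000)                ! step 3: a template to guide distribution
!HPF$ DISTRIBUTE t(BLOCK) ONTO grid
!HPF$ ALIGN a(*,j) WITH t(j)          ! step 4: align the arrays with the template
!HPF$ ALIGN b(j) WITH t(j)
!HPF$ ALIGN x(j) WITH t(j)

Distributing a template, rather than each array separately, keeps related arrays together and means a later change of distribution needs editing in only one place.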

Does It Really Work?

HPF makes life easy for the programmer, by leaving nearly everything to the compiler. So, can the compilers cope? Can you really get parallel efficiency by using HPF? And, can you get useful speedups on networked PCs with relatively high latency communications?

Of course, no compiler can find parallelism where none exists; you need to provide the parallelism in the first place. Given that, the answer is yes: current HPF compilers are surprisingly efficient. On a PC cluster connected by Ethernet, the message-passing latency using PVM or MPI is typically around 0.6ms; this translates to ``use fairly coarse-grain parallelism if you can, and don't expect to use too many PCs.''

Table 1 shows some timings to illustrate what can be achieved. They were taken on a cluster of four Linux P100 PCs with 100Mbps Ethernet. ``Serial'' times are those obtained with the N. A. Software (NASL) FORTRANPlus F90 compiler, release 1.3.57. These times are absent where the code uses HPF extensions (FORALL, EXTRINSIC(HPF_SERIAL)) not supported in FORTRAN90; for some of these, we timed equivalent FORTRAN90 versions instead. HPF times used the NASL HPFPlus compiler, release 2.0. Optimization was set ``on'' for both FORTRAN and HPF. Times are in seconds.

The overheads intrinsic to using HPF rather than FORTRAN are shown by comparing the Serial and P = 1 times. These overheads are quite low--often negligible and, for Gauss, even negative (we see this on other platforms too). The gain in using HPF is shown by comparing the Serial and P = 4 times. Speedups achieved relative to the serial times range from 2.1 to 4.5.

Mike Delves (delves@nasoftware.co.uk) spent twenty-five years at the University of Liverpool as Professor of Computational Mathematics and Director of the Institute for Advanced Scientific Computation. His research interests included numerical methods and their implementation in high-level languages (successively Algol68, Ada, FORTRAN90 and HPF--parallelism crept increasingly in along the way). He started N.A. Software in 1978 as a hobby and is now full-time chairman; the company currently has 23 employees. Linux represents its biggest single market for FORTRAN and HPF compilers.
