Next: 9 Performance Tools Up: 8 Message Passing Interface Previous: Exercises

Chapter Notes

Message-passing functions were incorporated in specialized libraries developed for early distributed-memory computers such as the Cosmic Cube [254], iPSC [227], and nCUBE [211]. Subsequent developments emphasized portability across different computers and explored the functionality required in message-passing systems. Systems such as Express [219], p4 [44,194], PICL [118], PARMACS [143,144], and PVM [275] all run on a variety of homogeneous and heterogeneous systems. Each focused on a different set of issues, with the commercially supported Express and PARMACS systems providing the most extensive functionality, p4 integrating shared-memory support, PICL incorporating instrumentation, and PVM permitting dynamic process creation. A special issue of Parallel Computing includes articles on many of these systems [196].

An unfortunate consequence of this exploration was that although the various vendor-supplied and portable systems provided similar functionality, syntactic differences and numerous minor incompatibilities made it difficult to port applications from one computer to another. This situation was resolved in 1993 with the formation of the Message Passing Interface Forum, a consortium of industrial, academic, and governmental organizations interested in standardization [203]. This group produced the MPI specification in early 1994. MPI incorporates ideas developed previously in a range of systems, notably p4, Express, PICL, and PARMACS. An important innovation is the use of communicators to support modular design. This feature builds on ideas previously explored in Zipcode [266], CHIMP [90,91], and research systems at IBM Yorktown [24,25].

The presentation of MPI provided in this chapter is intended to be self-contained. Nevertheless, space constraints have prevented inclusion of its more complex features. The MPI standard provides a detailed technical description [202]. Gropp, Lusk, and Skjellum [126] provide an excellent, more accessible tutorial text that includes not only a description of MPI but also material on the development of SPMD libraries and on MPI implementation.

A Web Tour provides access to additional information on programming in MPI, including public domain implementations, a tutorial, and example programs.


© Copyright 1995 by Ian Foster