The attractiveness of the message-passing paradigm at least partially stems from its wide portability. Programs expressed this way may run on distributed-memory multicomputers, shared-memory multiprocessors, networks of workstations, and combinations of all of these. The paradigm will not be made obsolete by architectures combining the shared- and distributed-memory views, or by increases in network speeds. Thus, it should be both possible and useful to implement this standard on a great variety of machines, including those ``machines'' consisting of collections of other machines, parallel or not, connected by a communication network.
The interface is suitable for use by fully general Multiple
Instruction, Multiple Data
(MIMD) programs, or Multiple Program, Multiple Data (MPMD)
programs, in which each process follows a distinct execution path through
the same code, or even executes a different program.
It is also suitable for
those written in the more restricted style of Single Program,
Multiple Data (SPMD), where all processes follow the same execution
path through the same program.
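As an illustrative sketch (not taken from the standard itself), the
following C program shows the SPMD style: every process executes the same
program but branches on its rank, so that process 0 follows a different
execution path from the other processes. Only basic point-to-point and
environment routines are assumed.
\begin{verbatim}
#include <mpi.h>
#include <stdio.h>

/* SPMD: every process runs this same program, but the branch on
   rank gives each process its own execution path. */
int main(int argc, char *argv[])
{
    int rank, size, i, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Process 0 receives one integer from every other process. */
        for (i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &status);
            printf("received %d from process %d\n", value, i);
        }
    } else {
        /* All other processes send their rank to process 0. */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
\end{verbatim}
How the processes are started is outside the scope of the interface; an
implementation-dependent launcher (often called mpirun) typically starts
the desired number of copies of such a program.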
Although no explicit
support for threads is provided,
the interface has been designed so as not to
prejudice their use.
This version of MPI provides no support for dynamic spawning of
tasks; such support is expected in future versions of MPI; see
Section .
MPI provides many features intended to improve performance on
scalable parallel computers with
specialized interprocessor communication
hardware. Thus, we expect that native, high-performance
implementations of MPI will be provided on such machines. At the
same time, implementations of MPI on top of standard Unix
interprocessor communication protocols will provide portability to
workstation clusters and heterogeneous networks of workstations.
Several proprietary, native implementations of MPI, as well as
public-domain, portable implementations, are now available. See
Section for more information about MPI implementations.