
Chapter Notes

Kaufmann and Smarr [169] discuss the impact of high-performance computing on science. Levin [189] and several U.S. government reports [232,233,215] describe the so-called Grand Challenge problems that have motivated recent initiatives in high-performance computing. The computational requirements in Table 1.1 are derived from the project plan for the CHAMMP climate modeling program, which has adapted a range of climate models for execution on parallel computers [287]. DeWitt and Gray [79] discuss developments in parallel databases. Lawson [186] discusses industrial real-time applications of parallel computing. Worlton [299], Meindl [201], and Hennessy and Jouppi [147] discuss trends in processor design and in sequential and parallel computer architecture. Ullman [286] provides a succinct explanation of the complexity results.

Goldstine and von Neumann [121] provide an early exposition of the von Neumann computer. Bailey [22] explains how this model derived from the automation of algorithms previously performed by "human computers." He argues that highly parallel computers are stimulating not only new algorithmic approaches, but also new ways of thinking about problems. Many researchers have proposed abstract machine models for parallel computing [67,99,288]. Snyder [268] explains why the multicomputer is a good choice of machine model. Early parallel computers with a multicomputer-like architecture include the Ultracomputer [252] and the Cosmic Cube [254]. Athas and Seitz [18] and Seitz [255] discuss developments in this area. Almasi and Gottlieb [11] and Hwang [156] provide good introductions to parallel computer architectures and interconnection networks. Hillis [150] describes SIMD computers. Fortune and Wyllie [99] and JáJá [157] discuss the PRAM model. Comer [63] discusses LANs and WANs. Kahn [162] describes the ARPANET, an early WAN. The chapter notes in Chapter 3 provide additional references on parallel computer architecture.

The basic abstractions used to describe parallel algorithms have been developed in the course of many years of research in operating systems, distributed computing, and parallel computation. The use of channels was first explored by Hoare in Communicating Sequential Processes (CSP) [154] and is fundamental to the occam programming language [231,280]. However, in CSP the task and channel structure is static, and both sender and receiver block until a communication has completed. This approach has proven too restrictive for general-purpose parallel programming. The task/channel model introduced in this chapter is described by Chandy and Foster [102], who also discuss the conditions under which the model can guarantee deterministic execution [51].
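To make the distinction concrete, the following sketch (not from the book) contrasts the two send semantics, using Go channels purely as a stand-in: an unbuffered channel gives the CSP-style rendezvous, in which the sender blocks until the receiver accepts the message, while a buffered channel lets the sender proceed as soon as the message is queued, closer to the asynchronous send and blocking receive assumed by the task/channel model.

package main

import (
    "fmt"
    "time"
)

func main() {
    // CSP-style rendezvous: the send below cannot complete until the
    // goroutine performs the matching receive.
    sync := make(chan int)
    done := make(chan struct{})
    go func() {
        time.Sleep(100 * time.Millisecond) // receiver is not ready yet
        fmt.Println("received (rendezvous):", <-sync)
        close(done)
    }()
    sync <- 42 // blocks roughly 100 ms, until the receive occurs
    <-done
    fmt.Println("synchronous send finished only after the receive")

    // Asynchronous send: with buffering, the sender deposits the message
    // and proceeds; only a receiver on an empty channel blocks.
    async := make(chan int, 1)
    async <- 42 // returns immediately
    fmt.Println("asynchronous send finished without waiting for a receiver")
    fmt.Println("received (buffered):", <-async)
}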

Seitz [254] and Gropp, Lusk, and Skjellum [126] describe the message-passing model (see also the chapter notes in Chapter 8). Ben-Ari [32] and Karp and Babb [165] discuss shared-memory programming. Hillis and Steele [151] and Hatcher and Quinn [136] describe data-parallel programming; the chapter notes in Chapter 7 provide additional references. Other approaches that have generated considerable interest include Actors [5], concurrent logic programming [107], functional programming [146], Linda [48], and UNITY [54]. Bal et al. [23] provide a useful survey of some of these approaches. Pancake and Bergmark [218] emphasize the importance of deterministic execution in parallel computing.

Here is a Web Tour providing access to additional information on parallel applications, parallel computer architecture, and parallel programming models.



© Copyright 1995 by Ian Foster