Date post: 22-Dec-2015

Bulk Synchronous Parallel (BSP) Model

• Illustration of a BSP superstep (figure)

RMA communication falls into two categories:

1. Active target communication, where data is moved from the memory of one process to the memory of another, and both processes are explicitly involved in the communication. This pattern is similar to message passing, except that all the data-transfer arguments are provided by one process; the second process participates only in the synchronization.

2. Passive target communication, where data is moved from the memory of one process to the memory of another, and only the origin process is explicitly involved in the transfer. Thus, two origin processes may communicate by accessing the same location in a target window. The process that owns the target window may be distinct from the two communicating processes, in which case it does not participate explicitly in the communication. This paradigm is closest to a shared-memory model, where shared data can be accessed by all processes irrespective of location.

Code region terminology:

• Access epoch (accessor code region)

• Exposure epoch (provider code region)

MPI provides three synchronization mechanisms:

1. MPI_WIN_FENCE

• This collective synchronization call supports a simple pattern that is often used in parallel computations: a loosely synchronous model in which global computation phases alternate with global communication phases.

• The mechanism is most useful for loosely synchronous algorithms where the graph of communicating processes changes very frequently, or where each process communicates with many others.

• This call is used for active target communication.

• An access epoch at an origin process, or an exposure epoch at a target process, is started and completed by calls to MPI_WIN_FENCE.

• During such an access epoch a process can access the windows at all processes in the group of win, and during such an exposure epoch the local window can be accessed by all processes in that group.

• It can be used to implement communication in the BSP model.

2. MPI_WIN_START, MPI_WIN_COMPLETE, MPI_WIN_POST, MPI_WIN_WAIT

• These calls can restrict synchronization to the minimum: only pairs of communicating processes synchronize, and they do so only when synchronization is needed to correctly order RMA accesses to a window with respect to local accesses to that same window.

• This mechanism may be more efficient when each process communicates with few (logical) neighbors, and the communication graph is fixed or changes infrequently.

• These calls are used for active target communication.

• An access epoch is started at the origin process by a call to MPI_WIN_START and terminated by a call to MPI_WIN_COMPLETE. The start call has a group argument that specifies the group of target processes for that epoch.

• An exposure epoch is started at the target process by a call to MPI_WIN_POST and completed by a call to MPI_WIN_WAIT. The post call has a group argument that specifies the set of origin processes for that epoch.

3. Shared and exclusive locks, provided by the two functions MPI_WIN_LOCK and MPI_WIN_UNLOCK.

• Lock synchronization is useful for MPI applications that emulate a shared memory model via MPI calls; e.g., in a ``billboard'' model, where processes can, at random times, access or update different parts of the billboard.

• These two calls provide passive target communication.

• An access epoch is started by a call to MPI_WIN_LOCK and terminated by a call to MPI_WIN_UNLOCK.

• Only one target window can be accessed during that epoch with win.
