Page 1: Using MPI - the Fundamentals

Using MPI - the Fundamentals

University of North Carolina - Chapel HillITS - Research Computing

Instructor: Mark Reed Email:

[email protected]

Page 2: Using MPI - the Fundamentals


What is MPI?

•Technically, it is the definition of a standard, portable, message-passing library: the Message Passing Interface. The standard defines both the syntax and the semantics of the library.

•Colloquially, it usually refers to a specific implementation of this standard, which includes the process and communication control layers in addition to the MPI library itself.

Page 3: Using MPI - the Fundamentals

MP implementations consist of:

a) Configuration Control - the ability to determine the machine’s type, accessibility, file control, etc.

b) Message Passing Library - syntax and semantics of calls to move messages around the system

c) Process Control - the ability to spawn processes and communicate with these processes

Page 4: Using MPI - the Fundamentals

MPI Forum

First message-passing interface standard.

Sixty people from forty different organizations began work in 1992.

Users and vendors represented, from the US and Europe.

Two-year process of proposals, meetings and review.

Message Passing Interface document (MPI-1) produced in May 1994.

Page 5: Using MPI - the Fundamentals
Page 6: Using MPI - the Fundamentals

Goals and Scope of MPI

MPI's prime goals are:

•To provide source-code portability.

• To allow efficient implementation.

It also offers:

•A great deal of functionality.

•Support for heterogeneous parallel architectures.

Page 7: Using MPI - the Fundamentals

What’s included in MPI?

point-to-point (PTP) communication

collective operations

process groups

communication domains

process topologies

environmental management and inquiry

profiling interface

bindings for Fortran 77 and C

Page 8: Using MPI - the Fundamentals

What is not included in MPI?

support for task management

I/O functions

explicit shared memory operations

explicit support for threads

program construction tools

debugging facilities

Page 9: Using MPI - the Fundamentals


Seven function version of MPI:

MPI_Init

MPI_Comm_size

MPI_Comm_rank

MPI_Send

MPI_Recv

MPI_Barrier

MPI_Finalize
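
To see how these seven calls fit together, here is a minimal sketch of a complete C program that uses all of them (this example is not part of the original slides):

/* minimal sketch: the seven-function version of MPI in one program */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   int npes, mype, msg = 0;
   MPI_Status status;

   MPI_Init(&argc, &argv);                    /* start MPI */
   MPI_Comm_size(MPI_COMM_WORLD, &npes);      /* how many processes? */
   MPI_Comm_rank(MPI_COMM_WORLD, &mype);      /* which one am I? */

   if (mype == 0 && npes > 1)
      MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);            /* 0 sends to 1 */
   else if (mype == 1)
      MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);   /* 1 receives from 0 */

   MPI_Barrier(MPI_COMM_WORLD);               /* everyone waits here */
   printf("rank %d of %d done\n", mype, npes);

   MPI_Finalize();                            /* clean up MPI */
   return 0;
}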

Page 10: Using MPI - the Fundamentals


MPI is Little? No, MPI is Big!

There are many additional functions; at least 133 at (my :) last count.

These functions add flexibility, robustness, efficiency, modularity, or convenience.

You should definitely check them out! MPI is big and little

Page 11: Using MPI - the Fundamentals


MPI Communicators

Every message passing system must have a means of specifying where the message is to go -> MPI uses communicators.

Page 12: Using MPI - the Fundamentals


MPI Communicators

A communicator has two distinguishing characteristics:

•Group - a uniquely named collection of processes; each process in the group has a rank, a unique integer id within the group

•Context - the context defines the communication domain and is allocated by the system at run time
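
One way to see the group/context distinction in code is MPI_Comm_dup, which is not covered in these slides; the sketch below is just an aside. Duplicating MPI_COMM_WORLD yields a communicator with the same group of processes but a new, system-allocated context, so its messages can never be confused with traffic on the original communicator.

#include <mpi.h>

/* sketch (aside, not from the slides): same group, new context */
void use_private_comm(void)
{
   MPI_Comm my_comm;

   MPI_Comm_dup(MPI_COMM_WORLD, &my_comm);   /* same processes, separate communication domain */
   /* ... messages sent on my_comm cannot match receives posted on MPI_COMM_WORLD ... */
   MPI_Comm_free(&my_comm);
}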

Page 13: Using MPI - the Fundamentals

MPI Communicators - Rank

Rank - an integer id assigned to the process by the system

ranks are contiguous and start from 0 within each group

Thus group and rank serve to uniquely identify each process

Page 14: Using MPI - the Fundamentals


Syntax: C vs Fortran

The syntax is generally the same, except that C functions return the error code as their return value, while Fortran routines have an additional integer argument at the end that returns this error code.

All MPI objects (e.g., MPI_Datatype, MPI_Comm) are of type INTEGER in Fortran.

Disclaimer: henceforth, we will usually omit the Fortran syntax, as it should be clear from the C call what the Fortran syntax is.
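
As an illustrative sketch (not one of the original slides), here is the error-code convention in C; the equivalent Fortran call is shown in the comment:

#include <stdio.h>
#include <mpi.h>

/* sketch: in C the error code is the return value of the call;
   in Fortran the same call would be
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, mype, ierr)
   with the code returned in ierr */
int get_rank_checked(void)
{
   int mype;
   int rc = MPI_Comm_rank(MPI_COMM_WORLD, &mype);
   if (rc != MPI_SUCCESS)
      fprintf(stderr, "MPI_Comm_rank failed with code %d\n", rc);
   return mype;
}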

Page 15: Using MPI - the Fundamentals


int MPI_Init (int *argc, char ***argv)

This must be called once and only once by every MPI program. It should be the first MPI call (exception: MPI_Initialized). argc and argv are the standard arguments to a C program and can be empty if none are required.

• argc argument count

• argv pointers to arguments

• Fortran syntax: CALL MPI_Init(ierr)
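
A small sketch (not from the slides) of the MPI_Initialized exception noted above; it is the one call that may legally precede MPI_Init:

#include <mpi.h>

/* sketch: initialize MPI only if it has not been started yet */
void ensure_mpi_started(int *argc, char ***argv)
{
   int started = 0;

   MPI_Initialized(&started);   /* legal even before MPI_Init */
   if (!started)
      MPI_Init(argc, argv);     /* must still be called exactly once per program */
}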

Page 16: Using MPI - the Fundamentals


int MPI_Comm_size(MPI_Comm comm, int *size)

Generally the second function called, this returns the size of the group associated with the MPI communicator comm. The communicator MPI_COMM_WORLD is predefined to include all processes.

comm    communicator
size    upon return, set to the size of the group

Page 17: Using MPI - the Fundamentals


int MPI_Comm_rank(MPI_Comm comm, int *rank)

Generally the third function called, this returns the rank of the calling process within the group associated with the communicator comm.

comm    communicator
rank    upon return, set to the rank of the calling process within the group

Page 18: Using MPI - the Fundamentals

What happens on a send?

Ideally, the data buffer is transmitted directly to the receiver; however, what if the receiver is not ready?

3 Scenarios:

• Wait for receiver to be ready (blocking)

• Copy the message to an internal buffer (on sender, receiver, or elsewhere) and then return from the send call (nonblocking)

• Fail

Which is best?

Page 19: Using MPI - the Fundamentals

Sending a message: 2 scenarios

[Diagram: an array in CPU 0’s DRAM is transferred to an array in CPU 1’s DRAM, either directly (Direct) or via an intermediate copy (Buffered).]

Page 20: Using MPI - the Fundamentals


int MPI_Send (void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

This is the basic send call; note that it is potentially a blocking send.

buf       starting address of the send buffer
count     number of elements
datatype  MPI datatype of the data; typically it is the language datatype with ``MPI_'' prepended to it, e.g., MPI_INT and MPI_FLOAT in C or MPI_INTEGER and MPI_REAL in Fortran
dest      rank of the destination process
tag       message tag
comm      communicator

Page 21: Using MPI - the Fundamentals

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

This is the basic receive call; note that it is a blocking receive.

buf       starting address of the receive buffer
count     maximum number of elements to receive; the message actually received may contain fewer elements than this, but not more
datatype  MPI datatype of the data, see MPI_Send
source    rank of the sender, or wildcard (MPI_ANY_SOURCE)
tag       message tag of the sender, or wildcard (MPI_ANY_TAG)
comm      communicator
status    structure containing a minimum (implementation dependent) of three entries, specifying the source, tag, and error code of the received message
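
Putting the two calls together, a minimal sketch (not one of the original slides, and needing at least two processes) in which rank 0 sends 10 floats to rank 1, and rank 1 receives them using the wildcards described above:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   float buf[10] = {0};
   int mype;
   MPI_Status status;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &mype);

   if (mype == 0) {
      /* count = 10 elements of MPI_FLOAT, destination rank 1, tag 7 */
      MPI_Send(buf, 10, MPI_FLOAT, 1, 7, MPI_COMM_WORLD);
   } else if (mype == 1) {
      /* count is the capacity of buf; the wildcards accept any source and tag */
      MPI_Recv(buf, 10, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG,
               MPI_COMM_WORLD, &status);
      printf("rank 1 received a message from rank %d\n", status.MPI_SOURCE);
   }

   MPI_Finalize();
   return 0;
}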

Page 22: Using MPI - the Fundamentals


Receive status

In C, the source, tag, and error code are given by the structure fields
•MPI_SOURCE, MPI_TAG, MPI_ERROR

In Fortran, the status is an integer array of size MPI_STATUS_SIZE.

•The three predefined indices are MPI_SOURCE, MPI_TAG, MPI_ERROR.
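
A short sketch (not from the slides) of reading the status fields in C; MPI_Get_count, which is not covered in these slides, reports how many elements actually arrived:

#include <stdio.h>
#include <mpi.h>

/* sketch: receive with wildcards, then inspect the status */
void report_received(void)
{
   int data[100], nreceived;
   MPI_Status status;

   MPI_Recv(data, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
            MPI_COMM_WORLD, &status);

   MPI_Get_count(&status, MPI_INT, &nreceived);   /* actual number of elements received */
   printf("got %d ints from rank %d with tag %d\n",
          nreceived, status.MPI_SOURCE, status.MPI_TAG);
}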

Page 23: Using MPI - the Fundamentals


int MPI_Barrier(MPI_Comm comm)

Blocks until all members of the communicator comm have made this call. This synchronizes all processes in comm.

comm the name of the communicator

Page 24: Using MPI - the Fundamentals


int MPI_Finalize(void)

Cleans up all MPI state.

Should be called by all processes.

User must ensure all pending communications complete before calling this routine.

Page 25: Using MPI - the Fundamentals

Tokenring example

[Diagram: six processes, Proc 0 through Proc 5, arranged in a ring; the token is passed from each process to the next around the ring.]

Page 26: Using MPI - the Fundamentals

Simple Example:

      program tokenring
      implicit none
      include "mpif.h"

c     this is a sample test program to pass a token around the ring
c     each pe will increment the token by its pe number
      integer itoken,npts,npes,mype,msgtag,iprev,inext
      integer i,istat,irecvstat(MPI_STATUS_SIZE),len

c     initialize mpi, set the npes and mype variables
      call MPI_Init(istat)
      if (istat.ne.0) stop "ERROR: can't initialize mpi"
      call MPI_Comm_size(MPI_COMM_WORLD,npes,istat)
      call MPI_Comm_rank(MPI_COMM_WORLD,mype,istat)

c     initialize variables on all pe's
      itoken = 0
      npts   = 1
      msgtag = 101
      iprev  = mod(mype+npes-1,npes)
      inext  = mod(mype+npes+1,npes)

Page 27: Using MPI - the Fundamentals

Simple Example Cont.

c     now send and receive the token
      if (mype.eq.0) then
         itoken = itoken + mype
         call MPI_Send (itoken,npts,MPI_INTEGER,inext,msgtag,
     &                  MPI_COMM_WORLD,istat)
         call MPI_Recv (itoken,npts,MPI_INTEGER,iprev,msgtag,
     &                  MPI_COMM_WORLD,irecvstat,istat)
      else
         call MPI_Recv (itoken,npts,MPI_INTEGER,iprev,msgtag,
     &                  MPI_COMM_WORLD,irecvstat,istat)
         itoken = itoken + mype
         call MPI_Send (itoken,npts,MPI_INTEGER,inext,msgtag,
     &                  MPI_COMM_WORLD,istat)
      endif
      print *, "mype = ",mype," received token = ",itoken

c     call barrier and exit
      call mpi_barrier(MPI_COMM_WORLD,istat)
      call mpi_finalize (istat)
      end

Page 28: Using MPI - the Fundamentals

Sample tokenring output on 6 processors

mype = 5 of 6 procs and has token = 15
mype = 5 my name = baobab-n40.isis.unc.edu
mype = 1 of 6 procs and has token = 1
mype = 1 my name = baobab-n47.isis.unc.edu
mype = 4 of 6 procs and has token = 10
mype = 4 my name = baobab-n40.isis.unc.edu
mype = 2 of 6 procs and has token = 3
mype = 2 my name = baobab-n41.isis.unc.edu
mype = 3 of 6 procs and has token = 6
mype = 3 my name = baobab-n41.isis.unc.edu
mype = 0 of 6 procs and has token = 15
mype = 0 my name = baobab-n47.isis.unc.edu

Page 29: Using MPI - the Fundamentals

Same example in C

/* program tokenring */
/* this is a sample test program to pass a token around the ring */
/* each pe will increment the token by its pe number */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char* argv[])
{
   int itoken, npts, npes, mype, msgtag, iprev, inext;
   int len;
   char name[MPI_MAX_PROCESSOR_NAME];
   MPI_Status irecvstat;

   /* initialize mpi, set the npes and mype variables */
   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &npes);
   MPI_Comm_rank(MPI_COMM_WORLD, &mype);

Page 30: Using MPI - the Fundamentals

Tokenring Cont.

   /* initialize variables on all pe's */
   itoken = 0;
   npts   = 1;
   msgtag = 101;
   iprev  = (mype+npes-1)%npes;
   inext  = (mype+npes+1)%npes;

   /* now send and receive the token */
   if (mype == 0) {
      MPI_Send (&itoken, npts, MPI_INT, inext, msgtag, MPI_COMM_WORLD);
      MPI_Recv (&itoken, npts, MPI_INT, iprev, msgtag, MPI_COMM_WORLD, &irecvstat);
   }
   else {
      MPI_Recv (&itoken, npts, MPI_INT, iprev, msgtag, MPI_COMM_WORLD, &irecvstat);
      itoken += mype;
      MPI_Send (&itoken, npts, MPI_INT, inext, msgtag, MPI_COMM_WORLD);
   }

Page 31: Using MPI - the Fundamentals

Tokenring Cont.

   printf ("mype = %d received token = %d \n", mype, itoken);

   MPI_Get_processor_name(name, &len);

   printf ("mype = %d my name = %s\n", mype, name);

   /* barrier and exit */
   MPI_Barrier(MPI_COMM_WORLD);
   MPI_Finalize ();

   return 0;
}

Page 32: Using MPI - the Fundamentals

Timing

MPI_Wtime returns wall clock (elapsed) time in seconds from some arbitrary initial point

MPI_Wtick returns resolution of MPI_Wtime in seconds

•both functions return double (C) or double precision (Fortran) values

•there are no calling arguments
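
For example, a short sketch (not one of the original slides) that times a barrier:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
   double t0, t1;

   MPI_Init(&argc, &argv);

   t0 = MPI_Wtime();                /* seconds since an arbitrary origin */
   MPI_Barrier(MPI_COMM_WORLD);     /* the operation being timed */
   t1 = MPI_Wtime();

   printf("barrier took %g s (clock resolution %g s)\n", t1 - t0, MPI_Wtick());

   MPI_Finalize();
   return 0;
}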

Page 33: Using MPI - the Fundamentals


References

“Using MPI” by Gropp, Lusk, and Skjellum

“MPI: The Complete Reference” by Snir, Otto, Huss-Lederman, Walker, and Dongarra

Edinburgh Parallel Computing Centre course notes by MacDonald, Minty, Antonioletti, Malard, Harding, and Brown: www.epcc.ed.ac.uk/epic/mpi/notes/mpi-course-epic.book_1.html

Maui High Performance Computing Center: www.mhpcc.edu/training/workshop/html/workshop.html

Page 34: Using MPI - the Fundamentals


References Cont.

“Parallel Programming with MPI” by Peter Pacheco (recommended)

List of tutorials at: http://www.lam-mpi.org/tutorials