Page 1: MPI message passing interface

MPI - MESSAGE PASSING INTERFACE
An approach for parallel algorithms

COMPUTER SCIENCE DEPARTMENT

UNDER THE GUIDANCE OF PROF. AKHTAR RASOOL

Page 2: MPI message passing interface

TABLE OF CONTENTS: Parallel Merge Sort using MPI

Introduction: About Us, Goal and Vision, Target

Background: Programming language, Cluster, MPI library, Dataset

MPI: What is MPI, Data types and syntax, Communication modes, Features

Code implementation: Merge sort implementation, Flow of code, Process flow

Result: Single processor, Multiprocessor, Summary

Page 3: MPI message passing interface

About Us

Page 4: MPI message passing interface

Our Team

Nishaant Sharma 131112226

Kartik 131112265

Mohit Raghuvanshi 131112232

Prabhash Prakash 131112241

Page 5: MPI message passing interface

Goal

“Analysis of the running-time complexity of a sorting algorithm (merge sort) in a parallel processing environment (preferably a cluster with varying numbers of processors) and on a single node, using MPI”

Page 6: MPI message passing interface

VISION

Installation: install MPICH2 on Ubuntu.

Implementation: implement the merge sort algorithm in parallel.

Result: measure the change in running time as the number of processors increases.

Page 7: MPI message passing interface

Background

Programming language (the base of the project):

C for implementing the algorithm, which requires a good knowledge of C

Python for generating the dataset, using the Faker library

Page 8: MPI message passing interface

Background

Cluster: Master and Slave Nodes

Master Node

Slave Node


Page 9: MPI message passing interface

Background

MPI Message passing interface

MPI is a standard message-passing interface. It is a library specification, not a language. Programs written in Fortran 77 or C are compiled with ordinary compilers and linked with the MPI library.

Page 10: MPI message passing interface

Dataset

Python script for generating the dataset

We use the Faker module to generate a census-style dataset. Some basic Faker APIs:

fake.name() generates a fake name
fake.email() generates a fake email address
fake.ean(length=13) generates a unique 13-digit ID
fake.job() generates a fake job title
...and many other APIs

Page 11: MPI message passing interface

Dataset

Python script for generating the dataset

Dataset attributes: Id, Name, Phone Number, Salary, Email

Page 12: MPI message passing interface

Dataset preview


Page 13: MPI message passing interface

What is MPI

MPI: Message Passing Interface

A message-passing library specification:

Extended message-passing model
Not a language or compiler specification
Not a specific implementation or product

For parallel computers, clusters, and heterogeneous networks. Designed to permit the development of parallel software libraries, and to provide access to advanced parallel hardware for:

End users
Library writers
Tool developers

Page 14: MPI message passing interface

Communication Modes

Based on the type of send (a sketch of the first three follows below):

Synchronous send: completes once the acknowledgement is received by the sender.
Buffered send: completes immediately, unless an error occurs.
Standard send: completes once the message has been sent, which may or may not imply that the message has arrived at its destination.
Ready send: completes immediately; if the receiver is ready for the message it will get it, otherwise the message is dropped silently.
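
As a hedged illustration, the sketch below exercises the synchronous, standard, and buffered sends between rank 0 and rank 1 (the ready send is omitted, since it is only correct when the matching receive is already posted). The message value and tags are illustrative.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, msg = 7, recv;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* synchronous send: blocks until the receiver has started receiving */
        MPI_Ssend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        /* standard send: the library may or may not buffer internally */
        MPI_Send(&msg, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);

        /* buffered send: completes immediately, using a user-attached buffer */
        int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);
        MPI_Bsend(&msg, 1, MPI_INT, 1, 2, MPI_COMM_WORLD);
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 1) {
        for (int tag = 0; tag < 3; tag++) {
            MPI_Recv(&recv, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &st);
            printf("Received %d with tag %d\n", recv, tag);
        }
    }

    MPI_Finalize();
    return 0;
}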

Page 15: MPI message passing interface

Data Types

The following data types are supported by MPI (a sketch of a user-defined type follows the list):

Predefined data types corresponding to data types from the programming language
Arrays
Sub-blocks of a matrix
User-defined data structures
A set of predefined data types
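
As one hedged example of a user-defined type, MPI_Type_contiguous packs several consecutive elements into a single unit that can then be sent with a count of 1; the 3-int layout below is purely illustrative (run with at least two processes).

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank;
    int triple[3] = {1, 2, 3};
    MPI_Datatype triple_t;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* build a derived type: 3 consecutive ints treated as one unit */
    MPI_Type_contiguous(3, MPI_INT, &triple_t);
    MPI_Type_commit(&triple_t);

    if (rank == 0) {
        MPI_Send(triple, 1, triple_t, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int got[3];
        MPI_Recv(got, 1, triple_t, 0, 0, MPI_COMM_WORLD, &st);
        printf("Received %d %d %d\n", got[0], got[1], got[2]);
    }

    MPI_Type_free(&triple_t);
    MPI_Finalize();
    return 0;
}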

Page 16: MPI message passing interface

API of MPI

#include "mpi.h" provides basic MPI definitions and types.

MPI_Init starts MPI.

MPI_Finalize exits MPI.

Note that all non-MPI routines are local; thus printf runs on each process.

Note: MPI functions return error codes or MPI_SUCCESS.
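
Putting these pieces together, a minimal sketch of the skeleton shared by every MPI program:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);   /* must come before any other MPI call */

    /* printf is a non-MPI routine, so it runs locally on every process */
    printf("Hello from one of the processes\n");

    MPI_Finalize();           /* no MPI calls are allowed after this */
    return 0;
}

Built and launched with, for example, mpicc hello.c -o hello and mpirun -np 4 ./hello, the message is printed once per process.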

Page 17: MPI message passing interface

MPI_INIT

Initializing MPI

The initialization routine MPI_Init must be the first MPI routine called, and it is called only once.

int MPI_Init(int *argc, char ***argv);

Page 18: MPI message passing interface

MPI_Finalize

After a program has finished using the MPI library, it must call MPI_Finalize to clean up all MPI state. Once this routine is called, no MPI routine may be called.

MPI_Finalize is generally the last MPI call in the program.

int MPI_Finalize(void);

Page 19: MPI message passing interface

MPI_Comm_rank

Determines the rank of the calling process in the communicator and stores it in the variable passed by address.

int MPI_Comm_rank(MPI_Comm comm, int *rank);

Usage: MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

Page 20: MPI message passing interface

MPI_Comm_size

Determines the size of the group associated with a communicator.

int MPI_Comm_size(MPI_Comm comm, int *size);
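
A minimal sketch combining the two calls above, in which each process reports its own rank and the total number of processes:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int world_rank, world_size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank); /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &world_size); /* total process count */
    printf("Process %d of %d\n", world_rank, world_size);
    MPI_Finalize();
    return 0;
}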

Page 21: MPI message passing interface

MPI_Scatter

Distributes data from one process among all processes in a communicator: the root splits its send buffer into equal chunks, and every process (including the root) receives one chunk.

int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm);
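
A hedged sketch of MPI_Scatter; the chunk size CHUNK is illustrative, and the root's array is assumed to hold exactly size * CHUNK elements:

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define CHUNK 4

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *senddata = NULL;
    if (rank == 0) {
        /* only the root owns the full array: size * CHUNK elements */
        senddata = malloc(size * CHUNK * sizeof(int));
        for (int i = 0; i < size * CHUNK; i++) senddata[i] = i;
    }

    int recvdata[CHUNK];
    MPI_Scatter(senddata, CHUNK, MPI_INT,
                recvdata, CHUNK, MPI_INT,
                0, MPI_COMM_WORLD);   /* root = 0 */

    printf("Rank %d got first element %d\n", rank, recvdata[0]);

    if (rank == 0) free(senddata);
    MPI_Finalize();
    return 0;
}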

Page 22: MPI message passing interface

MPI_Scatter

Page 23: MPI message passing interface

MPI_Gather

MPI_Gather is the opposite of MPI_Scatter: it gathers data from all processes into a single (root) process, in rank order.

int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm);
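
A hedged sketch of MPI_Gather, mirroring the scatter example above: every process contributes CHUNK elements (here just its own rank, repeated) and the root receives them in rank order.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define CHUNK 4

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local[CHUNK];
    for (int i = 0; i < CHUNK; i++) local[i] = rank; /* each rank's data */

    int *gathered = NULL;
    if (rank == 0)
        gathered = malloc(size * CHUNK * sizeof(int)); /* root's buffer */

    MPI_Gather(local, CHUNK, MPI_INT,
               gathered, CHUNK, MPI_INT,
               0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Root gathered %d elements\n", size * CHUNK);
        free(gathered);
    }
    MPI_Finalize();
    return 0;
}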

Page 24: MPI message passing interface

MPI_Gather

Page 25: MPI message passing interface

MPI_Bcast

Broadcasts a message from the process with rank "root" to all other processes of the communicator.

int MPI_Bcast( void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm )
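
A minimal sketch of MPI_Bcast; the value 42 is illustrative. Only the root initializes n, yet after the call every process holds the same value:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int rank, n = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) n = 42;  /* only the root knows the value initially */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("Rank %d sees n = %d\n", rank, n);  /* 42 on every process */

    MPI_Finalize();
    return 0;
}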

Page 26: MPI message passing interface

MPI_Bcast

Page 27: MPI message passing interface

Parallel Merge Sort Algorithm
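
The original slide presents the algorithm as a diagram. The following is a minimal runnable sketch of the same flow under stated assumptions, not the project's exact code: the root scatters equal chunks, every process sorts its chunk locally, then the root gathers the sorted chunks and merges them one by one. N and the random input are illustrative, and N is assumed to be divisible by the number of processes.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "mpi.h"

#define N 16

static int cmp_int(const void *a, const void *b)
{
    return (*(const int *)a > *(const int *)b) -
           (*(const int *)a < *(const int *)b);
}

/* merge two sorted runs a[0..na) and b[0..nb) into out */
static void merge(const int *a, int na, const int *b, int nb, int *out)
{
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb)
        out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
    while (i < na) out[k++] = a[i++];
    while (j < nb) out[k++] = b[j++];
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;          /* N assumed divisible by size */
    int *data = NULL, *gathered = NULL;
    if (rank == 0) {
        data = malloc(N * sizeof(int));
        gathered = malloc(N * sizeof(int));
        for (int i = 0; i < N; i++) data[i] = rand() % 100;
    }

    /* 1. scatter equal chunks to all processes */
    int *local = malloc(chunk * sizeof(int));
    MPI_Scatter(data, chunk, MPI_INT, local, chunk, MPI_INT,
                0, MPI_COMM_WORLD);

    /* 2. each process sorts its chunk locally */
    qsort(local, chunk, sizeof(int), cmp_int);

    /* 3. gather the sorted chunks back on the root */
    MPI_Gather(local, chunk, MPI_INT, gathered, chunk, MPI_INT,
               0, MPI_COMM_WORLD);

    /* 4. the root merges the sorted chunks one by one */
    if (rank == 0) {
        int *result = malloc(N * sizeof(int));
        int *tmp = malloc(N * sizeof(int));
        memcpy(result, gathered, chunk * sizeof(int));
        int have = chunk;
        for (int p = 1; p < size; p++) {
            merge(result, have, gathered + p * chunk, chunk, tmp);
            have += chunk;
            memcpy(result, tmp, have * sizeof(int));
        }
        for (int i = 0; i < N; i++) printf("%d ", result[i]);
        printf("\n");
        free(result); free(tmp); free(data); free(gathered);
    }
    free(local);
    MPI_Finalize();
    return 0;
}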

Page 28: MPI message passing interface

Result

No. of processes | Reading input file (ms) | Sorting the data (ms) | Writing output file (ms)
1                |  3982.326               |  8528.112             |  5839.587
2                |  4050.897               |  6878.000             |  5401.234
4                |  8145.740               | 12073.895             | 11083.125
5                | 10178.689               | 14361.952             | 13087.155

Page 29: MPI message passing interface

Conclusion

In this project we focused on using an MPI implementation as the message-passing interface on the Linux platform. The effect of the number of parallel processes, and of the number of cores, on the performance of the parallel merge sort algorithm has been studied theoretically and experimentally.

Page 30: MPI message passing interface

Thank You

