
MPJ Express: An Implementation of Message Passing Interface (MPI) in Java

  • MPJ Express: An Implementation of Message Passing Interface (MPI) in Java
    Aamir Shafi
    http://mpj-express.org
    http://acet.rdg.ac.uk/projects/mpj

  • Writing Parallel Software
    Parallel software is software that can be executed on parallel hardware to exploit its computational and memory resources. There are two main approaches to writing it:
    - The first approach is to use messaging libraries (packages) written in existing languages like C, Fortran, and Java:
      - Message Passing Interface (MPI)
      - Parallel Virtual Machine (PVM)
    - The second, more radical approach is to provide new languages; HPC has a history of novel parallel languages:
      - High Performance Fortran (HPF)
      - Unified Parallel C (UPC)
    This talk presents MPJ Express, an implementation of MPI in Java.

  • Introduction to Java for HPC
    Java was released by Sun in 1996 and is now a mainstream language in the software industry. Attractive features include:
    - Portability
    - Automatic garbage collection
    - Type safety at compile time and runtime
    - Built-in support for multi-threading: a possible option for nested parallelism on multi-core systems
    - Performance: the javac compiler converts source code to byte code, and modern JVMs use Just-In-Time compilers to translate byte code to native machine code on the fly
    But Java has safety features that may limit performance.

  • Introduction to Java for HPC
    Three existing approaches to Java messaging:
    - Pure Java (sockets based)
    - Java Native Interface (JNI)
    - Remote Method Invocation (RMI)
    mpiJava has perhaps been the most popular Java messaging system:
    - mpiJava (http://www.hpjava.org/mpiJava.html)
    - MPJ/Ibis (http://www.cs.vu.nl/ibis/mpj.html)
    Motivation for a new Java messaging system:
    - Maintain compatibility with Java threads by providing thread safety
    - Handle the conflicting demands of high performance and portability

  • Distributed Memory Cluster
    [Figure: eight processes (Proc 0 to Proc 7) on separate nodes exchanging messages over a LAN such as Ethernet, Myrinet, or Infiniband]

  • Write the machines file
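As a concrete sketch, the machines file is a plain-text list of the cluster nodes that will take part, one hostname (or IP address) per line; the hostnames below are placeholders:

```
node01.cluster.local
node02.cluster.local
node03.cluster.local
node04.cluster.local
```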

  • Bootstrap MPJ Express runtime

  • Write Parallel Program
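As an illustration of this step, a minimal MPJ Express program might look like the following sketch (the class name and printed text are my own; the mpi.MPI calls follow the mpiJava-style bindings that MPJ Express implements):

```java
import mpi.MPI;

// Minimal sketch: every process reports its rank in COMM_WORLD.
public class HelloWorld {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);                    // start up the messaging layer
        int rank = MPI.COMM_WORLD.Rank();  // this process's id
        int size = MPI.COMM_WORLD.Size();  // total number of processes
        System.out.println("Hello from process " + rank + " of " + size);
        MPI.Finalize();                    // shut down cleanly
    }
}
```

Note that this only runs under the MPJ Express runtime with the mpj library on the classpath, not as a standalone Java program.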

  • Compile and Execute
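From my recollection of the MPJ Express documentation (treat paths, script names, and flags as assumptions that may differ between versions), the compile-boot-run cycle for a hypothetical HelloWorld class looks roughly like:

```shell
# compile against the MPJ Express library (path is an assumption)
javac -cp .:$MPJ_HOME/lib/mpj.jar HelloWorld.java

# start the MPJ Express daemons on the nodes in the machines file
mpjboot machines

# launch the program with 4 processes
mpjrun.sh -np 4 HelloWorld

# stop the daemons when finished
mpjhalt machines
```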

  • Introduction to MPJ Express
    MPJ Express is an implementation of a Java messaging system based on the Java bindings of MPI, developed by Aamir Shafi, Bryan Carpenter, and Mark Baker:
    - Will eventually supersede mpiJava
    - Thread-safe communication devices using Java NIO and Myrinet, maintaining compatibility with Java threads
    - A buffering layer that provides explicit memory management instead of relying on the garbage collector
    - A runtime system for portable bootstrapping

  • James Gosling Says

  • Who is using MPJ Express?
    First released in September 2005 under the LGPL (an open-source licence), with approximately 1000 users around the world. Some projects using this software:
    - CartaBlanca, a simulation package that uses Jacobian-Free Newton-Krylov (JFNK) methods to solve non-linear problems, developed at Los Alamos National Laboratory (LANL) in the US
    - Researchers at the University of Leeds, UK, have used the software in the Modelling and Simulation in e-Social Science (MoSeS) project
    Teaching purposes:
    - Parallel Programming using Java (PPJ): http://www.sc.rwth-aachen.de/Teaching/Labs/PPJ05/
    - Parallel Processing SS 2006: http://tramberend.inform.fh-hannover.de/

  • MPJ Express Design

  • Presentation Outline
    - Implementation Details: point-to-point communication; communicators, groups, and contexts; process topologies; derived datatypes; collective communications
    - MPJ Express Buffering Layer
    - Runtime System
    - Performance Evaluation

  • Java NIO Device
    Uses non-blocking I/O functionality and implements two communication protocols:
    - Eager-send protocol for small messages
    - Rendezvous protocol for large messages
    A single lock around communication methods results in deadlocks:
    - In Java, the keyword synchronized ensures that only one thread at a time can call a synchronized method on an object
    - Example: a process sending a message to itself using synchronous send would block forever
    Instead, fine-grained locks provide thread safety:
    - Writing messages: a lock for send-communication-sets, plus locks for destination channels (one for every destination process), obtained one after the other
    - Reading messages: a lock for receive-communication-sets
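The eager/rendezvous split can be sketched as a simple size test; the 128 KB threshold below is an assumed illustrative value, not MPJ Express's actual cut-over point:

```java
// Sketch: pick the wire protocol by message size (threshold is assumed).
public class ProtocolChooser {
    static final int EAGER_LIMIT = 128 * 1024; // assumed cut-over, in bytes

    // Small messages are pushed eagerly to the receiver; large messages
    // first negotiate a rendezvous so the receiver can post a buffer.
    static String choose(int messageBytes) {
        return messageBytes <= EAGER_LIMIT ? "eager-send" : "rendezvous";
    }

    public static void main(String[] args) {
        System.out.println(choose(1024));            // prints "eager-send"
        System.out.println(choose(4 * 1024 * 1024)); // prints "rendezvous"
    }
}
```

The benefit of the split: eager sends avoid a round trip for latency-sensitive small messages, while the rendezvous handshake stops large messages from overwhelming receive-side buffering.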

  • Standard mode with eager send protocol (small messages)

  • Standard mode with rendezvous protocol (large messages)

  • MPJ Express Buffering Layer
    MPJ Express requires a buffering layer:
    - To use Java NIO: SocketChannels use byte buffers for data transfer
    - To use proprietary networks like Myrinet efficiently
    - To implement derived datatypes
    Various implementations are possible based on the actual storage medium, such as direct or indirect ByteBuffers. An mpjbuf buffer object consists of:
    - A static buffer to store primitive datatypes
    - A dynamic buffer to store serialized Java objects
    Creating ByteBuffers on the fly is costly, so memory management is based on Knuth's buddy algorithm; there are two implementations of memory management.

  • MPJ Express Buffering Layer
    Frequent creation and destruction of communication buffers hurts performance. To tackle this, the buffering layer:
    - Provides two implementations of Knuth's buddy algorithm
    - Uses direct ByteBuffers to support Java NIO and proprietary networks
    - Implements derived datatypes
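To make the buddy idea concrete, here is a toy allocator that manages offsets within a fixed region; this is my own minimal sketch of Knuth's buddy algorithm, not MPJ Express's mpjbuf implementation:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.TreeSet;

// Toy buddy allocator over a region of `total` bytes (tracks offsets only).
// Requests round up to a power of two so a freed block can coalesce with
// its "buddy", whose offset differs in exactly one bit.
public class BuddyAllocator {
    final int total;
    // free lists: block size (power of two) -> sorted free offsets
    final TreeMap<Integer, TreeSet<Integer>> freeLists = new TreeMap<>();

    BuddyAllocator(int total) {
        this.total = total;
        freeLists.computeIfAbsent(total, k -> new TreeSet<>()).add(0);
    }

    static int roundUp(int n) { int s = 1; while (s < n) s <<= 1; return s; }

    // Find the smallest free block that fits, splitting it down to size.
    int alloc(int request) {
        int size = roundUp(request);
        int blockSize = -1;
        for (Map.Entry<Integer, TreeSet<Integer>> e : freeLists.entrySet())
            if (e.getKey() >= size && !e.getValue().isEmpty()) { blockSize = e.getKey(); break; }
        if (blockSize < 0) return -1; // out of memory
        int offset = freeLists.get(blockSize).pollFirst();
        while (blockSize > size) {    // split, keeping each upper buddy free
            blockSize >>= 1;
            freeLists.computeIfAbsent(blockSize, k -> new TreeSet<>()).add(offset + blockSize);
        }
        return offset;
    }

    // Return a block, merging with its buddy as long as the buddy is free.
    void release(int offset, int request) {
        int size = roundUp(request);
        while (size < total) {
            int buddy = offset ^ size;
            TreeSet<Integer> list = freeLists.get(size);
            if (list == null || !list.remove(buddy)) break;
            offset = Math.min(offset, buddy);
            size <<= 1;
        }
        freeLists.computeIfAbsent(size, k -> new TreeSet<>()).add(offset);
    }

    public static void main(String[] args) {
        BuddyAllocator buf = new BuddyAllocator(1024);
        int a = buf.alloc(100);           // rounds up to 128, offset 0
        int b = buf.alloc(100);           // offset 128 (a's buddy)
        System.out.println("a=" + a + " b=" + b);
        buf.release(a, 100);
        buf.release(b, 100);              // coalesces back to one 1024 block
        System.out.println("after coalescing: " + buf.alloc(1024));
    }
}
```

Reusing pooled blocks this way avoids both repeated ByteBuffer creation and reliance on the garbage collector, which is the motivation the slide describes.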

  • Presentation Outline
    - Implementation Details: point-to-point communication; communicators, groups, and contexts; process topologies; derived datatypes; collective communications
    - MPJ Express Buffering Layer
    - Runtime System
    - Performance Evaluation

  • Communicators, groups, and contexts
    MPI provides higher-level abstractions for creating parallel libraries:
    - Communicators + groups provide process naming (ranks instead of IP address + port) and a group scope for collective operations
    - Contexts provide a safe communication space

  • What is a group?
    A data structure that contains processes; its main functionality is to keep track of the ranks of processes.
    Explanation of the figure:
    - Group A contains eight processes
    - Groups B and C are created from Group A
    All group operations are local (no communication with remote processes).

  • Example of a group operation (Union)
    Explanation of the union operation:
    - Two processes, a and d, are in both groups; thus six distinct processes execute this operation
    - Each group has its own, relative view of this operation: ranks are meaningful only within a group
    Ranks are re-assigned in the new group:
    - Process 0 in Group A is re-assigned rank 0 in Group C
    - Process 0 in Group B is re-assigned rank 4 in Group C
    If a calling process does not make it into the new group, it gets back MPI.GROUP_EMPTY.
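The rank re-assignment in a union can be sketched in plain Java (this is illustrative, not MPJ Express code; the process names and the membership of Group B are hypothetical, chosen so that B's rank-0 process lands at rank 4, as on the slide):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of MPI group Union semantics: the result keeps every process of
// the first group in rank order, then appends processes of the second
// group that are not already members.
public class GroupUnion {
    static List<String> union(List<String> a, List<String> b) {
        List<String> result = new ArrayList<>(a);
        for (String p : b)
            if (!result.contains(p)) result.add(p);
        return result; // index in this list == rank in the new group
    }

    public static void main(String[] args) {
        // Slide scenario: a and d are in both groups, six processes total.
        List<String> groupA = List.of("a", "b", "c", "d");
        List<String> groupB = List.of("e", "f", "a", "d"); // hypothetical members
        List<String> groupC = union(groupA, groupB);
        System.out.println(groupC);              // [a, b, c, d, e, f]
        System.out.println(groupC.indexOf("e")); // B's rank-0 process -> rank 4
    }
}
```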

  • What are communicators?
    A data structure that contains a group (and thus processes). Why is it useful?
    - Process naming: ranks serve as names for application programmers, which is easier than IP address + port
    - Supports group communication as well as point-to-point communication
    There are two types of communicators:
    - Intracommunicators: communication within a group
    - Intercommunicators: communication between two groups (which must be disjoint)

  • What are contexts?
    A unique integer that acts as an additional tag on messages. Each communicator has a distinct context that provides a safe communication universe:
    - A context is agreed upon by all processes when a communicator is built
    - Intracommunicators have two contexts: one for point-to-point communications and one for collective communications
    - Intercommunicators also have two contexts (explained in the coming slides)

  • Process topologies
    Used to arrange processes in a geometric shape. These are virtual topologies with no inherent connection to the physical layout of the machines, although it is possible to exploit the underlying machine architecture. Virtual topologies can be assigned to the processes of an intracommunicator. MPI provides:
    - Cartesian topology
    - Graph topology

  • Cartesian topology: mapping four processes onto a 2x2 topology
    Each process is assigned a coordinate:
    - Rank 0: (0,0)
    - Rank 1: (1,0)
    - Rank 2: (0,1)
    - Rank 3: (1,1)
    Uses:
    - Calculate a rank from its grid position, and grid positions from ranks
    - Easier to locate the ranks of neighbours
    - Applications often have communication patterns with lots of messaging between immediate neighbours

  • Periods in Cartesian topology
    Axis 1 (the y-axis) is periodic: processes in the top and bottom rows wrap around, so they have valid neighbours towards the top and bottom respectively.

    Axis 0 (the x-axis) is non-periodic: processes in the right and left columns have undefined neighbours towards the right and left respectively.
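The coordinate arithmetic for the 2x2 example, including periodic and non-periodic neighbours, can be sketched in plain Java (this mirrors the slide's rank-to-coordinate mapping; it is not MPJ Express code, and -1 stands in for an undefined neighbour):

```java
// Sketch of Cartesian topology arithmetic for a 2x2 grid, matching the
// slide: rank 0 -> (0,0), 1 -> (1,0), 2 -> (0,1), 3 -> (1,1).
public class Cart2x2 {
    static final int NX = 2, NY = 2;

    static int[] coords(int rank) { return new int[] { rank % NX, rank / NX }; }

    static int rank(int x, int y) { return y * NX + x; }

    // Neighbour along an axis (0 = x, 1 = y); periodic axes wrap around,
    // non-periodic axes return -1 when the neighbour falls off the grid.
    static int neighbour(int rank, int axis, int step, boolean periodic) {
        int[] c = coords(rank);
        int n = (axis == 0) ? NX : NY;
        int v = c[axis] + step;
        if (periodic) v = ((v % n) + n) % n;
        else if (v < 0 || v >= n) return -1;
        c[axis] = v;
        return rank(c[0], c[1]);
    }

    public static void main(String[] args) {
        // y-axis periodic: rank 0's neighbour "above" wraps to rank 2
        System.out.println(neighbour(0, 1, -1, true));  // prints 2
        // x-axis non-periodic: rank 1 has no neighbour to the right
        System.out.println(neighbour(1, 0, +1, false)); // prints -1
    }
}
```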

  • Derived datatypes
    Besides basic datatypes, it is possible to communicate heterogeneous, non-contiguous data. MPI provides:
    - Contiguous
    - Vector
    - Indexed
    - Struct

  • Indexed datatype
    The elements that form this datatype must be:
    - Of the same type
    - At non-contiguous locations
    Flexibility is added by specifying displacements.

    int SIZE = 4;
    int[] blklen = new int[DIM], displ = new int[DIM];
    for (int i = 0; i < DIM; i++) {
        // the loop body is truncated on the original slide; a typical
        // fill (reconstructed) uses unit blocks at spaced displacements:
        blklen[i] = 1;
        displ[i] = i * SIZE;
    }

  • Presentation Outline
    - Implementation Details: point-to-point communication; communicators, groups, and contexts; process topologies; derived datatypes; collective communications
    - Runtime System
    - Thread-safety in MPJ Express
    - Performance Evaluation

  • Collective communications
    Provided as a convenience for application developers:
    - Save significant development time
    - Efficient algorithms may be used
    - Stable (tested)
    Built on top of point-to-point communications. These operations include:
    - Broadcast, Barrier, Reduce, Allreduce, Alltoall, Scatter, Scan, Allgather
    - Versions that allow displacements between the data

  • Broadcast, scatter, gather, allgather, alltoall
    [Image from the MPI standard document]

  • Reduce collective operations
    MPI.PROD, MPI.SUM, MPI.MIN, MPI.MAX, MPI.LAND, MPI.BAND, MPI.LOR, MPI.BOR, MPI.LXOR, MPI.BXOR, MPI.MINLOC, MPI.MAXLOC

  • Barrier with Tree Algorithm

  • Execution of barrier with eight processes
    - Eight processes, thus the algorithm forms only one group
    - Each process exchanges an integer 4 times
    - Overlaps communications well
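One classic way to organise such log-round integer exchanges is a dissemination schedule; the sketch below is purely illustrative and may differ from the tree algorithm MPJ Express actually uses. In round k, each process sends to (rank + 2^k) mod n and receives from (rank - 2^k + n) mod n:

```java
// Sketch of a dissemination barrier schedule: after ceil(log2(n)) rounds
// of paired send/receive, every process has (transitively) heard from all
// others, so all must have entered the barrier.
public class BarrierSchedule {
    // partners(rank, n)[k] = { sendTo, recvFrom } for round k
    static int[][] partners(int rank, int n) {
        int rounds = 0;
        while ((1 << rounds) < n) rounds++;   // ceil(log2(n))
        int[][] sched = new int[rounds][2];
        for (int k = 0; k < rounds; k++) {
            int d = 1 << k;
            sched[k][0] = (rank + d) % n;     // send direction
            sched[k][1] = (rank - d + n) % n; // receive direction
        }
        return sched;
    }

    public static void main(String[] args) {
        for (int[] round : partners(0, 8))
            System.out.println("send to " + round[0] + ", recv from " + round[1]);
    }
}
```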

  • Intracomm.Bcast()
    Sends data from one process (the root) to all the other processes.
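A hedged usage sketch (the class name and data are my own; the (buffer, offset, count, datatype, root) parameter order follows the mpiJava-style bindings that MPJ Express implements, and the program needs the MPJ Express runtime to run):

```java
import mpi.MPI;

// Sketch of Intracomm.Bcast: the root fills the buffer, then every
// process in COMM_WORLD receives the same contents in place.
public class BcastExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int[] data = new int[4];
        if (rank == 0)                         // only the root initialises
            for (int i = 0; i < 4; i++) data[i] = i * i;
        // (buffer, offset, count, datatype, root)
        MPI.COMM_WORLD.Bcast(data, 0, 4, MPI.INT, 0);
        System.out.println("rank " + rank + " got "
                + java.util.Arrays.toString(data));
        MPI.Finalize();
    }
}
```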
