CC02 – Parallel Programming Using OpenMP (PhUSE 2011)
Aniruddha Deshmukh
Cytel Inc.
Email: aniruddha.deshmukh@cytel.com
Massive, repetitious computations
Availability of multi-core / multi-CPU machines
Exploit hardware capability to achieve high performance
Useful in software implementing intensive computations
Large simulations
Problems in linear algebra
Graph traversal
Branch and bound methods
Dynamic programming
Combinatorial methods
OLAP
Business intelligence, etc.
A standard for portable and scalable parallel programming
Provides an API for parallel programming on shared-memory multiprocessors
A collection of compiler directives (pragmas), environment variables and library functions
Works with C/C++ and FORTRAN
Supports workload division, communication and synchronization between threads
[Figure: Simulations running sequentially. Stages: Initialize, Generate Data, Analyze Data, Summarize, Aggregate Results, Clean-up.]
[Figure: Simulations running in parallel. The master thread performs Initialize, then the simulations are split between Thread 1 (the master) and Thread 2; each thread runs Generate Data, Analyze Data, Summarize and Aggregate Results for its share of the simulations, and the master performs Clean-up at the end.]
Declare and initialize variables
Allocate memory
Create one copy of the trial data object and the random number array per thread.
Simulation loop
The #pragma omp parallel for directive creates multiple threads and distributes the loop iterations among them.
Iterations may not be executed in sequence.
Generation of random numbers and trial data
Analyze data.
Summarize output and combine results.
[Figure: Execution timeline of the parallel for loop over iterations 1–5. Both threads enter the loop, execute the loop body for their assigned iterations, and wait at the barrier at the end of the loop before it is exited.]

Thread #      Iterations
1 (Master)    1, 2
2             3, 4, 5
A work-sharing directive.
The master thread creates zero or more child threads, and the loop iterations are distributed among the threads.
There is an implied barrier at the end of the loop; only the master thread continues beyond it.
Clauses can be used for finer control: sharing variables among threads, maintaining order of execution, controlling the distribution of iterations among threads, etc.
For reproducibility of results, the random number sequence must not change from run to run:
random numbers must be drawn from the same stream across runs.
The #pragma omp ordered directive ensures that the attached code is executed sequentially, in iteration order. A thread executing a later iteration waits for threads executing earlier iterations to finish with the ordered block.
Output from simulations running on different threads needs to be summarized into a shared object.
The order in which simulations are aggregated does not matter.
The #pragma omp critical directive ensures that the attached code is executed by only one thread at a time.
A thread waits at the critical block if another thread is currently executing it.
† SiZ® – a design and simulation package for fixed sample size studies
‡ Tests executed on a laptop with 3 GB RAM and a quad-core 2.4 GHz processor
Win32 API
Create, manage and synchronize threads at a much lower level.
Generally involves much more coding compared to OpenMP.
MPI (Message Passing Interface)
Supports distributed and cluster computing.
Generally considered difficult to program: the program's data structures need to be partitioned, and typically the entire program needs to be parallelized.
OpenMP is simple, flexible and powerful.
Supported on many architectures including Windows and Unix.
Works on platforms ranging from the desktop to the supercomputer.
Read the specs carefully, design properly and test thoroughly.
OpenMP website: http://www.openmp.org (for the complete OpenMP specification)
Parallel Programming in OpenMP
Rohit Chandra, Leonardo Dagum, Dave Kohr, Dror Maydan, Jeff McDonald, Ramesh Menon
Morgan Kaufmann Publishers
OpenMP and C++: Reap the Benefits of Multithreading without All the Work
Kang Su Gatlin, Pete Isensee
http://msdn.microsoft.com/en-us/magazine/cc163717.aspx
Email: aniruddha.deshmukh@cytel.com