
Programming Multicore Processors

Aamir Shafi

High Performance Computing Lab

http://hpc.seecs.nust.edu.pk


Serial Computation

• Traditionally, software has been written for serial computation:
  • To be run on a single computer having a single Central Processing Unit (CPU)
  • A problem is broken into a discrete series of instructions

Parallel Computation

• Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
  • Also known as High Performance Computing (HPC)
• The prime focus of HPC is performance: the ability to solve the largest possible problems in the least possible time


Traditional Usage of Parallel Computing: Scientific Computing

• Traditionally, parallel computing has been used to solve challenging scientific problems through simulation:
  • For this reason, it is also called "Scientific Computing" or "Computational Science"


Emergence of Multi-core Processors

• In the last decade, processor performance has not been enhanced by increasing clock speed:
  • Increasing clock speed directly increases power consumption
  • Power is dissipated as heat, and it is not practical to cool processors beyond a point
  • Intel canceled a project to produce a 4 GHz processor!
• This led to the emergence of multi-core processors:
  • Performance is increased by adding processing cores that run at a lower clock speed:
    • Implies better power usage

Disruptive Technology!


Moore’s Law is Alive and Well


Power Wall

Why Multi-core Processors Consume Less Power

• Dynamic power is proportional to V²fC
  • Increasing frequency (f) also requires increasing the supply voltage (V): a more-than-linear effect on power
  • Increasing the number of cores increases capacitance (C), but this has only a linear effect
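
A back-of-the-envelope comparison makes the point (a sketch, assuming the supply voltage must scale roughly linearly with frequency):

\[ P \propto V^2 f C \]
\[ \text{one core at } 2f:\quad P \propto (2V)^2\,(2f)\,C = 8\,V^2 f C \]
\[ \text{two cores at } f:\quad P \propto V^2 f\,(2C) = 2\,V^2 f C \]

Under this assumption, two slower cores deliver comparable peak throughput at roughly one quarter of the power of a single core clocked twice as fast.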


Software in the Multi-core Era

• The challenge has been thrown to the software industry:
  • Parallelism is perhaps the answer
• "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software":
  • http://www.gotw.ca/publications/concurrency-ddj.htm
  • Some excerpts:
    • "The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency"
• This essentially means every software programmer will be a parallel programmer:
  • The main motivation behind conducting this "Programming Multicore Processors" workshop


About the “Programming Multicore Processors” Workshop


Course Contents …

A little background on Parallel Computing Approaches

Parallel Hardware

• Three main classifications:
  • Shared Memory Multi-processors:
    • Symmetric Multi-Processors (SMPs)
    • Multi-core Processors
  • Distributed Memory Multi-processors:
    • Massively Parallel Processors (MPPs)
    • Clusters:
      • Commodity and custom clusters
  • Hybrid Multi-processors:
    • A mixture of shared and distributed memory technologies


First Type: Shared Memory Multi-processors

• All processors have access to shared memory:
  • The notion of a "Global Address Space"


Symmetric Multi-Processors (SMP)

• An SMP is a parallel processing system with a shared-everything approach:
  • The term signifies that each processor shares the main memory and possibly the cache
• Typically an SMP can have 2 to 256 processors
• Also called Uniform Memory Access (UMA)
• Examples include the AMD Athlon, AMD Opteron 200 and 2000 series, Intel Xeon, etc.


Multi-core Processors


Second Type: Distributed Memory

• Each processor has its own local memory
• Processors communicate with each other by passing messages over an interconnect


Cluster Computers

• A group of PCs, workstations, or Macs (called nodes) connected to each other via a fast (and private) interconnect:
  • Each node is an independent computer
• Each cluster has one head-node and multiple compute-nodes:
  • Users log on to the head-node and start parallel jobs on the compute-nodes
• Two popular cluster classifications:
  • Beowulf Clusters (http://www.beowulf.org)
  • Rocks Clusters (http://www.rocksclusters.org)


[Figure: Cluster Computer, showing eight processors (Proc 0 through Proc 7) connected by an interconnect]


Third Type: Hybrid

• Modern clusters have a hybrid architecture:
  • Distributed memory for inter-node (between nodes) communication
  • Shared memory for intra-node (within a node) communication


SMP and Multi-core Clusters

• Most modern commodity clusters have SMP and/or multi-core nodes:
  • Processors not only communicate via the interconnect; shared memory programming is also required within a node
• This trend is likely to continue:
  • Even a new name, "constellations", has been proposed

Classification of Parallel Computers


[Figure: classification tree. Parallel Hardware divides into Shared Memory Hardware (SMPs, Multicore Processors) and Distributed Memory Hardware (MPPs, Clusters)]

In this workshop, we will learn how to program shared memory parallel hardware …

Writing Parallel Software

• There are mainly two approaches to writing parallel software
• The first approach is to use libraries (packages) written in already existing languages:
  • Economical
• The second and more radical approach is to provide new languages:
  • Parallel computing has a history of novel parallel languages
  • These languages provide high-level parallelism constructs


Shared Memory Languages and Libraries

• Designed to support parallel programming on shared memory platforms:
  • OpenMP (see the sketch after this list):
    • Consists of a set of compiler directives, library routines, and environment variables
    • The runtime uses a fork-join model of parallel execution
  • Cilk++:
    • A design goal was to support asynchronous parallelism
    • A set of keywords: cilk_for, cilk_spawn, cilk_sync, …
  • POSIX Threads (PThreads)
  • Threads Building Blocks (TBB)
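
To give a flavor of the directive style, here is a minimal OpenMP sketch in C (illustrative only; the loop size and names are arbitrary). The single pragma is a compiler directive, and omp_get_thread_num() is one of the library routines:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int a[8];
        /* The directive forks a team of threads, divides the loop
           iterations among them, and joins the threads at the end
           (the fork-join model mentioned above). */
        #pragma omp parallel for
        for (int i = 0; i < 8; i++) {
            a[i] = i * i;
            printf("a[%d] = %d, computed by thread %d\n",
                   i, a[i], omp_get_thread_num());
        }
        return 0;
    }

Built with an OpenMP-aware compiler (e.g., gcc -fopenmp), the iterations are divided among a team of threads, so the output ordering varies from run to run.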

Distributed Memory Languages and Libraries

• Libraries:
  • Message Passing Interface (MPI): the de facto standard (see the sketch after this list)
  • PVM
• Languages:
  • High Performance Fortran (HPF)
  • Fortran M
  • HPJava
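
Although this workshop focuses on shared memory, a minimal MPI "hello world" in C shows the message passing style (a sketch; typically compiled with a wrapper such as mpicc and launched with mpirun):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count   */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut down the runtime */
        return 0;
    }

Each process runs in its own address space, possibly on a different node; any data exchange would use explicit calls such as MPI_Send and MPI_Recv.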


Our Focus

• Shared memory and multi-core processor machines:
  • Using POSIX Threads (a hello world sketch follows below)
  • Using OpenMP
  • Using Cilk++ (covered briefly)
• Disruptive technology:
  • Using Graphics Processing Units (GPUs) by NVIDIA for general-purpose computing
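
Since the first hands-on session runs a PThreads hello world, here is a minimal sketch in C of what such a program looks like (the thread count and function names are illustrative; link with -lpthread):

    #include <stdio.h>
    #include <pthread.h>

    #define NUM_THREADS 4

    /* Each thread executes this function; the argument carries its id. */
    static void *hello(void *arg) {
        long id = (long) arg;
        printf("Hello from thread %ld\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];
        for (long i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, hello, (void *) i);
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);  /* wait for every thread */
        return 0;
    }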

Day One


Timings, topic, and presenter:

• 10:00 to 10:30: Introduction to multicore computing (Aamir Shafi)
• 10:30 to 11:30: Background discussion: review of processes, threads, and architecture; speedup analysis (Akbar Mehdi)
• 11:30 to 11:45: Break
• 11:45 to 12:55 PM: Introduction to POSIX Threads (Akbar Mehdi)
• 12:55 PM to 1:25 PM: Prayer break
• 1:25 PM to 2:30 PM: Practical session: run a hello world PThreads program; introduce Linux, top, and Solaris; also introduce the first coding assignment (Akbar Mehdi)

Day Two


Timings, topic, and presenter:

• 10:00 to 11:00: POSIX Threads continued (Akbar Mehdi)
• 11:00 to 12:55 PM: Introduction to OpenMP (Mohsan Jameel)
• 12:55 PM to 1:25 PM: Prayer break
• 1:25 PM to 2:30 PM: OpenMP continued, plus a lab session (Mohsan Jameel)

Day Three


Timings, topic, and presenter:

• 10:00 to 12:00: Parallelizing the image processing application using PThreads and OpenMP: practical session (Akbar Mehdi and Mohsan Jameel)
• 12:00 to 12:55 PM: Introduction to Intel Cilk++ (Aamir Shafi)
• 12:55 PM to 1:25 PM: Prayer break
• 1:25 PM to 2:30 PM: Introduction to NVIDIA CUDA (Akbar Mehdi)
• 2:30 PM to 2:35 PM: Concluding remarks (Aamir Shafi)

Learning Objectives

• To become aware of the multicore revolution and its impact on the computer software industry
• To program multicore processors using POSIX Threads
• To program multicore processors using OpenMP and Cilk++
• To program Graphics Processing Units (GPUs) for general-purpose computation (using the NVIDIA CUDA API)


You may download the tentative agenda from http://hpc.seecs.nust.edu.pk/~aamir/res/mc_agenda.pdf

Next Session

• Review of important and relevant Operating Systems and Computer Architecture concepts, by Akbar Mehdi …
