Page 1: Getting the maximum performance in distributed clusters Intel Cluster Studio XE

Getting the maximum performance in distributed clusters: Intel Cluster Studio XE
Werner Krotz-Vogel
Development Products Division, Software and Services Group
May 2014

Page 2: Getting the maximum performance in distributed clusters Intel Cluster Studio XE

Intel® Software Conference 2014

Agenda

• Performance Tuning Methodology Overview
• Quick overview of Intel® Trace Analyzer and Collector: what's new in the 2015 beta
• Quick overview of Intel® VTune™ Amplifier XE: what's new in the 2015 beta
• Performance Tuning Methodology using ITAC and VTune™ Amplifier XE, demonstrated on the Poisson example
• MPI 3.0 Support with Intel® MPI
• Summary

Page 3: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Performance Tuning Methodology using ITAC and VTune™ Amplifier XE

Step 1: Cluster Level Analysis & Algorithmic Tuning
Step 2: Run-time Analysis & Tuning
Step 3: Intra-Node and Single Node Level Analysis

Page 4: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® Trace Analyzer and Collector 8.1 What’s new?


Page 5: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® Trace Analyzer and Collector 8.1 Update 3: What's New

• Fresh look-and-feel for the Intel® Trace Analyzer graphical interface: new toolbars, icons, and dialogs for a more streamlined analysis flow

• Addition of a Welcome Page and easy access to past projects

• Support for the dynamic profiling control command MPI_Pcontrol

• Support for the MPI 2.x standard

• New GUI-based installer on Linux*

Page 6: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® MPI Library 5.0 Beta Key Features


Page 7: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


What's New in Intel MPI Library 5.0 Beta

Initial MPI-3.0 support:

• Non-blocking and sparse collectives

• Fast Remote Memory Access (RMA)

• Large buffer support (e.g., > 2 GB) via the new MPI_Count type

ABI compatibility with the existing Intel® MPI Library and other MPICH*-based applications

Page 8: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Support for the latest MPI-3.0 features


Use non-blocking collectives for a complete comm/comp overlap

More efficient one-sided communication via new Fast Remote Memory Access functionality

Example (C):

// Start synchronization
MPI_Ibarrier(comm, &req);
// Do extra computation
…
// Complete synchronization
MPI_Test(&req, …);

Page 9: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® Trace Analyzer and Collector 9.0 Beta Key Features


Page 10: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


What’s New in Intel Trace Analyzer and Collector 9.0 Beta


Initial MPI-3.0 Support

Automatic Performance Assistant

Detect common MPI performance issues

Automated tips on potential solutions

Page 11: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® Trace Analyzer and Collector: Optimize MPI Communications (part of Intel® Cluster Studio XE)

Visually understand parallel application behavior:
• Communications patterns
• Hotspots
• Load balance

MPI checking:
• Detect deadlocks
• Data corruption
• Errors in parameters, data types, etc.

[Chart: Intel® Trace Analyzer and Collector scaling, number of processes (0 to 7000) by year, 2010 to 2012]

Page 12: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


ITAC 9.0: What’s New


Collection:
• Full MPI-3 support
• New mpirun options to customize collection
• Experimental TIME-WINDOWS support
• System calls profiling

Analysis:
• New Performance Assistant
• Visual appearance enhancements
• New Summary Page

New tutorials

Page 13: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


New mpirun data collection keys


Reduce the trace file size or the number of Message Checker reports (supported only at runtime with the Hydra process manager):

• -trace-collectives: collect info only about Collective operations

• -trace-pt2pt: collect info only about Point-to-Point operations

Example:

$ [mpirun|mpiexec] -trace-pt2pt -n 4 ./myApp

Page 14: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


System calls profiling (1|2)


Linux* only. Capability to trace the following system calls:

access, clearerr, close, creat, dup, dup2, fclose, fdopen, feof, ferror, fflush, fgetc, fgetpos, fgets, fileno, fopen, fprintf, fputc, fputs, fread, freopen, fseek, fsetpos, ftell, fwrite, getc, getchar, gets, lseek, lseek64, mkfifo, perror, pipe, poll, printf, putc, putchar, puts, read, readv, remove, rename, rewind, setbuf, setvbuf, sync, tmpfile, tmpnam, umask, ungetc, vfprintf, vprintf, write, writev

Page 15: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


System calls profiling (2|2)


To turn on system call collection, add any of the following lines to the ITC configuration file:

• To collect all system calls:

ACTIVITY SYSTEM on

• To collect a specific function:

STATE SYSTEM:<func_name> ON

View system calls in ITA via the new SYSTEM group, which can be expanded in the ordinary way.
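For instance, a configuration that traces only a few file I/O calls might look like this (the chosen function names are just an illustration picked from the list above):

STATE SYSTEM:fopen ON
STATE SYSTEM:fwrite ON
STATE SYSTEM:fclose ON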

Page 16: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


New Summary Page


An at-a-glance view of MPI activity and hints on how to start the analysis of the application.

Page 17: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


New Performance Assistant


Automatic highlighting of performance issues, both in the GUI and on the CLI. Currently 4 types of issues are detected.

Page 18: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® VTune™ Amplifier XE 2013 Key Features


Page 19: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® VTune™ Amplifier XE: Tune Applications for Scalable Multicore Performance

Fast, accurate performance profiles:
• Hotspots (statistical call tree)
• Call counts (statistical)
• Hardware event sampling

Thread profiling:
• Visualize thread interactions on the timeline
• Balance workloads

Easy set-up:
• Pre-defined performance profiles
• Use a normal production build

Find answers fast:
• Filter extraneous data
• View results on the source / assembly

Compatible:
• Microsoft, GCC, Intel compilers
• C/C++, Fortran, Assembly, .NET, Java
• Latest Intel® processors and compatible processors (1)
• Windows or Linux
• Visual Studio integration (Windows)
• Standalone user interface and command line
• 32- and 64-bit

(1) IA32 and Intel® 64 architectures. Many features work with compatible processors. Event-based sampling requires a genuine Intel® processor.

Page 20: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® VTune™ Amplifier XE: What's New in 2013 SP1?

More profiling data:
• Intel® Xeon Phi™: memory and vectorization profiling
• Gen graphics tuning: GT event counting, offload, OpenCL*, …

Better data mining, find answers faster:
• Search added to all grids
• Timeline sorting, band height, time scale configuration
• Loop hierarchy, overhead and spin time metrics
• OpenMP* 4.0: affinity controls, tasking and scalability analysis

Easier to use:
• Attach to a running Java process
• Contextual help for hardware events and performance metrics
• Easier generation of command line options from the user interface

New OS & processor support:
• Intel® Xeon Phi™, Haswell: Windows* & Linux*
• Windows 8 desktop and Visual Studio* 2012
• Collection on Windows UI and Windows Blue
• Latest Linux distributions

New since the first 2013 release. Some features released in earlier updates.

Page 21: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® VTune™ Amplifier XE 2015 Beta Key Features


Page 22: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


What's New in Intel VTune™ Amplifier XE 2015 Beta

• GPU analysis

• TSX analysis

• Remote collection in the GUI via ssh

• Mac OS* GUI data viewer (no collection)

• CSV import and custom collector support

• Timeline grouping

Page 23: Getting the maximum performance in distributed clusters Intel Cluster Studio XE

Performance Tuning Methodology using ITAC and VTune™ Amplifier XE

Page 24: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Performance Tuning Methodology using ITAC and VTune™ Amplifier XE

Step 1: Cluster Level Analysis & Algorithmic Tuning
Step 2: Run-time Analysis & Tuning
Step 3: Single Node Level Analysis

Global analysis of the whole application gives first indications of performance issues:

• Run time and scaling analysis

• Message passing performance analysis on an inter/intra-node level, including finding MPI hotspots

• Network idealization that yields an imbalance diagram, providing guidance on how to proceed

Algorithmic/source code changes can be implemented for better message passing practices or for improving the load balance of the application by:

• Fixing imbalances in communication patterns of MPI and non-MPI routines. E.g., slow sequential I/O often causes imbalances.

• Removing unnecessary synchronization. E.g., message passing patterns using blocking send and receive may cause a send/receive order that increases wait times. This may be resolved by using non-blocking MPI_Isend/MPI_Irecv pairs, as sketched below.
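As an illustration, here is a minimal self-contained sketch (not from the original deck) of a non-blocking neighbor exchange completed by MPI_Waitall; the ring pattern, buffer sizes, and message tag are assumptions made for the example:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    enum { COUNT = 4 };                       /* illustrative message size */
    double sendbuf[COUNT] = {0}, recvbuf[COUNT];
    MPI_Request reqs[2];
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int right = (rank + 1) % size;            /* ring neighbors */
    int left  = (rank - 1 + size) % size;

    /* Post receive and send without blocking: no rank can deadlock
       on the send/receive order. */
    MPI_Irecv(recvbuf, COUNT, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, COUNT, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... independent computation can overlap here ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    if (rank == 0) printf("exchange complete\n");
    MPI_Finalize();
    return 0;
}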

Page 25: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Performance Tuning Methodology using ITAC and VTune™ Amplifier XE

Step 1: Cluster Level Analysis & Algorithmic Tuning
Step 2: Run-time Analysis & Tuning
Step 3: Single Node Level Analysis

Intel MPI can be tuned without changing the source code using:

• Environment variables for tuning collective operations, e.g., I_MPI_ADJUST_ALLREDUCE

• Environment variables for changing the message passing characteristics, e.g., I_MPI_DAPL_DIRECT_COPY_THRESHOLD

• It is also possible to change the MPI process/rank-to-node mapping for a better inter/intra-node communication balance, as in the example below.
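As a hedged illustration (host names, core counts, and the application name are hypothetical), the Hydra launcher accepts a machine file that controls how ranks are placed on nodes:

$ cat hosts
node01:12
node02:12
$ mpirun -machinefile hosts -n 24 ./myApp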

Page 26: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Performance Tuning Methodology using ITAC and VTune™ Amplifier XE

Step 1: Cluster Level Analysis & Algorithmic Tuning
Step 2: Run-time Analysis & Tuning
Step 3: Single Node Level Analysis

Single node tuning is necessary for serial and parallel performance optimizations, and is important for improving overall application scalability and reducing load imbalance. Bandwidth analysis on the node is important for understanding deficiencies in cluster-level scaling. Example: conducting a hotspot analysis for each rank, or for the critical ranks identified in steps 1 and 2. The call stack information for a specific MPI routine may also be helpful in refining the analysis of step 1.

Page 27: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Simple Scaling analysis

A first step may be simply to run the program with various numbers of processes p and record the timings T[p]

Speedup S is defined as: S[p] = T[1]/T[p]

Efficiency E is defined as: E[p] = S[p]/p

An ideal parallel program will show: S[p] = p and E[p] = 1
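For example, with made-up timings: if T[1] = 100 s and T[16] = 10 s, then S[16] = 100/10 = 10 and E[16] = 10/16 ≈ 0.63, i.e., the 16-process run achieves 63% parallel efficiency.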

Page 28: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Example: Poisson solver on a square 3200x3200 computational grid analysis

Poisson solver: a simple implementation of a Poisson solver, e.g., for the heat equation.

• The 3200x3200 grid points can be distributed to MPI ranks using:

• a 2D process grid, e.g., in the case of 4 ranks, 2 rows x 2 columns of processes, giving 1600x1600 local grid points per MPI rank;

• a 1D distribution with 4 rows x 1 column or 1 row x 4 columns, giving 3200x800 local grid points per MPI rank.

Benchmark environment: Intel® Xeon® E5 v2 processors (Ivy Town) with 12 cores, 2 processors per node (24 cores per node); Mellanox QDR InfiniBand; operating system: RedHat EL 6.1; Intel® MPI 4.1.

Page 29: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Example: Poisson solver on a square 3200x3200 computational grid analysis

Analysis of the application with different numbers of processes (p)

Speed-up: S[p] = T[1]/T[p]

Parallel Efficiency: E[p] = S[p]/p

The speedup curves for the 2D quadratic process grid and the 1D process grids (1D 1xN and 1D Nx1) show some differences in scaling

Page 30: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 1- Cluster Level Analysis using Intel Trace Analyzer and Collector

The MPI communication and compute performance breakdown of total run time

T[p] = T_comp[p] + T_mpi[p]

can be accessed through the trace analyzer's Function Profile (Intel® Trace Analyzer displays the Function Profile Chart when opening a trace file).

The trace file can be generated by adding the flag “-trace” to the mpirun or mpiexec.hydra command
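For example (the rank count and binary name are illustrative), a traced run produces an .stf trace file, named after the binary by default, that can then be opened in the GUI:

$ mpirun -trace -n 16 ./poisson.x
$ traceanalyzer ./poisson.x.stf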

Notes on the Intel Trace Analyzer and Collector flat profile (screenshot):

• The trace analyzer API was used to time just 100 of 1653 iterations; VT_API is paused time.

• Timing is accumulated over ranks; the Application time is T_comp.

• The column showing the average time per process can be added by right-clicking and choosing Function Profile Settings.

Page 31: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Example - measuring MPI times with ITAC Function Profile

MPI breakdown of a real application (VASP). All MPI functions are listed and may be sorted by clicking on the top of each column.

Page 32: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 1- Cluster Level Analysis using Intel Trace Analyzer and Collector

Parallel efficiency can be calculated and plotted for the compute time of the application separately:

• Parallel efficiency: E[p] = S[p]/p

• One can see that MPI time is insignificant up to 48 cores (the equivalent of two nodes).

• Above 96 ranks (4 nodes), the pure computational performance of the application even yields superlinear scaling.

• However, at the same data point, around 96 ranks, MPI time becomes the main reason for low efficiency.

Page 33: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 1- Cluster Level Analysis using Intel Trace Analyzer and Collector

Message Passing Profile – 2D case (48x32)

• The Message Passing Profile displays various characteristics of message passing in a sender/receiver matrix; it can be obtained through Charts -> Message Profile Chart.

• Dealing with 1536 ranks generates a huge matrix, so we may fuse all ranks for each node: Advanced -> Process Aggregation -> All Nodes.

• The diagonal now shows the intra-node performance characteristics, while the off-diagonal entries show the inter-node statistics. Without process aggregation, the diagonal is only filled if messages are sent from a rank n to the same rank n, which is usually not a good idea.

Page 34: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 1- Cluster Level Analysis using Intel Trace Analyzer and Collector

Message Passing Profile - 1D case (1536x1)

• There are far fewer messages in the 1D case, and they are much larger, which leads to a much higher average transfer rate.

• In the 1D case we also transfer a much larger amount of data.

Page 35: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Algorithm and Network evaluation

ITAC shows the timing of all MPI routines used by a program.

The time spent in these routines may be due to network transfer times caused by bandwidth limitations.

The other possibility is waiting time caused by the algorithm: load imbalance or dependencies.

Page 36: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


A simple Network Model

The simplest network model defines the transfer time for a message of volume V as

T_trans[V] = L + (1/BW)*V

• Latency L = transfer time for a 0-byte message

• Bandwidth BW = transfer rate for (asymptotically) large messages
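For example, with hypothetical values L = 2 µs and BW = 4 GB/s, a 1 MB message takes about 2 µs + (10^6 B)/(4·10^9 B/s) ≈ 252 µs and is bandwidth-dominated, while a 16-byte message takes about 2 µs and is latency-dominated.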

Page 37: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Ideal Network Simulator

It is extremely complicated to simulate a realistic network!

An extreme case, the ideal network, may be simulated by setting all transfer times to 0. This corresponds to L = 0 and BW = ∞ in the simple model.

ITAC offers an ideal network simulation with transfer times set to 0; compute times (non-MPI) stay the same.

Page 38: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 1- Cluster Level Analysis using Intel Trace Analyzer and Collector

Idealization of the network and the load imbalance diagram

• Employing the ideal network simulator (invoked through the Advanced -> Idealization menu) allows us to separate the network stack's impact on total MPI performance from algorithmic inefficiencies like imbalance and dependencies.

• A simple network model for the transfer time as a function of the message volume V is

T_trans[V] = L + (1/BW)*V

where L is the latency, defined as the time needed to transfer a 0-byte message, and BW is the transfer rate for asymptotically large messages.

Page 39: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 2- Runtime Analysis

It is possible to improve MPI performance without changing the source code.

This can be done by using Intel MPI environment variables or by changing the mapping of ranks to compute nodes.

The process-to-node mapping can be altered by advanced methods such as machine or configuration files, or by reordering the ranks inside a communicator.

One option is to start the tuning by concentrating on global operations:

• Set the environment variable I_MPI_DEBUG=5. This prints valuable information about the variables used, the network fabrics, and the process placement.

• Setting I_MPI_DEBUG to 6 will additionally reveal the default algorithms used for collective operations, as in the example below.
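For instance (the rank count and binary name are illustrative):

$ export I_MPI_DEBUG=5
$ mpirun -n 24 ./myApp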

Page 40: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 2- Runtime Analysis

The Intel MPI reference guide lists 8 different algorithms for MPI_Allreduce.

• The algorithm can easily be changed by setting the environment variable I_MPI_ADJUST_ALLREDUCE to an integer value in the range 1-8.
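A sketch of trying one algorithm per run follows; the value 5 is an arbitrary illustration, so consult the reference guide for the algorithm each number selects:

$ export I_MPI_ADJUST_ALLREDUCE=5
$ mpirun -n 96 ./poisson.x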

Page 41: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 3 - Analyzing Intra-Node performance using Intel® VTune™ Amplifier XE

Intel Trace Analyzer and Collector is not sufficient for hybrid applications due to its primary focus on MPI performance:

• Hybrid codes combine parallel MPI processes with threading for a more efficient exploitation of computing resources.

$ mpirun -n N amplxe-cl -result-dir hotspots_N -collect hotspots -- poisson.x

ITAC analysis showed us that MPI functions are among the hotspots. But which MPI function? The MPI_Waitall function actually has the largest contribution to the application run time.

Page 42: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 3 - Analyzing Intra-Node performance using Intel® VTune™ Amplifier XE

Hotspot Functions

Page 43: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 3 - Analyzing Intra-Node performance using Intel® VTune™ Amplifier XE

Hotspot MPI/System functions

Page 44: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 3 - Analyzing Intra-Node performance using Intel® VTune™ Amplifier XE

[Screenshots: call stacks #1, #2, and #3]

Page 45: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 3 - Analyzing Intra-Node performance using Intel® VTune™ Amplifier XE – Bandwidth Analysis

# Choose the collection type (the later assignment wins):
export VTUNE_COLLECT=snb-bandwidth
export VTUNE_COLLECT=hotspots

export VTUNE_FLAGS=-start-paused

# Launch one rank under amplxe-cl; the wrapping script's first argument ($1)
# is appended to the result directory name
mpiexec.hydra $MPI_FLAGS -n 1 amplxe-cl $VTUNE_FLAGS --result-dir ${VTUNE_COLLECT}_$1 --collect $VTUNE_COLLECT

Page 46: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 3 - Analyzing Intra-Node performance using Intel® VTune™ Amplifier XE - Bandwidth Analysis

Page 47: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Step 3 - Analyzing Intra-Node performance using Intel® VTune™ Amplifier XE

[Chart: Bandwidth vs. parallel efficiency on the first node. X-axis: number of ranks (1, 6, 12, 24, 48, 72, 96); left y-axis: bandwidth on one node in GB/s (0 to 100); right y-axis: parallel efficiency (0 to 1.4). Data point labels from the chart: 8.908, 61.836, 83.548, 86.569, 80.717, 30.392, 15.839.]

Page 48: Getting the maximum performance in distributed clusters Intel Cluster Studio XE

MPI 3.0 Support with Intel® MPI Intel® MPI Library 5.0 and Intel® Trace Analyzer and Collector 9.0

Page 49: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


How do you spell MPI?

A de facto standard for communicating between processes of a parallel program on a distributed memory system.

Standardized: supported on almost all platforms

Portable: no need to modify your code when porting

Performance opportunities: vendor MPIs can exploit native hardware features

Functionality: over 125 routines defined by a committee

#include "mpi.h“ int main(argc,argv){ MPI_Init(&argc,&argv); MPI_Comm_size(MPI_COMM_WORLD,&numtasks); MPI_Comm_rank(MPI_COMM_WORLD,&rank); printf ("Number of tasks= %d \ My rank= %d\n",ntasks,rank); /******* do some work *******/ MPI_Finalize(); }

Example (C)

MPI include file

Initialize MPI environment

Terminate MPI environment

Do work and make MPI calls

Page 50: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


What is in MPI-3?

Topic | Motivation | Main Result
Collective Operations | Collective performance | Non-blocking & sparse collectives
Remote Memory Access | Cache coherence, PGAS support | Fast RMA
Backward Compatibility | Buffers > 2 GB | Large buffer support, const buffers
Fortran Bindings | Fortran 2008 | Fortran 2008 bindings; C++ bindings removed
Tools Support | PMPI limitations | MPIT interface
Hybrid Programming | Core count growth | MPI_Mprobe, shared memory windows
Fault Tolerance | Node count growth | None. Next time?

Page 51: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


I want a complete comm/comp overlap

Problem

Computation/communication overlap is not possible with the blocking collective operations

Solution: Non-blocking Collectives

Add non-blocking equivalents for existing blocking collectives

Do not mix non-blocking and blocking collectives on different ranks in the same operation


Example (C):

// Start synchronization
MPI_Ibarrier(comm, &req);
// Do extra computation
…
// Complete synchronization
MPI_Test(&req, …);
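To make the overlap pattern concrete, here is a minimal self-contained sketch (not from the deck) that starts a non-blocking MPI_Iallreduce, does independent work, and completes the collective with MPI_Wait:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, one = 1, sum = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Start the reduction without blocking */
    MPI_Iallreduce(&one, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... independent computation overlaps with the collective here ... */

    /* Complete the collective before using 'sum' */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 0) printf("sum = %d\n", sum);

    MPI_Finalize();
    return 0;
}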

Page 52: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


I have a sparse communication network

Problem

Neighbor exchanges are poorly served by the current collective operations (memory and performance losses)

Solution: Sparse Collectives

Add blocking and non-blocking Allgather* and Alltoall* collectives based on neighborhoods


Example (FORTRAN):

call MPI_NEIGHBOR_ALLGATHER(sendbuf, sendcount, sendtype, &
                            recvbuf, recvcount, recvtype, &
                            graph_comm, ierror)

Page 53: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


I want to use one-sided calls to reduce sync overhead

Problem

MPI-2 one-sided operations are too general to work efficiently on cache-coherent systems and to compete with PGAS languages.

Solution: Fast Remote Memory Access

Eliminate unnecessary overheads by adding a 'unified' memory model. Simplify the usage model by supporting MPI_Request-based non-blocking calls, extra synchronization calls, relaxed restrictions, shared memory, and much more.


Example (FORTRAN):

call MPI_WIN_GET_ATTR(win, MPI_WIN_MODEL, &
                      memory_model, flag, ierror)
if (memory_model .eq. MPI_WIN_UNIFIED) then
  ! private and public copies coincide
end if

Page 54: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


I’m sending *very* large messages

Problem

Original MPI counts are limited to 2 Gigaunits, while applications want to send much more

Solution: Large Buffer Support

“Hide” the long counts inside the derived MPI datatypes

Add new datatype query calls to manipulate long counts


Example (C):

// mpi_count may be, e.g., 64-bit long
MPI_Get_elements_x(&status, datatype, &mpi_count);

Page 55: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


None of these apply to me. What else you got?

I have a hybrid application:
• Create a communicator inside a shared memory domain (intranode, via MPI_Comm_split_type), as sketched below
• Use the new MPI_Mprobe calls
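A minimal sketch (not from the deck) of creating a per-node communicator with MPI_Comm_split_type:

/* Split MPI_COMM_WORLD into one communicator per shared-memory node */
MPI_Comm shmcomm;
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                    MPI_INFO_NULL, &shmcomm);
int noderank;
MPI_Comm_rank(shmcomm, &noderank);  /* rank within the node */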

I need to know what architecture I'm running on:
• The predefined info object MPI_INFO_ENV allows for environment queries

I’m using the C++ bindings

Tough luck. C++ bindings have been removed from the standard.


Page 56: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Tell me more about this Intel® MPI Library

Optimized MPI application performance:
• Application-specific tuning
• Automatic tuning

Lower latency and multi-vendor interoperability:
• Optimized support for the latest OFED* features

Faster MPI communication:
• Optimized collectives

Sustainable scalability beyond 120K cores:
• Native InfiniBand* interface allows for reduced memory load and higher bandwidth

Simplify and Accelerate Clusters:
• Intel® Cluster Ready compliance


Page 58: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® MPI Library 5.0 & Intel® Trace Analyzer and Collector 9.0 Beta Nov 2013

Intel® MPI Library:
• Initial MPI-3.0 support: non-blocking collectives, fast RMA, large counts
• ABI compatibility with existing Intel® MPI Library applications

Intel® Trace Analyzer and Collector:
• Initial MPI-3.0 support
• Automatic Performance Assistant: detects common MPI performance issues and gives automated tips on potential solutions

Page 59: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


What can I do?

Register for the Beta program (it's free): bit.ly/impi50-beta

Start playing around with MPI-3.0

Come talk to us about it:
• Visit the Intel® Clusters and HPC Technology forums: software.intel.com/en-us/forums/intel-clusters-and-hpc-technology
• Check out the Intel® MPI Library product page (LEARN tab) for articles, examples, etc.: www.intel.com/go/mpi

Page 60: Getting the maximum performance in distributed clusters Intel Cluster Studio XE

Summary

Page 61: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel Technical Computing

Compute enables a New Scientific Method*: technical computing and R&D workflow innovation

[Diagram: an iterative R&D workflow cycle with the steps Hypothesis, Prediction, Modeling & Simulation / Experiment Refinement, Physical Prototyping, Analysis, Conclusion, and Refinement; compute accelerates the method, and the cycle iterates.]

1. Satava, Richard M. "The Scientific Method Is Dead-Long Live the (New) Scientific Method." Journal of Surgical Innovation (June 2005).

Page 62: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® Technical Computing

Millions of Applications… Plus Yours: delivering performance across generations and platforms

[Diagram: today's development tools, performance/optimizations, and standards]

Intel Architecture ecosystem: increasing the return and longevity of your application investment

Page 63: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Intel® Technical Computing

The Right Tool for the Job: A Continuum of Computing. How do you get breakthroughs for your investment?

[Diagram: a continuum from Tablet and Desktop through Intel® Xeon® Workstation and Local Cluster Computation to Large Clusters]

A common underlying architecture and software tools scale investments across technical computing platforms.

Page 64: Getting the maximum performance in distributed clusters Intel Cluster Studio XE

Intel Confidential — Do Not Forward

Page 65: Getting the maximum performance in distributed clusters Intel Cluster Studio XE


Legal Disclaimer & Optimization Notice

INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS”. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Copyright © 2014, Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Xeon Phi, Core, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries.

Optimization Notice

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

