Overview of the MVAPICH Project: Latest Status and Future Roadmap
MVAPICH2 User Group (MUG) Meeting
by
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: [email protected]
http://www.cse.ohio-state.edu/~panda
High-End Computing (HEC): PetaFlop to ExaFlop
Expected to have an ExaFlop system in 2020-2021!
• 100 PFlops in 2017
• 143 PFlops in 2018
• 1 EFlops in 2020-2021?
Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges
• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
• Application Kernels/Applications (HPC and DL)
• Networking Technologies: InfiniBand, 40/100/200 GigE, Aries, and Omni-Path
• Multi-/Many-core Architectures and Accelerators (GPU and FPGA)
• Middleware: co-design opportunities and challenges across the various layers, targeting performance, scalability, and resilience
• Communication library or runtime for programming models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
– Low memory footprint
• Scalable collective communication
– Offload
– Non-blocking
– Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading
• Integrated support for accelerators (GPGPUs and FPGAs)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, …)
• Virtualization
• Energy-awareness
Designing (MPI+X) at Exascale
Overview of the MVAPICH2 Project
• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 3,025 organizations in 89 countries
– More than 564,000 (> 0.5 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (June '19 ranking)
• 3rd, 10,649,600 cores (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China
• 5th, 448,448 cores (Frontera) at TACC
• 8th, 391,680 cores (ABCI) in Japan
• 15th, 570,020 cores (Nurion) in South Korea and many others
– Available with software stacks of many vendors and Linux Distros (RedHat, SuSE, and OpenHPC)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
• Partner in the TACC Frontera system
MVAPICH Project Timeline (figure): project history from Oct 2002 to the present, showing when each component entered the family: MVAPICH (now EOL), MVAPICH2, OMB, MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-MIC, MVAPICH2-Virt, MVAPICH2-EA, and OSU INAM.
MVAPICH2 Release Timeline and Downloads (figure): cumulative downloads from the OSU site, Sep 2004 through May 2019, growing to nearly 600,000, annotated with major releases: MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV2 1.0, MV 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-GDR 2.3.2, MV2-X 2.3rc2, MV2-Virt 2.2, MV2 2.3.2, OSU INAM 0.9.3, MV2-Azure 2.3.2, and MV2-AWS 2.3.
Architecture of MVAPICH2 Software Family (for HPC and DL)
• High Performance Parallel Programming Models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High Performance and Scalable Communication Runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi, ARM, NVIDIA GPGPU)
• Transport protocols: RC, XRC, UD, DC, SHARP2*, ODP, SR-IOV, multi-rail
• Transport mechanisms: shared memory, CMA, IVSHMEM, XPMEM
• Modern features: MCDRAM*, NVLink, CAPI*
(* Upcoming)
• Research is carried out to explore new designs
• Designs are first presented in conference/journal publications
• Best performing designs are incorporated into the codebase
• Rigorous quality assurance (QA) procedure before making a release
– Exhaustive unit testing
– Various test procedures on a diverse range of platforms and interconnects
– Test 19 different benchmarks and applications including, but not limited to
• OMB, IMB, MPICH Test Suite, Intel Test Suite, NAS, ScaLAPACK, and SPEC
– Spend about 18,000 core hours per commit
– Performance regression and tuning
– Applications-based evaluation
– Evaluation on large-scale systems
• All versions (alpha, beta, RC1 and RC2) go through the above testing
Strong Procedure for Design, Development and Release
MVAPICH2 Software Family (Requirements and Associated Library)
• MVAPICH2 – MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2)
• MVAPICH2-Azure – Optimized support for the Microsoft Azure platform with InfiniBand
• MVAPICH2-X – Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM) and OSU INAM (InfiniBand Network Monitoring and Analysis)
• MVAPICH2-X-AWS – Advanced MPI features (SRD and XPMEM) with support for the Amazon Elastic Fabric Adapter (EFA)
• MVAPICH2-GDR – Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications
• MVAPICH2-EA – Energy-aware MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2)
• OEMT – MPI energy monitoring tool
• OSU INAM – InfiniBand network analysis and monitoring
• OMB – Microbenchmarks for measuring MPI and PGAS performance
• Released on 08/09/2019
• Major Features and Enhancements
– Improved performance for inter-node communication
– Improved performance for Gather, Reduce, and Allreduce with cyclic hostfile
• Thanks to X-ScaleSolutions for the patch
– Improved performance for intra-node point-to-point communication
– Add support for Mellanox HDR adapters
– Add support for Cascade Lake systems
– Add support for Microsoft Azure platform
• Enhanced point-to-point and collective tuning for Microsoft Azure
– Add support for new NUMA-aware hybrid binding policy
– Add support for AMD EPYC Rome architecture
– Improved multi-rail selection logic
– Enhanced heterogeneity detection logic
– Enhanced point-to-point and collective tuning for AMD EPYC Rome, Frontera@TACC, Mayer@Sandia, Pitzer@OSC, Summit@ORNL, Lassen@LLNL, and Sierra@LLNL systems
– Add multiple PVARs and CVARs for point-to-point and collective operations
MVAPICH2 2.3.2
• Support for highly-efficient inter-node and intra-node communication
• Scalable start-up
• Optimized collectives using SHArP and multi-leaders
• Support for OpenPOWER and ARM architectures
• Performance engineering with MPI-T
• Application scalability and best practices
Highlights of MVAPICH2 2.3.2-GA Release
One-way Latency: MPI over IB with MVAPICH2
(Figures) Small-message and large-message one-way latency across six fabrics (TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-4-EDR, Omni-Path, and ConnectX-6-HDR); small-message latencies fall between roughly 1.0 and 1.2 us (annotated values: 1.11, 1.19, 1.01, 1.15, 1.04, and 1.1 us).
Platforms: TrueScale-QDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, IB switch; ConnectX-3-FDR – 2.8 GHz deca-core (IvyBridge) Intel, PCIe Gen3, IB switch; ConnectIB-Dual FDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, IB switch; ConnectX-4-EDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, IB switch; Omni-Path – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, Omni-Path switch; ConnectX-6-HDR – 3.1 GHz deca-core (Haswell) Intel, PCIe Gen3, IB switch.
Bandwidth: MPI over IB with MVAPICH2
(Figures) Unidirectional and bidirectional bandwidth (MBytes/sec) for message sizes from 4 bytes to 1 MB across the same six fabrics. Peak unidirectional bandwidth: 3,373 (TrueScale-QDR), 6,356 (ConnectX-3-FDR), 12,590 (ConnectIB-Dual FDR), 12,083 (ConnectX-4-EDR), 12,366 (Omni-Path), and 24,532 (ConnectX-6 HDR) MBytes/sec. Peak bidirectional bandwidth ranges from 6,228 MBytes/sec (QDR) and 12,161 MBytes/sec (FDR) up to 21,227, 21,983, and 24,136 MBytes/sec for the dual-FDR/EDR/Omni-Path configurations and 48,027 MBytes/sec for ConnectX-6 HDR. Platforms are the same as in the latency figure.
• Near-constant MPI and OpenSHMEM initialization time at any process count
• 10x and 30x improvement in startup time of MPI and OpenSHMEM respectively at 16,384 processes
• Memory consumption reduced for remote endpoint information by O(processes per node)
• 1GB Memory saved per node with 1M processes and 16 processes per node
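To see where the per-node saving comes from, a back-of-the-envelope illustration (assuming roughly 64 bytes of endpoint state per remote process, a figure not taken from the slides): with 1M processes, each process would hold about 1M x 64 B ≈ 64 MB of endpoint information, i.e., about 1 GB per node at 16 processes per node; keeping a single shared copy per node leaves ~64 MB, saving close to 1 GB per node.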
Towards High Performance and Scalable Startup at Exascale
(Figures) Job startup performance and memory required to store endpoint information: the optimized PGAS/MPI designs — (a) on-demand connection management, (b) PMIX_Ring, (c) PMIX_Ibarrier, (d) PMIX_Iallgather, and (e) shared-memory-based PMI — improve on the state-of-the-art PGAS (P) and MPI (M) startup schemes.
On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D. K. Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS '15)
PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D. K. Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia '14)
Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D. K. Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '15)
SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D. K. Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '16)
Startup Performance on KNL + Omni-Path
(Figure) MPI_Init and Hello World time on Oakforest-PACS (KNL + Omni-Path) from 64 to 64K processes, comparing MPI_Init (MVAPICH2-2.3a) and Hello World (MVAPICH2-2.3a).
• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 – full scale)
• 8.8 times faster than Intel MPI at 128K processes (courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8 s and 21 s, respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node
New designs available since MVAPICH2-2.3a and as patch for SLURM-15.08.8 and SLURM-16.05.1
Startup Performance on TACC Frontera
• MPI_Init takes 3.9 seconds on 57,344 processes on 1,024 nodes (vs. 4.5 seconds for Intel MPI 2019)
• All numbers reported with 56 processes per node
New designs available in MVAPICH2-2.3.2
(Figure) MPI_Init time (milliseconds) on Frontera from 56 to 57,344 processes, comparing Intel MPI 2019 and MVAPICH2 2.3.2.
(Figures) osu_init and osu_hello execution time (ms) on 1 to 128 Frontera nodes (56 ppn), comparing MVAPICH2 2.3.2 and MVAPICH2 2.3.1.
Startup Performance on TACC Frontera MVAPICH2 2.3.1 vs 2.3.2
• MVAPICH2 2.3.2 significantly improves startup performance over MVAPICH2 2.3.1
• All numbers reported with 56 processes per node
Benefits of SHARP Allreduce at Application Level
(Figure) Average DDOT Allreduce time of HPCG at (nodes, ppn) = (4,28), (8,28), and (16,28), comparing MVAPICH2 and MVAPICH2-SHArP; SHARP reduces the Allreduce time by up to 12%.
SHARP support available since MVAPICH2 2.3a
Parameter / Description / Default:
– MV2_ENABLE_SHARP=1: enables SHARP-based collectives (default: disabled)
– --enable-sharp: configure flag to enable SHARP (default: disabled)
• Refer to Running Collectives with Hardware based SHARP support section of MVAPICH2 user guide for more information
• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3-userguide.html#x1-990006.26
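As an illustration (not an excerpt from the user guide): a SHARP-enabled build is configured with --enable-sharp, and a job can then be launched with the runtime flag set, e.g., mpirun_rsh -np 448 -hostfile hosts MV2_ENABLE_SHARP=1 ./osu_allreduce.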
Intra-node Point-to-Point Performance on OpenPOWER
(Figures) Intra-socket small-message latency (0.22 us), large-message latency, bandwidth, and bi-directional bandwidth, comparing MVAPICH2-2.3.1 and SpectrumMPI-2019.02.07.
Platform: two OpenPOWER (POWER9-ppc64le) nodes using a Mellanox EDR (MT4121) HCA.
Intra-node Point-to-point Performance on ARM Cortex-A72
(Figures) Small-message latency (0.27 us for 1-byte messages), large-message latency, bandwidth, and bi-directional bandwidth for MVAPICH2-2.3.
Platform: ARM Cortex-A72 (aarch64) dual-socket CPU with 64 cores (32 cores per socket).
● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables
● Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU
● Control the runtime's behavior via MPI Control Variables (CVARs)
● Introduced support for new MPI_T-based CVARs to MVAPICH2
○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE
● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications
● S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, EuroMPI/USA ‘17, Best Paper Finalist
● More details in Sameer Shende’s talk today and poster presentations
Performance Engineering Applications using MVAPICH2 and TAU
(Figures) VBUF usage without and with CVAR-based tuning, as displayed by ParaProf.
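To make the MPI_T usage concrete, the following is a minimal C sketch (not taken from the slides) of how an application or a tool such as TAU can locate and write one of the CVARs named above; the chosen value is illustrative, and whether a CVAR remains writable after MPI_Init is implementation-defined.

/* Minimal sketch: find MPIR_CVAR_VBUF_POOL_SIZE via MPI_T and write it.
 * Error handling omitted; the value 1024 is illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int provided, ncvars, i;
    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_cvar_get_num(&ncvars);
    for (i = 0; i < ncvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope, count, value = 1024;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;
        MPI_T_cvar_handle handle;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) {
            MPI_T_cvar_handle_alloc(i, NULL, &handle, &count);
            MPI_T_cvar_write(handle, &value);   /* tune the VBUF pool size */
            MPI_T_cvar_handle_free(&handle);
            printf("Set %s to %d\n", name, value);
        }
    }
    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}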
Application Scalability on Skylake and KNL
Applications: MiniFE (1300x1300x1300, ~910 GB); NEURON (YuEtAl2012); Cloverleaf (bm64), MPI+OpenMP with NUM_OMP_THREADS = 2
Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K
(Figures) Execution time of the three applications with MVAPICH2 on the Skylake (48 ppn) and KNL (64-68 ppn) partitions of TACC Stampede2 at scales from 48 to 8,192 processes.
Courtesy: Mahidhar Tatineni @SDSC, Dong Ju (DJ) Choi @SDSC, and Samuel Khuvis @OSC. Testbed: TACC Stampede2 using MVAPICH2-2.3b.
• Released on 08/16/2019
• Major Features and Enhancements
– Based on MVAPICH2-2.3.2
– Enhanced tuning for point-to-point and collective operations
– Targeted for Azure HB & HC virtual machine instances
– Flexibility for 'one-click' deployment
– Tested with Azure HB & HC VM instances
MVAPICH2-Azure 2.3.2
Performance of Radix
(Figures) Total execution time of Radix (lower is better) on Azure HC instances (16 to 352 processes; nodes x ppn from 1x16 to 8x44) and HB instances (60 to 240 processes; 1x60 to 4x60), comparing MVAPICH2-X and HPCx. MVAPICH2-X is up to 3x faster than HPCx on HC and up to 38% faster on HB.
Performance of FDS (HC)
(Figures) Total execution time of FDS (lower is better) on Azure HC: single-node runs (16 to 44 processes) and multi-node runs (88 and 176 processes), comparing MVAPICH2-X and HPCx; MVAPICH2-X is up to 1.11x better.
Part of the input parameters: MESH IJK=5,5,5, XB=-1.0,0.0,-1.0,0.0,0.0,1.0, MULT_ID='mesh array'
• Integration of SHARP2 and associated Collective Optimizations
• Communication optimizations on upcoming architectures
– Intel Cooper Lake
– AMD Rome
– ARM
• Dynamic and Adaptive Communication Protocols
MVAPICH2 Upcoming Features
Dynamic and Adaptive MPI Point-to-point Communication Protocols
(Figure) Eager threshold for an example communication pattern (process pairs 0-4, 1-5, 2-6, and 3-7 across two nodes) with different designs: Default uses 16 KB for every pair, Manually Tuned uses 128 KB for every pair, and Dynamic + Adaptive selects 32 KB, 64 KB, 128 KB, and 32 KB to match each pair.
H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper
(Figures) Execution time and relative memory consumption of Amber at 128 to 1K processes with the Default threshold, fixed thresholds of 17K, 64K, and 128K, and the Dynamic Threshold design.
Design / Overlap and memory / Performance and productivity:
– Default: poor overlap, low memory requirement; low performance, high productivity
– Manually Tuned: good overlap, high memory requirement; high performance, low productivity
– Dynamic + Adaptive: good overlap, optimal memory requirement; high performance, high productivity
Desired eager threshold per process pair: 0-4: 32 KB, 1-5: 64 KB, 2-6: 128 KB, 3-7: 32 KB
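To make the design space concrete, the following self-contained C sketch (illustrative only; the names are not MVAPICH2 internals) shows the per-connection eager/rendezvous decision that the dynamic + adaptive design tunes at run time; with stock MVAPICH2 the corresponding knob is a single global setting (e.g., the MV2_IBA_EAGER_THRESHOLD environment variable that appears later in this deck).

/* Conceptual sketch: eager vs. rendezvous selection with a per-connection
 * threshold, as chosen by the dynamic + adaptive design. */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    int peer;
    size_t eager_threshold;   /* e.g., 32 KB, 64 KB, or 128 KB per pair */
} connection_t;

static void eager_send(connection_t *c, size_t len)
{
    /* copy into a pre-registered buffer and send immediately */
    printf("peer %d: eager send of %zu bytes\n", c->peer, len);
}

static void rendezvous_send(connection_t *c, size_t len)
{
    /* RTS/CTS handshake followed by zero-copy RDMA of the payload */
    printf("peer %d: rendezvous send of %zu bytes\n", c->peer, len);
}

static void send_message(connection_t *c, size_t len)
{
    if (len <= c->eager_threshold)
        eager_send(c, len);
    else
        rendezvous_send(c, len);
}

int main(void)
{
    connection_t pairs[] = { {4, 32 * 1024}, {5, 64 * 1024},
                             {6, 128 * 1024}, {7, 32 * 1024} };
    for (int i = 0; i < 4; i++)
        send_message(&pairs[i], 96 * 1024);   /* a 96 KB message */
    return 0;
}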
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current model – separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, and CAF
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
(Diagram) High-performance and scalable unified communication runtime: programming models (MPI, PGAS – UPC, OpenSHMEM, CAF, UPC++ – and hybrid MPI + X with OpenMP/Cilk) sit on diverse APIs and mechanisms (optimized point-to-point primitives, blocking and non-blocking collective algorithms, remote memory access, fault tolerance, active messages, scalable job startup, introspection & analysis with OSU INAM), with support for modern multi-/many-core architectures (Intel Xeon, Intel Xeon Phi, OpenPOWER, ARM…), modern networking technologies (InfiniBand, iWARP, RoCE, Omni-Path…), and efficient intra-node communication (POSIX SHMEM, CMA, LiMIC, XPMEM…).
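As a flavor of what the unified runtime enables, here is a minimal hybrid MPI + OpenSHMEM sketch (illustrative, not from the slides); with MVAPICH2-X both models share one runtime, so no extra interoperability glue is needed, though the exact init/finalize ordering should follow the MVAPICH2-X user guide.

#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

static long counter = 0;   /* symmetric variable (global scope) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    shmem_init();                           /* same underlying runtime in MVAPICH2-X */

    int rank, npes = shmem_n_pes();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One-sided OpenSHMEM put into the neighbor's symmetric counter ... */
    long one = 1;
    shmem_long_put(&counter, &one, 1, (rank + 1) % npes);
    shmem_barrier_all();

    /* ... combined with a two-sided MPI collective in the same program */
    long sum = 0;
    MPI_Allreduce(&counter, &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %ld on %d PEs\n", sum, npes);

    shmem_finalize();
    MPI_Finalize();
    return 0;
}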
• Released on 03/01/2019
• Major Features and Enhancements
– MPI Features
– Based on MVAPICH2 2.3.1
• OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces
– MPI (Advanced) Features
• Improved performance of large message communication
• Support for advanced co-operative (COOP) rendezvous protocols in SMP channel
– OFA-IB-CH3 and OFA-IB-RoCE interfaces
• Support for RGET, RPUT, and COOP protocols for CMA and XPMEM
– OFA-IB-CH3 and OFA-IB-RoCE interfaces
• Support for load balanced and dynamic rendezvous protocol selection
– OFA-IB-CH3 and OFA-IB-RoCE interfaces
• Support for XPMEM-based MPI collective operations (Broadcast, Gather, Scatter, Allgather)
– OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces
• Extend support for XPMEM-based MPI collective operations (Reduce and All-Reduce) for the PSM-CH3 and PSM2-CH3 interfaces
MVAPICH2-X 2.3rc2
• Improved connection establishment for DC transport
– OFA-IB-CH3 interface
• Add improved Alltoallv algorithm for small messages
• OFA-IB-CH3, OFA-IB-RoCE, PSM-CH3, and PSM2-CH3 interfaces
– OpenSHMEM Features
• Support for XPMEM-based collective operations (Broadcast, Collect, Reduce_all, Reduce, Scatter, Gather)
– UPC Features
• Support for XPMEM-based collective operations (Broadcast, Collect, Scatter, Gather)
– UPC++ Features
• Support for XPMEM-based collective operations (Broadcast, Collect, Scatter, Gather)
– Unified Runtime Features
• Based on MVAPICH2 2.3.1 (OFA-IB-CH3 interface). All the runtime features enabled by default in OFA-IB-CH3 and OFA-IB-RoCE interface of MVAPICH2 2.3.1 are available in MVAPICH2-X 2.3rc2
MVAPICH2-X Feature Table
• * indicates disabled by default at runtime; use the appropriate environment variable described in the MVAPICH2-X user guide to enable it
• + indicates features only tested with the InfiniBand network
(Table) Features for InfiniBand (OFA-IB-CH3) and RoCE (OFA-RoCE-CH3), grouped into the Basic, Basic-XPMEM, Intermediate, and Advanced editions:
– Architecture-specific point-to-point and collective optimizations for x86, OpenPOWER, and ARM
– Optimized support for PGAS models (UPC, UPC++, OpenSHMEM, CAF) and hybrid MPI+PGAS models
– CMA-aware collectives
– Optimized asynchronous progress*
– InfiniBand hardware multicast-based MPI_Bcast*+
– OSU InfiniBand Network Analysis and Monitoring (INAM)*+
– XPMEM-based point-to-point and collectives
– Direct Connected (DC) transport protocol*+
– User-mode Memory Registration (UMR)*+
– On-Demand Paging (ODP)*+
– Core-Direct-based collective offload*+
– SHARP-based collective offload*+
• Direct Connect (DC) Transport
• Co-operative Rendezvous Protocol
• Advanced All-reduce with SHARP
• CMA-based Collectives
• Asynchronous Progress
• XPMEM-based Reduction Collectives
• XPMEM-based Non-reduction Collectives
• Optimized Collective Communication and Advanced Transport Protocols
• PGAS and Hybrid MPI+PGAS Support
Overview of Some of the MVAPICH2-X Features
Minimizing Memory Footprint by Direct Connect (DC) Transport
(Diagram) Nodes 0-3, each with two processes (P0-P7), connected through an IB network.
• Constant connection cost (One QP for any peer)
• Full Feature Set (RDMA, Atomics etc)
• Separate objects for send (DC Initiator) and receive (DC Target)
– DC Target identified by "DCT Number"
– Messages routed with (DCT Number, LID)
– Requires the same "DC Key" to enable communication
• Available since MVAPICH2-X 2.2a
(Figures) Normalized execution time of NAMD (apoa1, large data set) at 160-620 processes and connection memory footprint for Alltoall (KB, log scale) at 80-640 processes, comparing RC, DC-Pool, UD, and XRC. Connection memory for the connection-oriented transports grows into the thousands of KB at 640 processes (annotated values of 1,022 KB and 4,797 KB), while DC-Pool stays around 10 KB and UD at only 1-2 KB across all scales.
H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand : Early Experiences. IEEE International Supercomputing Conference (ISC ’14)
Impact of DC Transport Protocol on Neuron
• Up to 76% benefits over MVAPICH2 for Neuron using Direct Connected transport protocol at scale
– VERSION 7.6.2 master (f5a1284) 2018-08-15
• Numbers taken on bbpv2.epfl.ch
– Knights Landing nodes with 64 ppn
– ./x86_64/special -mpi -c stop_time=2000 -c is_split=1 parinit.hoc
– Used the "runtime" reported by the execution to measure performance
• Environment variables used
– MV2_USE_DC=1
– MV2_NUM_DC_TGT=64
– MV2_SMALL_MSG_DC_POOL=96
– MV2_LARGE_MSG_DC_POOL=96
– MV2_USE_RDMA_CM=0
(Figure) Execution time of Neuron (YuEtAl2012) at 512, 1,024, 2,048, and 4,096 processes, comparing MVAPICH2 and MVAPICH2-X; annotated improvements of 10%, 39%, and up to 76% at scale reflect the overhead of the RC protocol for connection establishment and communication.
Available from MVAPICH2-X 2.3rc2 onwards
Cooperative Rendezvous Protocols
Platform: 2x14-core Broadwell 2680 (2.4 GHz), Mellanox EDR ConnectX-5 (100 Gbps); baselines: MVAPICH2X-2.3rc1 and Open MPI v3.1.0.
Cooperative Rendezvous Protocols for Improved Performance and Overlap. S. Chakraborty, M. Bayatpour, J. Hashmi, H. Subramoni, and D. K. Panda, SC '18 (Best Student Paper Award Finalist)
Annotated improvements of 19%, 16%, and 10% on the Graph500, CoMD, and MiniGhost charts, respectively.
• Use both sender and receiver CPUs to progress communication concurrently
• Dynamically select rendezvous protocol based on communication primitives and sender/receiver availability (load balancing)
• Up to 2x improvement in large message latency and bandwidth
• Up to 19% improvement for Graph500 at 1536 processes
(Figures) Execution time of Graph500, CoMD, and MiniGhost from 28 to 1,536 processes, comparing MVAPICH2, Open MPI, and the proposed cooperative rendezvous design.
Available in MVAPICH2-X 2.3rc2
Advanced Allreduce Collective Designs Using SHArP and Multi-Leaders
• Socket-based design can reduce the communication latency by 23% and 40% on Broadwell + IB-EDR nodes
• Support is available since MVAPICH2-X 2.3b
(Figures) OSU micro-benchmark Allreduce latency (16 nodes, 28 ppn, 4 bytes to 4 KB) and HPCG communication latency (28 ppn, 56-448 processes), comparing MVAPICH2, the proposed socket-based design, and MVAPICH2+SHArP; the socket-based design reduces latency by 23% and 40% (lower is better).
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
Performance of MPI_Allreduce On Stampede2 (10,240 Processes)
(Figures) OSU micro-benchmark MPI_Allreduce latency at 10,240 processes (64 ppn) for message sizes from 4 bytes to 256 KB, comparing MVAPICH2, MVAPICH2-OPT, and Intel MPI (IMPI); MVAPICH2-OPT is up to 2.4X faster.
• MPI_Allreduce latency with 32K bytes reduced by 2.4X
Optimized CMA-based Collectives for Large Messages
(Figures) MPI_Gather latency on KNL nodes (64 ppn) at 128, 256, and 512 processes (2, 4, and 8 nodes) for 1 KB to 4 MB messages, comparing MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and MVAPICH2-X; MVAPICH2-X is roughly 2.5x to 17x better.
• Significant improvement over the existing implementation for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER)
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)
S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster ’17, BEST Paper Finalist
Performance of MPI_Gather on KNL nodes (64PPN)
Available since MVAPICH2-X 2.3b
(Figures) High Performance Linpack (HPL) performance (GFLOPS) at 224-896 processes and P3DFFT time per loop at 112-448 processes (28 ppn, Broadwell + InfiniBand), comparing MVAPICH2 Async, MVAPICH2 Default, IMPI 2019 Default, and IMPI 2019 Async.
Benefits of the New Asynchronous Progress Design: Broadwell + InfiniBand
P3DFFT (lower is better) and High Performance Linpack (HPL, higher is better), PPN = 28
• Up to 33% performance improvement for the P3DFFT application at 448 processes
• Up to 29% performance improvement for the HPL application at 896 processes
• Memory consumption = 69%
A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D. K. Panda, "Efficient Design for MPI Asynchronous Progress without Dedicated Resources", Parallel Computing, 2019
Available since MVAPICH2-X 2.3rc1
Shared Address Space (XPMEM)-based Collectives Design
(Figure) OSU_Allreduce latency on Broadwell, 256 processes, 16 KB to 4 MB messages, comparing MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X-2.3rc1; up to 1.8X improvement at 4 MB.
• “Shared Address Space”-based true zero-copy Reduction collective designs in MVAPICH2
• Offloaded computation/communication to peers ranks in reduction collective operation
• Up to 4X improvement for 4MB Reduce and up to 1.8X improvement for 4M AllReduce
(Figure) OSU_Reduce latency on Broadwell, 256 processes, 16 KB to 4 MB messages, comparing the same libraries; up to 4X improvement at 4 MB.
J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.
Available since MVAPICH2-X 2.3rc1
Reduction Collectives on IBM OpenPOWER
• Two POWER8 dual-socket nodes, each with 20 ppn
• Up to 2X improvement for Allreduce and 3X improvement for Reduce at 4 MB messages
• Used osu_reduce and osu_allreduce from OSU Microbenchmarks v5.5
(Figures) MPI_Allreduce and MPI_Reduce latency across small (4 KB-256 KB) and large (128 KB-16 MB) message ranges, comparing MVAPICH2-2.3rc1, SpectrumMPI-10.1.0, OpenMPI-3.0.0, and MVAPICH2-XPMEM.
Application Level Benefits of XPMEM-based Designs
MiniAMR (dual-socket, ppn=16)
• Intel Xeon CPU E5-2687W v3 @ 3.10 GHz (10-core, 2-socket)
• Up to 20% benefit over IMPI for CNTK DNN training using AllReduce
• Up to 27% benefit over IMPI and up to 15% improvement over MVAPICH2 for the MiniAMR application kernel
(Figures) Execution time of MiniAMR (dual-socket, ppn=16; 16-256 processes) and CNTK AlexNet training (batch size = default, 50 iterations, ppn=28; 28-224 processes), comparing Intel MPI, MVAPICH2, and MVAPICH2-XPMEM; annotated gains of 20%/9% for CNTK and 27%/15% for MiniAMR.
(Figure) MiniAMR execution time at 10-60 processes, comparing MVAPICH2-2.3rc1 and MVAPICH2-XPMEM.
Impact of XPMEM-based Designs on MiniAMR
• MiniAMR with a weak-scaling workload on up to three POWER8 dual-socket nodes, 20 ppn
• MiniAMR application execution time comparing MVAPICH2-2.3rc1 and the optimized All-Reduce design
• Improvements of 36-45% over MVAPICH2-2.3rc1 in mesh-refinement time across scales (up to 45%)
OpenPOWER (weak scaling, 3 nodes, ppn=20)
Performance of Non-Reduction Collectives with XPMEM
• 28 MPI Processes on single dual-socket Broadwell E5-2680v4, 2x14 core processor
• Used osu_bcast from OSU Microbenchmarks v5.5
(Figures) Broadcast and Gather latency for 4 KB to 4 MB messages, comparing Intel MPI 2018, OpenMPI 3.0.1, MV2X-2.3rc1 (CMA collectives), and MV2X-2.3rc2 (XPMEM collectives); up to 5X improvement over OpenMPI for Broadcast and 3X for Gather.
Impact of Optimized Small Message MPI_Alltoallv Algorithm
• Optimized designs in MVAPICH2-X offer significantly improved performance for small message MPI_Alltoallv
(Figure) MPI_Alltoallv latency for 1-256 byte messages, comparing MVAPICH2-X and HPE-MPI; roughly 5X better.
• Up to 5X benefit over HPE-MPI using the optimized Alltoallv algorithm and the Direct Connected transport protocol
• Numbers taken on bbpv2.epfl.ch
– 96 KNL nodes with 64 ppn (6,144 processes)
– osu_alltoallv from the OSU Micro-Benchmarks
• Environment variables used
– MV2_USE_DC=1
– MV2_NUM_DC_TGT=64
– MV2_SMALL_MSG_DC_POOL=96
– MV2_LARGE_MSG_DC_POOL=96
– MV2_USE_RDMA_CM=0
Courtesy: Pramod Shivaji Kumbhar @EPFL. Available from MVAPICH2-X 2.3rc2 onwards.
Application Level Performance with Graph500 and Sort
Graph500 Execution Time
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
(Figure) Graph500 execution time at 4K-16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and the Hybrid (MPI+OpenSHMEM) design; the hybrid design is up to 7.6X and 13X faster than MPI-Simple.
• Performance of the hybrid (MPI+OpenSHMEM) Graph500 design
– 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X over MPI-Simple
– 16,384 processes: 1.5X improvement over MPI-CSR, 13X over MPI-Simple
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.
Sort Execution Time
(Figure) Sort execution time for input sizes 500 GB to 4 TB at 512 to 4K processes, comparing MPI and Hybrid; the hybrid design is 51% faster at 4K processes.
• Performance of the hybrid (MPI+OpenSHMEM) Sort application
– 4,096 processes, 4 TB input: MPI – 2,408 s (0.16 TB/min); Hybrid – 1,172 s (0.36 TB/min); 51% improvement over the MPI design
• Released on 08/12/2019
• Major Features and Enhancements
– Based on MVAPICH2-X 2.3
– New design based on Amazon EFA adapter's Scalable Reliable Datagram (SRD) transport protocol
– Support for XPMEM based intra-node communication for point-to-point and collectives
– Enhanced tuning for point-to-point and collective operations
– Targeted for AWS instances with Amazon Linux 2 AMI and EFA support
– Tested with c5n.18xlarge instance
MVAPICH2-X-AWS 2.3
Point-to-Point Performance
• Both UD and SRD show similar latency for small messages
• SRD shows higher message rate due to lack of software reliability overhead
• SRD is faster for large messages due to larger MTU size
Collective Performance: MPI Gatherv
• Up to 33% improvement with SRD compared to UD
• Root does not need to send explicit acks to non-root processes
• Non-roots can exit as soon as the message is sent (no need to wait for acks)
Collective Performance: MPI Allreduce
• Up to 18% improvement with SRD compared to UD
• Bidirectional communication pattern allows piggybacking of acks
• Modest improvement compared to asymmetric communication patterns
Application Performance
(Figures) Execution time of miniGhost (MV2X vs. OpenMPI; up to 10% better) and CloverLeaf (MV2X-UD, MV2X-SRD, and OpenMPI; up to 27.5% better) at 72, 144, and 288 processes (2x36, 4x36, and 8x36 nodes x ppn).
• Up to 10% performance improvement for MiniGhost on 8 nodes
• Up to 27% better performance with CloverLeaf on 8 nodes
S. Chakraborty, S. Xu, H. Subramoni and D. K. Panda, Designing Scalable and High-Performance MPI Libraries on Amazon Elastic Adapter, Hot Interconnect, 2019
• XPMEM-based MPI Derived Datatype Designs
• Exploiting Hardware Tag Matching
MVAPICH2-X Upcoming Features
Efficient Zero-copy MPI Datatypes for Emerging Architectures
• New designs for efficient zero-copy-based MPI derived datatype processing
• Efficient schemes mitigate datatype translation, packing, and exchange overheads
• Demonstrated benefits over prevalent MPI libraries for various application kernels
• To be available in the upcoming MVAPICH2-X release
(Figures) Datatype kernel latency (log scale) comparing MVAPICH2X-2.3, IMPI 2018/2019, and MVAPICH2X-Opt: 3D-stencil datatype kernel on Broadwell (2x14 cores, up to 5X better), MILC datatype kernel on KNL 7250 in flat-quadrant mode (64 cores, up to 19X better), and NAS-MG datatype kernel on OpenPOWER (20 cores, up to 3X better).
• Offloads the processing of point-to-point MPI messages from the host processor to HCA
• Enables zero copy of MPI message transfers– Messages are written directly to the user's buffer without extra buffering and copies
• Provides rendezvous progress offload to HCA– Increases the overlap of communication and computation
Hardware Tag Matching Support
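The benefit shows up in the common overlap pattern below (a generic MPI sketch, not MVAPICH2-specific code): with hardware tag matching, the HCA can match the incoming message and place it directly into the user buffer while the host is busy in compute(), rather than deferring that work until the host reaches MPI_Wait.

/* Generic overlap pattern that hardware tag matching accelerates. */
#include <mpi.h>
#include <stdlib.h>

static void compute(double *work, int n)
{
    for (int i = 0; i < n; i++)          /* application work overlapping the transfer */
        work[i] = work[i] * 0.5 + 1.0;
}

int main(int argc, char **argv)
{
    int rank;
    const int count = 1 << 20;           /* 1M doubles: rendezvous message range */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf  = malloc(count * sizeof(double));
    double *work = calloc(count, sizeof(double));
    MPI_Request req;

    if (rank == 0) {
        for (int i = 0; i < count; i++) buf[i] = i;
        MPI_Isend(buf, count, MPI_DOUBLE, 1, 7, MPI_COMM_WORLD, &req);
    } else if (rank == 1) {
        MPI_Irecv(buf, count, MPI_DOUBLE, 0, 7, MPI_COMM_WORLD, &req);
    }

    compute(work, count);                /* overlapped with the message transfer */

    if (rank <= 1) MPI_Wait(&req, MPI_STATUS_IGNORE);
    free(buf); free(work);
    MPI_Finalize();
    return 0;
}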
Impact of Zero Copy MPI Message Passing using HW Tag Matching
(Figures) osu_latency in the rendezvous message range (32 KB-4 MB) and the eager message range (0-16 KB), comparing MVAPICH2 and MVAPICH2+HW-TM.
Removing intermediate buffering/copies improves the latency of medium messages by up to 35%.
Impact of Rendezvous Offload using HW Tag Matching
(Figures) osu_iscatterv total latency for 16 KB-512 KB messages at 640 and 1,280 processes, comparing MVAPICH2 and MVAPICH2+HW-TM.
The increased overlap leads to a 1.7X-1.8X improvement in the total latency of osu_iscatterv.
GPU-Aware (CUDA-Aware) MPI Library: MVAPICH2-GPU
At Sender: MPI_Send(s_devbuf, size, …);
At Receiver: MPI_Recv(r_devbuf, size, …);
(The GPU data movement is handled inside MVAPICH2.)
• Standard MPI interfaces used for unified data movement
• Takes advantage of Unified Virtual Addressing (>= CUDA 4.0)
• Overlaps data movement from GPU with RDMA transfers
High Performance and High Productivity
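A minimal sketch of the usage model (illustrative; the buffer size and send/receive pattern are not from the slides): device pointers are passed straight to MPI_Send/MPI_Recv, and a CUDA-aware build of MVAPICH2 (run with MV2_USE_CUDA=1) moves the data.

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int rank;
    const int size = 4 * 1024 * 1024;    /* 4 MB payload (illustrative) */
    void *devbuf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc(&devbuf, size);           /* GPU device memory */

    if (rank == 0) {
        cudaMemset(devbuf, 1, size);
        MPI_Send(devbuf, size, MPI_BYTE, 1, 0, MPI_COMM_WORLD);      /* s_devbuf */
    } else if (rank == 1) {
        MPI_Recv(devbuf, size, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                                  /* r_devbuf */
    }

    cudaFree(devbuf);
    MPI_Finalize();
    return 0;
}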
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3.2 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host, and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host, and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory
• Released on 08/08/2019
• Major Features and Enhancements
– Based on MVAPICH2 2.3.1
– Support for CUDA 10.1
– Support for PGI 19.x
– Enhanced intra-node and inter-node point-to-point performance
– Enhanced MPI_Allreduce performance for DGX-2 system
– Enhanced GPU communication support in MPI_THREAD_MULTIPLE mode
– Enhanced performance of datatype support for GPU-resident data
• Zero-copy transfer when P2P access is available between GPUs through NVLink/PCIe
– Enhanced GPU-based point-to-point and collective tuning
• OpenPOWER systems such as ORNL Summit and LLNL Sierra; the ABCI system @AIST; the Owens and Pitzer systems @Ohio Supercomputer Center
– Scaled Allreduce to 24,576 Volta GPUs on Summit
– Enhanced intra-node and inter-node point-to-point performance for DGX-2 and IBM POWER8 and IBM POWER9 systems
– Enhanced Allreduce performance for DGX-2 and IBM POWER8/POWER9 systems
– Enhanced small message performance for CUDA-Aware MPI_Put and MPI_Get
– Flexible support for running TensorFlow (Horovod) jobs
MVAPICH2-GDR 2.3.2
Optimized MVAPICH2-GDR Design
(Figures) GPU-GPU inter-node latency, bandwidth, and bi-directional bandwidth for small messages, comparing MV2 (no GDR) and MV2-GDR 2.3: latency as low as 1.85 us (11X better), with 9x-10x higher bandwidth and bi-directional bandwidth.
Platform: MVAPICH2-GDR-2.3; Intel Haswell (E5-2687W @ 3.10 GHz) node with 20 cores, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct-RDMA.
Device-to-Device Performance on OpenPOWER (NVLink2 + Volta)
(Figures) Intra-node and inter-node latency (small and large messages) and bandwidth, intra-socket vs. inter-socket.
• Intra-node latency: 5.36 us (without GDRCopy); intra-node bandwidth: 70.4 GB/sec for 128 MB (via NVLINK2)
• Inter-node latency: 5.66 us (without GDRCopy); inter-node bandwidth: 23.7 GB/sec (2-port EDR)
Platform: OpenPOWER (POWER9-ppc64le) nodes equipped with a dual-socket CPU, 4 Volta V100 GPUs, and a 2-port EDR InfiniBand interconnect. Available since MVAPICH2-GDR 2.3a.
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
Application-Level Evaluation (HOOMD-blue)
(Figures) Average time steps per second (TPS) for 64K-particle and 256K-particle HOOMD-blue runs at 4-32 processes, comparing MV2 and MV2+GDR; about 2X improvement.
Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland
(Figures) Normalized execution time on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs), comparing the Default, Callback-based, and Event-based designs.
• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16
On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application
Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers
• Existing state-of-the-art
– cuDNN, cuBLAS, NCCL --> scale-up performance
– NCCL2, CUDA-Aware MPI --> scale-out performance
• For small and medium message sizes only!
• Proposed: Can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
– Efficient Overlap of Computation and Communication
– Efficient Large-Message Communication (Reductions)
– What application co-designs are needed to exploit communication-runtime co-designs?
Deep Learning: New Challenges for MPI Runtimes
(Diagram) Scale-up vs. scale-out performance of existing communication backends (cuDNN, cuBLAS, NCCL, NCCL2, gRPC, Hadoop, MPI), with the proposed co-designs targeting both axes.
A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. In Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
• Efficient Allreduce is crucial for Horovod's overall training performance
– Both MPI and NCCL designs are available
• We have evaluated Horovod extensively and compared across a wide range of designs using gRPC and gRPC extensions
• MVAPICH2-GDR achieved up to 90% scaling efficiency for ResNet-50 training on 64 Pascal GPUs
Scalable TensorFlow using Horovod, MPI, and NCCL
Awan et al., “Scalable Distributed DNN Training using TensorFlow and CUDA-Aware MPI: Characterization, Designs, and Performance Evaluation”, CCGrid ‘19. https://arxiv.org/abs/1810.11112
MVAPICH2-GDR vs. NCCL2 – Allreduce Operation
• Optimized designs in MVAPICH2-GDR 2.3 offer better/comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs
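For reference, the measured operation is simply an MPI_Allreduce on GPU-resident buffers, as in this minimal sketch (illustrative sizes; assumes a CUDA-aware MVAPICH2-GDR build run with MV2_USE_CUDA=1):

#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    const int count = 1 << 20;            /* 1M floats = 4 MB (illustrative) */
    float *d_in, *d_out;

    MPI_Init(&argc, &argv);
    cudaMalloc((void **)&d_in,  count * sizeof(float));
    cudaMalloc((void **)&d_out, count * sizeof(float));
    cudaMemset(d_in, 0, count * sizeof(float));   /* gradients would go here */

    /* Allreduce directly on device buffers; the library stages the data or
     * uses GPUDirect RDMA internally, so the application code is unchanged. */
    MPI_Allreduce(d_in, d_out, count, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    cudaFree(d_in);
    cudaFree(d_out);
    MPI_Finalize();
    return 0;
}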
(Figures) MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) latency on 16 GPUs: about 3X better for small messages (4 bytes-64 KB) and about 1.2X better for large messages (128 KB-256 MB).
Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, one K-80 GPU per node, and an EDR InfiniBand interconnect.
MVAPICH2-GDR vs. NCCL2 – Allreduce Operation (DGX-2)
• Optimized designs in upcoming MVAPICH2-GDR offer better/comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 1 DGX-2 node (16 Volta GPUs)
(Figures) MPI_Allreduce (MVAPICH2-GDR-2.3.2) vs. ncclAllreduce (NCCL-2.4) latency on one DGX-2 node (16 Volta GPUs): about 5.8X better for small messages (8 bytes-128 KB) and about 2.5X better for large messages (256 KB-256 MB).
Platform: NVIDIA DGX-2 system (16 Volta GPUs connected with NVSwitch), CUDA 9.2.
MVAPICH2-GDR: Enhanced MPI_Allreduce at Scale
• Optimized designs in upcoming MVAPICH2-GDR offer better performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) up to 1,536 GPUs
(Figures) MPI_Allreduce (MVAPICH2-GDR-2.3.2) vs. ncclAllreduce (NCCL 2.4) on up to 1,536 GPUs: latency up to 1.6X better (4 bytes-16 KB) and bandwidth up to 1.7X better (32-256 MB) on 1,536 GPUs; a 128 MB-message bandwidth comparison from 24 to 1,536 GPUs against SpectrumMPI 10.2.0.11, OpenMPI 4.0.1, and NCCL 2.4 shows a similar 1.7X advantage.
Platform: dual-socket IBM POWER9 CPU, 6 NVIDIA Volta V100 GPUs per node, and a 2-port EDR InfiniBand interconnect.
Distributed Training with TensorFlow and MVAPICH2-GDR
• ResNet-50 Training using TensorFlow benchmark on 1 DGX-2 node (16 Volta GPUs)
(Figures) ResNet-50 training throughput (images per second) and scaling efficiency on 1-16 GPUs, comparing NCCL-2.4 and MVAPICH2-GDR-2.3.2; MVAPICH2-GDR is up to 9% higher.
Platform: NVIDIA DGX-2 system (16 Volta GPUs connected with NVSwitch), CUDA 9.2.
Scaling efficiency = (actual throughput / ideal throughput at scale) x 100%.
Distributed Training with TensorFlow and MVAPICH2-GDR
• ResNet-50 training using the TensorFlow benchmark on Summit -- 1,536 Volta GPUs!
• 1,281,167 (1.2 million) images per epoch
• Time per epoch ≈ 3.6 seconds
• Total time (90 epochs) ≈ 330 seconds, i.e., about 5.5 minutes!
(Figure) ResNet-50 training throughput (thousands of images per second) from 1 to 1,536 GPUs, comparing NCCL-2.4 and MVAPICH2-GDR-2.3.2.
Platform: the Summit supercomputer (#1 on Top500.org) -- 6 NVIDIA Volta GPUs per node connected with NVLink, CUDA 9.2.
*We observed errors for NCCL2 beyond 96 GPUs.
MVAPICH2-GDR reaches ~0.35 million images per second for ImageNet-1k (1.2 million images)!
• Scalable Host-based Collectives
• Enhanced Derived Datatype
• Integrated Collective Support with SHArP from GPU Buffers
• Optimization for PyTorch and MXNET
MVAPICH2-GDR Upcoming Features for HPC and DL
Scalable Host-based Collectives on OpenPOWER (Intra-node Reduce & Alltoall)
(Figures) Intra-node Reduce and Alltoall latency (nodes=1, ppn=20) across small (4 bytes-4 KB) and large (8 KB-1 MB) message ranges, comparing MVAPICH2-GDR, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0.
Up to 5X and 3X performance improvement by MVAPICH2 for small and large messages, respectively.
MVAPICH2-GDR: Enhanced Derived Datatype
• Kernel-based and GDRCOPY-based one-shot packing for inter-socket and inter-node communication
• Zero-copy (packing-free) for GPUs with peer-to-peer direct access over PCIe/NVLink
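As a reminder of the kind of transfer these designs accelerate, a non-contiguous section of a grid is typically described once with a derived datatype and then sent directly from the (possibly GPU-resident) buffer; the sketch below is illustrative, with made-up dimensions.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    const int nx = 128, ny = 128;        /* illustrative 2D slice of a grid */
    MPI_Datatype column;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *grid = malloc((size_t)nx * ny * sizeof(double));
    for (int i = 0; i < nx * ny; i++) grid[i] = i;

    /* One column of the slice: ny blocks of 1 double, stride nx doubles.
     * With a CUDA-aware build, 'grid' could be a cudaMalloc'ed device pointer. */
    MPI_Type_vector(ny, 1, nx, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0)
        MPI_Send(grid, 1, column, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(grid, 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&column);
    free(grid);
    MPI_Finalize();
    return 0;
}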
Chart: GPU-based DDTBench mimicking the MILC communication kernel – MILC speedup for problem sizes [6, 8,8,8,8] through [6, 16,16,16,16], comparing OpenMPI 4.0.0, MVAPICH2-GDR 2.3.1, and MVAPICH2-GDR-Next (improved 15X). Platform: NVIDIA DGX-2 system (NVIDIA Volta GPUs connected with NVSwitch), CUDA 9.2
Chart: Communication kernel of the COSMO model (https://github.com/cosunae/HaloExchangeBenchmarks) – execution time (s) on 16, 32, and 64 GPUs for MVAPICH2-GDR 2.3.1 vs. MVAPICH2-GDR-Next (improved 3.4X). Platform: Cray CS-Storm with 16 NVIDIA Tesla K80 GPUs per node, CUDA 8.0
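For context, the non-contiguous communication these optimizations target is expressed by the application with standard MPI derived datatypes; the hedged mpi4py sketch below sends one column of a 2-D array via a vector datatype (the array shape is illustrative, and swapping numpy for CuPy would exercise the GPU path under a CUDA-aware build such as MVAPICH2-GDR, where the packing or zero-copy happens inside the library):

# Illustrative derived-datatype transfer; run with at least 2 ranks.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

nx, ny = 1024, 1024
field = np.arange(nx * ny, dtype=np.float64).reshape(nx, ny)

# One column of a row-major 2-D array: nx blocks of 1 element with stride ny.
column_t = MPI.DOUBLE.Create_vector(nx, 1, ny).Commit()

if rank == 0:
    comm.Send([field, 1, column_t], dest=1, tag=0)   # send column 0, no manual packing
elif rank == 1:
    halo = np.empty(nx, dtype=np.float64)            # received as a contiguous buffer
    comm.Recv(halo, source=0, tag=0)

column_t.Free()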
MVAPICH User Group Meeting (MUG) 2019 77Network Based Computing Laboratory
MVAPICH2 Software Family
Requirements → Library
– MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2) → MVAPICH2
– Optimized support for the Microsoft Azure platform with InfiniBand → MVAPICH2-Azure
– Advanced MPI features/support (UMR, ODP, DC, Core-Direct, SHArP, XPMEM) and OSU INAM (InfiniBand Network Monitoring and Analysis) → MVAPICH2-X
– Advanced MPI features (SRD and XPMEM) with support for the Amazon Elastic Fabric Adapter (EFA) → MVAPICH2-X-AWS
– Optimized MPI for clusters with NVIDIA GPUs and for GPU-enabled Deep Learning applications → MVAPICH2-GDR
– Energy-aware MPI with support for InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE (v1/v2) → MVAPICH2-EA
– MPI energy monitoring tool → OEMT
– InfiniBand network analysis and monitoring → OSU INAM
– Microbenchmarks for measuring MPI and PGAS performance → OMB
MVAPICH User Group Meeting (MUG) 2019 78Network Based Computing Laboratory
Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
– http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
– Point-to-Point, Collectives and RMA
• Ability to filter data based on type of counters using “drop down” list
• Remotely monitor various metrics of MPI processes at user specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
• OSU INAM 0.9.4 released on 11/10/2018
– Enhanced performance for fabric discovery using optimized OpenMP-based multi-threaded designs
– Ability to gather InfiniBand performance counters at sub-second granularity for very large (>2000 nodes) clusters
– Redesign database layout to reduce database size
– Enhanced fault tolerance for database operations
• Thanks to Trey Dockendorf @ OSC for the feedback
– OpenMP-based multi-threaded designs to handle database purge, read, and insert operations simultaneously
– Improved database purging time by using bulk deletes
– Tune database timeouts to handle very long database operations
– Improved debugging support by introducing several debugging levels
MVAPICH User Group Meeting (MUG) 2019 79Network Based Computing Laboratory
OSU INAM Features
• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links/links in error state
• See the history unfold – play back historical state of the network
Screenshots: Comet@SDSC clustered view (1,879 nodes, 212 switches, 4,377 network links); finding routes between nodes
MVAPICH User Group Meeting (MUG) 2019 80Network Based Computing Laboratory
OSU INAM Features (Cont.)
Screenshot: Visualizing a Job (5 Nodes)
• Job level view
• Show different network metrics (load, error, etc.) for any live job
• Play back historical data for completed jobs to identify bottlenecks
• Node level view – details per process or per node
• CPU utilization for each rank/node
• Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
• Network metrics (e.g. XmitDiscard, RcvError) per rank/node
Screenshot: Estimated Process Level Link Utilization
• Estimated Link Utilization view
• Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1
– Job level and
– Process level
More Details in Tutorial/Demo Session Tomorrow
MVAPICH User Group Meeting (MUG) 2019 81Network Based Computing Laboratory
OSU Microbenchmarks
• Available since 2004
• Suite of microbenchmarks to study communication performance of various programming models
• Benchmarks available for the following programming models
– Message Passing Interface (MPI)
– Partitioned Global Address Space (PGAS)
• Unified Parallel C (UPC)
• Unified Parallel C++ (UPC++)
• OpenSHMEM
• Benchmarks available for multiple accelerator based architectures
– Compute Unified Device Architecture (CUDA)
– OpenACC Application Program Interface
• Part of various national resource procurement suites like NERSC-8 / Trinity Benchmarks
• Continuing to add support for newer primitives and features
• Please visit the following link for more information
– http://mvapich.cse.ohio-state.edu/benchmarks/
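To give a flavor of what these microbenchmarks measure, here is a toy ping-pong latency loop written in mpi4py in the spirit of osu_latency; it is not part of OMB, and the message size and iteration counts are arbitrary:

# Toy two-rank ping-pong latency measurement (illustrative only).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg = np.zeros(8, dtype=np.uint8)   # 8-byte message; OMB sweeps many sizes
skip, iters = 100, 10000            # warm-up iterations are excluded from timing

for i in range(skip + iters):
    if i == skip:
        comm.Barrier()
        t0 = MPI.Wtime()
    if rank == 0:
        comm.Send(msg, dest=1, tag=1)
        comm.Recv(msg, source=1, tag=1)
    elif rank == 1:
        comm.Recv(msg, source=0, tag=1)
        comm.Send(msg, dest=0, tag=1)

if rank == 0:
    total = MPI.Wtime() - t0
    print(f"avg one-way latency: {total / (2 * iters) * 1e6:.2f} us")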
MVAPICH User Group Meeting (MUG) 2019 82Network Based Computing Laboratory
Applications-Level Tuning: Compilation of Best Practices
• MPI runtime has many parameters
• Tuning a set of parameters can help you to extract higher performance
• Compiled a list of such contributions through the MVAPICH website
– http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
– Amber
– HOOMD-blue
– HPCG
– LULESH
– MILC
– Neuron
– SMG2000
– CloverLeaf
– SPEC (LAMMPS, POP2, TERA_TF, WRF2)
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu
• We will link these results with credits to you
MVAPICH User Group Meeting (MUG) 2019 83Network Based Computing Laboratory
MVAPICH2 – Plans for Exascale
• Performance and Memory scalability toward 1-10M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF …)
– MPI + Task*
• Enhanced Optimization for GPU Support and Accelerators
• Taking advantage of advanced features of Mellanox InfiniBand
– Tag Matching*
– Adapter Memory*
• Enhanced communication schemes for upcoming architectures
– Intel Optane*
– BlueField*
– CAPI*
• Extended topology-aware collectives
• Extended Energy-aware designs and Virtualization Support
• Extended Support for MPI Tools Interface (as in MPI 3.0)
• Extended FT support
• Support for * features will be available in future MVAPICH2 releases
MVAPICH User Group Meeting (MUG) 2019 84Network Based Computing Laboratory
Commercial Support for MVAPICH2, HiBD, and HiDL Libraries
• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
– Help and guidance with installation of the library
– Platform-specific optimizations and tuning
– Timely support for operational issues encountered with the library
– Web portal interface to submit issues and tracking their progress
– Advanced debugging techniques
– Application-specific optimizations and tuning
– Obtaining guidelines on best practices
– Periodic information on major fixes and updates
– Information on major releases
– Help with upgrading to the latest release
– Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) for the last two years
MVAPICH User Group Meeting (MUG) 2019 85Network Based Computing Laboratory
Silver ISV Member for the OpenPOWER Consortium + Products
• Has joined the OpenPOWER Consortium as a silver ISV member
• Provides flexibility:
– To have MVAPICH2, HiDL and HiBD libraries getting integrated into the OpenPOWER software stack
– A part of the OpenPOWER ecosystem
– Can participate with different vendors for bidding, installation and deployment process
• Introduced two new integrated products with support for OpenPOWER systems (Presented yesterday at the OpenPOWER North America Summit)
– X-ScaleHPC
– X-ScaleAI
– Send an e-mail to [email protected] for free trial!!
MVAPICH User Group Meeting (MUG) 2019 86Network Based Computing Laboratory
Funding Acknowledgments
Funding Support by
Equipment Support by
MVAPICH User Group Meeting (MUG) 2019 87Network Based Computing Laboratory
Personnel Acknowledgments
Current Students (Graduate)
– A. Awan (Ph.D.)
– M. Bayatpour (Ph.D.)
– C.-H. Chu (Ph.D.)
– J. Hashmi (Ph.D.)
– A. Jain (Ph.D.)
– K. S. Kandadi (M.S.)
– K. S. Khorassani (Ph.D.)
– P. Kousha (Ph.D.)
– A. Quentin (Ph.D.)
– Kamal Raj (M.S.)
– B. Ramesh (M. S.)
– S. Xu (M.S.)
– Q. Zhou (Ph.D.)
Current Students (Undergraduate)
– V. Gangal (B.S.)
– N. Sarkauskas (B.S.)
Current Research Scientist
– H. Subramoni
Current Research Specialist
– J. Smith
Current Post-doc
– M. S. Ghazimeersaeed
– K. Manian
– A. Ruhela
Past Students
– A. Augustine (M.S.)
– P. Balaji (Ph.D.)
– S. Bhagvat (M.S.)
– A. Bhat (M.S.)
– R. Biswas (M.S.)
– D. Buntinas (Ph.D.)
– L. Chai (Ph.D.)
– B. Chandrasekharan (M.S.)
– S. Chakraborthy (Ph.D.)
– N. Dandapanthula (M.S.)
– V. Dhanraj (M.S.)
– T. Gangadharappa (M.S.)
– K. Gopalakrishnan (M.S.)
– W. Huang (Ph.D.)
– W. Jiang (M.S.)
– J. Jose (Ph.D.)
– K. Kandalla (Ph.D.)
– S. Kini (M.S.)
– M. Koop (Ph.D.)
– S. Krishnamoorthy (M.S.)
– K. Kulkarni (M.S.)
– R. Kumar (M.S.)
– P. Lai (M.S.)
– M. Li (Ph.D.)
– J. Liu (Ph.D.)
– M. Luo (Ph.D.)
– A. Mamidala (Ph.D.)
– G. Marsh (M.S.)
– V. Meshram (M.S.)
– A. Moody (M.S.)
– S. Naravula (Ph.D.)
– R. Noronha (Ph.D.)
– X. Ouyang (Ph.D.)
– S. Pai (M.S.)
– S. Potluri (Ph.D.)
– R. Rajachandrasekar (Ph.D.)
– G. Santhanaraman (Ph.D.)
– D. Shankar (Ph.D.)
– A. Singh (Ph.D.)
– J. Sridhar (M.S.)
– H. Subramoni (Ph.D.)
– S. Sur (Ph.D.)
– K. Vaidyanathan (Ph.D.)
– A. Vishnu (Ph.D.)
– J. Wu (Ph.D.)
– W. Yu (Ph.D.)
– J. Zhang (Ph.D.)
Past Research Scientist
– K. Hamidouche
– X. Lu
– S. Sur
Past Research Specialist
– M. Arnold
Past Post-Docs
– D. Banerjee
– X. Besseron
– H.-W. Jin
– J. Lin
– M. Luo
– E. Mancini
– S. Marcarelli
– J. Vienne
– H. Wang
Past Programmers
– D. Bureddy
– J. Perkins
MVAPICH User Group Meeting (MUG) 2019 88Network Based Computing Laboratory
Thank You!
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
The MVAPICH2 Project
http://mvapich.cse.ohio-state.edu/