
www.cineca.it

Coulomb’11 Bologna November 4-5, 2011

CINECA, a European HPC infrastructure:

State of the art and its evolution

Giovanni Erbacci,

Supercomputing, Applications and Innovation Department, CINECA, Italy

[email protected]

www.cineca.it 2

CINECA

CINECA is a non-profit consortium made up of 50 Italian universities*, the National Institute of Oceanography and Experimental Geophysics (OGS), the CNR (National Research Council), and the Ministry of Education, University and Research (MIUR).

CINECA is the largest Italian computing centre and one of the most important worldwide.

The HPC department:

- manages the HPC infrastructure,

- provides support and HPC resources to Italian and European researchers,

- promotes technology transfer initiatives for industry.

www.cineca.it 3

The Story

1969: CDC 6600, 1st system for scientific computing
1975: CDC 7600, 1st supercomputer
1985: Cray X-MP / 4 8, 1st vector supercomputer
1989: Cray Y-MP / 4 64
1993: Cray C-90 / 2 128
1994: Cray T3D 64, 1st parallel supercomputer
1995: Cray T3D 128
1998: Cray T3E 256, 1st MPP supercomputer
2002: IBM SP4 512, 1 Teraflops
2005: IBM SP5 512
2006: IBM BCX, 10 Teraflops
2009: IBM SP6, 100 Teraflops
2012: IBM BG/Q, > 1 Petaflops

www.cineca.it 4

CINECA and Top 500 www.top500.org

[Chart: CINECA systems on the Top500 list over time, with performance rising from the GFlops range toward more than 2 PFlops.]

www.cineca.it 5

HPC Infrastructure for Scientific computing

Logical Name       SP6 (Sep 2009)             BGP (Jan 2010)           PLX (2011)
Model              IBM p575                   IBM BG/P                 IBM iDataPlex
Architecture       SMP                        MPP                      Linux Cluster
Processor          IBM Power6, 4.7 GHz        IBM PowerPC, 0.85 GHz    Intel Westmere 6c, 2.4 GHz
# of cores         5376                       4096                     3288 + 548 Nvidia Fermi M2070 GPGPUs
# of nodes         168                        32                       274
# of racks         12                         1                        10
Total RAM          20 TByte                   2 TByte                  ~13 TByte
Interconnect       Qlogic Infiniband DDR 4x   IBM 3D Torus             Qlogic Infiniband QDR 4x
Operating System   AIX                        SUSE                     RedHat
Total Power        ~800 kW                    ~80 kW                   ~200 kW
Peak Performance   > 101 TFlops               ~14 TFlops               ~300 TFlops

www.cineca.it 6

SP Power6 @ CINECA

- 168 compute nodes IBM p575 Power6 (4.7GHz)

- 5376 compute cores (32 cores / node)

- 128 GByte RAM / node (21 TByte RAM in total)

- IB 4x DDR (double data rate) interconnect

Peak performance: 101 TFlops

Rmax (Linpack): 76.41 TFlop/s

Efficiency (Rmax / Rpeak): 75.83 % (see the sketch at the end of this slide)

No. 116 in the Top500 (June 2011)

- 2 login nodes IBM p560

- 21 I/O + service nodes IBM p520

- 1.2 PByte raw storage:

  500 TByte high-performance working area

  700 TByte data repository
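The efficiency figure quoted above is simply the measured Linpack result divided by the theoretical peak. A minimal sketch of that arithmetic, assuming 4 double-precision flops per cycle per Power6 core (an assumption not stated on the slide):

    # Sketch: relate the SP6 Rpeak, Rmax and efficiency figures quoted above.
    # Assumption: each Power6 core retires 4 double-precision flops per cycle.
    cores = 5376
    clock_ghz = 4.7
    flops_per_cycle = 4

    rpeak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0   # ~101.1 TFlops
    rmax_tflops = 76.41                                           # Linpack result from the slide

    print(f"Rpeak      ~ {rpeak_tflops:.1f} TFlops")
    print(f"Efficiency ~ {rmax_tflops / rpeak_tflops:.1%}")       # ~75.6 %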

www.cineca.it 7

BGP @ CINECA

Model: IBM BlueGene/P
Architecture: MPP
Processor Type: IBM PowerPC 0.85 GHz
Compute Nodes: 1024 (quad core, 4096 cores total)
RAM: 4 GB per compute node (4096 GB total)
Internal Network: IBM 3D Torus
OS: Linux (login nodes), CNK (compute nodes)

Peak Performance: 14.0 TFlop/s

www.cineca.it 8

PLX @ CINECA

IBM Server dx360M3 – Compute node

2 x Intel Westmere 6c X5645 processors, 2.40 GHz, 12 MB cache, DDR3 1333 MHz, 80 W

48 GB RAM on 12 x 4 GB DDR3 1333 MHz DIMMs

1 x 250 GB SATA HDD

1 x QDR Infiniband card, 40 Gb/s

2 x NVIDIA M2070 (M2070Q on 10 nodes)

Peak performance 32 TFlops

(3288 cores at 2.40 GHz)

Peak performance 565 TFlops single precision or 283 TFlops double precision (548 Nvidia M2070); see the sketch below

No. 54 in the Top500 (June 2011)
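A minimal sketch of where those peak figures come from. The per-core and per-card rates are assumptions, not stated on the slide: 4 double-precision flops per cycle per Westmere core, and the usual Tesla M2070 ratings of 515 GFlops double precision / 1030 GFlops single precision per card.

    # Sketch: reproduce the PLX peak-performance numbers quoted above.
    # Assumptions: 4 DP flops/cycle per Westmere core; Tesla M2070 rated at
    # 515 GFlops DP and 1030 GFlops SP per card.
    cpu_cores = 3288
    cpu_clock_ghz = 2.40
    dp_flops_per_cycle = 4

    gpus = 548
    gpu_dp_gflops = 515
    gpu_sp_gflops = 1030

    cpu_peak = cpu_cores * cpu_clock_ghz * dp_flops_per_cycle / 1000.0   # ~31.6 TFlops (quoted: 32)
    gpu_peak_dp = gpus * gpu_dp_gflops / 1000.0                          # ~282 TFlops (quoted: 283)
    gpu_peak_sp = gpus * gpu_sp_gflops / 1000.0                          # ~564 TFlops (quoted: 565)

    print(f"CPU peak (DP): ~{cpu_peak:.1f} TFlops")
    print(f"GPU peak (DP): ~{gpu_peak_dp:.0f} TFlops")
    print(f"GPU peak (SP): ~{gpu_peak_sp:.0f} TFlops")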

www.cineca.it 9

Visualisation system

Visualisation and computer graphics

Virtual Theater

6 video-projectors BARCO SIM5

Audio surround system

Cylindrical screen, 9.4 x 2.7 m, 120° angle

Workstations + Nvidia cards

RVN nodes on PLX system

www.cineca.it 10

Science @ CINECA

Scientific Areas: Chemistry, Physics, Life Science, Engineering, Astronomy, Geophysics, Climate, Cultural Heritage

National Institutions: INFM-CNR, SISSA, INAF, INSTM, OGS, INGV, ICTP, plus Academic Institutions

Main Activities: Molecular Dynamics, Material Science Simulations, Cosmology Simulations, Genomic Analysis, Geophysics Simulations, Fluid Dynamics Simulations, Engineering Applications, Application Code Development / Parallelization / Optimization, Help Desk and Advanced User Support, Consultancy for Scientific Software, Consultancy and Research Activities Support, Scientific Visualization Support

www.cineca.it 11

The HPC Model at CINECA

From agreements with National Institutions to a National HPC Agency in a European context

- Big Science – complex problems

- Support for advanced computational science projects
- HPC support for computational sciences at the National and European level
- CINECA calls for advanced National Computational Projects

ISCRA: Italian SuperComputing Resource Allocation

http://iscra.cineca.it

Objective: to support large-scale, computationally intensive projects that would not be possible or productive without terascale (and, in the future, petascale) computing.

Class A: Large Projects (> 300,000 CPU hours per project); two calls per year (see the sizing sketch after this list)

Class B: Standard Projects; two calls per year

Class C: Test and Development Projects (< 40,000 CPU hours per project); continuous submission scheme, with proposals reviewed 4 times per year
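For a rough sense of what a Class A budget means in practice, the following illustrative calculation (not part of the call text) converts 300,000 CPU hours into wall-clock time at a few arbitrary job sizes; 5376 cores corresponds to the whole SP6 machine.

    # Illustrative only: how long a 300,000 CPU-hour Class A budget lasts
    # at different job sizes (core counts chosen for the example; 5376 = full SP6).
    budget_cpu_hours = 300_000
    for cores in (5376, 1024, 256):
        wall_clock_hours = budget_cpu_hours / cores
        print(f"{cores:5d} cores -> ~{wall_clock_hours:7.1f} wall-clock hours "
              f"(~{wall_clock_hours / 24:.1f} days)")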

www.cineca.it 12

ISCRA: Italian SuperComputing Resource Allocation

iscra.cineca.it

SP6: 80 TFlops (5376 cores), No. 116 in the Top500, June 2011

BGP: 17 TFlops (4096 cores)

PLX: 142 TFlops (3288 cores + 548 nVidia M2070), No. 54 in the Top500, June 2011

- National scientific committee
- Blind national peer review system
- Allocation procedure

www.cineca.it 13

CINECA and Industry

CINECA provides HPC services to industry:

– ENI (geophysics)

– BMW-Oracle (America's Cup, CFD and structural analysis)

– ARPA (weather forecasting, meteo-climatology)

– Dompé (pharmaceutical)

CINECA hosts the ENI HPC system:

HP ProLiant SL390s G7 Xeon 6C X5650, Infiniband,

HP Linux cluster, 15360 cores

No. 60 in the Top500 (June 2011): 163.43 TFlop/s peak, 131.2 TFlop/s Linpack

www.cineca.it 14

CINECA Summer schools

www.cineca.it 15

PRACE

PRACE Research Infrastructure (www.prace-ri.eu): the top level of the European HPC ecosystem

CINECA:

- represents Italy in PRACE

- hosting member in PRACE

- Tier-1 system: > 5 % of PLX + SP6

- Tier-0 system in 2012: BG/Q, 2 PFlop/s

- involved in PRACE 1IP, 2IP

- PRACE 2IP prototype EoI

[Diagram: the European HPC ecosystem pyramid: Tier-0 European systems (PRACE), Tier-1 national systems, Tier-2 local systems.]

The European HPC Ecosystem

Creation of a European HPC ecosystem involving all stakeholders

HPC service providers on all tiers

Scientific and industrial user communities

The European HPC hardware and software industry

www.cineca.it 16

HPC-Europa 2: Providing access to HPC resources

HPC-Europa 2

- consortium of seven European HPC infrastructures

- integrated provision of advanced computational services to the European research community

- Provision of transnational access to some of the most powerful HPC facilities in Europe

- Opportunities to collaborate with scientists working in related fields at a relevant local research institute

http://www.hpc-europa.eu/ HPC-Europa 2: 2009 – 2012

(FP7-INFRASTRUCTURES-2008-1)

