4.3.1 Pioneering Applications: Priority Research Directions
Criteria for Consideration
What will Pioneering Apps do to address the barriers & gaps in associated Priority Research Directions (PRD’s)?
(1) Demonstrated need for Exascale
(2) Significant Scientific Impact in: basic physics, environment, engineering, life sciences, materials
(3) Realistic Productive Pathway (over 10 years) to Exploitation of Exascale
What new software capabilities will result?
What new methods and tools will be developed?
How will this realistically impact the research advances targeted by pioneering applications that may benefit from exascale systems?
What’s the timescale in which that impact may be felt?
Summary of Barriers & Gaps
Potential Impact on Software
Potential Impact on User Community (usability, capability, etc.)
4.3.1 PIONEERING APPLICATIONS
[Roadmap figure: science milestones vs. machine capability, 2010-2019 (1 PF → 10 PF → 100 PF → 1 EF). New capabilities: integrated plasma core-edge simulations; single-hadron physics; regional decadal climate; global coupled climate processes; multi-hadron physics; electroweak symmetry breaking; whole-system burning plasma simulations.]
Pioneering Applications with demonstrated need for Exascale to have significant scientific impact on associated priority research directions (PRD’s) with a productive pathway to exploitation of computing at the extreme scale
PIONEERING APPS
• Technology drivers
– Advanced architectures with greater capability but with formidable software development challenges
• Alternative R&D strategies
– Choosing architectural platform(s) capable of addressing PRD’s of Pioneering Apps on path to exploiting Exascale
• Recommended research agenda
– Effective collaborative alliance between Pioneering Apps, CS, and Applied Math with an associated strong V&V effort
• Crosscutting considerations
– Identifying possible common areas of software development need among the Pioneering Apps
– Addressing common need to attract, train, and assimilate young talent into this general research arena
4.3.1 Pioneering Applications: High Energy Physics
Key challenges
• Achieving the highest possible sustained applications performance for the lowest cost
• Exploiting architectures with imbalanced node performance and inter-node communications
• Developing multi-layered algorithms and implementations to exploit on-chip (heterogeneous) capabilities, fast memory, and massive system parallelism
• Tolerance to and recovery from system faults at all levels over long runtimes
Generic software components required:
• Performance analysis tools
• Highly parallel, high-bandwidth I/O
• Efficient compilers for multi-layered parallel algorithms targeting heterogeneous architectures
• Automatic recovery from hardware/system errors
• Robust global file system and metadata standards
• Stress testing and verification of exascale hardware and system software
• Development of new stochastic and linear solver algorithms
• Reliable fault-tolerant massively parallel systems
• Global data sharing and interoperability
Summary of research direction
Potential impact on software component
Potential impact on usability, capability, and breadth of community
The applications community will develop:
• Multi-layer, multi-scale algorithms and implementations
• Optimised single-core/single-chip complex linear algebra routines
• Mixed precision arithmetic for fast memory access and off-chip communications (a minimal sketch of the mixed-precision idea follows below)
• Algorithms that tolerate hardware without error detection/correction
• Verification of all components (algorithms, software and hardware)
• Data management and standards for shared data use
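To make the mixed-precision point concrete, here is a minimal NumPy sketch of iterative refinement; it is illustrative only, not taken from any lattice-QCD code. The expensive inner solve runs in float32 (exploiting fast memory and cheaper off-chip traffic), while residuals are accumulated in float64 so the final answer recovers double-precision accuracy.

```python
# Minimal sketch, assuming a well-conditioned dense system for illustration.
import numpy as np

def solve_mixed_precision(A, b, tol=1e-12, max_iter=50):
    A32 = A.astype(np.float32)          # low-precision copy used by the inner solver
    x = np.zeros_like(b)                # high-precision running solution
    for _ in range(max_iter):
        r = b - A @ x                   # residual computed in float64
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        # Inner solve in float32; a real code would use CG/BiCGstab here.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d                          # correction accumulated in float64
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100)) + 100 * np.eye(100)  # test matrix
    b = rng.standard_normal(100)
    x = solve_mixed_precision(A, b)
    print(np.linalg.norm(b - A @ x))    # should reach ~1e-12 or below
```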
• Technology drivers
– Massive parallelism
– Heterogeneous microprocessor architectures
• Alternative R&D strategies
– Optimisation of computationally demanding kernels
– Algorithms targeting all levels of hardware parallelism
• Recommended research agenda
– Co-design of hardware and software
• Cross-cutting considerations
– Automated fault tolerance at all levels (a checkpoint/restart sketch follows below)
– Multi-level algorithm implementation and optimisation
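As a deliberately minimal illustration of application-level fault tolerance over long runtimes, the sketch below shows checkpoint/restart with an atomic rename. The file name and state layout are assumptions for illustration, not anything prescribed here.

```python
# Minimal checkpoint/restart sketch; names and layout are illustrative.
import os
import pickle

CHECKPOINT = "state.ckpt"

def save_checkpoint(step, state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, CHECKPOINT)         # atomic rename: never leaves a torn file

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            ckpt = pickle.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {"x": 0.0}                # fresh start

step, state = load_checkpoint()
for step in range(step, 1000):
    state["x"] += 1.0                   # stand-in for one expensive timestep
    if step % 100 == 0:
        save_checkpoint(step + 1, state)  # resume after the saved step
```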
Industrial challenges in the Oil & Gas industry: Depth Imaging roadmap
[Figure: algorithmic complexity vs. corresponding computing power (10^15 flops scale), 1995-2020. Algorithm progression: asymptotic approximation imaging; paraxial isotropic/anisotropic imaging; isotropic/anisotropic RTM and isotropic/anisotropic modelling; isotropic/anisotropic FWI with elastic modelling/RTM; elastic FWI with visco-elastic modelling; visco-elastic FWI and petro-elastic inversion. Sustained performance for different frequency content over an 8-day processing duration (RTM): 56 TF at 3-18 Hz, 900 TF at 3-35 Hz, 9.5 PF at 3-55 Hz.]
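The roadmap's sustained-performance targets are roughly consistent with the common rule of thumb that RTM cost grows like the fourth power of the maximum frequency (grid spacing shrinks like 1/f in three dimensions, and the stable time step shrinks with it). A back-of-envelope check, with the f^4 scaling as an assumption rather than a figure from the roadmap:

```python
# Hedged back-of-envelope check: assume RTM cost ~ f_max^4
# (3 space dimensions at ~1/f grid spacing plus a CFL-limited time step).
base_tf, base_f = 56.0, 18.0            # 56 TF sustained at 3-18 Hz
for f in (35.0, 55.0):
    est = base_tf * (f / base_f) ** 4
    print(f"{f:.0f} Hz: ~{est:,.0f} TF estimated")
# -> ~800 TF at 35 Hz and ~4,900 TF at 55 Hz: the same order as the
#    roadmap's 900 TF and 9.5 PF sustained-performance targets.
```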
[Bar chart: HPC Power PAU (TF), 2004-2009; vertical axis 0-450 TeraFlops.]
High Performance Computing as a key enabler
[Figure, courtesy AIRBUS France: available computational capacity [Flop/s], from 1 Giga (10^9) through 1 Tera (10^12), 1 Peta (10^15) and 1 Exa (10^18) to 1 Zeta (10^21), versus capacity as the number of overnight loads cases run (10^2 to 10^6), over 1980-2030. Capability milestones achieved during one night batch: RANS low speed; RANS high speed; HS design data set; unsteady RANS; CFD-based loads & HQ; CFD-based noise simulation; aero optimisation & CFD-CSM; full MDO; LES; real-time CFD-based in-flight simulation. "Smart" use of HPC power: algorithms, data mining, knowledge.]
Computational Challenges and Needs for Academic and Industrial Applications Communities
BACKUP
IESP/Application Subgroup
Roadmap of the successive computations (2003 → 2015):
• 2003 – Consecutive thermal fatigue event: 10^6 cells, 3·10^13 operations; Fujitsu VPP 5000, 1 of 4 vector processors, 2-month computation; ~1 Gb of storage, 2 Gb of memory. Computations enable a better understanding of the wall thermal loading in an injection; knowing the root causes of the event, a new design can be defined to avoid this problem.
• 2006 – Part of a fuel assembly: computation with an LES approach for turbulence modelling, refined mesh near the wall; 10^7 cells, 6·10^14 operations; cluster, IBM Power5, 400 processors, 9 days; ~15 Gb of storage, 25 Gb of memory.
• 2007 – 3 grid assemblies: 10^8 cells, 10^16 operations; IBM Blue Gene/L, 20 Tflops during 1 month; ~200 Gb of storage, 250 Gb of memory.
• 2010 – 9 fuel assemblies: 10^9 cells, 3·10^17 operations; 600 Tflops during 1 month; ~1 Tb of storage, 2.5 Tb of memory. No experimental approach exists up to now; this will enable the study of side effects implied by the flow around neighbouring fuel assemblies, and a better understanding of vibration phenomena and wear-out of the rods.
• 2015 – The whole reactor vessel: 10^10 cells, 5·10^18 operations; 10 Pflops during 1 month; ~10 Tb of storage, 25 Tb of memory.
Limiting factors along the roadmap: power of the computer; pre-processing (mesh generation) not parallelized; scalability of the solver; visualisation.
Computations with smaller and smaller scales in larger and larger geometries → a better understanding of physical phenomena → more effective help for decision making → better optimisation of production (margin benefits).
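A quick consistency check on the roadmap numbers above; the per-cell ratios follow directly from the table, while reading "Gb/Tb" as bytes is an assumption of this sketch:

```python
# Hedged consistency check on the roadmap table; "Gb/Tb" read as bytes.
stages = {
    2003: (1e6,  3e13, 2e9),     # cells, operations, memory (~2 Gb)
    2006: (1e7,  6e14, 25e9),
    2007: (1e8,  1e16, 250e9),
    2010: (1e9,  3e17, 2.5e12),
    2015: (1e10, 5e18, 25e12),
}
for year, (cells, ops, mem) in stages.items():
    print(f"{year}: {ops/cells:.1e} ops/cell, {mem/cells:.0f} bytes/cell")
# Operations per cell grow from ~3e7 to ~5e8 (longer runs, finer time
# steps), while memory stays near ~2.5 kB per cell across the roadmap.
```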
From sequences to structures: HPC Roadmap (2009, 2011, 2015 and beyond)
• 2009: 1 family ≈ 5·10^3 CPUs / ~week; ~25 Gb of storage, 500 Gb of memory
• 2011: 1 family ≈ 5·10^4 CPUs / ~week; ~5 Tb of storage, 5 Tb of memory
• 2015 and beyond: 1 family ≈ 10^4·KP CPUs / ~week; ~5·CSP Tb of storage, 5·CSP Tb of memory
CSP: proteins structurally characterized, ~10^4
Computations using increasingly sophisticated bioinformatics and physical modelling approaches → identification of protein structure and function:
• Identify all protein sequences using public resources and metagenomics data, and systematically model the proteins belonging to the family (Modeller software).
• Improve the prediction of protein structure by coupling new bioinformatics algorithms with massive molecular dynamics simulation approaches.
• Systematically identify the biological partners of proteins. (A farming sketch of these per-family runs follows below.)
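Because each protein family can be processed independently, the CPU budgets above come from trivially parallel farming of per-family jobs. A minimal sketch; `model_family` and the family IDs are hypothetical placeholders, not the actual GENCI/CCRT pipeline:

```python
# Minimal sketch of farming independent per-family modelling jobs.
# `model_family` is a hypothetical stand-in for the real pipeline
# (sequence search + modelling runs), not its API.
from multiprocessing import Pool

def model_family(family_id):
    # Stand-in for the expensive work: collect sequences, build models.
    return family_id, f"models for {family_id}"

if __name__ == "__main__":
    families = [f"PF{i:05d}" for i in range(100)]   # hypothetical IDs
    with Pool(processes=8) as pool:
        for fam, result in pool.imap_unordered(model_family, families):
            print(fam, result)
```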
Grand Challenge GENCI/CCRT
Proteins 69 (2007) 415
PIONEERING APPS: Fusion Energy Sciences
Criteria for Consideration
(1) Demonstrated need for Exascale
-- FES applications currently utilize LCF’s at ORNL and ANL, demonstrating scalability of key physics with increased computing capability
(2) Significant Scientific Impact: (identified at DOE Grand Challenges Workshop)
-- high physics fidelity integration of multi-physics, multi-scale FES dynamics
-- burning plasmas/ITER physics simulation capability
(3) Productive Pathway (over 10 years) to Exploitation of Exascale
-- ability to carry out confinement simulations (including turbulence-driven transport) demonstrates ability to include higher physics fidelity components with increased computational capability
-- needed for both of the areas identified in (2) as priority research directions
PIONEERING APPS: Fusion Energy Sciences
Summary of Barriers & Gaps
(1) high physics fidelity integration of multi-physics, multi-scale FES dynamics
-- FES applications for macroscopic stability, turbulent transport, edge physics (where atomic processes are important), etc. have demonstrated, at various levels of efficiency, the capability of using existing LCF’s
-- Associated Barrier & Gap: need to integrate/couple improved versions of such large scale simulations to produce an experimentally validated integrated simulation capability for scenario modeling of the whole device
(2) burning plasmas/ITER physics simulation capability
-- As FES enters a new era of burning plasma experiments on the reactor scale, capabilities are required for addressing the larger spatial scales and longer energy-confinement times
-- Associated Barrier & Gap: scales spanning the small gyroradius of the ions to the radial dimension of the plasmas will need to be addressed • an order of magnitude greater spatial resolution is needed to account for the larger plasmas of interest • major increase expected in the plasma energy confinement time (~1 second in the ITER device) together with the longer pulse of the discharges in these superconducting systems • will demand simulations of unprecedented aggregate floating point operations
PIONEERING APPS: Fusion Energy Sciences
Potential Impact on Software
(1) What new software capabilities will result?
-- For each science driver and each exascale-appropriate application the approach for developing new software capabilities will involve:
• Inventory current codes with respect to mathematical formulations, data structures, current scalability of algorithms and solvers (e.g. Poisson solves) with associated identification of bottlenecks to scaling, current libraries used, and “complexity” with respect to memory, flops, and communication (a toy scalability measurement is sketched at the end of this slide)
• Inventory current capabilities for workflows, frameworks, V&V, uncertainty quantification, etc. with respect to: tight vs. loose code-coupling schemes for integration; management of large data sets from experiments and simulations; etc.
• Inventory expected software developmental tasks for the path to exascale (concurrency, memory access, etc.)
• Inventory work-force assessment needs, and carry out that assessment, with respect to computer scientists, applied mathematicians, and FES applications scientists.
(2) What new methods and tools will be developed?
-- Outcome from above inventory/assessment exercises should lead to development of corresponding exascale relevant tools and capabilities.
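The kind of solver-scalability inventory described above can be prototyped on a toy problem. The sketch below (SciPy, serial, illustrative only, not a production FES solver) counts conjugate-gradient iterations on growing 2D Poisson grids, exposing the classic bottleneck that motivates better-scaling solvers and preconditioners:

```python
# Toy scalability measurement: CG iteration growth on a 2D Poisson solve.
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import cg

def poisson2d(n):
    # Standard 5-point Laplacian on an n x n interior grid.
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    return (kron(identity(n), T) + kron(T, identity(n))).tocsr()

for n in (32, 64, 128):
    A = poisson2d(n)
    b = np.ones(n * n)
    iters = []
    x, info = cg(A, b, callback=lambda xk: iters.append(1))
    # Iterations grow roughly linearly in n: condition number ~ n^2,
    # CG iterations ~ sqrt(condition number) -- a scaling bottleneck.
    print(f"n={n:4d}: {len(iters)} CG iterations (info={info})")
```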
PIONEERING APPS: Fusion Energy Sciences
Potential impact on user community (usability, capability, etc.)
(1) How will this realistically impact the research advances targeted by FES that may benefit from exascale systems?
-- The FES PRD’s for (1) high physics fidelity integrated simulations and for addressing (2) burning plasmas/ITER challenges will potentially be able to demonstrate how the application of exascale computing capability can enable the accelerated delivery of much needed modeling tools.
(2) What’s the timescale in which that impact may be felt?
-- As illustrated on the Pioneering Apps Roadmap (earlier slide):
• 10 to 20 PF (2012): integrated plasma core-edge coupled simulations
• 1 EF (2018): whole-system burning plasma simulations applicable to ITER
PIONEERING APPS: Fusion Energy Sciences
• Technology drivers
– Advanced architectures with greater capability but with formidable software development challenges (e.g., scalable algorithms and solvers; workflows & frameworks; etc.)
• Alternative R&D strategies
– Choosing architectural platform(s) capable of addressing PRD’s of FES on path to exploiting Exascale
• Recommended research agenda
– Effective collaborative alliance between FES, CS, and Applied Math (e.g., SciDAC activities) with an associated strong V&V effort
• Crosscutting considerations
– Identifying possible common areas of software development needs with Pioneering Apps (climate, etc.)
– Critical need in FES to attract, train, and assimilate young talent into this field, in common with the general computational science research arena