Simulating materials with atomic detail at IBM: From biophysical to high-tech
applications
Application Team: Glenn J. Martyna, Physical Sciences Division, IBM Research; Dennis M. Newns, Physical Sciences Division, IBM Research; Jason Crain, School of Physics, Edinburgh University; Andrew Jones, School of Physics, Edinburgh University; Razvan Nistor, Physical Sciences Division, IBM Research; Ahmed Maarouf, Physical Sciences Division, IBM Research and EGNC; Marcelo Kuroda, Physical Sciences Division, IBM and CS, UIUC
Methods/Software Development Team: Glenn J. Martyna, Physical Sciences Division, IBM Research; Mark E. Tuckerman, Department of Chemistry, NYU; Laxmikant Kale, Computer Science Department, UIUC; Ramkumar Vadali, Computer Science Department, UIUC; Sameer Kumar, Computer Science, IBM Research; Eric Bohm, Computer Science Department, UIUC; Abhinav Bhatele*, Computer Science Department, UIUC; Ramprasad Venkataraman, Computer Science Department, UIUC
Funding : NSF, IBM Research, ONRL
*George Michael Memorial High Performance Computing Fellow for 2009
IBM’s Blue Gene/L network torus supercomputer
Our fancy apparatus.
Goal : The accurate treatment of complex heterogeneous systems to gain physical insight.
Characteristics of current models
Empirical Models: Fixed charge, non-polarizable, pair dispersion.
Ab Initio Models: GGA-DFT, Self interaction present, Dispersion absent.
Problems with current models (empirical)
Dipole Polarizability : Including dipole polarizability changes solvation shells of ions and drives them to the surface.
Higher Polarizabilities: Quadrupolar and octupolar polarizabilities are NOT SMALL.
All Manybody Dispersion terms: Surface tensions and bulk properties determined using accurate pair potentials are incorrect; both are recovered using manybody dispersion together with an accurate pair potential. An effective pair potential reproduces the bulk but destroys surface properties.
Force fields cannot treat chemical reactions.
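As a concrete illustration of such an empirical model, a fixed-charge, pairwise-additive potential (Lennard-Jones dispersion/repulsion plus Coulomb) can be sketched as below; the parameter values and unit conventions are illustrative, not taken from any specific force field.

```python
import math

# Sketch of a fixed-charge, non-polarizable, pair-dispersion potential.
# coulomb_k is the Coulomb prefactor in kcal*A/(mol*e^2); all values illustrative.
def pair_energy(r, q1, q2, sigma, eps, coulomb_k=332.06):
    # Lennard-Jones: pairwise repulsion + dispersion
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    # Fixed (non-polarizable) point charges
    coul = coulomb_k * q1 * q2 / r
    return lj + coul

# At r = 2^(1/6) * sigma the LJ term sits at its minimum, -eps.
r_min = 2.0 ** (1.0 / 6.0) * 3.15
e = pair_energy(r_min, 0.0, 0.0, sigma=3.15, eps=0.15)
```

Every many-body effect (polarization, three-body dispersion, chemistry) is absent from this functional form by construction, which is exactly the limitation noted above.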
Problems with current models (DFT)
• Incorrect treatment of self-interaction/exchange: Errors in electron affinity, band gaps …
• Incorrect treatment of correlation: Problematic treatment of spin states. The ground state of transition metals (Ti, V, Co) and spin splitting in Ni are in error. Ni oxide incorrectly predicted to be metallic when magnetic long-range order is absent.
• Incorrect treatment of dispersion : Both exchange and correlation contribute.
• KS states are NOT physical objects : The bands of the exact DFT are problematic. TDDFT with a frequency dependent functional (exact) is required to treat excitations even within the Born-Oppenheimer approximation.
Conclusion : Current Models
• Simulations are likely to provide semi-quantitative accuracy/agreement with experiment.
• Simulations are best used to obtain insight and examine physics, e.g. to promote understanding.
Nonetheless, in order to provide truthful solutions of the models, simulations must be performed to long time scales!
Limitations of ab initio MD (despite our efforts/improvements!)
Limited to small systems (100-1000 atoms)*.
Limited to short time dynamics and/or sampling times.
Parallel scaling only achieved for # processors <= # electronic states, until recent efforts by ourselves and others.
Improving this will allow us to sample longer and learn new physics.
*The methodology employed herein scales as O(N^3) with system size due to the orthogonality constraint only.
Solution: Fine grained parallelization of CPAIMD. Scale small systems to 10^5 processors!! Study long time scale phenomena!!
(The charm++ QM/MM application is work in progress.)
IBM’s Blue Gene/L network torus supercomputer
The world's fastest supercomputer! Its low-power architecture requires fine-grained parallel algorithms/software to achieve optimal performance.
Density Functional Theory: DFT
Electronic states/orbitals of water
Core electrons are removed by introducing a non-local electron-ion interaction (pseudopotential).
Plane Wave Basis Set:
The # of states/orbitals ~ N, where N is the # of atoms. The # of pts in g-space ~ N.
Plane Wave Basis Set: Two spherical cutoffs in g-space
[Diagram, axes g_x, g_y, g_z: ψ(g) lives in a sphere of radius g_cut; n(g) in a sphere of radius 2g_cut.]
g-space is a discrete regular grid due to finite size of system!!
Plane Wave Basis Set: The dense discrete real space mesh.
[Diagram, axes x, y, z:]
ψ(r) = 3D-FFT{ψ(g)}
n(r) = Σ_k |ψ_k(r)|^2
n(g) = 3D-IFFT{n(r)} exactly!
Although r-space is a discrete dense mesh, n(g) is generated exactly!
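The density construction above can be sketched with NumPy FFTs. Array shapes and names are illustrative toy sizes; a production plane-wave code additionally applies the spherical g-space cutoffs and distributes the transforms over processors.

```python
import numpy as np

def density_from_orbitals(psi_g):
    """psi_g: (n_states, Nx, Ny, Nz) plane-wave coefficients (toy, no cutoff)."""
    # psi(r) = 3D-FFT{ psi(g) }, one transform per state
    psi_r = np.fft.ifftn(psi_g, axes=(1, 2, 3))
    # n(r) = sum_k |psi_k(r)|^2  -- real and non-negative by construction
    n_r = np.sum(np.abs(psi_r) ** 2, axis=0)
    # n(g) = 3D-IFFT{ n(r) } -- exact on the dense discrete mesh
    n_g = np.fft.fftn(n_r)
    return n_r, n_g

rng = np.random.default_rng(0)
shape = (4, 8, 8, 8)  # 4 states on an 8^3 mesh (illustrative sizes)
psi_g = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
n_r, n_g = density_from_orbitals(psi_g)
```

The per-state transforms are what generate the concurrent 3D-FFT (AllToAll) traffic discussed in the scaling sections below.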
Simple Flow Chart: Scalar Ops
Object : Comp : Mem
States : N^2 log N : N^2
Density : N log N : N
Orthonormality : N^3 : N^2.33
Memory penalty
Flow Chart: Data Structures
Parallelization under charm++
[Flow diagram: repeated transposes between g-space and real-space chare arrays, through the density (RhoR).]
Effective Parallel Strategy:
The problem must be finely discretized. The discretizations must be deftly chosen to
– Minimize the communication between processors.
– Maximize the computational load on the processors.
NOTE: PROCESSOR AND DISCRETIZATION ARE SEPARATE CONCEPTS!!!!
Ineffective Parallel Strategy
The discretization size is controlled by the number of physical processors.
The size of data to be communicated at a given step is controlled by the number of physical processors.
For the above paradigm:
– Parallel scaling is limited to # processors = coarse grained parameter in the model.
THIS APPROACH IS TOO LIMITED TO ACHIEVE FINE GRAINED PARALLEL SCALING.
Virtualization and Charm++
Discretize the problem into a large number of very fine grained parts.
Each discretization is associated with some amount of computational work and communication.
Each discretization is assigned to a lightweight thread or a ``virtual processor'' (VP).
VPs are rolled into and out of physical processors as physical processors become available (Interleaving!)
Mapping of VPs to processors can be changed easily.
The Charm++ middleware provides the data structures and controls required to choreograph this complex dance.
Parallelization by over partitioning of parallel objects : The charm++ middleware choreography!
Decomposition of work
Available Processors
On a torus architecture, load balance is not enough!The choreography must be ``topologically aware’’.
Charm++ middleware maps work to processors dynamically
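The over-partitioning idea can be illustrated with a toy greedy balancer that maps many VPs onto a few physical processors. This is only a sketch of the concept: Charm++'s actual load balancers are far more sophisticated and, on a torus, also topology-aware; the function name and load values here are hypothetical.

```python
import heapq

def map_vps(vp_loads, n_pes):
    """Greedy mapping: place the next-heaviest VP on the least-loaded PE."""
    heap = [(0.0, pe, []) for pe in range(n_pes)]  # (total load, PE id, VPs)
    heapq.heapify(heap)
    for vp, load in sorted(enumerate(vp_loads), key=lambda t: -t[1]):
        total, pe, vps = heapq.heappop(heap)       # least-loaded PE
        vps.append(vp)
        heapq.heappush(heap, (total + load, pe, vps))
    return sorted(heap)                             # sorted by final load

# Six VPs with unequal work, two physical processors.
assignment = map_vps([5.0, 4.0, 3.0, 3.0, 2.0, 1.0], 2)
```

Because there are many more VPs than PEs, remapping a VP is cheap, which is what makes the dynamic rebalancing described above possible.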
Challenges to scaling:
• Multiple concurrent 3D-FFTs to generate the states in real space require AllToAll communication patterns: communicate N^2 data pts.
• Reduction of states (~N^2 data pts) to the density (~N data pts) in real space.
• Multicast of the KS potential computed from the density (~N pts) back to the states in real space (~N copies to make N^2 data).
• Applying the orthogonality constraint requires N^3 operations.
• Mapping the chare arrays/VPs to BG/L processors in a topologically aware fashion.
Scaling bottlenecks due to non-local and local electron-ion interactions removed by the introduction of new methods!
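The N^3 orthogonality step can be sketched as a Löwdin transform, Ψ ← Ψ S^{-1/2} with overlap S = Ψ†Ψ. This is a minimal dense-algebra sketch of the constraint, not the distributed scheme used in the production code, which spreads this work over orthogonality chares.

```python
import numpy as np

def lowdin_orthonormalize(Psi):
    """Return Psi @ S^{-1/2}, S = Psi^H Psi (states stored as columns)."""
    S = Psi.conj().T @ Psi                      # n_states x n_states overlap
    w, V = np.linalg.eigh(S)                    # S is Hermitian positive-definite
    S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
    return Psi @ S_inv_sqrt                     # the O(N^3) multiplications

rng = np.random.default_rng(2)
Psi = rng.standard_normal((50, 8)) + 1j * rng.standard_normal((50, 8))
Q = lowdin_orthonormalize(Psi)                  # columns now orthonormal
```

The matrix multiplies are what give the cubic cost quoted above: each is (basis size) x (states) x (states) work.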
Topologically aware mapping for CPAIMD
• The states are confined to rectangular prisms cut from the torus to minimize 3D-FFT communication.
• The density placement is optimized to reduce its 3D-FFT communication and the multicast/reduction operations.
[Torus mapping diagram: Gspace states mapped to prisms of ~N^{1/2} processors (~N^{1/3} per side); density mapped to ~N^{1/12} processors.]
Topologically aware mapping for CPAIMD : Details
Distinguished Paper Award at Euro-Par 2009
Improvements wrought by topologically aware mapping on the network torus architecture
Density (R) reduction and multicast to State (R) improved.
State (G) communication to/from orthogonality chares improved.
``Operator calculus for parallel computing'', Martyna and Deng (2009), in preparation.
Parallel scaling of liquid water* as a function of system size on the Blue Gene/L installation at YKT:
• Weak scaling is observed!
• Strong scaling on processor numbers up to ~60x the number of states!
*Liquid water has 4 states per molecule.
Scaling Water on Blue Gene/L
Software: Summary
Fine grained parallelization of the Car-Parrinello ab initio MD method demonstrated on thousands of processors:
# processors >> # electronic states.
Long time simulations of small systems are now possible on large massively parallel supercomputers.
Instance parallelization
• Many simulation types require fairly uncoupled instances of existing chare arrays.
• Simulation types in this class include:
1) Path Integral MD (PIMD) for nuclear quantum effects.
2) k-point sampling for metallic systems.
3) Spin DFT for magnetic systems.
4) Replica exchange for improved atomic phase space sampling.
• A full combination of all 4 simulation types is both physical and interesting.
Replica Exchange: M classical subsystems, each at a different temperature, acting independently.
Replica exchange uber index active for all chares. Nearest neighbor communication required to exchange temperatures and energies.
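The nearest-neighbor exchange uses the standard Metropolis criterion: accept a swap with probability min(1, exp[(β_i − β_j)(E_i − E_j)]). A minimal sketch, with made-up energies and temperatures:

```python
import math
import random

def attempt_swap(beta_i, E_i, beta_j, E_j, rng=random):
    """Metropolis acceptance for exchanging two neighboring replicas."""
    delta = (beta_i - beta_j) * (E_i - E_j)
    return math.log(rng.random()) < delta  # accept iff u < exp(delta)

random.seed(7)
betas = [1.0 / T for T in (300.0, 320.0, 340.0, 360.0)]  # units with k_B = 1
energies = [-120.0, -118.5, -116.0, -113.2]              # illustrative values
accepted = [attempt_swap(betas[i], energies[i], betas[i + 1], energies[i + 1])
            for i in range(len(betas) - 1)]
```

Only the (β, E) pairs travel between neighbors, which is why the communication cost of this uber dimension is so low.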
PIMD: P classical subsystems connected by harmonic bonds.
[Diagram: classical particle vs. quantum particle (ring of beads).]
PIMD uber index active for all chares. Uber communication required to compute the harmonic interactions.
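The harmonic bonds tie the P beads of each quantum particle into a cyclic chain with spring frequency ω_P = sqrt(P)/(βħ). A one-particle, 1D sketch (units with ħ = 1; values illustrative):

```python
import math

def spring_energy(beads, mass, beta, hbar=1.0):
    """Cyclic inter-bead coupling: sum_s (m*omega_P^2/2)*(x_s - x_{s+1})^2."""
    P = len(beads)
    omega_P = math.sqrt(P) / (beta * hbar)
    k = 0.5 * mass * omega_P ** 2
    return sum(k * (beads[s] - beads[(s + 1) % P]) ** 2 for s in range(P))

# A collapsed necklace costs nothing; spreading the beads costs energy.
e_collapsed = spring_energy([0.3] * 8, mass=1.0, beta=1.0)
e_spread = spring_energy([0.0, 1.0], mass=1.0, beta=1.0)
```

Each bead's spring couples only to its two neighbors in the chain, so the uber communication pattern is nearest-neighbor across the P otherwise-classical subsystems.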
K-points: N states are replicated and given a different phase (k0, k1, ...).
The k-point uber index is not active for atoms and the electron density. Uber reduction communication required to form the e-density and atom forces.
Atoms are assumed to be part of a periodic structure and are shared between the k-points (crystal momenta).
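In a sketch, each k-point replica of a state carries a Bloch phase e^{ik·r} on a shared periodic part; the uber-reduced density is then real. A toy 1D example with hypothetical values (the k-dependence enters the energy through the kinetic operator, not through each |ψ|² itself):

```python
import numpy as np

L = 10.0
r = np.linspace(0.0, L, 64, endpoint=False)
u = np.cos(2.0 * np.pi * r / L)       # shared periodic part u(r) (toy choice)
ks = [0.0, 2.0 * np.pi / L]           # k0, k1: illustrative crystal momenta
states = [np.exp(1j * k * r) * u for k in ks]    # replicated, phased states
n = sum(np.abs(psi) ** 2 for psi in states)      # uber reduction over k-points
```

The sum over k-points is exactly the uber reduction noted above: each k-replica contributes to one shared density and one shared set of atom forces.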
Spin DFT: States and the electron density are given a spin-up and spin-down index.
The spin uber index is not active for atoms. Uber reduction communication required to form the atom forces.
``Uber'' charm++ indices
• Chare arrays in OpenAtom now possess 4 uber ``instance'' indices.
• Appropriate section reductions and broadcasts across the ``Ubers'' have been enabled.
• We expect full results by July 2010.
Goal : The accurate treatment of complex heterogeneous systems to gain physical insight.
Transparent Conducting Electrodes (TCEs) for (inexpensive) amorphous silicon solar cells
Conventional TCEs:
• Indium Tin Oxide (ITO)
• Zinc Oxide (ZnO)
Performance: transparency 95%; sheet resistance 10 Ω/sq
Graphene TCEs:
• 1 – 8 atomic layers
Performance: transparency 85%; sheet resistance 100 Ω/sq
Manufacturing: cm x cm size sheets
Science, 324, p. 1312 (2009).
Graphene – single atomic layer of carbon
• Transparent – 2% loss per layer
• High mobility – 2 x 10^5 cm^2/(V s)
• Chemical doping
[Image: http://www.rsc.org/]
Goal
Engineering goal
Experimental collaborators G. Tulevski (IBM), A. Kasry (EGNC), A. Boll (IBM)
Plane wave based DFT codes: 1) OpenAtom 2) Quantum Espresso 3) Abinit
Graphene + intercalates:
Explore various n and p dopants on layers.
Orientation of layers – Marcelo, Ahmed talks.
Graphene + substrate:
Explore interface and transport from semiconducting layer to graphene.
Entire system:
Calculate electronic properties of entire 500+ atom systems using full ab initio.
aSi + Graphene on BG/L
PZ functional; 800 eV (60 Ry) Ecut; 100 ps quench of aSi; 10 ps relaxation.
Graphene on aSi:H – Structural results
[Band-structure panels (0.2 eV scale): ideal single layer; relaxed single layer; relaxed multi layer. 1% strain opens a 0.1 eV gap.]
See also Zhen Hua Ni et al., ACS Nano 2 (11), 2301 (2008).
Graphene Intercalates
Electron donors: alkali group elements – Li, K
Hole donors: HNO3, AlCl3, FeCl3, SbCl5
Graphene Intercalates – HNO3
[DOS plots with E_Fermi marked: no shift – no doping]
HNO3 decomposition
Graphene layers facilitate decomposition.
[Movie: top view with graphene rendered invisible]
Decomposition product – NO3- anion
[DOS: E_Fermi shifted relative to the old E_Fermi; integrating the difference gives 1 hole per molecule.]
HNO3 decomposition products
NO3-: [s- and p-projected DOS on N, O1, O2, O3. The electron from graphene resides in the oxygen p-orbitals, 1/3 per oxygen.]
See also: Wehling et al., Nano Letters 8, 173 (2008); Gierz et al., Nano Letters 8, 4603 (2008).
AlCl3 intercalate (1 unit per 6.3 C)
[s- and p-projected DOS on Al and Cl: ionic compound.]
[DOS with E_Fermi marked: no shift – no doping]
AlCl3 intercalate (1 unit per 6.3 C) – with defects
Induce a defect by removing an Al atom:
[Cl s- and p-projected DOS: the hole is filled by an electron from graphene.]
SbCl6 intercalate (1 unit per 14 C)
[DOS: E_Fermi shifted relative to the old E_Fermi; integrating gives 1 hole per defect.]
Review of p-dopant studies
HNO3 intercalate: charge-transfer complexes NO2, NO3.
AlCl3 crystals: charge-transfer complexes AlCl4 or AlCl2.3.
SbCl5 crystals: relaxed pseudo-crystal configuration with density (1 unit / 14 C); charge-transfer complexes SbCl4, SbCl6.
Summary
Structure: graphene layers on aSi:H.
p dopants: ~1 hole per defect or decomposition product (NO2, NO3, AlCl4, SbCl4, SbCl6).
Doping: classic rigid band shifts observed from various intercalate compounds.
Future Work
- Quantify transparency of intercalate layers
- Investigate charge transfer to graphene layers from the substrate
- Large scale electronic structure calculations
Conclusions
• Important physical insight can be gleaned from high quality, large scale computer simulation studies.
• The required parallel algorithm development necessitates cutting-edge computer science.
• New methods must be developed hand-in-hand with new parallel paradigms.
• Using clever hardware with better methods and parallel algorithms shows great promise to impact both science and technology.