Al Kellie
Associate Director, National Center for Atmospheric Research (NCAR)
Director, Computation and Information Systems Lab (CISL) ([email protected])
13th ECMWF Workshop on the Use of HPC in Meteorology
OUTLINE
• A look at how NCAR & CISL are organized
• More of CISL: HPC facility, archival storage facility, research data facility, metrics
• Wyoming
• Science efforts
ECMWF Workshop Nov 6, 2008
NCAR - a federally funded research and development center sponsored by the National Science Foundation.
• Established in 1960 by 14 universities
• Managed by the University Corporation for Atmospheric Research (UCAR)
• UCAR: a non-profit private corporation
• Composed of 73 member universities
• 18 academic affiliates
• 46 international affiliate institutions

Principal objectives:
• Partners with universities and research centers
• Dedicated to exploring and understanding the Earth's atmosphere and its interactions with the Sun, the oceans, the biosphere, and human society.
CISL Organization

• Laboratory Directorate - Al Kellie, Associate Director of NCAR (5 employees)
• Technology Development - Rich Loft, Director (5 employees)
• Operations & Services - Tom Bettge, Director (6 employees)
• IMAGe - Doug Nychka, Director (5 employees)
• Data Center Project Office - Krista Laursen, Director
• Computational Mathematics - Piotr Smolarkiewicz (5 employees)
• Geophysical Turbulence (Turbulence Numerics Team) - Annick Pouquet (5 employees)
• Data Assimilation Research - Jeff Anderson (4 employees)
• Geophysical Statistics Project - Steve Sain (7 employees)
• Visualization & Enabling Technologies - Don Middleton (11 employees)
• Computer Science - Henry Tufo (6 employees)
• Earth System Modeling Infrastructure - Cecelia DeLuca (6 employees)
• Network Engineering & Telecommunications - Marla Meehl (26 employees)
• High-end Services - Gene Harano (21 employees)
• Data Support - Steven Worley (9 employees)
• Enterprise Services - Aaron Andersen (36 employees)
• Laboratory Administration & Outreach Services - Janice Kauvar, Administrator (7 employees)
CISL at a GLANCE

Bluefire (commissioned in June 2008)
• 4,064 IBM Power6 processors, 4.7 GHz; quadrupled NCAR's sustained computing capacity
• 76 teraflops peak
• Hydro-cluster: water-cooled doors and processors, 33% more energy efficient than traditional air-cooled; each cabinet weighs 3,600 pounds (a midsize car)
• 3x more energy efficient than P5+
• Chips run around 140°F, compared to 180°F for air-cooled systems
• Runs climate models, atmospheric chemistry, high-resolution forecasts
• LSF job scheduling and queuing system
• 12 TB memory, 150 TB storage
• InfiniBand switch (four QLogic Model 9240 288-port switch chassis)
• Peak bandwidth 6 GB/sec; latency 1.27 microseconds
• 740 kilowatts (60% of our overall computing power)
• Sustained performance: 6-16% of peak for our job mix
Bluefire
• IBM POWER6
• 76.4 TeraFLOPs peak
• Each batch node has 32 4.7 GHz P6 cores (dual-core chips)
  – 120 batch nodes: 69 with 64 GB memory (2 GB/CPU), 48 with 128 GB memory (4 GB/CPU)
  – 2 interactive, 2 share-queue, 4 GPFS, and 2 system nodes
• InfiniBand switch: QLogic 9240 (8 links per node)
• 150 terabytes of disk
• Sustained computational capacity: 3.88x that of the former P5+
• Computational capability: 1.65x per processor over P5+ for typical NCAR code
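The quoted peak can be sanity-checked from the core count. Assuming the usual 4 floating-point operations per cycle for POWER6 (two fused multiply-add units) - a standard figure for this chip, not stated on the slide - 4,064 cores at 4.7 GHz reproduce the 76.4 TFLOPs peak, and the 6-16% sustained fraction quoted for the NCAR job mix brackets the deliverable rate:

```python
# Sanity-check Bluefire's quoted peak and sustained performance.
procs = 4064          # POWER6 cores
clock_hz = 4.7e9      # 4.7 GHz
flops_per_cycle = 4   # two FMA units -> 4 flops/cycle (assumed, standard for POWER6)

peak_tflops = procs * clock_hz * flops_per_cycle / 1e12
print(f"peak: {peak_tflops:.1f} TFLOPs")   # ~76.4, matching the slide

# Slide: sustained performance is 6-16% of peak for NCAR's job mix
lo, hi = 0.06 * peak_tflops, 0.16 * peak_tflops
print(f"sustained: {lo:.1f}-{hi:.1f} TFLOPs")
```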
CISL at a GLANCE

Cooling
• Liebert air handlers cool and humidify the air, pulling hot air from the ceiling through a large water-cooled radiator that blows cool air into the raised floor
• 30% relative humidity to reduce static electricity
• Two 450-ton chillers cool the water
• Two 1,500-gallon tanks act as a thermal sink, storing 44°F chilled water; provides an 18-minute window for chiller failovers (55 seconds without battery)

Power
• 2-megawatt facility
• 1.2 megawatts for computing
• 2 Xcel feeds of 13,200 V each
• $55K monthly power bill
• 60% computing, 40% mechanical
• PowerWare UPS gives us 15 minutes of 1.2 megawatts
• 2 diesel power generators (1.5 megawatts and 8 hours of diesel fuel each)
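The power figures imply a rough blended electricity rate. This sketch assumes the full 2 MW facility load runs continuously, which overstates consumption, so the implied rate is a lower bound; none of the derived numbers are from the talk:

```python
# Rough implied electricity rate from the slide's figures.
facility_kw = 2000        # 2 MW facility
hours_per_month = 730     # average month (assumed)
monthly_bill = 55_000     # $55K per the slide

kwh = facility_kw * hours_per_month
rate = monthly_bill / kwh
print(f"{kwh:,.0f} kWh/month -> about ${rate:.3f}/kWh")

# Split per the slide: 60% computing, 40% mechanical
print(f"computing: ${monthly_bill * 0.6:,.0f}, mechanical: ${monthly_bill * 0.4:,.0f}")
```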
CISL at a GLANCE

Frost
• IBM BlueGene/L supercomputer
• 2,048 PowerPC 440 processors, 700 MHz; 5.7 teraflops peak
• Architecture uses densely packed, lower-speed 700 MHz processors with increased bandwidth between processor and memory
• Each node in the cluster runs a microkernel rather than a complete operating system
• Runs models and code that are optimized for massively parallel computing
• 109 TB storage
Supercomputing at NCAR (1960-2008)

[Timeline chart] Production, experimental, and non-production systems: CDC 3600, CDC 6600, CDC 7600, Cray 1-A S/N 3 (C1), Cray 1-A S/N 14 (CA), Cray X-MP/4 (CX), TMC CM2/8192 (capitol), Cray Y-MP/8 (shavano), Cray Y-MP/2 (castle), IBM RS/6000 Cluster (CL), TMC CM5/32 (littlebear), IBM SP1/8 (eaglesnest), CCC Cray 3/4 (graywolf), Cray Y-MP/8I (antero), Cray T3D/64 and T3D/128 (T3), Cray J90/16 (paiute), Cray J90/20 (aztec), Cray J90se/24 (ouray), Cray C90/16 (antero), HP SPP-2000/64 (sioux), Cray J90se/24 (chipeta), SGI Origin2000/128 (ute), Linux Networx Pentium-II/16 (tevye), IBM p3 WH1/296, WH2/604, and WH2/1308 (blackforest), Compaq ES40/36 (prospect), SGI Origin 3800/128 (tempest), IBM p4 p690-C/1216 and /1600 (bluesky), IBM p4 p690-F/64 (thunder), IBM e1350/264 (lightning), IBM e1350/140 (pegasus), IBM BlueGene-L/2048 (frost), Aspen Nocona-IB/40 (coral), IBM p5 p575/624 (bluevista), IBM p5+ p575/1744 (blueice), IBM p6 Power 575/4064 (bluefire).

Blue text indicates those systems that are currently in operation within the NCAR Computational and Information Systems Laboratory's computing facility. (3 Sep '08)
[Chart] Estimated Sustained TFLOPs at NCAR (All Systems), Jan 2000 - Jan 2010 (0.0-5.0 TFLOPs): IBM POWER6/Power575/IB (bluefire), IBM POWER5+/p575/HPS (blueice), IBM POWER5/p575/HPS (bluevista), IBM BlueGene/L (frost), IBM Opteron/Linux (pegasus, lightning), IBM POWER4/Federation (thunder), IBM POWER4/Colony (bluesky), IBM POWER4 (bluedawn), SGI Origin3800/128, IBM POWER3 (blackforest, babyblue); procurement phases ARCS 1-4 and ICESS 1-2 are marked.
[Chart] Power Consumption (sustained MFLOPs per watt), Jan 1998 - Jan 2010 (0-25): Cray C90/16 (antero), Cray T3D, Cray J90s (aztec, paiute, ouray, chipeta), HP SPP-2000 (sioux), SGI Origin2000 (ute, dataproc), IBM POWER3 (blackforest, babyblue), Compaq ES40 (prospect), SGI Origin3800 (chinook, tempest), IBM p690 (bluesky, thunder, bluedawn), IBM AMD/Opteron Linux (lightning, pegasus), IBM BlueGene/L, IBM POWER5 (bluevista), IBM POWER5+ (blueice), IBM POWER6 (bluefire).
CISL at a GLANCE

Archival Storage Facility (MSS)
– 5 silos, 6,000 slots per silo, 30,000 tapes total
– 200 GB tapes; the maximum capacity of 6 PB has been reached
– The Library of Congress print holdings (> 30 million books), if all digitized, are estimated at 20 TB (less than 1% of the MSS)
– Growth rate increasing with computational rate
– 48 TB disk cache speeds repeated accesses of popular files
– ~60% disk cache hit rate for files up to 1 GB
– The MSS keeps track of over 50 million files
– MSS software is built in-house at NCAR

Manual Tapes Area
– Devices for reading old tapes and media
– Tapes found in data warehouses with unique historical data, which we read and archive
NCAR MSS - Total Data in Archive

[Chart] Total and unique holdings in terabytes (0-6,000 TB), Jan '97 - Jan '09, with milestones:
• 40 years for the first petabyte - Nov '02
• 20 months for the second petabyte - Jul '04
• 24 months for the third petabyte - Jul '06
• 11 months for the fourth petabyte - Jun '07
• 7.5 months for the fifth petabyte - Feb '08
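The shrinking intervals between petabytes correspond to a sharply rising ingest rate, which can be read straight off the milestone dates (here approximated to the first of each month; the first, 40-year petabyte is skipped):

```python
from datetime import date

# Petabyte milestones from the chart annotations: (petabytes reached, date).
milestones = [
    (1, date(2002, 11, 1)),
    (2, date(2004, 7, 1)),
    (3, date(2006, 7, 1)),
    (4, date(2007, 6, 1)),
    (5, date(2008, 2, 1)),
]

# Average ingest rate implied by each interval (1 PB ~ 1000 TB added).
for (p0, d0), (p1, d1) in zip(milestones, milestones[1:]):
    months = (d1.year - d0.year) * 12 + (d1.month - d0.month)
    print(f"PB {p0}->{p1}: {months} months, ~{1000 / months:.0f} TB/month")
```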
Augmentation of the Mass Storage Tape Archive Resources (AMSTAR)
• Predicted MSS at full capacity by 26 Sept 2008
• Actual: 6 PB crossed 27 Sept 2008
• Initiated a procurement for a 4-year contract to augment and/or replace the STK Powderhorn silos with new robotic tape storage technology, plus developmental HPSS
• AMSTAR contract signed in early Sept 2008
• Installations and ATPs underway
AMSTAR Progression - 2008

Phase 1 - Production Library #1
• (1) 4,000-slot SL8500 library
• (30) T10000B tape drives
• (4,000) T10000 tapes
• (40) T10000 cleaning tapes

Phase 1a - Development Library
• (1) 1,448-slot SL8500 library
• (5) T10000B tape drives
• (1,000) T10000 tapes
• (5) T10000 cleaning tapes
AMSTAR Progression - Sept 2011

Phase 6 - 3 Production Libraries
• (3) 10,000-slot SL8500 libraries
• (1) 1,448-slot development library
• (95) T10000B tape drives
• (28,700) T10000 tapes
• (55) T10000 cleaning tapes
Research Data Distribution Highlights (2006/7)

• 5,400 users, the majority via the Web (4,700)
  – MSS users: 400
  – Special orders: 225
  – TIGGE: 50
• 102 TB data delivered
• MSS growth dominated by TIGGE (66 TB)
  – Other datasets increased 19 TB, up 200% from 2006
• Online availability > 18 TB
TIGGE Usage

Unique users that have downloaded data:
– Total number of registered users: 142
– Total volume downloaded: 1.996 TB
Servicing the Demand - CISL Computing Facility

• Utilization ...
• ... average job queue-wait times (measured in minutes to hours, not days)

Utilization:
                            Aug '08   2008    2007    2006    2005
Bluefire (P6)               74.9%     62.8%   -       -       -
Blueice (P5+)               -         93.5%   88.2%   -       -
Bluevista (P5)              88.1%     89.8%   89.9%   89.1%   -
Lightning (AMD)             24.6%     38.3%   47.3%   63.3%   61.5%
Bluesky 8-way LPARs (P4)    -         -       90.4%   91.7%   92.5%
Bluesky 32-way LPARs (P4)   -         -       83.3%   92.9%   94.6%

Regular queue wait times:
                   Aug '08 average   Lifetime average
Bluefire (P6)      2m                3m
Blueice (P5+)      -                 37m
Bluevista (P5)     30m               1h40m
Lightning (AMD)    0m                16m
Computing Usage by Domain, FY2008

• FY2008: as of 31 August '08
• Roughly 2/3 of that capacity was used for climate simulation and analysis

NCAR FY2008 computing resource usage by discipline (FY2008 through 31 Aug 2008): Climate 59.8%, Weather Prediction 10.8%, Accelerated Scientific Discovery 8.6%, Oceanography 4.7%, Breakthrough Computations 3.9%, Atmospheric Chemistry 3.4%, Basic Fluid Dynamics 3.1%, Astrophysics 3.0%, Cloud Physics 1.9%, Upper Atmosphere 0.5%, Miscellaneous 0.4%, IPCC 0.0%
Wyoming Gov. Dave Freudenthal signs Supplemental Budget Bill - March 2, 2007
NCAR Supercomputing Center (NSC) Design

• Preferred site covers 24 acres in the North Range Business Park
• Modular facility design to be implemented, with initial size on the order of 100,000 sq. ft., with 15,000 sq. ft. of raised floor and 7 MW
• Initial power build-out to house 4-5 MW of computing
• NCAR focused on comprehensive facility efficiency and sustainability, including:
  – Adoption of viable energy-efficient technologies to meet power and cooling needs
  – Utilization of alternative energy (wind, solar, geothermal)
  – LEED (Leadership in Energy and Environmental Design) certification
Computational and Information Systems Laboratory – NCARCopyright © 2008 - University Corporation for Atmospheric Research
New SC Build Out (floor plan)
• Mechanical/Electrical Space A: 44,000 sq. ft.
• Mechanical/Electrical Space B: 44,000 sq. ft.
• Floors 1-4: 15,000 sq. ft. each
• MSS: 6,000 sq. ft.
• Office space: 20,000 sq. ft.
Dimensions of Climate Research

[Figure] Scaling of climate computing from terascale (today) through petascale (2010) toward exascale (2015), along several dimensions: spatial resolution (x*y*z), from 1.4° (160 km) to 0.2° (22 km, AMR) toward 1 km; ensemble size, from 5 to 50 to 500 and beyond (regular ensembles up to 10,000); timescale (years * time step), from 100 yr at a 20-minute step to 1,000 yr at a 3-minute step; and model scope, from climate model to Earth system model. Each dimension carries a cost multiplier, and the progression implies code rewrites, data assimilation, new science, and better science.
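The cost multipliers in the figure follow from simple scaling: refining the horizontal grid multiplies the number of columns by the square of the refinement factor, and the CFL limit (reflected in the figure's shrinking time steps) shortens the step roughly in proportion, giving roughly cubic growth. A rough sketch using the figure's resolutions; the cubic law is a standard back-of-envelope estimate, not stated in the talk:

```python
# Rough cost multiplier for refining a climate model's horizontal
# resolution: columns scale with refinement^2, and the time step
# shortens ~linearly with grid spacing (CFL), giving ~refinement^3.
def cost_multiplier(dx_old_km, dx_new_km):
    r = dx_old_km / dx_new_km
    return r ** 3

print(f"160 km -> 22 km: ~{cost_multiplier(160, 22):,.0f}x")  # terascale -> petascale step
print(f"22 km -> 1 km:  ~{cost_multiplier(22, 1):,.0f}x")     # petascale -> exascale step
```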
Credit: Lawrence Buja (NCAR) / Tim Palmer (ECMWF)
Climate Model Structure

[Diagram] Atmosphere and Ocean connected through a Coupler, together with Sea Ice and Land; components include the C/N cycle, dynamic vegetation, ecosystem & BGC, gas chemistry, prognostic aerosols, upper atmosphere, land use, and ice sheets.
Advantages of High-Order Methods

• Algorithmic advantages
  – h-p element-based method on quadrilaterals (Ne x Ne)
  – Exponential convergence in polynomial degree (N)
• Computational advantages
  – Naturally cache-blocked N x N computations
  – Nearest-neighbor communication between elements (explicit)
  – Well suited to parallel microprocessor systems
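The "exponential convergence" bullet contrasts p-refinement with fixed-order h-refinement; in textbook form, for sufficiently smooth (analytic) solutions, with generic constants C and σ that are not from the talk:

```latex
% h-refinement at fixed polynomial order p converges algebraically;
% p-refinement on spectral elements converges exponentially in N.
\|u - u_h\| \le C\,h^{p+1}
\qquad \text{vs.} \qquad
\|u - u_N\| \le C\,e^{-\sigma N}, \quad \sigma > 0
```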
Geometry: Cube-Sphere

• Sphere is decomposed into 6 identical regions using a central projection (Sadourny, 1972) with an equiangular grid (Rancic et al., 1996)
• Avoids pole problems; quasi-uniform
• Non-orthogonal curvilinear coordinate system with identical metric terms

[Figure] Ne=16 cube sphere, showing degree of non-uniformity
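The central projection the slide cites can be sketched in a few lines: points on an equiangular grid over one cube face are projected radially onto the unit sphere. This is an illustrative reconstruction for the +x face only; the talk does not give code, and the other five faces follow by symmetry:

```python
import math

# Central (gnomonic) projection of equiangular face coordinates
# alpha, beta in [-pi/4, pi/4] on the +x cube face onto the unit sphere.
def cubed_sphere_point(alpha, beta):
    x, y, z = 1.0, math.tan(alpha), math.tan(beta)
    r = math.sqrt(x * x + y * y + z * z)  # radial projection onto |v| = 1
    return (x / r, y / r, z / r)

# Face center maps to (1, 0, 0); a face corner maps to the cube corner
# direction (1, 1, 1) / sqrt(3).
print(cubed_sphere_point(0.0, 0.0))
print(cubed_sphere_point(math.pi / 4, math.pi / 4))
```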
Validating Atmospheric Models: Aqua-Planet Experiment (APE)

• Aqua-Planet is not a bad sci-fi movie starring Kevin Costner!
• APE compares idealized climates produced by global atmospheric models on a water-covered world using idealized distributions of sea surface temperature.
• APE results are used to study the distribution and variability of convection in the tropics and of mid-latitude storm tracks.
4/29/08 41
Aquaplanet: HOMME vs Eulerian CAM Performance on Globally Averaged Observables

Resolution   Physics timestep   Del^4       Precip from          Large-scale      Total cloud   Precipitable
             (min)              diffusion   convection (mm/day)  precip (mm/day)  fraction (%)  water (mm)
EUL T42      5                  1e16        1.71                 1.11             0.65          20.21
HOMME 1.9    5                  1e16        1.76                 1.14             0.66          20.09
EUL T85      5                  1e15        1.59                 1.38             0.60          19.63
HOMME 1.0    5                  1e15        1.59                 1.43             0.61          19.67
EUL T170     5                  1.5e14      1.44                 1.62             0.55          19.13
HOMME 0.5    5.5                1.5e14      1.47                 1.63             0.55          19.21
EUL T340     5                  1.5e13      1.36                 1.75             0.50          18.75

Credit: Mark Taylor, SNL and LLNL
Aqua-Planet CAM/HOMME Dycore - Full CAM Physics/HOMME Dycore
• Parallel I/O library used for physics aerosol input and input data
• 5 years/day