Fermilab’s Enterprise Quality Grid Computing Center
Source: fop.fnal.gov/images/posters/FacilitiesSC11.pdf

Page 1

[Photos: GCC conversion (2004); former Wide Band Lab (1997)]

Energy Conservation Measures

• Separate hot and cold aisles
• Cold aisle air containment
• Blanking and threshold panels on racks
• Elevated cold aisle temperatures
• Overhead cabling
• Cold air supplied under the raised floor, with air conditioners ducted to the hot-air layer near the ceiling
• Air conditioning mated to temperature sensors in front of the computer racks (see the control-loop sketch after this list)
• UPS units with greater than 90 percent efficiency
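Tying the air conditioning to rack-inlet temperature sensors is, at its core, a setpoint control loop. The following is a minimal Python sketch of that idea only; the sensor and CRAC functions are hypothetical stand-ins, not Fermilab's actual building controls, and the 25 °C setpoint is an assumed value.

    import random
    import time

    SETPOINT_C = 25.0   # assumed elevated cold-aisle setpoint, not a value from the poster
    DEADBAND_C = 1.0    # ignore small fluctuations around the setpoint
    POLL_SECONDS = 30

    def read_rack_inlet_temp_c() -> float:
        """Stand-in for a thermocouple reading taken in front of a rack."""
        return 25.0 + random.uniform(-2.0, 2.0)   # replace with the real sensor interface

    def set_crac_output(fraction: float) -> None:
        """Stand-in for commanding a computer-room air conditioner (0.0 to 1.0)."""
        print(f"CRAC output set to {fraction:.2f}")

    def control_loop(cycles: int = 10) -> None:
        output = 0.5   # start at half cooling capacity
        for _ in range(cycles):
            temp = read_rack_inlet_temp_c()
            error = temp - SETPOINT_C
            if abs(error) > DEADBAND_C:
                # Proportional step: more cooling when the inlet runs hot,
                # less when it runs cold, clamped to the unit's valid range.
                output = min(1.0, max(0.0, output + 0.05 * error))
                set_crac_output(output)
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        control_loop()

Sensing at the rack inlets rather than at the air conditioner's return is what allows the cold-aisle setpoint to be raised safely, which is where much of the energy saving in the list above comes from.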

Enabling Scientific Research

The GCC distributes and stores experiment data that is accessible by collaborators worldwide, providing computing support for:

• Compact Muon Solenoid (CMS) experiment: running analysis jobs (typically 120,000–200,000 jobs/week), collecting 4–5 petabytes of data per year (CMS has a 200 GB-per-second network allocation), and filling one tape robot every 2 years (see the data-rate sketch after this section).
• Lattice Quantum Chromodynamics (LQCD): high-performance computing for lattice gauge calculations.
• CDF and DZero: Run II experiments that use GCC resources for reconstruction and analysis (turning raw data into physics objects). Support continues for Monte Carlo production and analyses.
• Dark Energy experiments: massive surveys, simulations, and high-precision models.
• Accelerator modeling.

[Photos: Paul Mackenzie next to the Lattice QCD farm; the Enstore tape library; energy-efficient overhead cabling]
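The CMS figures above translate into a useful back-of-envelope number: the average rate at which data must flow into storage. The short calculation below assumes a 4.5 PB/year midpoint (the poster only gives the 4–5 PB range) and decimal petabytes.

    # Back-of-envelope: turn CMS's quoted 4-5 PB/year into an average sustained rate.
    PETABYTE = 10**15                # bytes, decimal petabyte
    SECONDS_PER_YEAR = 365 * 24 * 3600

    data_per_year_pb = 4.5           # assumed midpoint of the quoted 4-5 PB/year
    avg_rate = data_per_year_pb * PETABYTE / SECONDS_PER_YEAR   # bytes per second

    print(f"Average ingest rate: {avg_rate / 1e6:.0f} MB/s ({avg_rate * 8 / 1e9:.1f} Gb/s)")
    # Roughly 140 MB/s (about 1.1 Gb/s) sustained; peaks during data taking run far higher.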

Temperature Monitoring

• 24/7 web-based temperature monitoring (a minimal sketch of the idea follows below).
• Thermocouple sensors provide the readings.

[Photos: white temperature probe (thermocouple) positioned on a rack; webserver box with thermocouple connections]
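The poster shows the hardware (thermocouples wired into a webserver box) but not how the readings are published. The sketch below is one minimal way to do it in Python: a background thread polls named sensor channels and a tiny HTTP endpoint serves the latest values as JSON. The sensor names and the read_thermocouple function are hypothetical stand-ins for the real hardware interface.

    import json
    import random
    import threading
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SENSORS = ["rack-A1-inlet", "rack-B3-inlet", "hot-aisle-ceiling"]   # hypothetical channel names
    latest: dict[str, float] = {}

    def read_thermocouple(name: str) -> float:
        """Stand-in for reading one thermocouple channel; replace with real hardware I/O."""
        return round(24.0 + random.uniform(-1.5, 6.0), 1)

    def poll_sensors(interval_s: float = 10.0) -> None:
        while True:
            for name in SENSORS:
                latest[name] = read_thermocouple(name)
            time.sleep(interval_s)

    class TempHandler(BaseHTTPRequestHandler):
        def do_GET(self) -> None:
            body = json.dumps({"unit": "C", "readings": latest}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        threading.Thread(target=poll_sensors, daemon=True).start()
        HTTPServer(("", 8080), TempHandler).serve_forever()   # 24/7 web-based readout

Any browser or alerting script can then poll the endpoint around the clock, which is all that "24/7 web-based monitoring" strictly requires.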

Page 2

Grid Computing Center Metrics

Mission

Provide researchers with the facilities to conduct research into the high energy physics, intensity, and cosmic frontiers. Maximize assets, including networking, data storage robotics, and grid and cloud computing, to optimally arrange large clusters of computers and storage solutions to support the production of scientific results.

Experiment Data Storage

[Chart: petabytes on tape at the end of each fiscal year, FY07–FY10 (scale 0–30 PB), broken out by CDF, D0, CMS, and other experiments]

Intensity Frontier Demand

[Chart (CPU Core Count for Science): total Grid CPU cores by fiscal year, 2010–2014 (scale 0–3,500 cores), by experiment: MiniBooNE, MINOS, SciBooNE, MINERvA, ArgoNeuT, NOvA, MicroBooNE, LBNE, Mu2e, g-2]

[Chart: total disk (TB) by fiscal year, 2010–2014 (scale 0–1,400 TB), for the same experiments]

By 2014:

• Total Grid CPU cores for each experiment are projected to increase by 1,100.
• Total disk space is expected to double.
• Energy and Intensity experiments' tape storage demand has tripled since 2007.

High Speed Networking

High-speed networking to and from facilities at Fermilab, and on to facilities across the world, enables the collection, archiving, processing, simulation, and analysis of data from global scientific programs.

2011 Facility Statistics / GCC Capacity & Usage

• 10,384 sq. ft. of raised-floor data center
• 255 rack spaces for high-density computers
• 6,000 computers (multi-CPU, multi-core)
• 4 tape robots
• Building consumes 2.5 megawatts of power
• Computers consume 10 kilowatts per rack
• More than 6,000 computers using 1.5 megawatts of power
• 1,000 tons of air conditioning removing the heat generated by the computers
• Capacity and demand are steadily increasing

Computer Room Availability

• Highly available computer power
• 99.75% average uptime since 2006 (a few derived numbers are sketched below)
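A few derived numbers follow directly from the statistics above. The sketch below computes them; the building-to-IT power ratio is my framing (a PUE-style figure), not a metric the poster itself reports.

    # Derived numbers from the quoted 2011 facility statistics.
    building_power_mw = 2.5    # total building load
    it_power_mw = 1.5          # load of the ~6,000 computers
    rack_power_kw = 10         # per-rack computer load
    rack_spaces = 255
    availability = 0.9975      # 99.75% average uptime since 2006

    # PUE-style ratio: total facility power over IT power (my framing, not the poster's).
    print(f"Building-to-IT power ratio: {building_power_mw / it_power_mw:.2f}")

    # If every rack space drew its full 10 kW, the IT load would be:
    print(f"Full-build IT load: {rack_spaces * rack_power_kw / 1000:.2f} MW")

    # 99.75% availability corresponds to roughly this much downtime per year:
    print(f"Downtime at 99.75%: about {(1 - availability) * 365 * 24:.0f} hours/year")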

